drm for 5.18-rc1

Merge tag 'drm-next-2022-03-24' of git://anongit.freedesktop.org/drm/drm

Pull drm updates from Dave Airlie:
 "Lots of work all over, Intel improving DG2 support, amdkfd CRIU
  support, msm new hw support, and faster fbdev support.

  dma-buf:
   - rename dma-buf-map to iosys-map

  core:
   - move buddy allocator to core
   - add pci/platform init macros
   - improve EDID parser deep color handling
   - EDID timing type 7 support
   - add GPD Win Max quirk
   - add yes/no helpers to string_helpers
   - flatten syncobj chains
   - add nomodeset support to lots of drivers
   - improve fb-helper clipping support
   - add default property value interface

  fbdev:
   - improve fbdev ops speed

  ttm:
   - add a backpointer from ttm bo->ttm resource

  dp:
   - move displayport headers
   - add a dp helper module

  bridge:
   - anx7625 atomic support, HDCP support

  panel:
   - split out panel-lvds and lvds bindings
   - find panels in OF subnodes

  privacy:
   - add chromeos privacy screen support

  fb:
   - hot unplug fw fb on forced removal

  simpledrm:
   - request region instead of marking ioresource busy
   - add panel orientation property

  udmabuf:
   - fix oops with 0 pages

  amdgpu:
   - power management code cleanup
   - Enable freesync video mode by default
   - RAS code cleanup
   - Improve VRAM access for debug using SDMA
   - SR-IOV rework special register access and fixes
   - profiling power state request ioctl
   - expose IP discovery via sysfs
   - Cyan skillfish updates
   - GC 10.3.7, SDMA 5.2.7, DCN 3.1.6 updates
   - expose benchmark tests via debugfs
   - add module param to disable XGMI for testing
   - GPU reset debugfs register dumping support

  amdkfd:
   - CRIU support
   - SDMA queue fixes

  radeon:
   - UVD suspend fix
   - iMac backlight fix

  i915:
   - minimal parallel submission for execlists
   - DG2-G12 subplatform added
   - DG2 programming workarounds
   - DG2 accelerated migration support
   - flat CCS and CCS engine support for XeHP
   - initial small BAR support
   - drop fake LMEM support
   - ADL-N PCH support
   - bigjoiner updates
   - introduce VMA resources and async unbinding
   - register definitions cleanups
   - multi-FBC refactoring
   - DG1 OPROM over SPI support
   - ADL-N platform enabling
   - opregion mailbox #5 support
   - DP MST ESI improvements
   - drm device based logging
   - async flip optimisation for DG2
   - CPU arch abstraction fixes
   - improve GuC ADS init to work on aarch64
   - tweak TTM LRU priority hint
   - GuC 69.0.3 support
   - remove short term execbuf pins

  nouveau:
   - higher DP/eDP bitrates
   - backlight fixes

  msm:
   - dpu + dp support for sc8180x
   - dp support for sm8350
   - dpu + dsi support for qcm2290
   - 10nm dsi phy tuning support
   - bridge support for dp encoder
   - gpu support for additional 7c3 SKUs

  ingenic:
   - HDMI support for JZ4780
   - aux channel EDID support

  ast:
   - AST2600 support
   - add wide screen support
   - create DP/DVI connectors

  omapdrm:
   - fix implicit dma_buf fencing

  vc4:
   - add CSC + full range support
   - better display firmware handoff

  panfrost:
   - add initial dual-core GPU support

  stm:
   - new revision support
   - fb handover support

  mediatek:
   - transfer display binding document to yaml format
   - add mt8195 display device binding
   - allow commands to be sent during video mode
   - add wait_for_event for crtc disable by cmdq

  tegra:
   - YUV format support

  rcar-du:
   - LVDS support for M3-W+ (R8A77961)

  exynos:
   - BGR pixel format for FIMD device"

* tag 'drm-next-2022-03-24' of git://anongit.freedesktop.org/drm/drm: (1529 commits)
  drm/i915/display: Do not re-enable PSR after it was marked as not reliable
  drm/i915/display: Fix HPD short pulse handling for eDP
  drm/amdgpu: Use drm_mode_copy()
  drm/radeon: Use drm_mode_copy()
  drm/amdgpu: Use ternary operator in `vcn_v1_0_start()`
  drm/amdgpu: Remove pointless on stack mode copies
  drm/amd/pm: fix indenting in __smu_cmn_reg_print_error()
  drm/amdgpu/dc: fix typos in comments
  drm/amdgpu: fix typos in comments
  drm/amd/pm: fix typos in comments
  drm/amdgpu: Add stolen reserved memory for MI25 SRIOV.
  drm/amdgpu: Merge get_reserved_allocation to get_vbios_allocations.
  drm/amdkfd: evict svm bo worker handle error
  drm/amdgpu/vcn: fix vcn ring test failure in igt reload test
  drm/amdgpu: only allow secure submission on rings which support that
  drm/amdgpu: fixed the warnings reported by kernel test robot
  drm/amd/display: 3.2.177
  drm/amd/display: [FW Promotion] Release 0.0.108.0
  drm/amd/display: Add save/restore PANEL_PWRSEQ_REF_DIV2
  drm/amd/display: Wait for hubp read line for Pollock
  ...
commit b14ffae378
@@ -83,6 +83,9 @@ properties:
    type: boolean
    description: let the driver enable audio HDMI codec function or not.

  aux-bus:
    $ref: /schemas/display/dp-aux-bus.yaml#

  ports:
    $ref: /schemas/graph.yaml#/properties/ports

@@ -150,5 +153,19 @@ examples:
                };
            };
        };

        aux-bus {
            panel {
                compatible = "innolux,n125hce-gn1";
                power-supply = <&pp3300_disp_x>;
                backlight = <&backlight_lcd0>;

                port {
                    panel_in: endpoint {
                        remote-endpoint = <&anx7625_out>;
                    };
                };
            };
        };
    };
};

@@ -0,0 +1,82 @@
# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/bridge/ingenic,jz4780-hdmi.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Bindings for Ingenic JZ4780 HDMI Transmitter

maintainers:
  - H. Nikolaus Schaller <hns@goldelico.com>

description: |
  The HDMI Transmitter in the Ingenic JZ4780 is a Synopsys DesignWare HDMI 1.4
  TX controller IP with accompanying PHY IP.

allOf:
  - $ref: synopsys,dw-hdmi.yaml#

properties:
  compatible:
    const: ingenic,jz4780-dw-hdmi

  reg-io-width:
    const: 4

  clocks:
    maxItems: 2

  ports:
    $ref: /schemas/graph.yaml#/properties/ports

    properties:
      port@0:
        $ref: /schemas/graph.yaml#/properties/port
        description: Input from LCD controller output.

      port@1:
        $ref: /schemas/graph.yaml#/properties/port
        description: Link to the HDMI connector.

required:
  - compatible
  - clocks
  - clock-names
  - ports
  - reg-io-width

unevaluatedProperties: false

examples:
  - |
    #include <dt-bindings/clock/ingenic,jz4780-cgu.h>

    hdmi: hdmi@10180000 {
        compatible = "ingenic,jz4780-dw-hdmi";
        reg = <0x10180000 0x8000>;
        reg-io-width = <4>;
        ddc-i2c-bus = <&i2c4>;
        interrupt-parent = <&intc>;
        interrupts = <3>;
        clocks = <&cgu JZ4780_CLK_AHB0>, <&cgu JZ4780_CLK_HDMI>;
        clock-names = "iahb", "isfr";

        ports {
            #address-cells = <1>;
            #size-cells = <0>;
            hdmi_in: port@0 {
                reg = <0>;
                dw_hdmi_in: endpoint {
                    remote-endpoint = <&jz4780_lcd_out>;
                };
            };
            hdmi_out: port@1 {
                reg = <1>;
                dw_hdmi_out: endpoint {
                    remote-endpoint = <&hdmi_con>;
                };
            };
        };
    };

...

@@ -39,6 +39,7 @@ properties:
          - const: lvds-encoder # Generic LVDS encoder compatible fallback
      - items:
          - enum:
              - ti,ds90cf364a # For the DS90CF364A FPD-Link LVDS Receiver
              - ti,ds90cf384a # For the DS90CF384A FPD-Link LVDS Receiver
          - const: lvds-decoder # Generic LVDS decoders compatible fallback
      - enum:

@@ -67,7 +68,7 @@ properties:
        - vesa-24
    description: |
      The color signals mapping order. See details in
-     Documentation/devicetree/bindings/display/panel/lvds.yaml
+     Documentation/devicetree/bindings/display/lvds.yaml

  port@1:
    $ref: /schemas/graph.yaml#/properties/port

@@ -28,6 +28,7 @@ properties:
      - renesas,r8a7793-lvds # for R-Car M2-N compatible LVDS encoders
      - renesas,r8a7795-lvds # for R-Car H3 compatible LVDS encoders
      - renesas,r8a7796-lvds # for R-Car M3-W compatible LVDS encoders
      - renesas,r8a77961-lvds # for R-Car M3-W+ compatible LVDS encoders
      - renesas,r8a77965-lvds # for R-Car M3-N compatible LVDS encoders
      - renesas,r8a77970-lvds # for R-Car V3M compatible LVDS encoders
      - renesas,r8a77980-lvds # for R-Car V3H compatible LVDS encoders

@@ -32,6 +32,9 @@ properties:
    maxItems: 1
    description: GPIO specifier for bridge_en pin (active high).

  vcc-supply:
    description: A 1.8V power supply (see regulator/regulator.yaml).

  ports:
    $ref: /schemas/graph.yaml#/properties/ports

@@ -91,7 +94,6 @@ properties:
 required:
   - compatible
   - reg
-  - enable-gpios
   - ports

 allOf:

@@ -133,6 +135,7 @@ examples:
        reg = <0x2d>;

        enable-gpios = <&gpio2 1 GPIO_ACTIVE_HIGH>;
        vcc-supply = <&reg_sn65dsi83_1v8>;

        ports {
            #address-cells = <1>;

@@ -1,10 +1,10 @@
 # SPDX-License-Identifier: GPL-2.0
 %YAML 1.2
 ---
-$id: http://devicetree.org/schemas/display/panel/lvds.yaml#
+$id: http://devicetree.org/schemas/display/lvds.yaml#
 $schema: http://devicetree.org/meta-schemas/core.yaml#

-title: LVDS Display Panel
+title: LVDS Display Common Properties

 maintainers:
   - Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>

@@ -13,8 +13,8 @@ maintainers:
 description: |+
   LVDS is a physical layer specification defined in ANSI/TIA/EIA-644-A. Multiple
   incompatible data link layers have been used over time to transmit image data
-  to LVDS panels. This bindings supports display panels compatible with the
-  following specifications.
+  to LVDS devices. This bindings supports devices compatible with the following
+  specifications.

   [JEIDA] "Digital Interface Standards for Monitor", JEIDA-59-1999, February
   1999 (Version 1.0), Japan Electronic Industry Development Association (JEIDA)

@@ -26,18 +26,7 @@ description: |+
  Device compatible with those specifications have been marketed under the
  FPD-Link and FlatLink brands.

allOf:
  - $ref: panel-common.yaml#

properties:
  compatible:
    contains:
      const: panel-lvds
    description:
      Shall contain "panel-lvds" in addition to a mandatory panel-specific
      compatible string defined in individual panel bindings. The "panel-lvds"
      value shall never be used on its own.

  data-mapping:
    enum:
      - jeida-18

@@ -96,22 +85,6 @@ properties:
      If set, reverse the bit order described in the data mappings below on all
      data lanes, transmitting bits for slots 6 to 0 instead of 0 to 6.

  port: true
  ports: true

required:
  - compatible
  - data-mapping
  - width-mm
  - height-mm
  - panel-timing

oneOf:
  - required:
      - port
  - required:
      - ports

additionalProperties: true

...

@@ -0,0 +1,77 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/mediatek/mediatek,aal.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Mediatek display adaptive ambient light processor

maintainers:
  - Chun-Kuang Hu <chunkuang.hu@kernel.org>
  - Philipp Zabel <p.zabel@pengutronix.de>

description: |
  Mediatek display adaptive ambient light processor, namely AAL,
  is responsible for backlight power saving and sunlight visibility improving.
  AAL device node must be siblings to the central MMSYS_CONFIG node.
  For a description of the MMSYS_CONFIG binding, see
  Documentation/devicetree/bindings/arm/mediatek/mediatek,mmsys.yaml
  for details.

properties:
  compatible:
    oneOf:
      - items:
          - const: mediatek,mt8173-disp-aal
      - items:
          - enum:
              - mediatek,mt2712-disp-aal
              - mediatek,mt8183-disp-aal
              - mediatek,mt8192-disp-aal
              - mediatek,mt8195-disp-aal
          - enum:
              - mediatek,mt8173-disp-aal

  reg:
    maxItems: 1

  interrupts:
    maxItems: 1

  power-domains:
    description: A phandle and PM domain specifier as defined by bindings of
      the power controller specified by phandle. See
      Documentation/devicetree/bindings/power/power-domain.yaml for details.

  clocks:
    items:
      - description: AAL Clock

  mediatek,gce-client-reg:
    description: The register of client driver can be configured by gce with
      4 arguments defined in this property, such as phandle of gce, subsys id,
      register offset and size. Each GCE subsys id is mapping to a client
      defined in the header include/dt-bindings/gce/<chip>-gce.h.
    $ref: /schemas/types.yaml#/definitions/phandle-array
    maxItems: 1

required:
  - compatible
  - reg
  - interrupts
  - power-domains
  - clocks

additionalProperties: false

examples:
  - |

    aal@14015000 {
        compatible = "mediatek,mt8173-disp-aal";
        reg = <0 0x14015000 0 0x1000>;
        interrupts = <GIC_SPI 189 IRQ_TYPE_LEVEL_LOW>;
        power-domains = <&scpsys MT8173_POWER_DOMAIN_MM>;
        clocks = <&mmsys CLK_MM_DISP_AAL>;
        mediatek,gce-client-reg = <&gce SUBSYS_1401XXXX 0x5000 0x1000>;
    };

@@ -0,0 +1,76 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/mediatek/mediatek,ccorr.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Mediatek display color correction

maintainers:
  - Chun-Kuang Hu <chunkuang.hu@kernel.org>
  - Philipp Zabel <p.zabel@pengutronix.de>

description: |
  Mediatek display color correction, namely CCORR, reproduces correct color
  on panels with different color gamut.
  CCORR device node must be siblings to the central MMSYS_CONFIG node.
  For a description of the MMSYS_CONFIG binding, see
  Documentation/devicetree/bindings/arm/mediatek/mediatek,mmsys.yaml
  for details.

properties:
  compatible:
    oneOf:
      - items:
          - const: mediatek,mt8183-disp-ccorr
      - items:
          - const: mediatek,mt8192-disp-ccorr
      - items:
          - enum:
              - mediatek,mt8195-disp-ccorr
          - enum:
              - mediatek,mt8192-disp-ccorr

  reg:
    maxItems: 1

  interrupts:
    maxItems: 1

  power-domains:
    description: A phandle and PM domain specifier as defined by bindings of
      the power controller specified by phandle. See
      Documentation/devicetree/bindings/power/power-domain.yaml for details.

  clocks:
    items:
      - description: CCORR Clock

  mediatek,gce-client-reg:
    description: The register of client driver can be configured by gce with
      4 arguments defined in this property, such as phandle of gce, subsys id,
      register offset and size. Each GCE subsys id is mapping to a client
      defined in the header include/dt-bindings/gce/<chip>-gce.h.
    $ref: /schemas/types.yaml#/definitions/phandle-array
    maxItems: 1

required:
  - compatible
  - reg
  - interrupts
  - power-domains
  - clocks

additionalProperties: false

examples:
  - |

    ccorr0: ccorr@1400f000 {
        compatible = "mediatek,mt8183-disp-ccorr";
        reg = <0 0x1400f000 0 0x1000>;
        interrupts = <GIC_SPI 232 IRQ_TYPE_LEVEL_LOW>;
        power-domains = <&spm MT8183_POWER_DOMAIN_DISP>;
        clocks = <&mmsys CLK_MM_DISP_CCORR0>;
        mediatek,gce-client-reg = <&gce SUBSYS_1400XXXX 0xf000 0x1000>;
    };

@@ -0,0 +1,86 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/mediatek/mediatek,color.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Mediatek display color processor

maintainers:
  - Chun-Kuang Hu <chunkuang.hu@kernel.org>
  - Philipp Zabel <p.zabel@pengutronix.de>

description: |
  Mediatek display color processor, namely COLOR, provides hue, luma and
  saturation adjustments to get better picture quality and to have one panel
  resemble the other in their output characteristics.
  COLOR device node must be siblings to the central MMSYS_CONFIG node.
  For a description of the MMSYS_CONFIG binding, see
  Documentation/devicetree/bindings/arm/mediatek/mediatek,mmsys.yaml
  for details.

properties:
  compatible:
    oneOf:
      - items:
          - const: mediatek,mt2701-disp-color
      - items:
          - const: mediatek,mt8167-disp-color
      - items:
          - const: mediatek,mt8173-disp-color
      - items:
          - enum:
              - mediatek,mt7623-disp-color
              - mediatek,mt2712-disp-color
          - enum:
              - mediatek,mt2701-disp-color
      - items:
          - enum:
              - mediatek,mt8183-disp-color
              - mediatek,mt8192-disp-color
              - mediatek,mt8195-disp-color
          - enum:
              - mediatek,mt8173-disp-color
  reg:
    maxItems: 1

  interrupts:
    maxItems: 1

  power-domains:
    description: A phandle and PM domain specifier as defined by bindings of
      the power controller specified by phandle. See
      Documentation/devicetree/bindings/power/power-domain.yaml for details.

  clocks:
    items:
      - description: COLOR Clock

  mediatek,gce-client-reg:
    description: The register of client driver can be configured by gce with
      4 arguments defined in this property, such as phandle of gce, subsys id,
      register offset and size. Each GCE subsys id is mapping to a client
      defined in the header include/dt-bindings/gce/<chip>-gce.h.
    $ref: /schemas/types.yaml#/definitions/phandle-array
    maxItems: 1

required:
  - compatible
  - reg
  - interrupts
  - power-domains
  - clocks

additionalProperties: false

examples:
  - |

    color0: color@14013000 {
        compatible = "mediatek,mt8173-disp-color";
        reg = <0 0x14013000 0 0x1000>;
        interrupts = <GIC_SPI 187 IRQ_TYPE_LEVEL_LOW>;
        power-domains = <&scpsys MT8173_POWER_DOMAIN_MM>;
        clocks = <&mmsys CLK_MM_DISP_COLOR0>;
        mediatek,gce-client-reg = <&gce SUBSYS_1401XXXX 0x3000 0x1000>;
    };

@ -1,210 +0,0 @@
|
|||
Mediatek display subsystem
|
||||
==========================
|
||||
|
||||
The Mediatek display subsystem consists of various DISP function blocks in the
|
||||
MMSYS register space. The connections between them can be configured by output
|
||||
and input selectors in the MMSYS_CONFIG register space. Pixel clock and start
|
||||
of frame signal are distributed to the other function blocks by a DISP_MUTEX
|
||||
function block.
|
||||
|
||||
All DISP device tree nodes must be siblings to the central MMSYS_CONFIG node.
|
||||
For a description of the MMSYS_CONFIG binding, see
|
||||
Documentation/devicetree/bindings/arm/mediatek/mediatek,mmsys.yaml.
|
||||
|
||||
DISP function blocks
|
||||
====================
|
||||
|
||||
A display stream starts at a source function block that reads pixel data from
|
||||
memory and ends with a sink function block that drives pixels on a display
|
||||
interface, or writes pixels back to memory. All DISP function blocks have
|
||||
their own register space, interrupt, and clock gate. The blocks that can
|
||||
access memory additionally have to list the IOMMU and local arbiter they are
|
||||
connected to.
|
||||
|
||||
For a description of the display interface sink function blocks, see
|
||||
Documentation/devicetree/bindings/display/mediatek/mediatek,dsi.txt and
|
||||
Documentation/devicetree/bindings/display/mediatek/mediatek,dpi.yaml.
|
||||
|
||||
Required properties (all function blocks):
|
||||
- compatible: "mediatek,<chip>-disp-<function>", one of
|
||||
"mediatek,<chip>-disp-ovl" - overlay (4 layers, blending, csc)
|
||||
"mediatek,<chip>-disp-ovl-2l" - overlay (2 layers, blending, csc)
|
||||
"mediatek,<chip>-disp-rdma" - read DMA / line buffer
|
||||
"mediatek,<chip>-disp-wdma" - write DMA
|
||||
"mediatek,<chip>-disp-ccorr" - color correction
|
||||
"mediatek,<chip>-disp-color" - color processor
|
||||
"mediatek,<chip>-disp-dither" - dither
|
||||
"mediatek,<chip>-disp-aal" - adaptive ambient light controller
|
||||
"mediatek,<chip>-disp-gamma" - gamma correction
|
||||
"mediatek,<chip>-disp-merge" - merge streams from two RDMA sources
|
||||
"mediatek,<chip>-disp-postmask" - control round corner for display frame
|
||||
"mediatek,<chip>-disp-split" - split stream to two encoders
|
||||
"mediatek,<chip>-disp-ufoe" - data compression engine
|
||||
"mediatek,<chip>-dsi" - DSI controller, see mediatek,dsi.txt
|
||||
"mediatek,<chip>-dpi" - DPI controller, see mediatek,dpi.txt
|
||||
"mediatek,<chip>-disp-mutex" - display mutex
|
||||
"mediatek,<chip>-disp-od" - overdrive
|
||||
the supported chips are mt2701, mt7623, mt2712, mt8167, mt8173, mt8183 and mt8192.
|
||||
- reg: Physical base address and length of the function block register space
|
||||
- interrupts: The interrupt signal from the function block (required, except for
|
||||
merge and split function blocks).
|
||||
- clocks: device clocks
|
||||
See Documentation/devicetree/bindings/clock/clock-bindings.txt for details.
|
||||
For most function blocks this is just a single clock input. Only the DSI and
|
||||
DPI controller nodes have multiple clock inputs. These are documented in
|
||||
mediatek,dsi.txt and mediatek,dpi.txt, respectively.
|
||||
An exception is that the mt8183 mutex is always free running with no clocks property.
|
||||
|
||||
Required properties (DMA function blocks):
|
||||
- compatible: Should be one of
|
||||
"mediatek,<chip>-disp-ovl"
|
||||
"mediatek,<chip>-disp-rdma"
|
||||
"mediatek,<chip>-disp-wdma"
|
||||
the supported chips are mt2701, mt8167 and mt8173.
|
||||
- iommus: Should point to the respective IOMMU block with master port as
|
||||
argument, see Documentation/devicetree/bindings/iommu/mediatek,iommu.yaml
|
||||
for details.
|
||||
|
||||
Optional properties (RDMA function blocks):
|
||||
- mediatek,rdma-fifo-size: rdma fifo size may be different even in same SOC, add this
|
||||
property to the corresponding rdma
|
||||
the value is the Max value which defined in hardware data sheet.
|
||||
mediatek,rdma-fifo-size of mt8173-rdma0 is 8K
|
||||
mediatek,rdma-fifo-size of mt8183-rdma0 is 5K
|
||||
mediatek,rdma-fifo-size of mt8183-rdma1 is 2K
|
||||
|
||||
Examples:
|
||||
|
||||
mmsys: clock-controller@14000000 {
|
||||
compatible = "mediatek,mt8173-mmsys", "syscon";
|
||||
reg = <0 0x14000000 0 0x1000>;
|
||||
power-domains = <&scpsys MT8173_POWER_DOMAIN_MM>;
|
||||
#clock-cells = <1>;
|
||||
};
|
||||
|
||||
ovl0: ovl@1400c000 {
|
||||
compatible = "mediatek,mt8173-disp-ovl";
|
||||
reg = <0 0x1400c000 0 0x1000>;
|
||||
interrupts = <GIC_SPI 180 IRQ_TYPE_LEVEL_LOW>;
|
||||
power-domains = <&scpsys MT8173_POWER_DOMAIN_MM>;
|
||||
clocks = <&mmsys CLK_MM_DISP_OVL0>;
|
||||
iommus = <&iommu M4U_PORT_DISP_OVL0>;
|
||||
};
|
||||
|
||||
ovl1: ovl@1400d000 {
|
||||
compatible = "mediatek,mt8173-disp-ovl";
|
||||
reg = <0 0x1400d000 0 0x1000>;
|
||||
interrupts = <GIC_SPI 181 IRQ_TYPE_LEVEL_LOW>;
|
||||
power-domains = <&scpsys MT8173_POWER_DOMAIN_MM>;
|
||||
clocks = <&mmsys CLK_MM_DISP_OVL1>;
|
||||
iommus = <&iommu M4U_PORT_DISP_OVL1>;
|
||||
};
|
||||
|
||||
rdma0: rdma@1400e000 {
|
||||
compatible = "mediatek,mt8173-disp-rdma";
|
||||
reg = <0 0x1400e000 0 0x1000>;
|
||||
interrupts = <GIC_SPI 182 IRQ_TYPE_LEVEL_LOW>;
|
||||
power-domains = <&scpsys MT8173_POWER_DOMAIN_MM>;
|
||||
clocks = <&mmsys CLK_MM_DISP_RDMA0>;
|
||||
iommus = <&iommu M4U_PORT_DISP_RDMA0>;
|
||||
mediatek,rdma-fifosize = <8192>;
|
||||
};
|
||||
|
||||
rdma1: rdma@1400f000 {
|
||||
compatible = "mediatek,mt8173-disp-rdma";
|
||||
reg = <0 0x1400f000 0 0x1000>;
|
||||
interrupts = <GIC_SPI 183 IRQ_TYPE_LEVEL_LOW>;
|
||||
power-domains = <&scpsys MT8173_POWER_DOMAIN_MM>;
|
||||
clocks = <&mmsys CLK_MM_DISP_RDMA1>;
|
||||
iommus = <&iommu M4U_PORT_DISP_RDMA1>;
|
||||
};
|
||||
|
||||
rdma2: rdma@14010000 {
|
||||
compatible = "mediatek,mt8173-disp-rdma";
|
||||
reg = <0 0x14010000 0 0x1000>;
|
||||
interrupts = <GIC_SPI 184 IRQ_TYPE_LEVEL_LOW>;
|
||||
power-domains = <&scpsys MT8173_POWER_DOMAIN_MM>;
|
||||
clocks = <&mmsys CLK_MM_DISP_RDMA2>;
|
||||
iommus = <&iommu M4U_PORT_DISP_RDMA2>;
|
||||
};
|
||||
|
||||
wdma0: wdma@14011000 {
|
||||
compatible = "mediatek,mt8173-disp-wdma";
|
||||
reg = <0 0x14011000 0 0x1000>;
|
||||
interrupts = <GIC_SPI 185 IRQ_TYPE_LEVEL_LOW>;
|
||||
power-domains = <&scpsys MT8173_POWER_DOMAIN_MM>;
|
||||
clocks = <&mmsys CLK_MM_DISP_WDMA0>;
|
||||
iommus = <&iommu M4U_PORT_DISP_WDMA0>;
|
||||
};
|
||||
|
||||
wdma1: wdma@14012000 {
|
||||
compatible = "mediatek,mt8173-disp-wdma";
|
||||
reg = <0 0x14012000 0 0x1000>;
|
||||
interrupts = <GIC_SPI 186 IRQ_TYPE_LEVEL_LOW>;
|
||||
power-domains = <&scpsys MT8173_POWER_DOMAIN_MM>;
|
||||
clocks = <&mmsys CLK_MM_DISP_WDMA1>;
|
||||
iommus = <&iommu M4U_PORT_DISP_WDMA1>;
|
||||
};
|
||||
|
||||
color0: color@14013000 {
|
||||
compatible = "mediatek,mt8173-disp-color";
|
||||
reg = <0 0x14013000 0 0x1000>;
|
||||
interrupts = <GIC_SPI 187 IRQ_TYPE_LEVEL_LOW>;
|
||||
power-domains = <&scpsys MT8173_POWER_DOMAIN_MM>;
|
||||
clocks = <&mmsys CLK_MM_DISP_COLOR0>;
|
||||
};
|
||||
|
||||
color1: color@14014000 {
|
||||
compatible = "mediatek,mt8173-disp-color";
|
||||
reg = <0 0x14014000 0 0x1000>;
|
||||
interrupts = <GIC_SPI 188 IRQ_TYPE_LEVEL_LOW>;
|
||||
power-domains = <&scpsys MT8173_POWER_DOMAIN_MM>;
|
||||
clocks = <&mmsys CLK_MM_DISP_COLOR1>;
|
||||
};
|
||||
|
||||
aal@14015000 {
|
||||
compatible = "mediatek,mt8173-disp-aal";
|
||||
reg = <0 0x14015000 0 0x1000>;
|
||||
interrupts = <GIC_SPI 189 IRQ_TYPE_LEVEL_LOW>;
|
||||
power-domains = <&scpsys MT8173_POWER_DOMAIN_MM>;
|
||||
clocks = <&mmsys CLK_MM_DISP_AAL>;
|
||||
};
|
||||
|
||||
gamma@14016000 {
|
||||
compatible = "mediatek,mt8173-disp-gamma";
|
||||
reg = <0 0x14016000 0 0x1000>;
|
||||
interrupts = <GIC_SPI 190 IRQ_TYPE_LEVEL_LOW>;
|
||||
power-domains = <&scpsys MT8173_POWER_DOMAIN_MM>;
|
||||
clocks = <&mmsys CLK_MM_DISP_GAMMA>;
|
||||
};
|
||||
|
||||
ufoe@1401a000 {
|
||||
compatible = "mediatek,mt8173-disp-ufoe";
|
||||
reg = <0 0x1401a000 0 0x1000>;
|
||||
interrupts = <GIC_SPI 191 IRQ_TYPE_LEVEL_LOW>;
|
||||
power-domains = <&scpsys MT8173_POWER_DOMAIN_MM>;
|
||||
clocks = <&mmsys CLK_MM_DISP_UFOE>;
|
||||
};
|
||||
|
||||
dsi0: dsi@1401b000 {
|
||||
/* See mediatek,dsi.txt for details */
|
||||
};
|
||||
|
||||
dpi0: dpi@1401d000 {
|
||||
/* See mediatek,dpi.txt for details */
|
||||
};
|
||||
|
||||
mutex: mutex@14020000 {
|
||||
compatible = "mediatek,mt8173-disp-mutex";
|
||||
reg = <0 0x14020000 0 0x1000>;
|
||||
interrupts = <GIC_SPI 169 IRQ_TYPE_LEVEL_LOW>;
|
||||
power-domains = <&scpsys MT8173_POWER_DOMAIN_MM>;
|
||||
clocks = <&mmsys CLK_MM_MUTEX_32K>;
|
||||
};
|
||||
|
||||
od@14023000 {
|
||||
compatible = "mediatek,mt8173-disp-od";
|
||||
reg = <0 0x14023000 0 0x1000>;
|
||||
power-domains = <&scpsys MT8173_POWER_DOMAIN_MM>;
|
||||
clocks = <&mmsys CLK_MM_DISP_OD>;
|
||||
};
|
|
@ -0,0 +1,76 @@
|
|||
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
|
||||
%YAML 1.2
|
||||
---
|
||||
$id: http://devicetree.org/schemas/display/mediatek/mediatek,dither.yaml#
|
||||
$schema: http://devicetree.org/meta-schemas/core.yaml#
|
||||
|
||||
title: Mediatek display dither processor
|
||||
|
||||
maintainers:
|
||||
- Chun-Kuang Hu <chunkuang.hu@kernel.org>
|
||||
- Philipp Zabel <p.zabel@pengutronix.de>
|
||||
|
||||
description: |
|
||||
Mediatek display dither processor, namely DITHER, works by approximating
|
||||
unavailable colors with available colors and by mixing and matching available
|
||||
colors to mimic unavailable ones.
|
||||
DITHER device node must be siblings to the central MMSYS_CONFIG node.
|
||||
For a description of the MMSYS_CONFIG binding, see
|
||||
Documentation/devicetree/bindings/arm/mediatek/mediatek,mmsys.yaml
|
||||
for details.
|
||||
|
||||
properties:
|
||||
compatible:
|
||||
oneOf:
|
||||
- items:
|
||||
- const: mediatek,mt8183-disp-dither
|
||||
- items:
|
||||
- enum:
|
||||
- mediatek,mt8192-disp-dither
|
||||
- mediatek,mt8195-disp-dither
|
||||
- enum:
|
||||
- mediatek,mt8183-disp-dither
|
||||
|
||||
reg:
|
||||
maxItems: 1
|
||||
|
||||
interrupts:
|
||||
maxItems: 1
|
||||
|
||||
power-domains:
|
||||
description: A phandle and PM domain specifier as defined by bindings of
|
||||
the power controller specified by phandle. See
|
||||
Documentation/devicetree/bindings/power/power-domain.yaml for details.
|
||||
|
||||
clocks:
|
||||
items:
|
||||
- description: DITHER Clock
|
||||
|
||||
mediatek,gce-client-reg:
|
||||
description: The register of client driver can be configured by gce with
|
||||
4 arguments defined in this property, such as phandle of gce, subsys id,
|
||||
register offset and size. Each GCE subsys id is mapping to a client
|
||||
defined in the header include/dt-bindings/gce/<chip>-gce.h.
|
||||
$ref: /schemas/types.yaml#/definitions/phandle-array
|
||||
maxItems: 1
|
||||
|
||||
required:
|
||||
- compatible
|
||||
- reg
|
||||
- interrupts
|
||||
- power-domains
|
||||
- clocks
|
||||
|
||||
additionalProperties: false
|
||||
|
||||
examples:
|
||||
- |
|
||||
|
||||
dither0: dither@14012000 {
|
||||
compatible = "mediatek,mt8183-disp-dither";
|
||||
reg = <0 0x14012000 0 0x1000>;
|
||||
interrupts = <GIC_SPI 235 IRQ_TYPE_LEVEL_LOW>;
|
||||
power-domains = <&spm MT8183_POWER_DOMAIN_DISP>;
|
||||
clocks = <&mmsys CLK_MM_DISP_DITHER0>;
|
||||
mediatek,gce-client-reg = <&gce SUBSYS_1401XXXX 0x2000 0x1000>;
|
||||
};
|
|
@ -0,0 +1,71 @@
|
|||
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
|
||||
%YAML 1.2
|
||||
---
|
||||
$id: http://devicetree.org/schemas/display/mediatek/mediatek,dsc.yaml#
|
||||
$schema: http://devicetree.org/meta-schemas/core.yaml#
|
||||
|
||||
title: mediatek display DSC controller
|
||||
|
||||
maintainers:
|
||||
- Chun-Kuang Hu <chunkuang.hu@kernel.org>
|
||||
- Philipp Zabel <p.zabel@pengutronix.de>
|
||||
|
||||
description: |
|
||||
The DSC standard is a specification of the algorithms used for
|
||||
compressing and decompressing image display streams, including
|
||||
the specification of the syntax and semantics of the compressed
|
||||
video bit stream. DSC is designed for real-time systems with
|
||||
real-time compression, transmission, decompression and Display.
|
||||
|
||||
properties:
|
||||
compatible:
|
||||
oneOf:
|
||||
- items:
|
||||
- const: mediatek,mt8195-disp-dsc
|
||||
|
||||
reg:
|
||||
maxItems: 1
|
||||
|
||||
interrupts:
|
||||
maxItems: 1
|
||||
|
||||
clocks:
|
||||
items:
|
||||
- description: DSC Wrapper Clock
|
||||
|
||||
power-domains:
|
||||
description: A phandle and PM domain specifier as defined by bindings of
|
||||
the power controller specified by phandle. See
|
||||
Documentation/devicetree/bindings/power/power-domain.yaml for details.
|
||||
|
||||
mediatek,gce-client-reg:
|
||||
description:
|
||||
The register of client driver can be configured by gce with 4 arguments
|
||||
defined in this property, such as phandle of gce, subsys id,
|
||||
register offset and size.
|
||||
Each subsys id is mapping to a base address of display function blocks
|
||||
register which is defined in the gce header
|
||||
include/dt-bindings/gce/<chip>-gce.h.
|
||||
$ref: /schemas/types.yaml#/definitions/phandle-array
|
||||
maxItems: 1
|
||||
|
||||
required:
|
||||
- compatible
|
||||
- reg
|
||||
- interrupts
|
||||
- power-domains
|
||||
- clocks
|
||||
|
||||
additionalProperties: false
|
||||
|
||||
examples:
|
||||
- |
|
||||
|
||||
dsc0: disp_dsc_wrap@1c009000 {
|
||||
compatible = "mediatek,mt8195-disp-dsc";
|
||||
reg = <0 0x1c009000 0 0x1000>;
|
||||
interrupts = <GIC_SPI 645 IRQ_TYPE_LEVEL_HIGH 0>;
|
||||
power-domains = <&spm MT8195_POWER_DOMAIN_VDOSYS0>;
|
||||
clocks = <&vdosys0 CLK_VDO0_DSC_WRAP0>;
|
||||
mediatek,gce-client-reg = <&gce1 SUBSYS_1c00XXXX 0x9000 0x1000>;
|
||||
};
|
|
@ -0,0 +1,147 @@
|
|||
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
|
||||
%YAML 1.2
|
||||
---
|
||||
$id: http://devicetree.org/schemas/display/mediatek/mediatek,ethdr.yaml#
|
||||
$schema: http://devicetree.org/meta-schemas/core.yaml#
|
||||
|
||||
title: Mediatek Ethdr Device Tree Bindings
|
||||
|
||||
maintainers:
|
||||
- Chun-Kuang Hu <chunkuang.hu@kernel.org>
|
||||
- Philipp Zabel <p.zabel@pengutronix.de>
|
||||
|
||||
description: |
|
||||
ETHDR is designed for HDR video and graphics conversion in the external display path.
|
||||
It handles multiple HDR input types and performs tone mapping, color space/color
|
||||
format conversion, and then combine different layers, output the required HDR or
|
||||
SDR signal to the subsequent display path. This engine is composed of two video
|
||||
frontends, two graphic frontends, one video backend and a mixer. ETHDR has two
|
||||
DMA function blocks, DS and ADL. These two function blocks read the pre-programmed
|
||||
registers from DRAM and set them to HW in the v-blanking period.
|
||||
|
||||
properties:
|
||||
compatible:
|
||||
items:
|
||||
- const: mediatek,mt8195-disp-ethdr
|
||||
reg:
|
||||
maxItems: 7
|
||||
reg-names:
|
||||
items:
|
||||
- const: mixer
|
||||
- const: vdo_fe0
|
||||
- const: vdo_fe1
|
||||
- const: gfx_fe0
|
||||
- const: gfx_fe1
|
||||
- const: vdo_be
|
||||
- const: adl_ds
|
||||
interrupts:
|
||||
minItems: 1
|
||||
iommus:
|
||||
description: The compatible property is DMA function blocks.
|
||||
Should point to the respective IOMMU block with master port as argument,
|
||||
see Documentation/devicetree/bindings/iommu/mediatek,iommu.yaml for
|
||||
details.
|
||||
minItems: 1
|
||||
maxItems: 2
|
||||
clocks:
|
||||
items:
|
||||
- description: mixer clock
|
||||
- description: video frontend 0 clock
|
||||
- description: video frontend 1 clock
|
||||
- description: graphic frontend 0 clock
|
||||
- description: graphic frontend 1 clock
|
||||
- description: video backend clock
|
||||
- description: autodownload and menuload clock
|
||||
- description: video frontend 0 async clock
|
||||
- description: video frontend 1 async clock
|
||||
- description: graphic frontend 0 async clock
|
||||
- description: graphic frontend 1 async clock
|
||||
- description: video backend async clock
|
||||
- description: ethdr top clock
|
||||
clock-names:
|
||||
items:
|
||||
- const: mixer
|
||||
- const: vdo_fe0
|
||||
- const: vdo_fe1
|
||||
- const: gfx_fe0
|
||||
- const: gfx_fe1
|
||||
- const: vdo_be
|
||||
- const: adl_ds
|
||||
- const: vdo_fe0_async
|
||||
- const: vdo_fe1_async
|
||||
- const: gfx_fe0_async
|
||||
- const: gfx_fe1_async
|
||||
- const: vdo_be_async
|
||||
- const: ethdr_top
|
||||
power-domains:
|
||||
maxItems: 1
|
||||
resets:
|
||||
maxItems: 5
|
||||
mediatek,gce-client-reg:
|
||||
$ref: /schemas/types.yaml#/definitions/phandle-array
|
||||
description: The register of display function block to be set by gce.
|
||||
There are 4 arguments in this property, gce node, subsys id, offset and
|
||||
register size. The subsys id is defined in the gce header of each chips
|
||||
include/include/dt-bindings/gce/<chip>-gce.h, mapping to the register of
|
||||
display function block.
|
||||
|
||||
required:
|
||||
- compatible
|
||||
- reg
|
||||
- clocks
|
||||
- clock-names
|
||||
- interrupts
|
||||
- power-domains
|
||||
|
||||
additionalProperties: false
|
||||
|
||||
examples:
|
||||
- |
|
||||
|
||||
disp_ethdr@1c114000 {
|
||||
compatible = "mediatek,mt8195-disp-ethdr";
|
||||
reg = <0 0x1c114000 0 0x1000>,
|
||||
<0 0x1c115000 0 0x1000>,
|
||||
<0 0x1c117000 0 0x1000>,
|
||||
<0 0x1c119000 0 0x1000>,
|
||||
<0 0x1c11A000 0 0x1000>,
|
||||
<0 0x1c11B000 0 0x1000>,
|
||||
<0 0x1c11C000 0 0x1000>;
|
||||
reg-names = "mixer", "vdo_fe0", "vdo_fe1", "gfx_fe0", "gfx_fe1",
|
||||
"vdo_be", "adl_ds";
|
||||
mediatek,gce-client-reg = <&gce0 SUBSYS_1c11XXXX 0x4000 0x1000>,
|
||||
<&gce0 SUBSYS_1c11XXXX 0x5000 0x1000>,
|
||||
<&gce0 SUBSYS_1c11XXXX 0x7000 0x1000>,
|
||||
<&gce0 SUBSYS_1c11XXXX 0x9000 0x1000>,
|
||||
<&gce0 SUBSYS_1c11XXXX 0xA000 0x1000>,
|
||||
<&gce0 SUBSYS_1c11XXXX 0xB000 0x1000>,
|
||||
<&gce0 SUBSYS_1c11XXXX 0xC000 0x1000>;
|
||||
clocks = <&vdosys1 CLK_VDO1_DISP_MIXER>,
|
||||
<&vdosys1 CLK_VDO1_HDR_VDO_FE0>,
|
||||
<&vdosys1 CLK_VDO1_HDR_VDO_FE1>,
|
||||
<&vdosys1 CLK_VDO1_HDR_GFX_FE0>,
|
||||
<&vdosys1 CLK_VDO1_HDR_GFX_FE1>,
|
||||
<&vdosys1 CLK_VDO1_HDR_VDO_BE>,
|
||||
<&vdosys1 CLK_VDO1_26M_SLOW>,
|
||||
<&vdosys1 CLK_VDO1_HDR_VDO_FE0_DL_ASYNC>,
|
||||
<&vdosys1 CLK_VDO1_HDR_VDO_FE1_DL_ASYNC>,
|
||||
<&vdosys1 CLK_VDO1_HDR_GFX_FE0_DL_ASYNC>,
|
||||
<&vdosys1 CLK_VDO1_HDR_GFX_FE1_DL_ASYNC>,
|
||||
<&vdosys1 CLK_VDO1_HDR_VDO_BE_DL_ASYNC>,
|
||||
<&topckgen CLK_TOP_ETHDR_SEL>;
|
||||
clock-names = "mixer", "vdo_fe0", "vdo_fe1", "gfx_fe0", "gfx_fe1",
|
||||
"vdo_be", "adl_ds", "vdo_fe0_async", "vdo_fe1_async",
|
||||
"gfx_fe0_async", "gfx_fe1_async","vdo_be_async",
|
||||
"ethdr_top";
|
||||
power-domains = <&spm MT8195_POWER_DOMAIN_VDOSYS1>;
|
||||
iommus = <&iommu_vpp M4U_PORT_L3_HDR_DS>,
|
||||
<&iommu_vpp M4U_PORT_L3_HDR_ADL>;
|
||||
interrupts = <GIC_SPI 517 IRQ_TYPE_LEVEL_HIGH 0>; /* disp mixer */
|
||||
resets = <&vdosys1 MT8195_VDOSYS1_SW1_RST_B_HDR_VDO_FE0_DL_ASYNC>,
|
||||
<&vdosys1 MT8195_VDOSYS1_SW1_RST_B_HDR_VDO_FE1_DL_ASYNC>,
|
||||
<&vdosys1 MT8195_VDOSYS1_SW1_RST_B_HDR_GFX_FE0_DL_ASYNC>,
|
||||
<&vdosys1 MT8195_VDOSYS1_SW1_RST_B_HDR_GFX_FE1_DL_ASYNC>,
|
||||
<&vdosys1 MT8195_VDOSYS1_SW1_RST_B_HDR_VDO_BE_DL_ASYNC>;
|
||||
};
|
||||
|
||||
...
|
|
@ -0,0 +1,77 @@
|
|||
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
|
||||
%YAML 1.2
|
||||
---
|
||||
$id: http://devicetree.org/schemas/display/mediatek/mediatek,gamma.yaml#
|
||||
$schema: http://devicetree.org/meta-schemas/core.yaml#
|
||||
|
||||
title: Mediatek display gamma correction
|
||||
|
||||
maintainers:
|
||||
- Chun-Kuang Hu <chunkuang.hu@kernel.org>
|
||||
- Philipp Zabel <p.zabel@pengutronix.de>
|
||||
|
||||
description: |
|
||||
Mediatek display gamma correction, namely GAMMA, provides a nonlinear
|
||||
operation used to adjust luminance in display system.
|
||||
GAMMA device node must be siblings to the central MMSYS_CONFIG node.
|
||||
For a description of the MMSYS_CONFIG binding, see
|
||||
Documentation/devicetree/bindings/arm/mediatek/mediatek,mmsys.yaml
|
||||
for details.
|
||||
|
||||
properties:
|
||||
compatible:
|
||||
oneOf:
|
||||
- items:
|
||||
- const: mediatek,mt8173-disp-gamma
|
||||
- items:
|
||||
- const: mediatek,mt8183-disp-gamma
|
||||
- items:
|
||||
- enum:
|
||||
- mediatek,mt8192-disp-gamma
|
||||
- mediatek,mt8195-disp-gamma
|
||||
- enum:
|
||||
- mediatek,mt8183-disp-gamma
|
||||
|
||||
reg:
|
||||
maxItems: 1
|
||||
|
||||
interrupts:
|
||||
maxItems: 1
|
||||
|
||||
power-domains:
|
||||
description: A phandle and PM domain specifier as defined by bindings of
|
||||
the power controller specified by phandle. See
|
||||
Documentation/devicetree/bindings/power/power-domain.yaml for details.
|
||||
|
||||
clocks:
|
||||
items:
|
||||
- description: GAMMA Clock
|
||||
|
||||
mediatek,gce-client-reg:
|
||||
description: The register of client driver can be configured by gce with
|
||||
4 arguments defined in this property, such as phandle of gce, subsys id,
|
||||
register offset and size. Each GCE subsys id is mapping to a client
|
||||
defined in the header include/dt-bindings/gce/<chip>-gce.h.
|
||||
$ref: /schemas/types.yaml#/definitions/phandle-array
|
||||
maxItems: 1
|
||||
|
||||
required:
|
||||
- compatible
|
||||
- reg
|
||||
- interrupts
|
||||
- power-domains
|
||||
- clocks
|
||||
|
||||
additionalProperties: false
|
||||
|
||||
examples:
|
||||
- |
|
||||
|
||||
gamma@14016000 {
|
||||
compatible = "mediatek,mt8173-disp-gamma";
|
||||
reg = <0 0x14016000 0 0x1000>;
|
||||
interrupts = <GIC_SPI 190 IRQ_TYPE_LEVEL_LOW>;
|
||||
power-domains = <&scpsys MT8173_POWER_DOMAIN_MM>;
|
||||
clocks = <&mmsys CLK_MM_DISP_GAMMA>;
|
||||
mediatek,gce-client-reg = <&gce SUBSYS_1401XXXX 0x6000 0x1000>;
|
||||
};
|
|
@ -0,0 +1,110 @@
|
|||
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
|
||||
%YAML 1.2
|
||||
---
|
||||
$id: http://devicetree.org/schemas/display/mediatek/mediatek,merge.yaml#
|
||||
$schema: http://devicetree.org/meta-schemas/core.yaml#
|
||||
|
||||
title: Mediatek display merge
|
||||
|
||||
maintainers:
|
||||
- Chun-Kuang Hu <chunkuang.hu@kernel.org>
|
||||
- Philipp Zabel <p.zabel@pengutronix.de>
|
||||
|
||||
description: |
|
||||
Mediatek display merge, namely MERGE, is used to merge two slice-per-line
|
||||
inputs into one side-by-side output.
|
||||
MERGE device node must be siblings to the central MMSYS_CONFIG node.
|
||||
For a description of the MMSYS_CONFIG binding, see
|
||||
Documentation/devicetree/bindings/arm/mediatek/mediatek,mmsys.yaml
|
||||
for details.
|
||||
|
||||
properties:
|
||||
compatible:
|
||||
oneOf:
|
||||
- items:
|
||||
- const: mediatek,mt8173-disp-merge
|
||||
- items:
|
||||
- const: mediatek,mt8195-disp-merge
|
||||
|
||||
reg:
|
||||
maxItems: 1
|
||||
|
||||
interrupts:
|
||||
maxItems: 1
|
||||
|
||||
power-domains:
|
||||
description: A phandle and PM domain specifier as defined by bindings of
|
||||
the power controller specified by phandle. See
|
||||
Documentation/devicetree/bindings/power/power-domain.yaml for details.
|
||||
|
||||
clocks:
|
||||
maxItems: 2
|
||||
items:
|
||||
- description: MERGE Clock
|
||||
- description: MERGE Async Clock
|
||||
Controlling the synchronous process between MERGE and other display
|
||||
function blocks cross clock domain.
|
||||
|
||||
clock-names:
|
||||
maxItems: 2
|
||||
items:
|
||||
- const: merge
|
||||
- const: merge_async
|
||||
|
||||
mediatek,merge-fifo-en:
|
||||
description:
|
||||
The setting of merge fifo is mainly provided for the display latency
|
||||
buffer to ensure that the back-end panel display data will not be
|
||||
underrun, a little more data is needed in the fifo.
|
||||
According to the merge fifo settings, when the water level is detected
|
||||
to be insufficient, it will trigger RDMA sending ultra and preulra
|
||||
command to SMI to speed up the data rate.
|
||||
type: boolean
|
||||
|
||||
mediatek,merge-mute:
|
||||
description: Support mute function. Mute the content of merge output.
|
||||
type: boolean
|
||||
|
||||
mediatek,gce-client-reg:
|
||||
description: The register of client driver can be configured by gce with
|
||||
4 arguments defined in this property, such as phandle of gce, subsys id,
|
||||
register offset and size. Each GCE subsys id is mapping to a client
|
||||
defined in the header include/dt-bindings/gce/<chip>-gce.h.
|
||||
$ref: /schemas/types.yaml#/definitions/phandle-array
|
||||
maxItems: 1
|
||||
|
||||
resets:
|
||||
description: reset controller
|
||||
See Documentation/devicetree/bindings/reset/reset.txt for details.
|
||||
maxItems: 1
|
||||
|
||||
required:
|
||||
- compatible
|
||||
- reg
|
||||
- power-domains
|
||||
- clocks
|
||||
|
||||
additionalProperties: false
|
||||
|
||||
examples:
|
||||
- |
|
||||
|
||||
merge@14017000 {
|
||||
compatible = "mediatek,mt8173-disp-merge";
|
||||
reg = <0 0x14017000 0 0x1000>;
|
||||
power-domains = <&spm MT8173_POWER_DOMAIN_MM>;
|
||||
clocks = <&mmsys CLK_MM_DISP_MERGE>;
|
||||
};
|
||||
|
||||
merge5: disp_vpp_merge5@1c110000 {
|
||||
compatible = "mediatek,mt8195-disp-merge";
|
||||
reg = <0 0x1c110000 0 0x1000>;
|
||||
interrupts = <GIC_SPI 507 IRQ_TYPE_LEVEL_HIGH 0>;
|
||||
clocks = <&vdosys1 CLK_VDO1_VPP_MERGE4>,
|
||||
<&vdosys1 CLK_VDO1_MERGE4_DL_ASYNC>;
|
||||
clock-names = "merge","merge_async";
|
||||
power-domains = <&spm MT8195_POWER_DOMAIN_VDOSYS1>;
|
||||
mediatek,gce-client-reg = <&gce1 SUBSYS_1c11XXXX 0x0000 0x1000>;
|
||||
mediatek,merge-fifo-en = <1>;
|
||||
resets = <&vdosys1 MT8195_VDOSYS1_SW0_RST_B_MERGE4_DL_ASYNC>;
|
||||
};
|
|
@ -0,0 +1,83 @@
|
|||
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
|
||||
%YAML 1.2
|
||||
---
|
||||
$id: http://devicetree.org/schemas/display/mediatek/mediatek,mutex.yaml#
|
||||
$schema: http://devicetree.org/meta-schemas/core.yaml#
|
||||
|
||||
title: Mediatek mutex
|
||||
|
||||
maintainers:
|
||||
- Chun-Kuang Hu <chunkuang.hu@kernel.org>
|
||||
- Philipp Zabel <p.zabel@pengutronix.de>
|
||||
|
||||
description: |
|
||||
Mediatek mutex, namely MUTEX, is used to send the triggers signals called
|
||||
Start Of Frame (SOF) / End Of Frame (EOF) to each sub-modules on the display
|
||||
data path or MDP data path.
|
||||
In some SoC, such as mt2701, MUTEX could be a hardware mutex which protects
|
||||
the shadow register.
|
||||
MUTEX device node must be siblings to the central MMSYS_CONFIG node.
|
||||
For a description of the MMSYS_CONFIG binding, see
|
||||
Documentation/devicetree/bindings/arm/mediatek/mediatek,mmsys.yaml
|
||||
for details.
|
||||
|
||||
properties:
|
||||
compatible:
|
||||
oneOf:
|
||||
- items:
|
||||
- const: mediatek,mt2701-disp-mutex
|
||||
- items:
|
||||
- const: mediatek,mt2712-disp-mutex
|
||||
- items:
|
||||
- const: mediatek,mt8167-disp-mutex
|
||||
- items:
|
||||
- const: mediatek,mt8173-disp-mutex
|
||||
- items:
|
||||
- const: mediatek,mt8183-disp-mutex
|
||||
- items:
|
||||
- const: mediatek,mt8192-disp-mutex
|
||||
- items:
|
||||
- const: mediatek,mt8195-disp-mutex
|
||||
reg:
|
||||
maxItems: 1
|
||||
|
||||
interrupts:
|
||||
maxItems: 1
|
||||
|
||||
power-domains:
|
||||
description: A phandle and PM domain specifier as defined by bindings of
|
||||
the power controller specified by phandle. See
|
||||
Documentation/devicetree/bindings/power/power-domain.yaml for details.
|
||||
|
||||
clocks:
|
||||
items:
|
||||
- description: MUTEX Clock
|
||||
|
||||
mediatek,gce-events:
|
||||
description:
|
||||
The event id which is mapping to the specific hardware event signal
|
||||
to gce. The event id is defined in the gce header
|
||||
include/dt-bindings/gce/<chip>-gce.h of each chips.
|
||||
$ref: /schemas/types.yaml#/definitions/phandle-array
|
||||
|
||||
required:
|
||||
- compatible
|
||||
- reg
|
||||
- interrupts
|
||||
- power-domains
|
||||
- clocks
|
||||
|
||||
additionalProperties: false
|
||||
|
||||
examples:
|
||||
- |
|
||||
|
||||
mutex: mutex@14020000 {
|
||||
compatible = "mediatek,mt8173-disp-mutex";
|
||||
reg = <0 0x14020000 0 0x1000>;
|
||||
interrupts = <GIC_SPI 169 IRQ_TYPE_LEVEL_LOW>;
|
||||
power-domains = <&spm MT8173_POWER_DOMAIN_MM>;
|
||||
clocks = <&mmsys CLK_MM_MUTEX_32K>;
|
||||
mediatek,gce-events = <CMDQ_EVENT_MUTEX0_STREAM_EOF>,
|
||||
<CMDQ_EVENT_MUTEX1_STREAM_EOF>;
|
||||
};
|
|
@ -0,0 +1,53 @@
|
|||
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
|
||||
%YAML 1.2
|
||||
---
|
||||
$id: http://devicetree.org/schemas/display/mediatek/mediatek,od.yaml#
|
||||
$schema: http://devicetree.org/meta-schemas/core.yaml#
|
||||
|
||||
title: Mediatek display overdirve
|
||||
|
||||
maintainers:
|
||||
- Chun-Kuang Hu <chunkuang.hu@kernel.org>
|
||||
- Philipp Zabel <p.zabel@pengutronix.de>
|
||||
|
||||
description: |
|
||||
Mediatek display overdrive, namely OD, increases the transition values
|
||||
of pixels between consecutive frames to make LCD rotate faster.
|
||||
OD device node must be siblings to the central MMSYS_CONFIG node.
|
||||
For a description of the MMSYS_CONFIG binding, see
|
||||
Documentation/devicetree/bindings/arm/mediatek/mediatek,mmsys.yaml
|
||||
for details.
|
||||
|
||||
properties:
|
||||
compatible:
|
||||
oneOf:
|
||||
- items:
|
||||
- const: mediatek,mt2712-disp-od
|
||||
- items:
|
||||
- const: mediatek,mt8173-disp-od
|
||||
|
||||
reg:
|
||||
maxItems: 1
|
||||
|
||||
interrupts:
|
||||
maxItems: 1
|
||||
|
||||
clocks:
|
||||
items:
|
||||
- description: OD Clock
|
||||
|
||||
required:
|
||||
- compatible
|
||||
- reg
|
||||
- clocks
|
||||
|
||||
additionalProperties: false
|
||||
|
||||
examples:
|
||||
- |
|
||||
|
||||
od@14023000 {
|
||||
compatible = "mediatek,mt8173-disp-od";
|
||||
reg = <0 0x14023000 0 0x1000>;
|
||||
clocks = <&mmsys CLK_MM_DISP_OD>;
|
||||
};
|
|
@ -0,0 +1,78 @@
|
|||
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
|
||||
%YAML 1.2
|
||||
---
|
||||
$id: http://devicetree.org/schemas/display/mediatek/mediatek,ovl-2l.yaml#
|
||||
$schema: http://devicetree.org/meta-schemas/core.yaml#
|
||||
|
||||
title: Mediatek display overlay 2 layer
|
||||
|
||||
maintainers:
|
||||
- Chun-Kuang Hu <chunkuang.hu@kernel.org>
|
||||
- Philipp Zabel <p.zabel@pengutronix.de>
|
||||
|
||||
description: |
|
||||
Mediatek display overlay 2 layer, namely OVL-2L, provides 2 more layer
|
||||
for OVL.
|
||||
OVL-2L device node must be siblings to the central MMSYS_CONFIG node.
|
||||
For a description of the MMSYS_CONFIG binding, see
|
||||
Documentation/devicetree/bindings/arm/mediatek/mediatek,mmsys.yaml
|
||||
for details.
|
||||
|
||||
properties:
|
||||
compatible:
|
||||
oneOf:
|
||||
- items:
|
||||
- const: mediatek,mt8183-disp-ovl-2l
|
||||
- items:
|
||||
- const: mediatek,mt8192-disp-ovl-2l
|
||||
|
||||
reg:
|
||||
maxItems: 1
|
||||
|
||||
interrupts:
|
||||
maxItems: 1
|
||||
|
||||
power-domains:
|
||||
description: A phandle and PM domain specifier as defined by bindings of
|
||||
the power controller specified by phandle. See
|
||||
Documentation/devicetree/bindings/power/power-domain.yaml for details.
|
||||
|
||||
clocks:
|
||||
items:
|
||||
- description: OVL-2L Clock
|
||||
|
||||
iommus:
|
||||
description:
|
||||
This property should point to the respective IOMMU block with master port as argument,
|
||||
see Documentation/devicetree/bindings/iommu/mediatek,iommu.yaml for details.
|
||||
|
||||
mediatek,gce-client-reg:
|
||||
description: The register of client driver can be configured by gce with
|
||||
4 arguments defined in this property, such as phandle of gce, subsys id,
|
||||
register offset and size. Each GCE subsys id is mapping to a client
|
||||
defined in the header include/dt-bindings/gce/<chip>-gce.h.
|
||||
$ref: /schemas/types.yaml#/definitions/phandle-array
|
||||
maxItems: 1
|
||||
|
||||
required:
|
||||
- compatible
|
||||
- reg
|
||||
- interrupts
|
||||
- power-domains
|
||||
- clocks
|
||||
- iommus
|
||||
|
||||
additionalProperties: false
|
||||
|
||||
examples:
|
||||
- |
|
||||
|
||||
ovl_2l0: ovl@14009000 {
|
||||
compatible = "mediatek,mt8183-disp-ovl-2l";
|
||||
reg = <0 0x14009000 0 0x1000>;
|
||||
interrupts = <GIC_SPI 226 IRQ_TYPE_LEVEL_LOW>;
|
||||
power-domains = <&spm MT8183_POWER_DOMAIN_DISP>;
|
||||
clocks = <&mmsys CLK_MM_DISP_OVL0_2L>;
|
||||
iommus = <&iommu M4U_PORT_DISP_2L_OVL0_LARB0>;
|
||||
mediatek,gce-client-reg = <&gce SUBSYS_1400XXXX 0x9000 0x1000>;
|
||||
};
|
|
@@ -0,0 +1,93 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/mediatek/mediatek,ovl.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Mediatek display overlay

maintainers:
  - Chun-Kuang Hu <chunkuang.hu@kernel.org>
  - Philipp Zabel <p.zabel@pengutronix.de>

description: |
  Mediatek display overlay, namely OVL, can do alpha blending from
  the memory.
  OVL device node must be siblings to the central MMSYS_CONFIG node.
  For a description of the MMSYS_CONFIG binding, see
  Documentation/devicetree/bindings/arm/mediatek/mediatek,mmsys.yaml
  for details.

properties:
  compatible:
    oneOf:
      - items:
          - const: mediatek,mt2701-disp-ovl
      - items:
          - const: mediatek,mt8173-disp-ovl
      - items:
          - const: mediatek,mt8183-disp-ovl
      - items:
          - const: mediatek,mt8192-disp-ovl
      - items:
          - enum:
              - mediatek,mt7623-disp-ovl
              - mediatek,mt2712-disp-ovl
          - enum:
              - mediatek,mt2701-disp-ovl
      - items:
          - enum:
              - mediatek,mt8195-disp-ovl
          - enum:
              - mediatek,mt8183-disp-ovl

  reg:
    maxItems: 1

  interrupts:
    maxItems: 1

  power-domains:
    description: A phandle and PM domain specifier as defined by bindings of
      the power controller specified by phandle. See
      Documentation/devicetree/bindings/power/power-domain.yaml for details.

  clocks:
    items:
      - description: OVL Clock

  iommus:
    description:
      This property should point to the respective IOMMU block with master port as argument,
      see Documentation/devicetree/bindings/iommu/mediatek,iommu.yaml for details.

  mediatek,gce-client-reg:
    description: The register of client driver can be configured by gce with
      4 arguments defined in this property, such as phandle of gce, subsys id,
      register offset and size. Each GCE subsys id is mapping to a client
      defined in the header include/dt-bindings/gce/<chip>-gce.h.
    $ref: /schemas/types.yaml#/definitions/phandle-array
    maxItems: 1

required:
  - compatible
  - reg
  - interrupts
  - power-domains
  - clocks
  - iommus

additionalProperties: false

examples:
  - |

    ovl0: ovl@1400c000 {
        compatible = "mediatek,mt8173-disp-ovl";
        reg = <0 0x1400c000 0 0x1000>;
        interrupts = <GIC_SPI 180 IRQ_TYPE_LEVEL_LOW>;
        power-domains = <&scpsys MT8173_POWER_DOMAIN_MM>;
        clocks = <&mmsys CLK_MM_DISP_OVL0>;
        iommus = <&iommu M4U_PORT_DISP_OVL0>;
        mediatek,gce-client-reg = <&gce SUBSYS_1400XXXX 0xc000 0x1000>;
    };
@@ -0,0 +1,69 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/mediatek/mediatek,postmask.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Mediatek display postmask

maintainers:
  - Chun-Kuang Hu <chunkuang.hu@kernel.org>
  - Philipp Zabel <p.zabel@pengutronix.de>

description: |
  Mediatek display postmask, namely POSTMASK, provides round corner pattern
  generation.
  POSTMASK device node must be siblings to the central MMSYS_CONFIG node.
  For a description of the MMSYS_CONFIG binding, see
  Documentation/devicetree/bindings/arm/mediatek/mediatek,mmsys.yaml
  for details.

properties:
  compatible:
    oneOf:
      - items:
          - const: mediatek,mt8192-disp-postmask

  reg:
    maxItems: 1

  interrupts:
    maxItems: 1

  power-domains:
    description: A phandle and PM domain specifier as defined by bindings of
      the power controller specified by phandle. See
      Documentation/devicetree/bindings/power/power-domain.yaml for details.

  clocks:
    items:
      - description: POSTMASK Clock

  mediatek,gce-client-reg:
    description: The register of client driver can be configured by gce with
      4 arguments defined in this property, such as phandle of gce, subsys id,
      register offset and size. Each GCE subsys id is mapping to a client
      defined in the header include/dt-bindings/gce/<chip>-gce.h.
    $ref: /schemas/types.yaml#/definitions/phandle-array
    maxItems: 1

required:
  - compatible
  - reg
  - interrupts
  - power-domains
  - clocks

additionalProperties: false

examples:
  - |

    postmask0: postmask@1400d000 {
        compatible = "mediatek,mt8192-disp-postmask";
        reg = <0 0x1400d000 0 0x1000>;
        interrupts = <GIC_SPI 262 IRQ_TYPE_LEVEL_HIGH 0>;
        power-domains = <&scpsys MT8192_POWER_DOMAIN_DISP>;
        clocks = <&mmsys CLK_MM_DISP_POSTMASK0>;
        mediatek,gce-client-reg = <&gce SUBSYS_1400XXXX 0xd000 0x1000>;
    };
@@ -0,0 +1,107 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/mediatek/mediatek,rdma.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Mediatek Read Direct Memory Access

maintainers:
  - Chun-Kuang Hu <chunkuang.hu@kernel.org>
  - Philipp Zabel <p.zabel@pengutronix.de>

description: |
  Mediatek Read Direct Memory Access(RDMA) component used to read the
  data into DMA. It provides real time data to the back-end panel
  driver, such as DSI, DPI and DP_INTF.
  It contains one line buffer to store the sufficient pixel data.
  RDMA device node must be siblings to the central MMSYS_CONFIG node.
  For a description of the MMSYS_CONFIG binding, see
  Documentation/devicetree/bindings/arm/mediatek/mediatek,mmsys.yaml
  for details.

properties:
  compatible:
    oneOf:
      - items:
          - const: mediatek,mt2701-disp-rdma
      - items:
          - const: mediatek,mt8173-disp-rdma
      - items:
          - const: mediatek,mt8183-disp-rdma
      - items:
          - const: mediatek,mt8195-disp-rdma
      - items:
          - enum:
              - mediatek,mt7623-disp-rdma
              - mediatek,mt2712-disp-rdma
          - enum:
              - mediatek,mt2701-disp-rdma
      - items:
          - enum:
              - mediatek,mt8192-disp-rdma
          - enum:
              - mediatek,mt8183-disp-rdma

  reg:
    maxItems: 1

  interrupts:
    maxItems: 1

  power-domains:
    description: A phandle and PM domain specifier as defined by bindings of
      the power controller specified by phandle. See
      Documentation/devicetree/bindings/power/power-domain.yaml for details.

  clocks:
    items:
      - description: RDMA Clock

  iommus:
    description:
      This property should point to the respective IOMMU block with master port as argument,
      see Documentation/devicetree/bindings/iommu/mediatek,iommu.yaml for details.

  mediatek,rdma-fifo-size:
    description:
      rdma fifo size may be different even in same SOC, add this property to the
      corresponding rdma.
      The value below is the Max value which defined in hardware data sheet
      mediatek,rdma-fifo-size of mt8173-rdma0 is 8K
      mediatek,rdma-fifo-size of mt8183-rdma0 is 5K
      mediatek,rdma-fifo-size of mt8183-rdma1 is 2K
    $ref: /schemas/types.yaml#/definitions/uint32
    enum: [8192, 5120, 2048]

  mediatek,gce-client-reg:
    description: The register of client driver can be configured by gce with
      4 arguments defined in this property, such as phandle of gce, subsys id,
      register offset and size. Each GCE subsys id is mapping to a client
      defined in the header include/dt-bindings/gce/<chip>-gce.h.
    $ref: /schemas/types.yaml#/definitions/phandle-array
    maxItems: 1

required:
  - compatible
  - reg
  - interrupts
  - power-domains
  - clocks
  - iommus

additionalProperties: false

examples:
  - |

    rdma0: rdma@1400e000 {
        compatible = "mediatek,mt8173-disp-rdma";
        reg = <0 0x1400e000 0 0x1000>;
        interrupts = <GIC_SPI 182 IRQ_TYPE_LEVEL_LOW>;
        power-domains = <&scpsys MT8173_POWER_DOMAIN_MM>;
        clocks = <&mmsys CLK_MM_DISP_RDMA0>;
        iommus = <&iommu M4U_PORT_DISP_RDMA0>;
        mediatek,rdma-fifo-size = <8192>;
        mediatek,gce-client-reg = <&gce SUBSYS_1400XXXX 0xe000 0x1000>;
    };
@@ -0,0 +1,58 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/mediatek/mediatek,split.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Mediatek display split

maintainers:
  - Chun-Kuang Hu <chunkuang.hu@kernel.org>
  - Philipp Zabel <p.zabel@pengutronix.de>

description: |
  Mediatek display split, namely SPLIT, is used to split stream to two
  encoders.
  SPLIT device node must be siblings to the central MMSYS_CONFIG node.
  For a description of the MMSYS_CONFIG binding, see
  Documentation/devicetree/bindings/arm/mediatek/mediatek,mmsys.yaml
  for details.

properties:
  compatible:
    oneOf:
      - items:
          - const: mediatek,mt8173-disp-split

  reg:
    maxItems: 1

  interrupts:
    maxItems: 1

  power-domains:
    description: A phandle and PM domain specifier as defined by bindings of
      the power controller specified by phandle. See
      Documentation/devicetree/bindings/power/power-domain.yaml for details.

  clocks:
    items:
      - description: SPLIT Clock

required:
  - compatible
  - reg
  - power-domains
  - clocks

additionalProperties: false

examples:
  - |

    split0: split@14018000 {
        compatible = "mediatek,mt8173-disp-split";
        reg = <0 0x14018000 0 0x1000>;
        power-domains = <&spm MT8173_POWER_DOMAIN_MM>;
        clocks = <&mmsys CLK_MM_DISP_SPLIT0>;
    };
@@ -0,0 +1,61 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/mediatek/mediatek,ufoe.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Mediatek display UFOe

maintainers:
  - Chun-Kuang Hu <chunkuang.hu@kernel.org>
  - Philipp Zabel <p.zabel@pengutronix.de>

description: |
  Mediatek display UFOe stands for Unified Frame Optimization engine.
  UFOe can cut the data rate for DSI port which may lead to reduce power
  consumption.
  UFOe device node must be siblings to the central MMSYS_CONFIG node.
  For a description of the MMSYS_CONFIG binding, see
  Documentation/devicetree/bindings/arm/mediatek/mediatek,mmsys.yaml
  for details.

properties:
  compatible:
    oneOf:
      - items:
          - const: mediatek,mt8173-disp-ufoe

  reg:
    maxItems: 1

  interrupts:
    maxItems: 1

  power-domains:
    description: A phandle and PM domain specifier as defined by bindings of
      the power controller specified by phandle. See
      Documentation/devicetree/bindings/power/power-domain.yaml for details.

  clocks:
    items:
      - description: UFOe Clock

required:
  - compatible
  - reg
  - interrupts
  - power-domains
  - clocks

additionalProperties: false

examples:
  - |

    ufoe@1401a000 {
        compatible = "mediatek,mt8173-disp-ufoe";
        reg = <0 0x1401a000 0 0x1000>;
        interrupts = <GIC_SPI 191 IRQ_TYPE_LEVEL_LOW>;
        power-domains = <&scpsys MT8173_POWER_DOMAIN_MM>;
        clocks = <&mmsys CLK_MM_DISP_UFOE>;
    };
@@ -0,0 +1,76 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/mediatek/mediatek,wdma.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Mediatek Write Direct Memory Access

maintainers:
  - Chun-Kuang Hu <chunkuang.hu@kernel.org>
  - Philipp Zabel <p.zabel@pengutronix.de>

description: |
  Mediatek Write Direct Memory Access(WDMA) component used to write
  the data into DMA.
  WDMA device node must be siblings to the central MMSYS_CONFIG node.
  For a description of the MMSYS_CONFIG binding, see
  Documentation/devicetree/bindings/arm/mediatek/mediatek,mmsys.yaml
  for details.

properties:
  compatible:
    oneOf:
      - items:
          - const: mediatek,mt8173-disp-wdma

  reg:
    maxItems: 1

  interrupts:
    maxItems: 1

  power-domains:
    description: A phandle and PM domain specifier as defined by bindings of
      the power controller specified by phandle. See
      Documentation/devicetree/bindings/power/power-domain.yaml for details.

  clocks:
    items:
      - description: WDMA Clock

  iommus:
    description:
      This property should point to the respective IOMMU block with master port as argument,
      see Documentation/devicetree/bindings/iommu/mediatek,iommu.yaml for details.

  mediatek,gce-client-reg:
    description: The register of client driver can be configured by gce with
      4 arguments defined in this property, such as phandle of gce, subsys id,
      register offset and size. Each GCE subsys id is mapping to a client
      defined in the header include/dt-bindings/gce/<chip>-gce.h.
    $ref: /schemas/types.yaml#/definitions/phandle-array
    maxItems: 1

required:
  - compatible
  - reg
  - interrupts
  - power-domains
  - clocks
  - iommus

additionalProperties: false

examples:
  - |

    wdma0: wdma@14011000 {
        compatible = "mediatek,mt8173-disp-wdma";
        reg = <0 0x14011000 0 0x1000>;
        interrupts = <GIC_SPI 185 IRQ_TYPE_LEVEL_LOW>;
        power-domains = <&scpsys MT8173_POWER_DOMAIN_MM>;
        clocks = <&mmsys CLK_MM_DISP_WDMA0>;
        iommus = <&iommu M4U_PORT_DISP_WDMA0>;
        mediatek,gce-client-reg = <&gce SUBSYS_1401XXXX 0x1000 0x1000>;
    };
@@ -21,6 +21,7 @@ properties:
      - qcom,sc7280-edp
      - qcom,sc8180x-dp
      - qcom,sc8180x-edp
      - qcom,sm8350-dp

  reg:
    items:
@@ -0,0 +1,219 @@
# SPDX-License-Identifier: GPL-2.0-only or BSD-2-Clause
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/msm/dpu-msm8998.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Qualcomm Display DPU dt properties for MSM8998 target

maintainers:
  - AngeloGioacchino Del Regno <angelogioacchino.delregno@somainline.org>

description: |
  Device tree bindings for MSM Mobile Display Subsystem(MDSS) that encapsulates
  sub-blocks like DPU display controller, DSI and DP interfaces etc. Device tree
  bindings of MDSS and DPU are mentioned for MSM8998 target.

properties:
  compatible:
    items:
      - const: qcom,msm8998-mdss

  reg:
    maxItems: 1

  reg-names:
    const: mdss

  power-domains:
    maxItems: 1

  clocks:
    items:
      - description: Display AHB clock
      - description: Display AXI clock
      - description: Display core clock

  clock-names:
    items:
      - const: iface
      - const: bus
      - const: core

  interrupts:
    maxItems: 1

  interrupt-controller: true

  "#address-cells": true

  "#size-cells": true

  "#interrupt-cells":
    const: 1

  iommus:
    items:
      - description: Phandle to apps_smmu node with SID mask for Hard-Fail port0

  ranges: true

patternProperties:
  "^display-controller@[0-9a-f]+$":
    type: object
    description: Node containing the properties of DPU.

    properties:
      compatible:
        items:
          - const: qcom,msm8998-dpu

      reg:
        items:
          - description: Address offset and size for mdp register set
          - description: Address offset and size for regdma register set
          - description: Address offset and size for vbif register set
          - description: Address offset and size for non-realtime vbif register set

      reg-names:
        items:
          - const: mdp
          - const: regdma
          - const: vbif
          - const: vbif_nrt

      clocks:
        items:
          - description: Display ahb clock
          - description: Display axi clock
          - description: Display mem-noc clock
          - description: Display core clock
          - description: Display vsync clock

      clock-names:
        items:
          - const: iface
          - const: bus
          - const: mnoc
          - const: core
          - const: vsync

      interrupts:
        maxItems: 1

      power-domains:
        maxItems: 1

      operating-points-v2: true
      ports:
        $ref: /schemas/graph.yaml#/properties/ports
        description: |
          Contains the list of output ports from DPU device. These ports
          connect to interfaces that are external to the DPU hardware,
          such as DSI, DP etc. Each output port contains an endpoint that
          describes how it is connected to an external interface.

        properties:
          port@0:
            $ref: /schemas/graph.yaml#/properties/port
            description: DPU_INTF1 (DSI1)

          port@1:
            $ref: /schemas/graph.yaml#/properties/port
            description: DPU_INTF2 (DSI2)

        required:
          - port@0
          - port@1

    required:
      - compatible
      - reg
      - reg-names
      - clocks
      - interrupts
      - power-domains
      - operating-points-v2
      - ports

required:
  - compatible
  - reg
  - reg-names
  - power-domains
  - clocks
  - interrupts
  - interrupt-controller
  - iommus
  - ranges

additionalProperties: false

examples:
  - |
    #include <dt-bindings/clock/qcom,mmcc-msm8998.h>
    #include <dt-bindings/interrupt-controller/arm-gic.h>
    #include <dt-bindings/power/qcom-rpmpd.h>

    mdss: display-subsystem@c900000 {
        compatible = "qcom,msm8998-mdss";
        reg = <0x0c900000 0x1000>;
        reg-names = "mdss";

        clocks = <&mmcc MDSS_AHB_CLK>,
                 <&mmcc MDSS_AXI_CLK>,
                 <&mmcc MDSS_MDP_CLK>;
        clock-names = "iface", "bus", "core";

        #address-cells = <1>;
        #interrupt-cells = <1>;
        #size-cells = <1>;

        interrupts = <GIC_SPI 83 IRQ_TYPE_LEVEL_HIGH>;
        interrupt-controller;
        iommus = <&mmss_smmu 0>;

        power-domains = <&mmcc MDSS_GDSC>;
        ranges;

        display-controller@c901000 {
            compatible = "qcom,msm8998-dpu";
            reg = <0x0c901000 0x8f000>,
                  <0x0c9a8e00 0xf0>,
                  <0x0c9b0000 0x2008>,
                  <0x0c9b8000 0x1040>;
            reg-names = "mdp", "regdma", "vbif", "vbif_nrt";

            clocks = <&mmcc MDSS_AHB_CLK>,
                     <&mmcc MDSS_AXI_CLK>,
                     <&mmcc MNOC_AHB_CLK>,
                     <&mmcc MDSS_MDP_CLK>,
                     <&mmcc MDSS_VSYNC_CLK>;
            clock-names = "iface", "bus", "mnoc", "core", "vsync";

            interrupt-parent = <&mdss>;
            interrupts = <0>;
            operating-points-v2 = <&mdp_opp_table>;
            power-domains = <&rpmpd MSM8998_VDDMX>;

            ports {
                #address-cells = <1>;
                #size-cells = <0>;

                port@0 {
                    reg = <0>;
                    dpu_intf1_out: endpoint {
                        remote-endpoint = <&dsi0_in>;
                    };
                };

                port@1 {
                    reg = <1>;
                    dpu_intf2_out: endpoint {
                        remote-endpoint = <&dsi1_in>;
                    };
                };
            };
        };
    };
...
@@ -0,0 +1,215 @@
# SPDX-License-Identifier: GPL-2.0-only or BSD-2-Clause
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/msm/dpu-qcm2290.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Qualcomm Display DPU dt properties for QCM2290 target

maintainers:
  - Loic Poulain <loic.poulain@linaro.org>

description: |
  Device tree bindings for MSM Mobile Display Subsystem(MDSS) that encapsulates
  sub-blocks like DPU display controller and DSI. Device tree bindings of MDSS
  and DPU are mentioned for QCM2290 target.

properties:
  compatible:
    items:
      - const: qcom,qcm2290-mdss

  reg:
    maxItems: 1

  reg-names:
    const: mdss

  power-domains:
    maxItems: 1

  clocks:
    items:
      - description: Display AHB clock from gcc
      - description: Display AXI clock
      - description: Display core clock

  clock-names:
    items:
      - const: iface
      - const: bus
      - const: core

  interrupts:
    maxItems: 1

  interrupt-controller: true

  "#address-cells": true

  "#size-cells": true

  "#interrupt-cells":
    const: 1

  iommus:
    items:
      - description: Phandle to apps_smmu node with SID mask for Hard-Fail port0
      - description: Phandle to apps_smmu node with SID mask for Hard-Fail port1

  ranges: true

  interconnects:
    items:
      - description: Interconnect path specifying the port ids for data bus

  interconnect-names:
    const: mdp0-mem

patternProperties:
  "^display-controller@[0-9a-f]+$":
    type: object
    description: Node containing the properties of DPU.

    properties:
      compatible:
        items:
          - const: qcom,qcm2290-dpu

      reg:
        items:
          - description: Address offset and size for mdp register set
          - description: Address offset and size for vbif register set

      reg-names:
        items:
          - const: mdp
          - const: vbif

      clocks:
        items:
          - description: Display AXI clock from gcc
          - description: Display AHB clock from dispcc
          - description: Display core clock from dispcc
          - description: Display lut clock from dispcc
          - description: Display vsync clock from dispcc

      clock-names:
        items:
          - const: bus
          - const: iface
          - const: core
          - const: lut
          - const: vsync

      interrupts:
        maxItems: 1

      power-domains:
        maxItems: 1

      operating-points-v2: true

      ports:
        $ref: /schemas/graph.yaml#/properties/ports
        description: |
          Contains the list of output ports from DPU device. These ports
          connect to interfaces that are external to the DPU hardware,
          such as DSI. Each output port contains an endpoint that
          describes how it is connected to an external interface.

        properties:
          port@0:
            $ref: /schemas/graph.yaml#/properties/port
            description: DPU_INTF1 (DSI1)

        required:
          - port@0

    required:
      - compatible
      - reg
      - reg-names
      - clocks
      - interrupts
      - power-domains
      - operating-points-v2
      - ports

required:
  - compatible
  - reg
  - reg-names
  - power-domains
  - clocks
  - interrupts
  - interrupt-controller
  - iommus
  - ranges

additionalProperties: false

examples:
  - |
    #include <dt-bindings/clock/qcom,dispcc-qcm2290.h>
    #include <dt-bindings/clock/qcom,gcc-qcm2290.h>
    #include <dt-bindings/interrupt-controller/arm-gic.h>
    #include <dt-bindings/interconnect/qcom,qcm2290.h>
    #include <dt-bindings/power/qcom-rpmpd.h>

    mdss: mdss@5e00000 {
        #address-cells = <1>;
        #size-cells = <1>;
        compatible = "qcom,qcm2290-mdss", "qcom,mdss";
        reg = <0x05e00000 0x1000>;
        reg-names = "mdss";
        power-domains = <&dispcc MDSS_GDSC>;
        clocks = <&gcc GCC_DISP_AHB_CLK>,
                 <&gcc GCC_DISP_HF_AXI_CLK>,
                 <&dispcc DISP_CC_MDSS_MDP_CLK>;
        clock-names = "iface", "bus", "core";

        interrupts = <GIC_SPI 186 IRQ_TYPE_LEVEL_HIGH>;
        interrupt-controller;
        #interrupt-cells = <1>;

        interconnects = <&mmrt_virt MASTER_MDP0 &bimc SLAVE_EBI1>;
        interconnect-names = "mdp0-mem";

        iommus = <&apps_smmu 0x420 0x2>,
                 <&apps_smmu 0x421 0x0>;
        ranges;

        mdss_mdp: mdp@5e01000 {
            compatible = "qcom,qcm2290-dpu";
            reg = <0x05e01000 0x8f000>,
                  <0x05eb0000 0x2008>;
            reg-names = "mdp", "vbif";

            clocks = <&gcc GCC_DISP_HF_AXI_CLK>,
                     <&dispcc DISP_CC_MDSS_AHB_CLK>,
                     <&dispcc DISP_CC_MDSS_MDP_CLK>,
                     <&dispcc DISP_CC_MDSS_MDP_LUT_CLK>,
                     <&dispcc DISP_CC_MDSS_VSYNC_CLK>;
            clock-names = "bus", "iface", "core", "lut", "vsync";

            operating-points-v2 = <&mdp_opp_table>;
            power-domains = <&rpmpd QCM2290_VDDCX>;

            interrupt-parent = <&mdss>;
            interrupts = <0>;

            ports {
                #address-cells = <1>;
                #size-cells = <0>;

                port@0 {
                    reg = <0>;
                    dpu_intf1_out: endpoint {
                        remote-endpoint = <&dsi0_in>;
                    };
                };
            };
        };
    };
...
@@ -14,8 +14,9 @@ allOf:

properties:
  compatible:
    items:
      - const: qcom,mdss-dsi-ctrl
    enum:
      - qcom,mdss-dsi-ctrl
      - qcom,dsi-ctrl-6g-qcm2290

  reg:
    maxItems: 1
@@ -35,6 +35,38 @@ properties:
      Connected to DSI0_MIPI_DSI_PLL_VDDA0P9 pin for sc7180 target and
      connected to VDDA_MIPI_DSI_0_PLL_0P9 pin for sdm845 target

  qcom,phy-rescode-offset-top:
    $ref: /schemas/types.yaml#/definitions/int8-array
    minItems: 5
    maxItems: 5
    description:
      Integer array of offset for pull-up legs rescode for all five lanes.
      To offset the drive strength from the calibrated value in an increasing
      manner, -32 is the weakest and +31 is the strongest.
    items:
      minimum: -32
      maximum: 31

  qcom,phy-rescode-offset-bot:
    $ref: /schemas/types.yaml#/definitions/int8-array
    minItems: 5
    maxItems: 5
    description:
      Integer array of offset for pull-down legs rescode for all five lanes.
      To offset the drive strength from the calibrated value in a decreasing
      manner, -32 is the weakest and +31 is the strongest.
    items:
      minimum: -32
      maximum: 31

  qcom,phy-drive-ldo-level:
    $ref: "/schemas/types.yaml#/definitions/uint32"
    description:
      The PHY LDO has an amplitude tuning feature to adjust the LDO output
      for the HSTX drive. Use supported levels (mV) to offset the drive level
      from the default value.
    enum: [ 375, 400, 425, 450, 475, 500 ]

required:
  - compatible
  - reg

@@ -64,5 +96,9 @@ examples:
        clocks = <&dispcc DISP_CC_MDSS_AHB_CLK>,
                 <&rpmhcc RPMH_CXO_CLK>;
        clock-names = "iface", "ref";

        qcom,phy-rescode-offset-top = /bits/ 8 <0 0 0 0 0>;
        qcom,phy-rescode-offset-bot = /bits/ 8 <0 0 0 0 0>;
        qcom,phy-drive-ldo-level = <400>;
    };
...
@@ -11,13 +11,23 @@ maintainers:
  - Thierry Reding <thierry.reding@gmail.com>

allOf:
  - $ref: lvds.yaml#
  - $ref: panel-common.yaml#
  - $ref: /schemas/display/lvds.yaml/#

select:
  properties:
    compatible:
      contains:
        const: advantech,idk-1110wr

  required:
    - compatible

properties:
  compatible:
    items:
      - const: advantech,idk-1110wr
      - {} # panel-lvds, but not listed here to avoid false select
      - const: panel-lvds

  data-mapping:
    const: jeida-24

@@ -35,6 +45,11 @@ additionalProperties: false

required:
  - compatible
  - data-mapping
  - width-mm
  - height-mm
  - panel-timing
  - port

examples:
  - |+
@@ -11,15 +11,26 @@ maintainers:
  - Thierry Reding <thierry.reding@gmail.com>

allOf:
  - $ref: lvds.yaml#
  - $ref: panel-common.yaml#
  - $ref: /schemas/display/lvds.yaml/#

select:
  properties:
    compatible:
      contains:
        const: innolux,ee101ia-01d

  required:
    - compatible

properties:
  compatible:
    items:
      - const: innolux,ee101ia-01d
      - {} # panel-lvds, but not listed here to avoid false select
      - const: panel-lvds

  backlight: true
  data-mapping: true
  enable-gpios: true
  power-supply: true
  width-mm: true

@@ -27,5 +38,13 @@ properties:
  panel-timing: true
  port: true

required:
  - compatible
  - data-mapping
  - width-mm
  - height-mm
  - panel-timing
  - port

additionalProperties: false
...
@@ -11,13 +11,23 @@ maintainers:
  - Thierry Reding <thierry.reding@gmail.com>

allOf:
  - $ref: lvds.yaml#
  - $ref: panel-common.yaml#
  - $ref: /schemas/display/lvds.yaml/#

select:
  properties:
    compatible:
      contains:
        const: mitsubishi,aa104xd12

  required:
    - compatible

properties:
  compatible:
    items:
      - const: mitsubishi,aa104xd12
      - {} # panel-lvds, but not listed here to avoid false select
      - const: panel-lvds

  vcc-supply:
    description: Reference to the regulator powering the panel VCC pins.

@@ -39,6 +49,11 @@ additionalProperties: false
required:
  - compatible
  - vcc-supply
  - data-mapping
  - width-mm
  - height-mm
  - panel-timing
  - port

examples:
  - |+
@@ -11,13 +11,23 @@ maintainers:
  - Thierry Reding <thierry.reding@gmail.com>

allOf:
  - $ref: lvds.yaml#
  - $ref: panel-common.yaml#
  - $ref: /schemas/display/lvds.yaml/#

select:
  properties:
    compatible:
      contains:
        const: mitsubishi,aa121td01

  required:
    - compatible

properties:
  compatible:
    items:
      - const: mitsubishi,aa121td01
      - {} # panel-lvds, but not listed here to avoid false select
      - const: panel-lvds

  vcc-supply:
    description: Reference to the regulator powering the panel VCC pins.

@@ -39,6 +49,11 @@ additionalProperties: false
required:
  - compatible
  - vcc-supply
  - data-mapping
  - width-mm
  - height-mm
  - panel-timing
  - port

examples:
  - |+
@@ -0,0 +1,57 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/panel/panel-lvds.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Generic LVDS Display Panel Device Tree Bindings

maintainers:
  - Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com>
  - Thierry Reding <thierry.reding@gmail.com>

allOf:
  - $ref: panel-common.yaml#
  - $ref: /schemas/display/lvds.yaml/#

select:
  properties:
    compatible:
      contains:
        const: panel-lvds

  not:
    properties:
      compatible:
        contains:
          enum:
            - advantech,idk-1110wr
            - advantech,idk-2121wr
            - innolux,ee101ia-01d
            - mitsubishi,aa104xd12
            - mitsubishi,aa121td01
            - sgd,gktw70sdae4se

  required:
    - compatible

properties:
  compatible:
    items:
      - enum:
          - auo,b101ew05
          - tbs,a711-panel

      - const: panel-lvds

unevaluatedProperties: false

required:
  - compatible
  - data-mapping
  - width-mm
  - height-mm
  - panel-timing
  - port

...
@@ -0,0 +1,126 @@
# SPDX-License-Identifier: (GPL-2.0-only or BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/panel/panel-mipi-dbi-spi.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: MIPI DBI SPI Panel

maintainers:
  - Noralf Trønnes <noralf@tronnes.org>

description: |
  This binding is for display panels using a MIPI DBI compatible controller
  in SPI mode.

  The MIPI Alliance Standard for Display Bus Interface defines the electrical
  and logical interfaces for display controllers historically used in mobile
  phones. The standard defines 4 display architecture types and this binding is
  for type 1 which has full frame memory. There are 3 interface types in the
  standard and type C is the serial interface.

  The standard defines the following interface signals for type C:
  - Power:
    - Vdd: Power supply for display module
    - Vddi: Logic level supply for interface signals
    Combined into one in this binding called: power-supply
  - Interface:
    - CSx: Chip select
    - SCL: Serial clock
    - Dout: Serial out
    - Din: Serial in
    - SDA: Bidirectional in/out
    - D/CX: Data/command selection, high=data, low=command
      Called dc-gpios in this binding.
    - RESX: Reset when low
      Called reset-gpios in this binding.

  The type C interface has 3 options:

    - Option 1: 9-bit mode and D/CX as the 9th bit
      | Command | the next command or following data |
      |<0><D7><D6><D5><D4><D3><D2><D1><D0>|<D/CX><D7><D6><D5><D4><D3><D2><D1><D0>|

    - Option 2: 16-bit mode and D/CX as a 9th bit
      | Command or data |
      |<X><X><X><X><X><X><X><D/CX><D7><D6><D5><D4><D3><D2><D1><D0>|

    - Option 3: 8-bit mode and D/CX as a separate interface line
      | Command or data |
      |<D7><D6><D5><D4><D3><D2><D1><D0>|

  The panel resolution is specified using the panel-timing node properties
  hactive (width) and vactive (height). The other mandatory panel-timing
  properties should be set to zero except clock-frequency which can be
  optionally set to inform about the actual pixel clock frequency.

  If the panel is wired to the controller at an offset specify this using
  hback-porch (x-offset) and vback-porch (y-offset).

allOf:
  - $ref: panel-common.yaml#
  - $ref: /schemas/spi/spi-peripheral-props.yaml#

properties:
  compatible:
    items:
      - enum:
          - sainsmart18
      - const: panel-mipi-dbi-spi

  write-only:
    type: boolean
    description:
      Controller is not readable (ie. Din (MISO on the SPI interface) is not
      wired up).

  dc-gpios:
    maxItems: 1
    description: |
      Controller data/command selection (D/CX) in 4-line SPI mode.
      If not set, the controller is in 3-line SPI mode.

required:
  - compatible
  - reg
  - panel-timing

unevaluatedProperties: false

examples:
  - |
    #include <dt-bindings/gpio/gpio.h>

    spi {
        #address-cells = <1>;
        #size-cells = <0>;

        display@0{
            compatible = "sainsmart18", "panel-mipi-dbi-spi";
            reg = <0>;
            spi-max-frequency = <40000000>;

            dc-gpios = <&gpio 24 GPIO_ACTIVE_HIGH>;
            reset-gpios = <&gpio 25 GPIO_ACTIVE_HIGH>;
            write-only;

            backlight = <&backlight>;

            width-mm = <35>;
            height-mm = <28>;

            panel-timing {
                hactive = <160>;
                vactive = <128>;
                hback-porch = <0>;
                vback-porch = <0>;
                clock-frequency = <0>;
                hfront-porch = <0>;
                hsync-len = <0>;
                vfront-porch = <0>;
                vsync-len = <0>;
            };
        };
    };

...
@@ -222,6 +222,8 @@ properties:
      - logictechno,lttd800480070-l6wh-rt
        # Mitsubishi "AA070MC01 7.0" WVGA TFT LCD panel
      - mitsubishi,aa070mc01-ca1
        # Multi-Inno Technology Co.,Ltd MI0700S4T-6 7" 800x480 TFT Resistive Touch Module
      - multi-inno,mi0700s4t-6
        # Multi-Inno Technology Co.,Ltd MI1010AIT-1CP 10.1" 1280x800 LVDS IPS Cap Touch Mod.
      - multi-inno,mi1010ait-1cp
        # NEC LCD Technologies, Ltd. 12.1" WXGA (1280x800) LVDS TFT LCD panel

@@ -282,6 +284,8 @@ properties:
      - sharp,lq101k1ly04
        # Sharp 12.3" (2400x1600 pixels) TFT LCD panel
      - sharp,lq123p1jx31
        # Sharp 14" (1920x1080 pixels) TFT LCD panel
      - sharp,lq140m1jw46
        # Sharp LS020B1DD01D 2.0" HQVGA TFT LCD panel
      - sharp,ls020b1dd01d
        # Shelly SCA07010-BFN-LNN 7.0" WVGA TFT LCD panel
@@ -11,13 +11,23 @@ maintainers:
  - Thierry Reding <thierry.reding@gmail.com>

allOf:
  - $ref: lvds.yaml#
  - $ref: panel-common.yaml#
  - $ref: /schemas/display/lvds.yaml/#

select:
  properties:
    compatible:
      contains:
        const: sgd,gktw70sdae4se

  required:
    - compatible

properties:
  compatible:
    items:
      - const: sgd,gktw70sdae4se
      - {} # panel-lvds, but not listed here to avoid false select
      - const: panel-lvds

  data-mapping:
    const: jeida-18

@@ -35,6 +45,11 @@ additionalProperties: false

required:
  - compatible
  - port
  - data-mapping
  - width-mm
  - height-mm
  - panel-timing

examples:
  - |+
@@ -4,7 +4,12 @@
$id: http://devicetree.org/schemas/display/panel/sony,acx424akp.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Sony ACX424AKP 4" 480x864 AMOLED panel
title: Sony ACX424AKP/ACX424AKM 4" 480x864/480x854 AMOLED panel

description: The Sony ACX424AKP and ACX424AKM are panels built around
  the Novatek NT35560 display controller. The only difference is that
  the AKM is configured to use 10 pixels less in the Y axis than the
  AKP.

maintainers:
  - Linus Walleij <linus.walleij@linaro.org>

@@ -14,7 +19,9 @@ allOf:

properties:
  compatible:
    const: sony,acx424akp
    enum:
      - sony,acx424akp
      - sony,acx424akm
  reg: true
  reset-gpios: true
  vddi-supply:
@@ -8,6 +8,7 @@ title: Solomon SSD1307 OLED Controller Framebuffer

maintainers:
  - Maxime Ripard <mripard@kernel.org>
  - Javier Martinez Canillas <javierm@redhat.com>

properties:
  compatible:
@@ -159,6 +159,21 @@ allOf:
        power-domains:
          maxItems: 1
        sram-supply: false
  - if:
      properties:
        compatible:
          contains:
            const: rockchip,rk3568-mali
    then:
      properties:
        clocks:
          minItems: 2
        clock-names:
          items:
            - const: gpu
            - const: bus
      required:
        - clock-names

examples:
  - |
@@ -502,6 +502,15 @@ pcim_iomap()
Not using these wrappers may make drivers unusable on certain platforms with
stricter rules for mapping I/O memory.

Generalizing Access to System and I/O Memory
============================================

.. kernel-doc:: include/linux/iosys-map.h
   :doc: overview

.. kernel-doc:: include/linux/iosys-map.h
   :internal:

Public Functions Provided
=========================

@@ -128,15 +128,6 @@ Kernel Functions and Structures Reference
.. kernel-doc:: include/linux/dma-buf.h
   :internal:

Buffer Mapping Helpers
~~~~~~~~~~~~~~~~~~~~~~

.. kernel-doc:: include/linux/dma-buf-map.h
   :doc: overview

.. kernel-doc:: include/linux/dma-buf-map.h
   :internal:

Reservation Objects
-------------------

@@ -75,6 +75,12 @@ update it, its value is mostly useless. The DRM core prints it to the
kernel log at initialization time and passes it to userspace through the
DRM_IOCTL_VERSION ioctl.

Module Initialization
---------------------

.. kernel-doc:: include/drm/drm_module.h
   :doc: overview

Managing Ownership of the Framebuffer Aperture
----------------------------------------------

@@ -232,34 +232,34 @@ HDCP Helper Functions Reference
Display Port Helper Functions Reference
=======================================

.. kernel-doc:: drivers/gpu/drm/drm_dp_helper.c
.. kernel-doc:: drivers/gpu/drm/dp/drm_dp.c
   :doc: dp helpers

.. kernel-doc:: include/drm/drm_dp_helper.h
.. kernel-doc:: include/drm/dp/drm_dp_helper.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_dp_helper.c
.. kernel-doc:: drivers/gpu/drm/dp/drm_dp.c
   :export:

Display Port CEC Helper Functions Reference
===========================================

.. kernel-doc:: drivers/gpu/drm/drm_dp_cec.c
.. kernel-doc:: drivers/gpu/drm/dp/drm_dp_cec.c
   :doc: dp cec helpers

.. kernel-doc:: drivers/gpu/drm/drm_dp_cec.c
.. kernel-doc:: drivers/gpu/drm/dp/drm_dp_cec.c
   :export:

Display Port Dual Mode Adaptor Helper Functions Reference
=========================================================

.. kernel-doc:: drivers/gpu/drm/drm_dp_dual_mode_helper.c
.. kernel-doc:: drivers/gpu/drm/dp/drm_dp_dual_mode_helper.c
   :doc: dp dual mode helpers

.. kernel-doc:: include/drm/drm_dp_dual_mode_helper.h
.. kernel-doc:: include/drm/dp/drm_dp_dual_mode_helper.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_dp_dual_mode_helper.c
.. kernel-doc:: drivers/gpu/drm/dp/drm_dp_dual_mode_helper.c
   :export:

Display Port MST Helpers

@@ -268,19 +268,19 @@ Display Port MST Helpers
Overview
--------

.. kernel-doc:: drivers/gpu/drm/drm_dp_mst_topology.c
.. kernel-doc:: drivers/gpu/drm/dp/drm_dp_mst_topology.c
   :doc: dp mst helper

.. kernel-doc:: drivers/gpu/drm/drm_dp_mst_topology.c
.. kernel-doc:: drivers/gpu/drm/dp/drm_dp_mst_topology.c
   :doc: Branch device and port refcounting

Functions Reference
-------------------

.. kernel-doc:: include/drm/drm_dp_mst_helper.h
.. kernel-doc:: include/drm/dp/drm_dp_mst_helper.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_dp_mst_topology.c
.. kernel-doc:: drivers/gpu/drm/dp/drm_dp_mst_topology.c
   :export:

Topology Lifetime Internals

@@ -289,7 +289,7 @@ Topology Lifetime Internals
These functions aren't exported to drivers, but are documented here to help make
the MST topology helpers easier to understand

.. kernel-doc:: drivers/gpu/drm/drm_dp_mst_topology.c
.. kernel-doc:: drivers/gpu/drm/dp/drm_dp_mst_topology.c
   :functions: drm_dp_mst_topology_try_get_mstb drm_dp_mst_topology_get_mstb
               drm_dp_mst_topology_put_mstb
               drm_dp_mst_topology_try_get_port drm_dp_mst_topology_get_port
@@ -423,12 +423,12 @@ Connector Functions Reference
Writeback Connectors
--------------------

.. kernel-doc:: include/drm/drm_writeback.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_writeback.c
   :doc: overview

.. kernel-doc:: include/drm/drm_writeback.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_writeback.c
   :export:

@@ -8,7 +8,7 @@ the very dynamic nature of many of that data, managing graphics memory
efficiently is thus crucial for the graphics stack and plays a central
role in the DRM infrastructure.

The DRM core includes two memory managers, namely Translation Table Maps
The DRM core includes two memory managers, namely Translation Table Manager
(TTM) and Graphics Execution Manager (GEM). TTM was the first DRM memory
manager to be developed and tried to be a one-size-fits-them all
solution. It provides a single userspace API to accommodate the need of
@@ -539,6 +539,7 @@ GuC ABI
.. kernel-doc:: drivers/gpu/drm/i915/gt/uc/abi/guc_communication_mmio_abi.h
.. kernel-doc:: drivers/gpu/drm/i915/gt/uc/abi/guc_communication_ctb_abi.h
.. kernel-doc:: drivers/gpu/drm/i915/gt/uc/abi/guc_actions_abi.h
.. kernel-doc:: drivers/gpu/drm/i915/gt/uc/abi/guc_klvs_abi.h

HuC
---
@@ -222,7 +222,7 @@ Convert drivers to use drm_fbdev_generic_setup()
Most drivers can use drm_fbdev_generic_setup(). Drivers have to implement
atomic modesetting and GEM vmap support. Historically, generic fbdev emulation
expected the framebuffer in system memory or system-like memory. By employing
struct dma_buf_map, drivers with framebuffers in I/O memory can be supported
struct iosys_map, drivers with framebuffers in I/O memory can be supported
as well.

Contact: Maintainer of the driver you plan to convert

@@ -234,13 +234,35 @@ Reimplement functions in drm_fbdev_fb_ops without fbdev

A number of callback functions in drm_fbdev_fb_ops could benefit from
being rewritten without dependencies on the fbdev module. Some of the
helpers could further benefit from using struct dma_buf_map instead of
helpers could further benefit from using struct iosys_map instead of
raw pointers.

Contact: Thomas Zimmermann <tzimmermann@suse.de>, Daniel Vetter

Level: Advanced

Benchmark and optimize blitting and format-conversion function
--------------------------------------------------------------

Drawing to display memory quickly is crucial for many applications'
performance.

On at least x86-64, sys_imageblit() is significantly slower than
cfb_imageblit(), even though both use the same blitting algorithm and
the latter is written for I/O memory. It turns out that cfb_imageblit()
uses movl instructions, while sys_imageblit apparently does not. This
seems to be a problem with gcc's optimizer. DRM's format-conversion
helpers might be subject to similar issues.

Benchmark and optimize fbdev's sys_() helpers and DRM's format-conversion
helpers. In cases that can be further optimized, maybe implement a different
algorithm. For micro-optimizations, use movl/movq instructions explicitly.
That might possibly require architecture-specific helpers (e.g., storel()
storeq()).

Contact: Thomas Zimmermann <tzimmermann@suse.de>

Level: Intermediate

drm_framebuffer_funcs and drm_mode_config_funcs.fb_create cleanup
-----------------------------------------------------------------

@@ -410,19 +432,19 @@ Contact: Emil Velikov, respective driver maintainers

Level: Intermediate

Use struct dma_buf_map throughout codebase
------------------------------------------
Use struct iosys_map throughout codebase
----------------------------------------

Pointers to shared device memory are stored in struct dma_buf_map. Each
Pointers to shared device memory are stored in struct iosys_map. Each
instance knows whether it refers to system or I/O memory. Most of the DRM-wide
interface have been converted to use struct dma_buf_map, but implementations
interface have been converted to use struct iosys_map, but implementations
often still use raw pointers.

The task is to use struct dma_buf_map where it makes sense.
The task is to use struct iosys_map where it makes sense.

* Memory managers should use struct dma_buf_map for dma-buf-imported buffers.
* TTM might benefit from using struct dma_buf_map internally.
* Framebuffer copying and blitting helpers should operate on struct dma_buf_map.
* Memory managers should use struct iosys_map for dma-buf-imported buffers.
* TTM might benefit from using struct iosys_map internally.
* Framebuffer copying and blitting helpers should operate on struct iosys_map.

Contact: Thomas Zimmermann <tzimmermann@suse.de>, Christian König, Daniel Vetter

@@ -443,6 +465,21 @@ Contact: Thomas Zimmermann <tzimmermann@suse.de>

Level: Intermediate

Request memory regions in all drivers
-------------------------------------

Go through all drivers and add code to request the memory regions that the
driver uses. This requires adding calls to request_mem_region(),
pci_request_region() or similar functions. Use helpers for managed cleanup
where possible.

Drivers are pretty bad at doing this and there used to be conflicts among
DRM and fbdev drivers. Still, it's the correct thing to do.

Contact: Thomas Zimmermann <tzimmermann@suse.de>

Level: Starter


Core refactorings
=================

@@ -460,8 +497,12 @@ This is a really varied tasks with lots of little bits and pieces:
  achieved by using an IPI to the local processor.

* There's a massive confusion of different panic handlers. DRM fbdev emulation
  helpers have one, but on top of that the fbcon code itself also has one. We
  need to make sure that they stop fighting over each another.
  helpers had their own (long removed), but on top of that the fbcon code itself
  also has one. We need to make sure that they stop fighting over each other.
  This is worked around by checking ``oops_in_progress`` at various entry points
  into the DRM fbdev emulation helpers. A much cleaner approach here would be to
  switch fbcon to the `threaded printk support
  <https://lwn.net/Articles/800946/>`_.

* ``drm_can_sleep()`` is a mess. It hides real bugs in normal operations and
  isn't a full solution for panic paths. We need to make sure that it only

@@ -473,16 +514,15 @@ This is a really varied tasks with lots of little bits and pieces:
  even spinlocks (because NMI and hardirq can panic too). We need to either
  make sure to not call such paths, or trylock everything. Really tricky.

* For the above locking troubles reasons it's pretty much impossible to
  attempt a synchronous modeset from panic handlers. The only thing we could
  try to achieve is an atomic ``set_base`` of the primary plane, and hope that
  it shows up. Everything else probably needs to be delayed to some worker or
  something else which happens later on. Otherwise it just kills the box
  harder, prevent the panic from going out on e.g. netconsole.
* A clean solution would be an entirely separate panic output support in KMS,
  bypassing the current fbcon support. See `[PATCH v2 0/3] drm: Add panic handling
  <https://lore.kernel.org/dri-devel/20190311174218.51899-1-noralf@tronnes.org/>`_.

* There's also proposal for a simplified DRM console instead of the full-blown
  fbcon and DRM fbdev emulation. Any kind of panic handling tricks should
  obviously work for both console, in case we ever get kmslog merged.
* Encoding the actual oops and preceding dmesg in a QR might help with the
  dread "important stuff scrolled away" problem. See `[RFC][PATCH] Oops messages
  transfer using QR codes
  <https://lore.kernel.org/lkml/1446217392-11981-1-git-send-email-alexandru.murtaza@intel.com/>`_
  for some example code that could be reused.

Contact: Daniel Vetter

@@ -124,8 +124,6 @@ Add Plane Features

There's lots of plane features we could add support for:

- Multiple overlay planes. [Good to get started]

- Clearing primary plane: clear primary plane before plane composition (at the
  start) for correctness of pixel blend ops. It also guarantees alpha channel
  is cleared in the target buffer for stable crc. [Good to get started]

MAINTAINERS

@@ -5751,7 +5751,7 @@ T: git git://anongit.freedesktop.org/drm/drm-misc
F: Documentation/driver-api/dma-buf.rst
F: drivers/dma-buf/
F: include/linux/*fence.h
F: include/linux/dma-buf*
F: include/linux/dma-buf.h
F: include/linux/dma-resv.h
K: \bdma_(?:buf|fence|resv)\b

@@ -6094,7 +6094,8 @@ L: dri-devel@lists.freedesktop.org
T: git git://anongit.freedesktop.org/drm/drm-misc
S: Maintained
F: drivers/gpu/drm/panel/panel-lvds.c
F: Documentation/devicetree/bindings/display/panel/lvds.yaml
F: Documentation/devicetree/bindings/display/lvds.yaml
F: Documentation/devicetree/bindings/display/panel/panel-lvds.yaml

DRM DRIVER FOR MANTIX MLAF057WE51 PANELS
M: Guido Günther <agx@sigxcpu.org>

@@ -6123,6 +6124,14 @@ T: git git://anongit.freedesktop.org/drm/drm-misc
F: Documentation/devicetree/bindings/display/multi-inno,mi0283qt.txt
F: drivers/gpu/drm/tiny/mi0283qt.c

DRM DRIVER FOR MIPI DBI compatible panels
M: Noralf Trønnes <noralf@tronnes.org>
S: Maintained
W: https://github.com/notro/panel-mipi-dbi/wiki
T: git git://anongit.freedesktop.org/drm/drm-misc
F: Documentation/devicetree/bindings/display/panel/panel-mipi-dbi-spi.yaml
F: drivers/gpu/drm/tiny/panel-mipi-dbi.c

DRM DRIVER FOR MSM ADRENO GPU
M: Rob Clark <robdclark@gmail.com>
M: Sean Paul <sean@poorly.run>

@@ -6143,6 +6152,13 @@ T: git git://anongit.freedesktop.org/drm/drm-misc
F: Documentation/devicetree/bindings/display/panel/novatek,nt35510.yaml
F: drivers/gpu/drm/panel/panel-novatek-nt35510.c

DRM DRIVER FOR NOVATEK NT35560 PANELS
M: Linus Walleij <linus.walleij@linaro.org>
S: Maintained
T: git git://anongit.freedesktop.org/drm/drm-misc
F: Documentation/devicetree/bindings/display/panel/sony,acx424akp.yaml
F: drivers/gpu/drm/panel/panel-novatek-nt35560.c

DRM DRIVER FOR NOVATEK NT36672A PANELS
M: Sumit Semwal <sumit.semwal@linaro.org>
S: Maintained

@@ -6179,6 +6195,13 @@ T: git git://anongit.freedesktop.org/drm/drm-misc
F: Documentation/devicetree/bindings/display/repaper.txt
F: drivers/gpu/drm/tiny/repaper.c

DRM DRIVER FOR SOLOMON SSD130X OLED DISPLAYS
M: Javier Martinez Canillas <javierm@redhat.com>
S: Maintained
T: git git://anongit.freedesktop.org/drm/drm-misc
F: Documentation/devicetree/bindings/display/solomon,ssd1307fb.yaml
F: drivers/gpu/drm/solomon/ssd130x*

DRM DRIVER FOR QEMU'S CIRRUS DEVICE
M: Dave Airlie <airlied@redhat.com>
M: Gerd Hoffmann <kraxel@redhat.com>

@@ -6267,12 +6290,6 @@ T: git git://anongit.freedesktop.org/drm/drm-misc
F: Documentation/devicetree/bindings/display/sitronix,st7735r.yaml
F: drivers/gpu/drm/tiny/st7735r.c

DRM DRIVER FOR SONY ACX424AKP PANELS
M: Linus Walleij <linus.walleij@linaro.org>
S: Maintained
T: git git://anongit.freedesktop.org/drm/drm-misc
F: drivers/gpu/drm/panel/panel-sony-acx424akp.c

DRM DRIVER FOR ST-ERICSSON MCDE
M: Linus Walleij <linus.walleij@linaro.org>
S: Maintained

@@ -10103,6 +10120,13 @@ F: include/linux/iova.h
F: include/linux/of_iommu.h
F: include/uapi/linux/iommu.h

IOSYS-MAP HELPERS
M: Thomas Zimmermann <tzimmermann@suse.de>
L: dri-devel@lists.freedesktop.org
S: Maintained
T: git git://anongit.freedesktop.org/drm/drm-misc
F: include/linux/iosys-map.h

IO_URING
M: Jens Axboe <axboe@kernel.dk>
R: Pavel Begunkov <asml.silence@gmail.com>
@@ -555,6 +555,7 @@ static const struct pci_device_id intel_early_ids[] __initconst = {
        INTEL_RKL_IDS(&gen11_early_ops),
        INTEL_ADLS_IDS(&gen11_early_ops),
        INTEL_ADLP_IDS(&gen11_early_ops),
        INTEL_ADLN_IDS(&gen11_early_ops),
        INTEL_RPLS_IDS(&gen11_early_ops),
};

@ -55,7 +55,7 @@ static struct _ati_generic_private {
|
|||
|
||||
static int ati_create_page_map(struct ati_page_map *page_map)
|
||||
{
|
||||
int i, err = 0;
|
||||
int i, err;
|
||||
|
||||
page_map->real = (unsigned long *) __get_free_page(GFP_KERNEL);
|
||||
if (page_map->real == NULL)
|
||||
|
@ -63,6 +63,10 @@ static int ati_create_page_map(struct ati_page_map *page_map)
|
|||
|
||||
set_memory_uc((unsigned long)page_map->real, 1);
|
||||
err = map_page_into_agp(virt_to_page(page_map->real));
|
||||
if (err) {
|
||||
free_page((unsigned long)page_map->real);
|
||||
return err;
|
||||
}
|
||||
page_map->remapped = page_map->real;
|
||||
|
||||
for (i = 0; i < PAGE_SIZE / sizeof(unsigned long); i++) {
|
||||
|
@ -303,7 +307,7 @@ static int ati_insert_memory(struct agp_memory * mem,
|
|||
for (i = 0, j = pg_start; i < mem->page_count; i++, j++) {
|
||||
addr = (j * PAGE_SIZE) + agp_bridge->gart_bus_addr;
|
||||
cur_gatt = GET_GATT(addr);
|
||||
writel(agp_bridge->driver->mask_memory(agp_bridge,
|
||||
writel(agp_bridge->driver->mask_memory(agp_bridge,
|
||||
page_to_phys(mem->pages[i]),
|
||||
mem->type),
|
||||
cur_gatt+GET_GATT_OFF(addr));
|
||||
|
|
|
@ -62,6 +62,7 @@ EXPORT_SYMBOL(agp_find_bridge);
|
|||
|
||||
/**
|
||||
* agp_backend_acquire - attempt to acquire an agp backend.
|
||||
* @pdev: the PCI device
|
||||
*
|
||||
*/
|
||||
struct agp_bridge_data *agp_backend_acquire(struct pci_dev *pdev)
|
||||
|
@ -83,6 +84,7 @@ EXPORT_SYMBOL(agp_backend_acquire);
|
|||
|
||||
/**
|
||||
* agp_backend_release - release the lock on the agp backend.
|
||||
* @bridge: the AGP backend to release
|
||||
*
|
||||
* The caller must ensure that the graphics aperture translation table
* is ready for use by another entity.
|
||||
|
|
|
@ -39,7 +39,9 @@
|
|||
#include <linux/fs.h>
|
||||
#include <linux/sched.h>
|
||||
#include <linux/uaccess.h>
|
||||
|
||||
#include "agp.h"
|
||||
#include "compat_ioctl.h"
|
||||
|
||||
struct agp_front_data agp_fe;
|
||||
|
||||
|
@ -1017,7 +1019,7 @@ static long agp_ioctl(struct file *file,
|
|||
case AGPIOC_UNBIND:
|
||||
ret_val = agpioc_unbind_wrap(curr_priv, (void __user *) arg);
|
||||
break;
|
||||
|
||||
|
||||
case AGPIOC_CHIPSET_FLUSH:
|
||||
break;
|
||||
}
|
||||
|
|
|
@ -261,7 +261,8 @@ static int nvidia_remove_memory(struct agp_memory *mem, off_t pg_start, int type
|
|||
static void nvidia_tlbflush(struct agp_memory *mem)
|
||||
{
|
||||
unsigned long end;
|
||||
u32 wbc_reg, temp;
|
||||
u32 wbc_reg;
|
||||
u32 __maybe_unused temp;
|
||||
int i;
|
||||
|
||||
/* flush chipset */
|
||||
|
|
|
@ -262,13 +262,10 @@ static void serverworks_tlbflush(struct agp_memory *temp)
|
|||
|
||||
static int serverworks_configure(void)
|
||||
{
|
||||
struct aper_size_info_lvl2 *current_size;
|
||||
u32 temp;
|
||||
u8 enable_reg;
|
||||
u16 cap_reg;
|
||||
|
||||
current_size = A_SIZE_LVL2(agp_bridge->current_size);
|
||||
|
||||
/* Get the memory mapped registers */
|
||||
pci_read_config_dword(agp_bridge->dev, serverworks_private.mm_addr_ofs, &temp);
|
||||
temp = (temp & PCI_BASE_ADDRESS_MEM_MASK);
|
||||
|
@ -350,7 +347,7 @@ static int serverworks_insert_memory(struct agp_memory *mem,
|
|||
for (i = 0, j = pg_start; i < mem->page_count; i++, j++) {
|
||||
addr = (j * PAGE_SIZE) + agp_bridge->gart_bus_addr;
|
||||
cur_gatt = SVRWRKS_GET_GATT(addr);
|
||||
writel(agp_bridge->driver->mask_memory(agp_bridge,
|
||||
writel(agp_bridge->driver->mask_memory(agp_bridge,
|
||||
page_to_phys(mem->pages[i]), mem->type),
|
||||
cur_gatt+GET_GATT_OFF(addr));
|
||||
}
|
||||
|
|
|
@ -128,9 +128,6 @@ static int via_fetch_size_agp3(void)
|
|||
static int via_configure_agp3(void)
|
||||
{
|
||||
u32 temp;
|
||||
struct aper_size_info_16 *current_size;
|
||||
|
||||
current_size = A_SIZE_16(agp_bridge->current_size);
|
||||
|
||||
/* address to map to */
|
||||
agp_bridge->gart_bus_addr = pci_bus_address(agp_bridge->dev,
|
||||
|
|
|
@@ -1047,8 +1047,8 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_move_notify, DMA_BUF);
 *
 * Interfaces::
 *
 *   void \*dma_buf_vmap(struct dma_buf \*dmabuf, struct dma_buf_map \*map)
 *   void dma_buf_vunmap(struct dma_buf \*dmabuf, struct dma_buf_map \*map)
 *   void \*dma_buf_vmap(struct dma_buf \*dmabuf, struct iosys_map \*map)
 *   void dma_buf_vunmap(struct dma_buf \*dmabuf, struct iosys_map \*map)
 *
 * The vmap call can fail if there is no vmap support in the exporter, or if
 * it runs out of vmalloc space. Note that the dma-buf layer keeps a reference

@@ -1260,12 +1260,12 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_mmap, DMA_BUF);
 *
 * Returns 0 on success, or a negative errno code otherwise.
 */
int dma_buf_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
int dma_buf_vmap(struct dma_buf *dmabuf, struct iosys_map *map)
{
	struct dma_buf_map ptr;
	struct iosys_map ptr;
	int ret = 0;

	dma_buf_map_clear(map);
	iosys_map_clear(map);

	if (WARN_ON(!dmabuf))
		return -EINVAL;

@@ -1276,12 +1276,12 @@ int dma_buf_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
	mutex_lock(&dmabuf->lock);
	if (dmabuf->vmapping_counter) {
		dmabuf->vmapping_counter++;
		BUG_ON(dma_buf_map_is_null(&dmabuf->vmap_ptr));
		BUG_ON(iosys_map_is_null(&dmabuf->vmap_ptr));
		*map = dmabuf->vmap_ptr;
		goto out_unlock;
	}

	BUG_ON(dma_buf_map_is_set(&dmabuf->vmap_ptr));
	BUG_ON(iosys_map_is_set(&dmabuf->vmap_ptr));

	ret = dmabuf->ops->vmap(dmabuf, &ptr);
	if (WARN_ON_ONCE(ret))

@@ -1303,20 +1303,20 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_vmap, DMA_BUF);
 * @dmabuf: [in] buffer to vunmap
 * @map: [in] vmap pointer to vunmap
 */
void dma_buf_vunmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
void dma_buf_vunmap(struct dma_buf *dmabuf, struct iosys_map *map)
{
	if (WARN_ON(!dmabuf))
		return;

	BUG_ON(dma_buf_map_is_null(&dmabuf->vmap_ptr));
	BUG_ON(iosys_map_is_null(&dmabuf->vmap_ptr));
	BUG_ON(dmabuf->vmapping_counter == 0);
	BUG_ON(!dma_buf_map_is_equal(&dmabuf->vmap_ptr, map));
	BUG_ON(!iosys_map_is_equal(&dmabuf->vmap_ptr, map));

	mutex_lock(&dmabuf->lock);
	if (--dmabuf->vmapping_counter == 0) {
		if (dmabuf->ops->vunmap)
			dmabuf->ops->vunmap(dmabuf, map);
		dma_buf_map_clear(&dmabuf->vmap_ptr);
		iosys_map_clear(&dmabuf->vmap_ptr);
	}
	mutex_unlock(&dmabuf->lock);
}

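The hunk above documents the dma_buf_vmap()/dma_buf_vunmap() switch from struct dma_buf_map to struct iosys_map. For orientation only, a minimal importer-side sketch of the renamed interface follows; it is not part of the patch, the example function and the 4-byte write are illustrative, and error handling beyond the vmap failure is elided.

#include <linux/dma-buf.h>
#include <linux/io.h>
#include <linux/iosys-map.h>
#include <linux/string.h>

/* Illustrative only: map an imported dma-buf, clear a few bytes, unmap. */
static int example_touch_dmabuf(struct dma_buf *dmabuf)
{
	struct iosys_map map;	/* replaces the old struct dma_buf_map */
	int ret;

	ret = dma_buf_vmap(dmabuf, &map);
	if (ret)
		return ret;	/* exporter may not support vmap */

	/* iosys_map records whether the mapping is system or I/O memory. */
	if (map.is_iomem)
		memset_io(map.vaddr_iomem, 0, 4);
	else
		memset(map.vaddr, 0, 4);

	dma_buf_vunmap(dmabuf, &map);
	return 0;
}
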
@@ -176,6 +176,20 @@ struct dma_fence_array *dma_fence_array_create(int num_fences,

	array->base.error = PENDING_ERROR;

	/*
	 * dma_fence_array objects should never contain any other fence
	 * containers or otherwise we run into recursion and potential kernel
	 * stack overflow on operations on the dma_fence_array.
	 *
	 * The correct way of handling this is to flatten out the array by the
	 * caller instead.
	 *
	 * Enforce this here by checking that we don't create a dma_fence_array
	 * with any container inside.
	 */
	while (num_fences--)
		WARN_ON(dma_fence_is_container(fences[num_fences]));

	return array;
}
EXPORT_SYMBOL(dma_fence_array_create);

@@ -148,8 +148,7 @@ static bool dma_fence_chain_enable_signaling(struct dma_fence *fence)

	dma_fence_get(&head->base);
	dma_fence_chain_for_each(fence, &head->base) {
		struct dma_fence_chain *chain = to_dma_fence_chain(fence);
		struct dma_fence *f = chain ? chain->fence : fence;
		struct dma_fence *f = dma_fence_chain_contained(fence);

		dma_fence_get(f);
		if (!dma_fence_add_callback(f, &head->cb, dma_fence_chain_cb)) {

@@ -165,8 +164,7 @@ static bool dma_fence_chain_enable_signaling(struct dma_fence *fence)
static bool dma_fence_chain_signaled(struct dma_fence *fence)
{
	dma_fence_chain_for_each(fence, fence) {
		struct dma_fence_chain *chain = to_dma_fence_chain(fence);
		struct dma_fence *f = chain ? chain->fence : fence;
		struct dma_fence *f = dma_fence_chain_contained(fence);

		if (!dma_fence_is_signaled(f)) {
			dma_fence_put(fence);

@@ -254,5 +252,14 @@ void dma_fence_chain_init(struct dma_fence_chain *chain,

	dma_fence_init(&chain->base, &dma_fence_chain_ops,
		       &chain->lock, context, seqno);

	/*
	 * Chaining dma_fence_chain container together is only allowed through
	 * the prev fence and not through the contained fence.
	 *
	 * The correct way of handling this is to flatten out the fence
	 * structure into a dma_fence_array by the caller instead.
	 */
	WARN_ON(dma_fence_is_chain(fence));
}
EXPORT_SYMBOL(dma_fence_chain_init);

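Both files above enforce that fence containers are not nested; a caller that ends up with a dma_fence_chain but needs an array is expected to flatten it first. A rough sketch of such flattening, using the dma_fence_chain_for_each()/dma_fence_chain_contained() helpers visible in the hunks above; the wrapper function, its context/seqno handling and the zero-fence case are illustrative only, not part of the patch.

#include <linux/dma-fence.h>
#include <linux/dma-fence-array.h>
#include <linux/dma-fence-chain.h>
#include <linux/slab.h>

/* Illustrative only: flatten a fence chain into a dma_fence_array. */
static struct dma_fence_array *example_flatten_chain(struct dma_fence *chain,
						     u64 context, unsigned int seqno)
{
	struct dma_fence **fences;
	struct dma_fence *iter;
	unsigned int count = 0, i = 0;

	dma_fence_chain_for_each(iter, chain)
		count++;

	fences = kmalloc_array(count, sizeof(*fences), GFP_KERNEL);
	if (!fences)
		return NULL;

	/* Collect the contained (non-container) fences, one reference each. */
	dma_fence_chain_for_each(iter, chain)
		fences[i++] = dma_fence_get(dma_fence_chain_contained(iter));

	/* dma_fence_array_create() takes ownership of the array and references. */
	return dma_fence_array_create(count, fences, context, seqno, false);
}
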
@ -256,6 +256,11 @@ void dma_resv_add_shared_fence(struct dma_resv *obj, struct dma_fence *fence)
|
|||
|
||||
dma_resv_assert_held(obj);
|
||||
|
||||
/* Drivers should not add containers here, instead add each fence
|
||||
* individually.
|
||||
*/
|
||||
WARN_ON(dma_fence_is_container(fence));
|
||||
|
||||
fobj = dma_resv_shared_list(obj);
|
||||
count = fobj->shared_count;
|
||||
|
||||
|
@ -323,12 +328,8 @@ void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence)
|
|||
}
|
||||
EXPORT_SYMBOL(dma_resv_add_excl_fence);
|
||||
|
||||
/**
|
||||
* dma_resv_iter_restart_unlocked - restart the unlocked iterator
|
||||
* @cursor: The dma_resv_iter object to restart
|
||||
*
|
||||
* Restart the unlocked iteration by initializing the cursor object.
|
||||
*/
|
||||
/* Restart the iterator by initializing all the necessary fields, but not the
|
||||
* relation to the dma_resv object. */
|
||||
static void dma_resv_iter_restart_unlocked(struct dma_resv_iter *cursor)
|
||||
{
|
||||
cursor->seq = read_seqcount_begin(&cursor->obj->seq);
|
||||
|
@ -344,14 +345,7 @@ static void dma_resv_iter_restart_unlocked(struct dma_resv_iter *cursor)
|
|||
cursor->is_restarted = true;
|
||||
}
|
||||
|
||||
/**
|
||||
* dma_resv_iter_walk_unlocked - walk over fences in a dma_resv obj
|
||||
* @cursor: cursor to record the current position
|
||||
*
|
||||
* Return all the fences in the dma_resv object which are not yet signaled.
|
||||
* The returned fence has an extra local reference so will stay alive.
|
||||
* If a concurrent modify is detected the whole iteration is started over again.
|
||||
*/
|
||||
/* Walk to the next not signaled fence and grab a reference to it */
|
||||
static void dma_resv_iter_walk_unlocked(struct dma_resv_iter *cursor)
|
||||
{
|
||||
struct dma_resv *obj = cursor->obj;
|
||||
|
@ -387,6 +381,12 @@ static void dma_resv_iter_walk_unlocked(struct dma_resv_iter *cursor)
|
|||
* dma_resv_iter_first_unlocked - first fence in an unlocked dma_resv obj.
|
||||
* @cursor: the cursor with the current position
|
||||
*
|
||||
* Subsequent fences are iterated with dma_resv_iter_next_unlocked().
|
||||
*
|
||||
* Beware that the iterator can be restarted. Code which accumulates statistics
|
||||
* or similar needs to check for this with dma_resv_iter_is_restarted(). For
|
||||
* this reason prefer the locked dma_resv_iter_first() whenever possible.
|
||||
*
|
||||
* Returns the first fence from an unlocked dma_resv obj.
|
||||
*/
|
||||
struct dma_fence *dma_resv_iter_first_unlocked(struct dma_resv_iter *cursor)
|
||||
|
@ -406,6 +406,10 @@ EXPORT_SYMBOL(dma_resv_iter_first_unlocked);
|
|||
* dma_resv_iter_next_unlocked - next fence in an unlocked dma_resv obj.
|
||||
* @cursor: the cursor with the current position
|
||||
*
|
||||
* Beware that the iterator can be restarted. Code which accumulates statistics
|
||||
* or similar needs to check for this with dma_resv_iter_is_restarted(). For
|
||||
* this reason prefer the locked dma_resv_iter_next() whenever possible.
|
||||
*
|
||||
* Returns the next fence from an unlocked dma_resv obj.
|
||||
*/
|
||||
struct dma_fence *dma_resv_iter_next_unlocked(struct dma_resv_iter *cursor)
|
||||
|
@ -431,6 +435,8 @@ EXPORT_SYMBOL(dma_resv_iter_next_unlocked);
|
|||
* dma_resv_iter_first - first fence from a locked dma_resv object
|
||||
* @cursor: cursor to record the current position
|
||||
*
|
||||
* Subsequent fences are iterated with dma_resv_iter_next_unlocked().
|
||||
*
|
||||
* Return the first fence in the dma_resv object while holding the
|
||||
* &dma_resv.lock.
|
||||
*/
|
||||
|
@ -542,57 +548,45 @@ EXPORT_SYMBOL(dma_resv_copy_fences);
|
|||
* dma_resv_get_fences - Get an object's shared and exclusive
|
||||
* fences without update side lock held
|
||||
* @obj: the reservation object
|
||||
* @fence_excl: the returned exclusive fence (or NULL)
|
||||
* @shared_count: the number of shared fences returned
|
||||
* @shared: the array of shared fence ptrs returned (array is krealloc'd to
|
||||
* the required size, and must be freed by caller)
|
||||
* @write: true if we should return all fences
|
||||
* @num_fences: the number of fences returned
|
||||
* @fences: the array of fence ptrs returned (array is krealloc'd to the
|
||||
* required size, and must be freed by caller)
|
||||
*
|
||||
* Retrieve all fences from the reservation object. If the pointer for the
|
||||
* exclusive fence is not specified the fence is put into the array of the
|
||||
* shared fences as well. Returns either zero or -ENOMEM.
|
||||
* Retrieve all fences from the reservation object.
|
||||
* Returns either zero or -ENOMEM.
|
||||
*/
|
||||
int dma_resv_get_fences(struct dma_resv *obj, struct dma_fence **fence_excl,
|
||||
unsigned int *shared_count, struct dma_fence ***shared)
|
||||
int dma_resv_get_fences(struct dma_resv *obj, bool write,
|
||||
unsigned int *num_fences, struct dma_fence ***fences)
|
||||
{
|
||||
struct dma_resv_iter cursor;
|
||||
struct dma_fence *fence;
|
||||
|
||||
*shared_count = 0;
|
||||
*shared = NULL;
|
||||
*num_fences = 0;
|
||||
*fences = NULL;
|
||||
|
||||
if (fence_excl)
|
||||
*fence_excl = NULL;
|
||||
|
||||
dma_resv_iter_begin(&cursor, obj, true);
|
||||
dma_resv_iter_begin(&cursor, obj, write);
|
||||
dma_resv_for_each_fence_unlocked(&cursor, fence) {
|
||||
|
||||
if (dma_resv_iter_is_restarted(&cursor)) {
|
||||
unsigned int count;
|
||||
|
||||
while (*shared_count)
|
||||
dma_fence_put((*shared)[--(*shared_count)]);
|
||||
while (*num_fences)
|
||||
dma_fence_put((*fences)[--(*num_fences)]);
|
||||
|
||||
if (fence_excl)
|
||||
dma_fence_put(*fence_excl);
|
||||
|
||||
count = cursor.shared_count;
|
||||
count += fence_excl ? 0 : 1;
|
||||
count = cursor.shared_count + 1;
|
||||
|
||||
/* Eventually re-allocate the array */
|
||||
*shared = krealloc_array(*shared, count,
|
||||
*fences = krealloc_array(*fences, count,
|
||||
sizeof(void *),
|
||||
GFP_KERNEL);
|
||||
if (count && !*shared) {
|
||||
if (count && !*fences) {
|
||||
dma_resv_iter_end(&cursor);
|
||||
return -ENOMEM;
|
||||
}
|
||||
}
|
||||
|
||||
dma_fence_get(fence);
|
||||
if (dma_resv_iter_is_exclusive(&cursor) && fence_excl)
|
||||
*fence_excl = fence;
|
||||
else
|
||||
(*shared)[(*shared_count)++] = fence;
|
||||
(*fences)[(*num_fences)++] = dma_fence_get(fence);
|
||||
}
|
||||
dma_resv_iter_end(&cursor);
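
The dma_resv hunks above also rework dma_resv_get_fences() to return a single array selected by a write flag instead of a separate exclusive fence plus shared array. A hedged caller sketch against the new signature follows; the wrapper name and the blocking wait are illustrative, not part of the patch.

#include <linux/dma-fence.h>
#include <linux/dma-resv.h>
#include <linux/slab.h>

/*
 * Illustrative only: fetch all relevant fences from a reservation object
 * with the reworked dma_resv_get_fences() and wait for each of them. The
 * returned array is krealloc'd by the helper and must be freed here.
 */
static int example_wait_resv(struct dma_resv *resv, bool write)
{
	struct dma_fence **fences;
	unsigned int i, count;
	int ret;

	ret = dma_resv_get_fences(resv, write, &count, &fences);
	if (ret)
		return ret;	/* only -ENOMEM is expected */

	for (i = 0; i < count; i++) {
		dma_fence_wait(fences[i], false);
		dma_fence_put(fences[i]);
	}
	kfree(fences);
	return 0;
}
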
@ -202,7 +202,7 @@ static void *cma_heap_do_vmap(struct cma_heap_buffer *buffer)
|
|||
return vaddr;
|
||||
}
|
||||
|
||||
static int cma_heap_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
|
||||
static int cma_heap_vmap(struct dma_buf *dmabuf, struct iosys_map *map)
|
||||
{
|
||||
struct cma_heap_buffer *buffer = dmabuf->priv;
|
||||
void *vaddr;
|
||||
|
@ -211,7 +211,7 @@ static int cma_heap_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
|
|||
mutex_lock(&buffer->lock);
|
||||
if (buffer->vmap_cnt) {
|
||||
buffer->vmap_cnt++;
|
||||
dma_buf_map_set_vaddr(map, buffer->vaddr);
|
||||
iosys_map_set_vaddr(map, buffer->vaddr);
|
||||
goto out;
|
||||
}
|
||||
|
||||
|
@ -222,14 +222,14 @@ static int cma_heap_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
|
|||
}
|
||||
buffer->vaddr = vaddr;
|
||||
buffer->vmap_cnt++;
|
||||
dma_buf_map_set_vaddr(map, buffer->vaddr);
|
||||
iosys_map_set_vaddr(map, buffer->vaddr);
|
||||
out:
|
||||
mutex_unlock(&buffer->lock);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static void cma_heap_vunmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
|
||||
static void cma_heap_vunmap(struct dma_buf *dmabuf, struct iosys_map *map)
|
||||
{
|
||||
struct cma_heap_buffer *buffer = dmabuf->priv;
|
||||
|
||||
|
@ -239,7 +239,7 @@ static void cma_heap_vunmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
|
|||
buffer->vaddr = NULL;
|
||||
}
|
||||
mutex_unlock(&buffer->lock);
|
||||
dma_buf_map_clear(map);
|
||||
iosys_map_clear(map);
|
||||
}
|
||||
|
||||
static void cma_heap_dma_buf_release(struct dma_buf *dmabuf)
|
||||
|
|
|
@ -241,7 +241,7 @@ static void *system_heap_do_vmap(struct system_heap_buffer *buffer)
|
|||
return vaddr;
|
||||
}
|
||||
|
||||
static int system_heap_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
|
||||
static int system_heap_vmap(struct dma_buf *dmabuf, struct iosys_map *map)
|
||||
{
|
||||
struct system_heap_buffer *buffer = dmabuf->priv;
|
||||
void *vaddr;
|
||||
|
@ -250,7 +250,7 @@ static int system_heap_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
|
|||
mutex_lock(&buffer->lock);
|
||||
if (buffer->vmap_cnt) {
|
||||
buffer->vmap_cnt++;
|
||||
dma_buf_map_set_vaddr(map, buffer->vaddr);
|
||||
iosys_map_set_vaddr(map, buffer->vaddr);
|
||||
goto out;
|
||||
}
|
||||
|
||||
|
@ -262,14 +262,14 @@ static int system_heap_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
|
|||
|
||||
buffer->vaddr = vaddr;
|
||||
buffer->vmap_cnt++;
|
||||
dma_buf_map_set_vaddr(map, buffer->vaddr);
|
||||
iosys_map_set_vaddr(map, buffer->vaddr);
|
||||
out:
|
||||
mutex_unlock(&buffer->lock);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static void system_heap_vunmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
|
||||
static void system_heap_vunmap(struct dma_buf *dmabuf, struct iosys_map *map)
|
||||
{
|
||||
struct system_heap_buffer *buffer = dmabuf->priv;
|
||||
|
||||
|
@ -279,7 +279,7 @@ static void system_heap_vunmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
|
|||
buffer->vaddr = NULL;
|
||||
}
|
||||
mutex_unlock(&buffer->lock);
|
||||
dma_buf_map_clear(map);
|
||||
iosys_map_clear(map);
|
||||
}
|
||||
|
||||
static void system_heap_dma_buf_release(struct dma_buf *dmabuf)
|
||||
|
|
|
@ -275,7 +275,7 @@ static int test_shared_for_each_unlocked(void *arg)
|
|||
|
||||
static int test_get_fences(void *arg, bool shared)
|
||||
{
|
||||
struct dma_fence *f, *excl = NULL, **fences = NULL;
|
||||
struct dma_fence *f, **fences = NULL;
|
||||
struct dma_resv resv;
|
||||
int r, i;
|
||||
|
||||
|
@ -304,35 +304,19 @@ static int test_get_fences(void *arg, bool shared)
|
|||
}
|
||||
dma_resv_unlock(&resv);
|
||||
|
||||
r = dma_resv_get_fences(&resv, &excl, &i, &fences);
|
||||
r = dma_resv_get_fences(&resv, shared, &i, &fences);
|
||||
if (r) {
|
||||
pr_err("get_fences failed\n");
|
||||
goto err_free;
|
||||
}
|
||||
|
||||
if (shared) {
|
||||
if (excl != NULL) {
|
||||
pr_err("get_fences returned unexpected excl fence\n");
|
||||
goto err_free;
|
||||
}
|
||||
if (i != 1 || fences[0] != f) {
|
||||
pr_err("get_fences returned unexpected shared fence\n");
|
||||
goto err_free;
|
||||
}
|
||||
} else {
|
||||
if (excl != f) {
|
||||
pr_err("get_fences returned unexpected excl fence\n");
|
||||
goto err_free;
|
||||
}
|
||||
if (i != 0) {
|
||||
pr_err("get_fences returned unexpected shared fence\n");
|
||||
goto err_free;
|
||||
}
|
||||
if (i != 1 || fences[0] != f) {
|
||||
pr_err("get_fences returned unexpected fence\n");
|
||||
goto err_free;
|
||||
}
|
||||
|
||||
dma_fence_signal(f);
|
||||
err_free:
|
||||
dma_fence_put(excl);
|
||||
while (i--)
|
||||
dma_fence_put(fences[i]);
|
||||
kfree(fences);
|
||||
|
|
|
@@ -190,6 +190,10 @@ static long udmabuf_create(struct miscdevice *device,
		if (ubuf->pagecount > pglimit)
			goto err;
	}

	if (!ubuf->pagecount)
		goto err;

	ubuf->pages = kmalloc_array(ubuf->pagecount, sizeof(*ubuf->pages),
				    GFP_KERNEL);
	if (!ubuf->pages) {

@@ -99,7 +99,7 @@ __init int sysfb_create_simplefb(const struct screen_info *si,

	/* setup IORESOURCE_MEM as framebuffer memory */
	memset(&res, 0, sizeof(res));
	res.flags = IORESOURCE_MEM | IORESOURCE_BUSY;
	res.flags = IORESOURCE_MEM;
	res.name = simplefb_resname;
	res.start = base;
	res.end = res.start + length - 1;

@@ -68,8 +68,10 @@ config DRM_DEBUG_SELFTEST
	depends on DRM
	depends on DEBUG_KERNEL
	select PRIME_NUMBERS
	select DRM_DP_HELPER
	select DRM_LIB_RANDOM
	select DRM_KMS_HELPER
	select DRM_BUDDY
	select DRM_EXPORT_FOR_TESTS if m
	default n
	help

@@ -80,6 +82,12 @@ config DRM_DEBUG_SELFTEST

	  If in doubt, say "N".

config DRM_DP_HELPER
	tristate
	depends on DRM
	help
	  DRM helpers for DisplayPort.

config DRM_KMS_HELPER
	tristate
	depends on DRM

@@ -198,6 +206,12 @@ config DRM_TTM
	  GPU memory types. Will be enabled automatically if a device driver
	  uses it.

config DRM_BUDDY
	tristate
	depends on DRM
	help
	  A page based buddy allocator

config DRM_VRAM_HELPER
	tristate
	depends on DRM

@@ -236,6 +250,7 @@ config DRM_RADEON
	depends on DRM && PCI && MMU
	depends on AGP || !AGP
	select FW_LOADER
	select DRM_DP_HELPER
	select DRM_KMS_HELPER
	select DRM_TTM
	select DRM_TTM_HELPER

@@ -256,6 +271,7 @@ config DRM_AMDGPU
	tristate "AMD GPU"
	depends on DRM && PCI && MMU
	select FW_LOADER
	select DRM_DP_HELPER
	select DRM_KMS_HELPER
	select DRM_SCHED
	select DRM_TTM

@@ -388,6 +404,8 @@ source "drivers/gpu/drm/xlnx/Kconfig"

source "drivers/gpu/drm/gud/Kconfig"

source "drivers/gpu/drm/solomon/Kconfig"

source "drivers/gpu/drm/sprd/Kconfig"

config DRM_HYPERV

@ -31,8 +31,6 @@ drm-$(CONFIG_DEBUG_FS) += drm_debugfs.o drm_debugfs_crc.o
|
|||
drm-$(CONFIG_DRM_LOAD_EDID_FIRMWARE) += drm_edid_load.o
|
||||
drm-$(CONFIG_DRM_PRIVACY_SCREEN) += drm_privacy_screen.o drm_privacy_screen_x86.o
|
||||
|
||||
obj-$(CONFIG_DRM_DP_AUX_BUS) += drm_dp_aux_bus.o
|
||||
|
||||
obj-$(CONFIG_DRM_NOMODESET) += drm_nomodeset.o
|
||||
|
||||
drm_cma_helper-y := drm_gem_cma_helper.o
|
||||
|
@ -42,27 +40,26 @@ obj-$(CONFIG_DRM_GEM_CMA_HELPER) += drm_cma_helper.o
|
|||
drm_shmem_helper-y := drm_gem_shmem_helper.o
|
||||
obj-$(CONFIG_DRM_GEM_SHMEM_HELPER) += drm_shmem_helper.o
|
||||
|
||||
obj-$(CONFIG_DRM_BUDDY) += drm_buddy.o
|
||||
|
||||
drm_vram_helper-y := drm_gem_vram_helper.o
|
||||
obj-$(CONFIG_DRM_VRAM_HELPER) += drm_vram_helper.o
|
||||
|
||||
drm_ttm_helper-y := drm_gem_ttm_helper.o
|
||||
obj-$(CONFIG_DRM_TTM_HELPER) += drm_ttm_helper.o
|
||||
|
||||
drm_kms_helper-y := drm_bridge_connector.o drm_crtc_helper.o drm_dp_helper.o \
|
||||
drm_kms_helper-y := drm_bridge_connector.o drm_crtc_helper.o \
|
||||
drm_dsc.o drm_encoder_slave.o drm_flip_work.o drm_hdcp.o \
|
||||
drm_probe_helper.o \
|
||||
drm_plane_helper.o drm_dp_mst_topology.o drm_atomic_helper.o \
|
||||
drm_kms_helper_common.o drm_dp_dual_mode_helper.o \
|
||||
drm_plane_helper.o drm_atomic_helper.o \
|
||||
drm_kms_helper_common.o \
|
||||
drm_simple_kms_helper.o drm_modeset_helper.o \
|
||||
drm_scdc_helper.o drm_gem_atomic_helper.o \
|
||||
drm_gem_framebuffer_helper.o \
|
||||
drm_atomic_state_helper.o drm_damage_helper.o \
|
||||
drm_format_helper.o drm_self_refresh_helper.o drm_rect.o
|
||||
|
||||
drm_kms_helper-$(CONFIG_DRM_PANEL_BRIDGE) += bridge/panel.o
|
||||
drm_kms_helper-$(CONFIG_DRM_FBDEV_EMULATION) += drm_fb_helper.o
|
||||
drm_kms_helper-$(CONFIG_DRM_DP_AUX_CHARDEV) += drm_dp_aux_dev.o
|
||||
drm_kms_helper-$(CONFIG_DRM_DP_CEC) += drm_dp_cec.o
|
||||
|
||||
obj-$(CONFIG_DRM_KMS_HELPER) += drm_kms_helper.o
|
||||
obj-$(CONFIG_DRM_DEBUG_SELFTEST) += selftests/
|
||||
|
@ -72,6 +69,7 @@ obj-$(CONFIG_DRM_MIPI_DBI) += drm_mipi_dbi.o
|
|||
obj-$(CONFIG_DRM_MIPI_DSI) += drm_mipi_dsi.o
|
||||
obj-$(CONFIG_DRM_PANEL_ORIENTATION_QUIRKS) += drm_panel_orientation_quirks.o
|
||||
obj-y += arm/
|
||||
obj-y += dp/
|
||||
obj-$(CONFIG_DRM_TTM) += ttm/
|
||||
obj-$(CONFIG_DRM_SCHED) += scheduler/
|
||||
obj-$(CONFIG_DRM_TDFX) += tdfx/
|
||||
|
@ -134,4 +132,5 @@ obj-$(CONFIG_DRM_TIDSS) += tidss/
|
|||
obj-y += xlnx/
|
||||
obj-y += gud/
|
||||
obj-$(CONFIG_DRM_HYPERV) += hyperv/
|
||||
obj-y += solomon/
|
||||
obj-$(CONFIG_DRM_SPRD) += sprd/
|
||||
|
|
|
@ -46,18 +46,18 @@ amdgpu-y += amdgpu_device.o amdgpu_kms.o \
|
|||
atom.o amdgpu_fence.o amdgpu_ttm.o amdgpu_object.o amdgpu_gart.o \
|
||||
amdgpu_encoders.o amdgpu_display.o amdgpu_i2c.o \
|
||||
amdgpu_gem.o amdgpu_ring.o \
|
||||
amdgpu_cs.o amdgpu_bios.o amdgpu_benchmark.o amdgpu_test.o \
|
||||
amdgpu_cs.o amdgpu_bios.o amdgpu_benchmark.o \
|
||||
atombios_dp.o amdgpu_afmt.o amdgpu_trace_points.o \
|
||||
atombios_encoders.o amdgpu_sa.o atombios_i2c.o \
|
||||
amdgpu_dma_buf.o amdgpu_vm.o amdgpu_ib.o amdgpu_pll.o \
|
||||
amdgpu_ucode.o amdgpu_bo_list.o amdgpu_ctx.o amdgpu_sync.o \
|
||||
amdgpu_gtt_mgr.o amdgpu_preempt_mgr.o amdgpu_vram_mgr.o amdgpu_virt.o \
|
||||
amdgpu_atomfirmware.o amdgpu_vf_error.o amdgpu_sched.o \
|
||||
amdgpu_debugfs.o amdgpu_ids.o amdgpu_gmc.o amdgpu_mmhub.o \
|
||||
amdgpu_debugfs.o amdgpu_ids.o amdgpu_gmc.o \
|
||||
amdgpu_xgmi.o amdgpu_csa.o amdgpu_ras.o amdgpu_vm_cpu.o \
|
||||
amdgpu_vm_sdma.o amdgpu_discovery.o amdgpu_ras_eeprom.o amdgpu_nbio.o \
|
||||
amdgpu_umc.o smu_v11_0_i2c.o amdgpu_fru_eeprom.o amdgpu_rap.o \
|
||||
amdgpu_fw_attestation.o amdgpu_securedisplay.o amdgpu_hdp.o \
|
||||
amdgpu_fw_attestation.o amdgpu_securedisplay.o \
|
||||
amdgpu_eeprom.o amdgpu_mca.o
|
||||
|
||||
amdgpu-$(CONFIG_PROC_FS) += amdgpu_fdinfo.o
|
||||
|
|
|
@ -31,6 +31,17 @@
|
|||
#include "amdgpu_psp.h"
|
||||
#include "amdgpu_xgmi.h"
|
||||
|
||||
static bool aldebaran_is_mode2_default(struct amdgpu_reset_control *reset_ctl)
|
||||
{
|
||||
struct amdgpu_device *adev = (struct amdgpu_device *)reset_ctl->handle;
|
||||
|
||||
if ((adev->ip_versions[MP1_HWIP][0] == IP_VERSION(13, 0, 2) &&
|
||||
adev->gmc.xgmi.connected_to_cpu))
|
||||
return true;
|
||||
|
||||
return false;
|
||||
}
|
||||
|
||||
static struct amdgpu_reset_handler *
|
||||
aldebaran_get_reset_handler(struct amdgpu_reset_control *reset_ctl,
|
||||
struct amdgpu_reset_context *reset_context)
|
||||
|
@ -48,7 +59,7 @@ aldebaran_get_reset_handler(struct amdgpu_reset_control *reset_ctl,
|
|||
}
|
||||
}
|
||||
|
||||
if (adev->gmc.xgmi.connected_to_cpu) {
|
||||
if (aldebaran_is_mode2_default(reset_ctl)) {
|
||||
list_for_each_entry(handler, &reset_ctl->reset_handlers,
|
||||
handler_list) {
|
||||
if (handler->reset_method == AMD_RESET_METHOD_MODE2) {
|
||||
|
@ -136,18 +147,31 @@ static int
|
|||
aldebaran_mode2_perform_reset(struct amdgpu_reset_control *reset_ctl,
|
||||
struct amdgpu_reset_context *reset_context)
|
||||
{
|
||||
struct amdgpu_device *tmp_adev = NULL;
|
||||
struct amdgpu_device *adev = (struct amdgpu_device *)reset_ctl->handle;
|
||||
struct amdgpu_device *tmp_adev = NULL;
|
||||
struct list_head reset_device_list;
|
||||
int r = 0;
|
||||
|
||||
dev_dbg(adev->dev, "aldebaran perform hw reset\n");
|
||||
if (reset_context->hive == NULL) {
|
||||
if (adev->ip_versions[MP1_HWIP][0] == IP_VERSION(13, 0, 2) &&
|
||||
reset_context->hive == NULL) {
|
||||
/* Wrong context, return error */
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
list_for_each_entry(tmp_adev, &reset_context->hive->device_list,
|
||||
gmc.xgmi.head) {
|
||||
INIT_LIST_HEAD(&reset_device_list);
|
||||
if (reset_context->hive) {
|
||||
list_for_each_entry (tmp_adev,
|
||||
&reset_context->hive->device_list,
|
||||
gmc.xgmi.head)
|
||||
list_add_tail(&tmp_adev->reset_list,
|
||||
&reset_device_list);
|
||||
} else {
|
||||
list_add_tail(&reset_context->reset_req_dev->reset_list,
|
||||
&reset_device_list);
|
||||
}
|
||||
|
||||
list_for_each_entry (tmp_adev, &reset_device_list, reset_list) {
|
||||
mutex_lock(&tmp_adev->reset_cntl->reset_lock);
|
||||
tmp_adev->reset_cntl->active_reset = AMD_RESET_METHOD_MODE2;
|
||||
}
|
||||
|
@ -155,8 +179,7 @@ aldebaran_mode2_perform_reset(struct amdgpu_reset_control *reset_ctl,
|
|||
* Mode2 reset doesn't need any sync between nodes in XGMI hive, instead launch
|
||||
* them together so that they can be completed asynchronously on multiple nodes
|
||||
*/
|
||||
list_for_each_entry(tmp_adev, &reset_context->hive->device_list,
|
||||
gmc.xgmi.head) {
|
||||
list_for_each_entry (tmp_adev, &reset_device_list, reset_list) {
|
||||
/* For XGMI run all resets in parallel to speed up the process */
|
||||
if (tmp_adev->gmc.xgmi.num_physical_nodes > 1) {
|
||||
if (!queue_work(system_unbound_wq,
|
||||
|
@ -174,9 +197,7 @@ aldebaran_mode2_perform_reset(struct amdgpu_reset_control *reset_ctl,
|
|||
|
||||
/* For XGMI wait for all resets to complete before proceed */
|
||||
if (!r) {
|
||||
list_for_each_entry(tmp_adev,
|
||||
&reset_context->hive->device_list,
|
||||
gmc.xgmi.head) {
|
||||
list_for_each_entry (tmp_adev, &reset_device_list, reset_list) {
|
||||
if (tmp_adev->gmc.xgmi.num_physical_nodes > 1) {
|
||||
flush_work(&tmp_adev->reset_cntl->reset_work);
|
||||
r = tmp_adev->asic_reset_res;
|
||||
|
@ -186,8 +207,7 @@ aldebaran_mode2_perform_reset(struct amdgpu_reset_control *reset_ctl,
|
|||
}
|
||||
}
|
||||
|
||||
list_for_each_entry(tmp_adev, &reset_context->hive->device_list,
|
||||
gmc.xgmi.head) {
|
||||
list_for_each_entry (tmp_adev, &reset_device_list, reset_list) {
|
||||
mutex_unlock(&tmp_adev->reset_cntl->reset_lock);
|
||||
tmp_adev->reset_cntl->active_reset = AMD_RESET_METHOD_NONE;
|
||||
}
|
||||
|
@ -260,7 +280,7 @@ static int aldebaran_mode2_restore_ip(struct amdgpu_device *adev)
|
|||
adev->gfx.rlc.funcs->resume(adev);
|
||||
|
||||
/* Wait for FW reset event complete */
|
||||
r = smu_wait_for_event(adev, SMU_EVENT_RESET_COMPLETE, 0);
|
||||
r = amdgpu_dpm_wait_for_event(adev, SMU_EVENT_RESET_COMPLETE, 0);
|
||||
if (r) {
|
||||
dev_err(adev->dev,
|
||||
"Failed to get response from firmware after reset\n");
|
||||
|
@ -319,16 +339,30 @@ static int
|
|||
aldebaran_mode2_restore_hwcontext(struct amdgpu_reset_control *reset_ctl,
|
||||
struct amdgpu_reset_context *reset_context)
|
||||
{
|
||||
int r;
|
||||
struct amdgpu_device *tmp_adev = NULL;
|
||||
struct list_head reset_device_list;
|
||||
int r;
|
||||
|
||||
if (reset_context->hive == NULL) {
|
||||
if (reset_context->reset_req_dev->ip_versions[MP1_HWIP][0] ==
|
||||
IP_VERSION(13, 0, 2) &&
|
||||
reset_context->hive == NULL) {
|
||||
/* Wrong context, return error */
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
list_for_each_entry(tmp_adev, &reset_context->hive->device_list,
|
||||
gmc.xgmi.head) {
|
||||
INIT_LIST_HEAD(&reset_device_list);
|
||||
if (reset_context->hive) {
|
||||
list_for_each_entry (tmp_adev,
|
||||
&reset_context->hive->device_list,
|
||||
gmc.xgmi.head)
|
||||
list_add_tail(&tmp_adev->reset_list,
|
||||
&reset_device_list);
|
||||
} else {
|
||||
list_add_tail(&reset_context->reset_req_dev->reset_list,
|
||||
&reset_device_list);
|
||||
}
|
||||
|
||||
list_for_each_entry (tmp_adev, &reset_device_list, reset_list) {
|
||||
dev_info(tmp_adev->dev,
|
||||
"GPU reset succeeded, trying to resume\n");
|
||||
r = aldebaran_mode2_restore_ip(tmp_adev);
|
||||
|
|
|
@ -60,7 +60,6 @@
|
|||
#include <drm/amdgpu_drm.h>
|
||||
#include <drm/drm_gem.h>
|
||||
#include <drm/drm_ioctl.h>
|
||||
#include <drm/gpu_scheduler.h>
|
||||
|
||||
#include <kgd_kfd_interface.h>
|
||||
#include "dm_pp_interface.h"
|
||||
|
@ -99,7 +98,6 @@
|
|||
#include "amdgpu_gem.h"
|
||||
#include "amdgpu_doorbell.h"
|
||||
#include "amdgpu_amdkfd.h"
|
||||
#include "amdgpu_smu.h"
|
||||
#include "amdgpu_discovery.h"
|
||||
#include "amdgpu_mes.h"
|
||||
#include "amdgpu_umc.h"
|
||||
|
@ -109,6 +107,7 @@
|
|||
#include "amdgpu_smuio.h"
|
||||
#include "amdgpu_fdinfo.h"
|
||||
#include "amdgpu_mca.h"
|
||||
#include "amdgpu_ras.h"
|
||||
|
||||
#define MAX_GPU_INSTANCE 16
|
||||
|
||||
|
@ -155,8 +154,6 @@ extern int amdgpu_vis_vram_limit;
|
|||
extern int amdgpu_gart_size;
|
||||
extern int amdgpu_gtt_size;
|
||||
extern int amdgpu_moverate;
|
||||
extern int amdgpu_benchmarking;
|
||||
extern int amdgpu_testing;
|
||||
extern int amdgpu_audio;
|
||||
extern int amdgpu_disp_priority;
|
||||
extern int amdgpu_hw_i2c;
|
||||
|
@ -197,7 +194,6 @@ extern int amdgpu_emu_mode;
|
|||
extern uint amdgpu_smu_memory_pool_size;
|
||||
extern int amdgpu_smu_pptable_id;
|
||||
extern uint amdgpu_dc_feature_mask;
|
||||
extern uint amdgpu_freesync_vid_mode;
|
||||
extern uint amdgpu_dc_debug_mask;
|
||||
extern uint amdgpu_dm_abm_level;
|
||||
extern int amdgpu_backlight;
|
||||
|
@ -214,6 +210,7 @@ extern int amdgpu_mes;
|
|||
extern int amdgpu_noretry;
|
||||
extern int amdgpu_force_asic_type;
|
||||
extern int amdgpu_smartshift_bias;
|
||||
extern int amdgpu_use_xgmi_p2p;
|
||||
#ifdef CONFIG_HSA_AMD
|
||||
extern int sched_policy;
|
||||
extern bool debug_evictions;
|
||||
|
@ -235,6 +232,9 @@ extern int amdgpu_cik_support;
|
|||
#endif
|
||||
extern int amdgpu_num_kcq;
|
||||
|
||||
#define AMDGPU_VCNFW_LOG_SIZE (32 * 1024)
|
||||
extern int amdgpu_vcnfw_log;
|
||||
|
||||
#define AMDGPU_VM_MAX_NUM_CTX 4096
|
||||
#define AMDGPU_SG_THRESHOLD (256*1024*1024)
|
||||
#define AMDGPU_DEFAULT_GTT_SIZE_MB 3072ULL /* 3GB by default */
|
||||
|
@ -276,9 +276,6 @@ extern int amdgpu_num_kcq;
|
|||
#define AMDGPU_SMARTSHIFT_MIN_BIAS (-100)
|
||||
|
||||
struct amdgpu_device;
|
||||
struct amdgpu_ib;
|
||||
struct amdgpu_cs_parser;
|
||||
struct amdgpu_job;
|
||||
struct amdgpu_irq_src;
|
||||
struct amdgpu_fpriv;
|
||||
struct amdgpu_bo_va_mapping;
|
||||
|
@ -373,7 +370,8 @@ int amdgpu_device_ip_block_add(struct amdgpu_device *adev,
|
|||
*/
|
||||
bool amdgpu_get_bios(struct amdgpu_device *adev);
|
||||
bool amdgpu_read_bios(struct amdgpu_device *adev);
|
||||
|
||||
bool amdgpu_soc15_read_bios_from_rom(struct amdgpu_device *adev,
|
||||
u8 *bios, u32 length_bytes);
|
||||
/*
|
||||
* Clocks
|
||||
*/
|
||||
|
@ -465,20 +463,6 @@ struct amdgpu_flip_work {
|
|||
};
|
||||
|
||||
|
||||
/*
|
||||
* CP & rings.
|
||||
*/
|
||||
|
||||
struct amdgpu_ib {
|
||||
struct amdgpu_sa_bo *sa_bo;
|
||||
uint32_t length_dw;
|
||||
uint64_t gpu_addr;
|
||||
uint32_t *ptr;
|
||||
uint32_t flags;
|
||||
};
|
||||
|
||||
extern const struct drm_sched_backend_ops amdgpu_sched_ops;
|
||||
|
||||
/*
|
||||
* file private structure
|
||||
*/
|
||||
|
@ -494,79 +478,6 @@ struct amdgpu_fpriv {
|
|||
|
||||
int amdgpu_file_to_fpriv(struct file *filp, struct amdgpu_fpriv **fpriv);
|
||||
|
||||
int amdgpu_ib_get(struct amdgpu_device *adev, struct amdgpu_vm *vm,
|
||||
unsigned size,
|
||||
enum amdgpu_ib_pool_type pool,
|
||||
struct amdgpu_ib *ib);
|
||||
void amdgpu_ib_free(struct amdgpu_device *adev, struct amdgpu_ib *ib,
|
||||
struct dma_fence *f);
|
||||
int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned num_ibs,
|
||||
struct amdgpu_ib *ibs, struct amdgpu_job *job,
|
||||
struct dma_fence **f);
|
||||
int amdgpu_ib_pool_init(struct amdgpu_device *adev);
|
||||
void amdgpu_ib_pool_fini(struct amdgpu_device *adev);
|
||||
int amdgpu_ib_ring_tests(struct amdgpu_device *adev);
|
||||
|
||||
/*
|
||||
* CS.
|
||||
*/
|
||||
struct amdgpu_cs_chunk {
|
||||
uint32_t chunk_id;
|
||||
uint32_t length_dw;
|
||||
void *kdata;
|
||||
};
|
||||
|
||||
struct amdgpu_cs_post_dep {
|
||||
struct drm_syncobj *syncobj;
|
||||
struct dma_fence_chain *chain;
|
||||
u64 point;
|
||||
};
|
||||
|
||||
struct amdgpu_cs_parser {
|
||||
struct amdgpu_device *adev;
|
||||
struct drm_file *filp;
|
||||
struct amdgpu_ctx *ctx;
|
||||
|
||||
/* chunks */
|
||||
unsigned nchunks;
|
||||
struct amdgpu_cs_chunk *chunks;
|
||||
|
||||
/* scheduler job object */
|
||||
struct amdgpu_job *job;
|
||||
struct drm_sched_entity *entity;
|
||||
|
||||
/* buffer objects */
|
||||
struct ww_acquire_ctx ticket;
|
||||
struct amdgpu_bo_list *bo_list;
|
||||
struct amdgpu_mn *mn;
|
||||
struct amdgpu_bo_list_entry vm_pd;
|
||||
struct list_head validated;
|
||||
struct dma_fence *fence;
|
||||
uint64_t bytes_moved_threshold;
|
||||
uint64_t bytes_moved_vis_threshold;
|
||||
uint64_t bytes_moved;
|
||||
uint64_t bytes_moved_vis;
|
||||
|
||||
/* user fence */
|
||||
struct amdgpu_bo_list_entry uf_entry;
|
||||
|
||||
unsigned num_post_deps;
|
||||
struct amdgpu_cs_post_dep *post_deps;
|
||||
};
|
||||
|
||||
static inline u32 amdgpu_get_ib_value(struct amdgpu_cs_parser *p,
|
||||
uint32_t ib_idx, int idx)
|
||||
{
|
||||
return p->job->ibs[ib_idx].ptr[idx];
|
||||
}
|
||||
|
||||
static inline void amdgpu_set_ib_value(struct amdgpu_cs_parser *p,
|
||||
uint32_t ib_idx, int idx,
|
||||
uint32_t value)
|
||||
{
|
||||
p->job->ibs[ib_idx].ptr[idx] = value;
|
||||
}
|
||||
|
||||
/*
|
||||
* Writeback
|
||||
*/
|
||||
|
@ -586,13 +497,7 @@ void amdgpu_device_wb_free(struct amdgpu_device *adev, u32 wb);
|
|||
/*
|
||||
* Benchmarking
|
||||
*/
|
||||
void amdgpu_benchmark(struct amdgpu_device *adev, int test_number);
|
||||
|
||||
|
||||
/*
|
||||
* Testing
|
||||
*/
|
||||
void amdgpu_test_moves(struct amdgpu_device *adev);
|
||||
int amdgpu_benchmark(struct amdgpu_device *adev, int test_number);
|
||||
|
||||
/*
|
||||
* ASIC specific register table accessible by UMD
|
||||
|
@ -771,6 +676,8 @@ struct amd_powerplay {
|
|||
const struct amd_pm_funcs *pp_funcs;
|
||||
};
|
||||
|
||||
struct ip_discovery_top;
|
||||
|
||||
/* polaris10 kickers */
|
||||
#define ASICID_IS_P20(did, rid) (((did == 0x67DF) && \
|
||||
((rid == 0xE3) || \
|
||||
|
@ -813,6 +720,8 @@ struct amd_powerplay {
|
|||
#define AMDGPU_RESET_MAGIC_NUM 64
|
||||
#define AMDGPU_MAX_DF_PERFMONS 4
|
||||
#define AMDGPU_PRODUCT_NAME_LEN 64
|
||||
struct amdgpu_reset_domain;
|
||||
|
||||
struct amdgpu_device {
|
||||
struct device *dev;
|
||||
struct pci_dev *pdev;
|
||||
|
@ -950,12 +859,6 @@ struct amdgpu_device {
|
|||
|
||||
/* powerplay */
|
||||
struct amd_powerplay powerplay;
|
||||
bool pp_force_state_enabled;
|
||||
|
||||
/* smu */
|
||||
struct smu_context smu;
|
||||
|
||||
/* dpm */
|
||||
struct amdgpu_pm pm;
|
||||
u32 cg_flags;
|
||||
u32 pg_flags;
|
||||
|
@ -1054,9 +957,7 @@ struct amdgpu_device {
|
|||
bool in_s4;
|
||||
bool in_s0ix;
|
||||
|
||||
atomic_t in_gpu_reset;
|
||||
enum pp_mp1_state mp1_state;
|
||||
struct rw_semaphore reset_sem;
|
||||
struct amdgpu_doorbell_index doorbell_index;
|
||||
|
||||
struct mutex notifier_lock;
|
||||
|
@ -1100,6 +1001,18 @@ struct amdgpu_device {
|
|||
uint32_t ip_versions[MAX_HWIP][HWIP_MAX_INSTANCE];
|
||||
|
||||
bool ram_is_direct_mapped;
|
||||
|
||||
struct list_head ras_list;
|
||||
|
||||
struct ip_discovery_top *ip_top;
|
||||
|
||||
struct amdgpu_reset_domain *reset_domain;
|
||||
|
||||
struct mutex benchmark_mutex;
|
||||
|
||||
/* reset dump register */
|
||||
uint32_t *reset_dump_reg_list;
|
||||
int num_regs;
|
||||
};
|
||||
|
||||
static inline struct amdgpu_device *drm_to_adev(struct drm_device *ddev)
|
||||
|
@ -1293,9 +1206,12 @@ bool amdgpu_device_has_job_running(struct amdgpu_device *adev);
|
|||
bool amdgpu_device_should_recover_gpu(struct amdgpu_device *adev);
|
||||
int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
|
||||
struct amdgpu_job* job);
|
||||
int amdgpu_device_gpu_recover_imp(struct amdgpu_device *adev,
|
||||
struct amdgpu_job *job);
|
||||
void amdgpu_device_pci_config_reset(struct amdgpu_device *adev);
|
||||
int amdgpu_device_pci_reset(struct amdgpu_device *adev);
|
||||
bool amdgpu_device_need_post(struct amdgpu_device *adev);
|
||||
bool amdgpu_device_should_use_aspm(struct amdgpu_device *adev);
|
||||
|
||||
void amdgpu_cs_report_moved_bytes(struct amdgpu_device *adev, u64 num_bytes,
|
||||
u64 num_vis_bytes);
|
||||
|
@ -1321,6 +1237,10 @@ void amdgpu_device_invalidate_hdp(struct amdgpu_device *adev,
|
|||
struct amdgpu_ring *ring);
|
||||
|
||||
void amdgpu_device_halt(struct amdgpu_device *adev);
|
||||
u32 amdgpu_device_pcie_port_rreg(struct amdgpu_device *adev,
|
||||
u32 reg);
|
||||
void amdgpu_device_pcie_port_wreg(struct amdgpu_device *adev,
|
||||
u32 reg, u32 v);
|
||||
|
||||
/* atpx handler */
|
||||
#if defined(CONFIG_VGA_SWITCHEROO)
|
||||
|
@ -1428,10 +1348,6 @@ static inline bool amdgpu_acpi_is_s0ix_active(struct amdgpu_device *adev) { retu
|
|||
static inline bool amdgpu_acpi_is_s3_active(struct amdgpu_device *adev) { return false; }
|
||||
#endif
|
||||
|
||||
int amdgpu_cs_find_mapping(struct amdgpu_cs_parser *parser,
|
||||
uint64_t addr, struct amdgpu_bo **bo,
|
||||
struct amdgpu_bo_va_mapping **mapping);
|
||||
|
||||
#if defined(CONFIG_DRM_AMD_DC)
|
||||
int amdgpu_dm_display_resume(struct amdgpu_device *adev );
|
||||
#else
|
||||
|
@ -1458,6 +1374,15 @@ int amdgpu_device_set_cg_state(struct amdgpu_device *adev,
|
|||
int amdgpu_device_set_pg_state(struct amdgpu_device *adev,
|
||||
enum amd_powergating_state state);
|
||||
|
||||
static inline bool amdgpu_device_has_timeouts_enabled(struct amdgpu_device *adev)
|
||||
{
|
||||
return amdgpu_gpu_recovery != 0 &&
|
||||
adev->gfx_timeout != MAX_SCHEDULE_TIMEOUT &&
|
||||
adev->compute_timeout != MAX_SCHEDULE_TIMEOUT &&
|
||||
adev->sdma_timeout != MAX_SCHEDULE_TIMEOUT &&
|
||||
adev->video_timeout != MAX_SCHEDULE_TIMEOUT;
|
||||
}
|
||||
|
||||
#include "amdgpu_object.h"
|
||||
|
||||
static inline bool amdgpu_is_tmz(struct amdgpu_device *adev)
|
||||
|
@ -1465,8 +1390,6 @@ static inline bool amdgpu_is_tmz(struct amdgpu_device *adev)
|
|||
return adev->gmc.tmz_enabled;
|
||||
}
|
||||
|
||||
static inline int amdgpu_in_reset(struct amdgpu_device *adev)
|
||||
{
|
||||
return atomic_read(&adev->in_gpu_reset);
|
||||
}
|
||||
int amdgpu_in_reset(struct amdgpu_device *adev);
|
||||
|
||||
#endif
|
||||
|
|
|
@ -131,6 +131,7 @@ struct amdkfd_process_info {
|
|||
atomic_t evicted_bos;
|
||||
struct delayed_work restore_userptr_work;
|
||||
struct pid *pid;
|
||||
bool block_mmu_notifications;
|
||||
};
|
||||
|
||||
int amdgpu_amdkfd_init(void);
|
||||
|
@ -268,7 +269,7 @@ uint64_t amdgpu_amdkfd_gpuvm_get_process_page_dir(void *drm_priv);
|
|||
int amdgpu_amdkfd_gpuvm_alloc_memory_of_gpu(
|
||||
struct amdgpu_device *adev, uint64_t va, uint64_t size,
|
||||
void *drm_priv, struct kgd_mem **mem,
|
||||
uint64_t *offset, uint32_t flags);
|
||||
uint64_t *offset, uint32_t flags, bool criu_resume);
|
||||
int amdgpu_amdkfd_gpuvm_free_memory_of_gpu(
|
||||
struct amdgpu_device *adev, struct kgd_mem *mem, void *drm_priv,
|
||||
uint64_t *size);
|
||||
|
@ -297,6 +298,10 @@ int amdgpu_amdkfd_get_tile_config(struct amdgpu_device *adev,
|
|||
struct tile_config *config);
|
||||
void amdgpu_amdkfd_ras_poison_consumption_handler(struct amdgpu_device *adev,
|
||||
bool reset);
|
||||
bool amdgpu_amdkfd_bo_mapped_to_dev(struct amdgpu_device *adev, struct kgd_mem *mem);
|
||||
void amdgpu_amdkfd_block_mmu_notifications(void *p);
|
||||
int amdgpu_amdkfd_criu_resume(void *p);
|
||||
|
||||
#if IS_ENABLED(CONFIG_HSA_AMD)
|
||||
void amdgpu_amdkfd_gpuvm_init_mem_limits(void);
|
||||
void amdgpu_amdkfd_gpuvm_destroy_cb(struct amdgpu_device *adev,
|
||||
|
|
|
@ -37,10 +37,7 @@ const struct kfd2kgd_calls aldebaran_kfd2kgd = {
|
|||
.hqd_sdma_is_occupied = kgd_arcturus_hqd_sdma_is_occupied,
|
||||
.hqd_destroy = kgd_gfx_v9_hqd_destroy,
|
||||
.hqd_sdma_destroy = kgd_arcturus_hqd_sdma_destroy,
|
||||
.address_watch_disable = kgd_gfx_v9_address_watch_disable,
|
||||
.address_watch_execute = kgd_gfx_v9_address_watch_execute,
|
||||
.wave_control_execute = kgd_gfx_v9_wave_control_execute,
|
||||
.address_watch_get_offset = kgd_gfx_v9_address_watch_get_offset,
|
||||
.get_atc_vmid_pasid_mapping_info =
|
||||
kgd_gfx_v9_get_atc_vmid_pasid_mapping_info,
|
||||
.set_vm_context_page_table_base = kgd_gfx_v9_set_vm_context_page_table_base,
|
||||
|
|
|
@ -289,10 +289,7 @@ const struct kfd2kgd_calls arcturus_kfd2kgd = {
|
|||
.hqd_sdma_is_occupied = kgd_arcturus_hqd_sdma_is_occupied,
|
||||
.hqd_destroy = kgd_gfx_v9_hqd_destroy,
|
||||
.hqd_sdma_destroy = kgd_arcturus_hqd_sdma_destroy,
|
||||
.address_watch_disable = kgd_gfx_v9_address_watch_disable,
|
||||
.address_watch_execute = kgd_gfx_v9_address_watch_execute,
|
||||
.wave_control_execute = kgd_gfx_v9_wave_control_execute,
|
||||
.address_watch_get_offset = kgd_gfx_v9_address_watch_get_offset,
|
||||
.get_atc_vmid_pasid_mapping_info =
|
||||
kgd_gfx_v9_get_atc_vmid_pasid_mapping_info,
|
||||
.set_vm_context_page_table_base =
|
||||
|
|
|
@ -671,20 +671,6 @@ static bool get_atc_vmid_pasid_mapping_info(struct amdgpu_device *adev,
|
|||
return !!(value & ATC_VMID0_PASID_MAPPING__VALID_MASK);
|
||||
}
|
||||
|
||||
static int kgd_address_watch_disable(struct amdgpu_device *adev)
|
||||
{
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int kgd_address_watch_execute(struct amdgpu_device *adev,
|
||||
unsigned int watch_point_id,
|
||||
uint32_t cntl_val,
|
||||
uint32_t addr_hi,
|
||||
uint32_t addr_lo)
|
||||
{
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int kgd_wave_control_execute(struct amdgpu_device *adev,
|
||||
uint32_t gfx_index_val,
|
||||
uint32_t sq_cmd)
|
||||
|
@ -709,13 +695,6 @@ static int kgd_wave_control_execute(struct amdgpu_device *adev,
|
|||
return 0;
|
||||
}
|
||||
|
||||
static uint32_t kgd_address_watch_get_offset(struct amdgpu_device *adev,
|
||||
unsigned int watch_point_id,
|
||||
unsigned int reg_offset)
|
||||
{
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void set_vm_context_page_table_base(struct amdgpu_device *adev,
|
||||
uint32_t vmid, uint64_t page_table_base)
|
||||
{
|
||||
|
@ -767,10 +746,7 @@ const struct kfd2kgd_calls gfx_v10_kfd2kgd = {
|
|||
.hqd_sdma_is_occupied = kgd_hqd_sdma_is_occupied,
|
||||
.hqd_destroy = kgd_hqd_destroy,
|
||||
.hqd_sdma_destroy = kgd_hqd_sdma_destroy,
|
||||
.address_watch_disable = kgd_address_watch_disable,
|
||||
.address_watch_execute = kgd_address_watch_execute,
|
||||
.wave_control_execute = kgd_wave_control_execute,
|
||||
.address_watch_get_offset = kgd_address_watch_get_offset,
|
||||
.get_atc_vmid_pasid_mapping_info =
|
||||
get_atc_vmid_pasid_mapping_info,
|
||||
.set_vm_context_page_table_base = set_vm_context_page_table_base,
|
||||
|
|
|
@ -26,6 +26,8 @@
|
|||
#include "gc/gc_10_3_0_sh_mask.h"
|
||||
#include "oss/osssys_5_0_0_offset.h"
|
||||
#include "oss/osssys_5_0_0_sh_mask.h"
|
||||
#include "athub/athub_2_1_0_offset.h"
|
||||
#include "athub/athub_2_1_0_sh_mask.h"
|
||||
#include "soc15_common.h"
|
||||
#include "v10_structs.h"
|
||||
#include "nv.h"
|
||||
|
@ -582,21 +584,6 @@ static int hqd_sdma_destroy_v10_3(struct amdgpu_device *adev, void *mqd,
|
|||
return 0;
|
||||
}
|
||||
|
||||
|
||||
static int address_watch_disable_v10_3(struct amdgpu_device *adev)
|
||||
{
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int address_watch_execute_v10_3(struct amdgpu_device *adev,
|
||||
unsigned int watch_point_id,
|
||||
uint32_t cntl_val,
|
||||
uint32_t addr_hi,
|
||||
uint32_t addr_lo)
|
||||
{
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int wave_control_execute_v10_3(struct amdgpu_device *adev,
|
||||
uint32_t gfx_index_val,
|
||||
uint32_t sq_cmd)
|
||||
|
@ -621,11 +608,16 @@ static int wave_control_execute_v10_3(struct amdgpu_device *adev,
|
|||
return 0;
|
||||
}
|
||||
|
||||
static uint32_t address_watch_get_offset_v10_3(struct amdgpu_device *adev,
|
||||
unsigned int watch_point_id,
|
||||
unsigned int reg_offset)
|
||||
static bool get_atc_vmid_pasid_mapping_info_v10_3(struct amdgpu_device *adev,
|
||||
uint8_t vmid, uint16_t *p_pasid)
|
||||
{
|
||||
return 0;
|
||||
uint32_t value;
|
||||
|
||||
value = RREG32(SOC15_REG_OFFSET(ATHUB, 0, mmATC_VMID0_PASID_MAPPING)
|
||||
+ vmid);
|
||||
*p_pasid = value & ATC_VMID0_PASID_MAPPING__PASID_MASK;
|
||||
|
||||
return !!(value & ATC_VMID0_PASID_MAPPING__VALID_MASK);
|
||||
}
|
||||
|
||||
static void set_vm_context_page_table_base_v10_3(struct amdgpu_device *adev,
|
||||
|
@ -809,11 +801,8 @@ const struct kfd2kgd_calls gfx_v10_3_kfd2kgd = {
|
|||
.hqd_sdma_is_occupied = hqd_sdma_is_occupied_v10_3,
|
||||
.hqd_destroy = hqd_destroy_v10_3,
|
||||
.hqd_sdma_destroy = hqd_sdma_destroy_v10_3,
|
||||
.address_watch_disable = address_watch_disable_v10_3,
|
||||
.address_watch_execute = address_watch_execute_v10_3,
|
||||
.wave_control_execute = wave_control_execute_v10_3,
|
||||
.address_watch_get_offset = address_watch_get_offset_v10_3,
|
||||
.get_atc_vmid_pasid_mapping_info = NULL,
|
||||
.get_atc_vmid_pasid_mapping_info = get_atc_vmid_pasid_mapping_info_v10_3,
|
||||
.set_vm_context_page_table_base = set_vm_context_page_table_base_v10_3,
|
||||
.program_trap_handler_settings = program_trap_handler_settings_v10_3,
|
||||
#if 0
|
||||
|
|
|
@ -45,43 +45,6 @@ enum {
|
|||
MAX_WATCH_ADDRESSES = 4
|
||||
};
|
||||
|
||||
enum {
|
||||
ADDRESS_WATCH_REG_ADDR_HI = 0,
|
||||
ADDRESS_WATCH_REG_ADDR_LO,
|
||||
ADDRESS_WATCH_REG_CNTL,
|
||||
ADDRESS_WATCH_REG_MAX
|
||||
};
|
||||
|
||||
/* not defined in the CI/KV reg file */
|
||||
enum {
|
||||
ADDRESS_WATCH_REG_CNTL_ATC_BIT = 0x10000000UL,
|
||||
ADDRESS_WATCH_REG_CNTL_DEFAULT_MASK = 0x00FFFFFF,
|
||||
ADDRESS_WATCH_REG_ADDLOW_MASK_EXTENSION = 0x03000000,
|
||||
/* extend the mask to 26 bits to match the low address field */
|
||||
ADDRESS_WATCH_REG_ADDLOW_SHIFT = 6,
|
||||
ADDRESS_WATCH_REG_ADDHIGH_MASK = 0xFFFF
|
||||
};
|
||||
|
||||
static const uint32_t watchRegs[MAX_WATCH_ADDRESSES * ADDRESS_WATCH_REG_MAX] = {
|
||||
mmTCP_WATCH0_ADDR_H, mmTCP_WATCH0_ADDR_L, mmTCP_WATCH0_CNTL,
|
||||
mmTCP_WATCH1_ADDR_H, mmTCP_WATCH1_ADDR_L, mmTCP_WATCH1_CNTL,
|
||||
mmTCP_WATCH2_ADDR_H, mmTCP_WATCH2_ADDR_L, mmTCP_WATCH2_CNTL,
|
||||
mmTCP_WATCH3_ADDR_H, mmTCP_WATCH3_ADDR_L, mmTCP_WATCH3_CNTL
|
||||
};
|
||||
|
||||
union TCP_WATCH_CNTL_BITS {
|
||||
struct {
|
||||
uint32_t mask:24;
|
||||
uint32_t vmid:4;
|
||||
uint32_t atc:1;
|
||||
uint32_t mode:2;
|
||||
uint32_t valid:1;
|
||||
} bitfields, bits;
|
||||
uint32_t u32All;
|
||||
signed int i32All;
|
||||
float f32All;
|
||||
};
|
||||
|
||||
static void lock_srbm(struct amdgpu_device *adev, uint32_t mec, uint32_t pipe,
|
||||
uint32_t queue, uint32_t vmid)
|
||||
{
|
||||
|
@ -529,55 +492,6 @@ static int kgd_hqd_sdma_destroy(struct amdgpu_device *adev, void *mqd,
|
|||
return 0;
|
||||
}
|
||||
|
||||
static int kgd_address_watch_disable(struct amdgpu_device *adev)
|
||||
{
|
||||
union TCP_WATCH_CNTL_BITS cntl;
|
||||
unsigned int i;
|
||||
|
||||
cntl.u32All = 0;
|
||||
|
||||
cntl.bitfields.valid = 0;
|
||||
cntl.bitfields.mask = ADDRESS_WATCH_REG_CNTL_DEFAULT_MASK;
|
||||
cntl.bitfields.atc = 1;
|
||||
|
||||
/* Turning off this address until we set all the registers */
|
||||
for (i = 0; i < MAX_WATCH_ADDRESSES; i++)
|
||||
WREG32(watchRegs[i * ADDRESS_WATCH_REG_MAX +
|
||||
ADDRESS_WATCH_REG_CNTL], cntl.u32All);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int kgd_address_watch_execute(struct amdgpu_device *adev,
|
||||
unsigned int watch_point_id,
|
||||
uint32_t cntl_val,
|
||||
uint32_t addr_hi,
|
||||
uint32_t addr_lo)
|
||||
{
|
||||
union TCP_WATCH_CNTL_BITS cntl;
|
||||
|
||||
cntl.u32All = cntl_val;
|
||||
|
||||
/* Turning off this watch point until we set all the registers */
|
||||
cntl.bitfields.valid = 0;
|
||||
WREG32(watchRegs[watch_point_id * ADDRESS_WATCH_REG_MAX +
|
||||
ADDRESS_WATCH_REG_CNTL], cntl.u32All);
|
||||
|
||||
WREG32(watchRegs[watch_point_id * ADDRESS_WATCH_REG_MAX +
|
||||
ADDRESS_WATCH_REG_ADDR_HI], addr_hi);
|
||||
|
||||
WREG32(watchRegs[watch_point_id * ADDRESS_WATCH_REG_MAX +
|
||||
ADDRESS_WATCH_REG_ADDR_LO], addr_lo);
|
||||
|
||||
/* Enable the watch point */
|
||||
cntl.bitfields.valid = 1;
|
||||
|
||||
WREG32(watchRegs[watch_point_id * ADDRESS_WATCH_REG_MAX +
|
||||
ADDRESS_WATCH_REG_CNTL], cntl.u32All);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int kgd_wave_control_execute(struct amdgpu_device *adev,
|
||||
uint32_t gfx_index_val,
|
||||
uint32_t sq_cmd)
|
||||
|
@ -602,13 +516,6 @@ static int kgd_wave_control_execute(struct amdgpu_device *adev,
|
|||
return 0;
|
||||
}
|
||||
|
||||
static uint32_t kgd_address_watch_get_offset(struct amdgpu_device *adev,
|
||||
unsigned int watch_point_id,
|
||||
unsigned int reg_offset)
|
||||
{
|
||||
return watchRegs[watch_point_id * ADDRESS_WATCH_REG_MAX + reg_offset];
|
||||
}
|
||||
|
||||
static bool get_atc_vmid_pasid_mapping_info(struct amdgpu_device *adev,
|
||||
uint8_t vmid, uint16_t *p_pasid)
|
||||
{
|
||||
|
@ -665,10 +572,7 @@ const struct kfd2kgd_calls gfx_v7_kfd2kgd = {
|
|||
.hqd_sdma_is_occupied = kgd_hqd_sdma_is_occupied,
|
||||
.hqd_destroy = kgd_hqd_destroy,
|
||||
.hqd_sdma_destroy = kgd_hqd_sdma_destroy,
|
||||
.address_watch_disable = kgd_address_watch_disable,
|
||||
.address_watch_execute = kgd_address_watch_execute,
|
||||
.wave_control_execute = kgd_wave_control_execute,
|
||||
.address_watch_get_offset = kgd_address_watch_get_offset,
|
||||
.get_atc_vmid_pasid_mapping_info = get_atc_vmid_pasid_mapping_info,
|
||||
.set_scratch_backing_va = set_scratch_backing_va,
|
||||
.set_vm_context_page_table_base = set_vm_context_page_table_base,
|
||||
|
|
|
@ -538,20 +538,6 @@ static bool get_atc_vmid_pasid_mapping_info(struct amdgpu_device *adev,
|
|||
return !!(value & ATC_VMID0_PASID_MAPPING__VALID_MASK);
|
||||
}
|
||||
|
||||
static int kgd_address_watch_disable(struct amdgpu_device *adev)
|
||||
{
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int kgd_address_watch_execute(struct amdgpu_device *adev,
|
||||
unsigned int watch_point_id,
|
||||
uint32_t cntl_val,
|
||||
uint32_t addr_hi,
|
||||
uint32_t addr_lo)
|
||||
{
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int kgd_wave_control_execute(struct amdgpu_device *adev,
|
||||
uint32_t gfx_index_val,
|
||||
uint32_t sq_cmd)
|
||||
|
@ -576,13 +562,6 @@ static int kgd_wave_control_execute(struct amdgpu_device *adev,
|
|||
return 0;
|
||||
}
|
||||
|
||||
static uint32_t kgd_address_watch_get_offset(struct amdgpu_device *adev,
|
||||
unsigned int watch_point_id,
|
||||
unsigned int reg_offset)
|
||||
{
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void set_scratch_backing_va(struct amdgpu_device *adev,
|
||||
uint64_t va, uint32_t vmid)
|
||||
{
|
||||
|
@ -614,10 +593,7 @@ const struct kfd2kgd_calls gfx_v8_kfd2kgd = {
|
|||
.hqd_sdma_is_occupied = kgd_hqd_sdma_is_occupied,
|
||||
.hqd_destroy = kgd_hqd_destroy,
|
||||
.hqd_sdma_destroy = kgd_hqd_sdma_destroy,
|
||||
.address_watch_disable = kgd_address_watch_disable,
|
||||
.address_watch_execute = kgd_address_watch_execute,
|
||||
.wave_control_execute = kgd_wave_control_execute,
|
||||
.address_watch_get_offset = kgd_address_watch_get_offset,
|
||||
.get_atc_vmid_pasid_mapping_info =
|
||||
get_atc_vmid_pasid_mapping_info,
|
||||
.set_scratch_backing_va = set_scratch_backing_va,
|
||||
|
|
|
@ -622,20 +622,6 @@ bool kgd_gfx_v9_get_atc_vmid_pasid_mapping_info(struct amdgpu_device *adev,
|
|||
return !!(value & ATC_VMID0_PASID_MAPPING__VALID_MASK);
|
||||
}
|
||||
|
||||
int kgd_gfx_v9_address_watch_disable(struct amdgpu_device *adev)
|
||||
{
|
||||
return 0;
|
||||
}
|
||||
|
||||
int kgd_gfx_v9_address_watch_execute(struct amdgpu_device *adev,
|
||||
unsigned int watch_point_id,
|
||||
uint32_t cntl_val,
|
||||
uint32_t addr_hi,
|
||||
uint32_t addr_lo)
|
||||
{
|
||||
return 0;
|
||||
}
|
||||
|
||||
int kgd_gfx_v9_wave_control_execute(struct amdgpu_device *adev,
|
||||
uint32_t gfx_index_val,
|
||||
uint32_t sq_cmd)
|
||||
|
@ -660,13 +646,6 @@ int kgd_gfx_v9_wave_control_execute(struct amdgpu_device *adev,
|
|||
return 0;
|
||||
}
|
||||
|
||||
uint32_t kgd_gfx_v9_address_watch_get_offset(struct amdgpu_device *adev,
|
||||
unsigned int watch_point_id,
|
||||
unsigned int reg_offset)
|
||||
{
|
||||
return 0;
|
||||
}
|
||||
|
||||
void kgd_gfx_v9_set_vm_context_page_table_base(struct amdgpu_device *adev,
|
||||
uint32_t vmid, uint64_t page_table_base)
|
||||
{
|
||||
|
@@ -736,7 +715,7 @@ static void get_wave_count(struct amdgpu_device *adev, int queue_idx,
* process whose pasid is provided as a parameter. The process could have ZERO
* or more queues running and submitting waves to compute units.
*
* @kgd: Handle of device from which to get number of waves in flight
* @adev: Handle of device from which to get number of waves in flight
* @pasid: Identifies the process for which this query call is invoked
* @pasid_wave_cnt: Output parameter updated with number of waves in flight that
* belong to process with given pasid
@@ -745,7 +724,7 @@ static void get_wave_count(struct amdgpu_device *adev, int queue_idx,
*
* Note: It's possible that the device has too many queues (oversubscription)
* in which case a VMID could be remapped to a different PASID. This could lead
* to an iaccurate wave count. Following is a high-level sequence:
* to an inaccurate wave count. Following is a high-level sequence:
* Time T1: vmid = getVmid(); vmid is associated with Pasid P1
* Time T2: passId = getPasId(vmid); vmid is associated with Pasid P2
* In the sequence above wave count obtained from time T1 will be incorrectly
@@ -888,10 +867,7 @@ const struct kfd2kgd_calls gfx_v9_kfd2kgd = {
.hqd_sdma_is_occupied = kgd_hqd_sdma_is_occupied,
.hqd_destroy = kgd_gfx_v9_hqd_destroy,
.hqd_sdma_destroy = kgd_hqd_sdma_destroy,
.address_watch_disable = kgd_gfx_v9_address_watch_disable,
.address_watch_execute = kgd_gfx_v9_address_watch_execute,
.wave_control_execute = kgd_gfx_v9_wave_control_execute,
.address_watch_get_offset = kgd_gfx_v9_address_watch_get_offset,
.get_atc_vmid_pasid_mapping_info =
kgd_gfx_v9_get_atc_vmid_pasid_mapping_info,
.set_vm_context_page_table_base = kgd_gfx_v9_set_vm_context_page_table_base,

@@ -46,19 +46,9 @@ int kgd_gfx_v9_hqd_destroy(struct amdgpu_device *adev, void *mqd,
enum kfd_preempt_type reset_type,
unsigned int utimeout, uint32_t pipe_id,
uint32_t queue_id);
int kgd_gfx_v9_address_watch_disable(struct amdgpu_device *adev);
int kgd_gfx_v9_address_watch_execute(struct amdgpu_device *adev,
unsigned int watch_point_id,
uint32_t cntl_val,
uint32_t addr_hi,
uint32_t addr_lo);
int kgd_gfx_v9_wave_control_execute(struct amdgpu_device *adev,
uint32_t gfx_index_val,
uint32_t sq_cmd);
uint32_t kgd_gfx_v9_address_watch_get_offset(struct amdgpu_device *adev,
unsigned int watch_point_id,
unsigned int reg_offset)

bool kgd_gfx_v9_get_atc_vmid_pasid_mapping_info(struct amdgpu_device *adev,
uint8_t vmid, uint16_t *p_pasid);

@@ -121,7 +121,7 @@ static size_t amdgpu_amdkfd_acc_size(uint64_t size)
}

/**
* @amdgpu_amdkfd_reserve_mem_limit() - Decrease available memory by size
* amdgpu_amdkfd_reserve_mem_limit() - Decrease available memory by size
* of buffer including any reserved for control structures
*
* @adev: Device to which allocated BO belongs to
@@ -778,7 +778,7 @@ unwind:
continue;
if (attachment[i]->bo_va) {
amdgpu_bo_reserve(bo[i], true);
amdgpu_vm_bo_rmv(adev, attachment[i]->bo_va);
amdgpu_vm_bo_del(adev, attachment[i]->bo_va);
amdgpu_bo_unreserve(bo[i]);
list_del(&attachment[i]->list);
}
@@ -795,7 +795,7 @@ static void kfd_mem_detach(struct kfd_mem_attachment *attachment)

pr_debug("\t remove VA 0x%llx in entry %p\n",
attachment->va, attachment);
amdgpu_vm_bo_rmv(attachment->adev, attachment->bo_va);
amdgpu_vm_bo_del(attachment->adev, attachment->bo_va);
drm_gem_object_put(&bo->tbo.base);
list_del(&attachment->list);
kfree(attachment);
@@ -842,7 +842,8 @@ static void remove_kgd_mem_from_kfd_bo_list(struct kgd_mem *mem,
*
* Returns 0 for success, negative errno for errors.
*/
static int init_user_pages(struct kgd_mem *mem, uint64_t user_addr)
static int init_user_pages(struct kgd_mem *mem, uint64_t user_addr,
bool criu_resume)
{
struct amdkfd_process_info *process_info = mem->process_info;
struct amdgpu_bo *bo = mem->bo;
@@ -864,6 +865,18 @@ static int init_user_pages(struct kgd_mem *mem, uint64_t user_addr)
goto out;
}

if (criu_resume) {
/*
* During a CRIU restore operation, the userptr buffer objects
* will be validated in the restore_userptr_work worker at a
* later stage when it is scheduled by another ioctl called by
* CRIU master process for the target pid for restore.
*/
atomic_inc(&mem->invalid);
mutex_unlock(&process_info->lock);
return 0;
}

ret = amdgpu_ttm_tt_get_user_pages(bo, bo->tbo.ttm->pages);
if (ret) {
pr_err("%s: Failed to get user pages: %d\n", __func__, ret);
@ -1452,10 +1465,39 @@ uint64_t amdgpu_amdkfd_gpuvm_get_process_page_dir(void *drm_priv)
|
|||
return avm->pd_phys_addr;
|
||||
}
|
||||
|
||||
void amdgpu_amdkfd_block_mmu_notifications(void *p)
|
||||
{
|
||||
struct amdkfd_process_info *pinfo = (struct amdkfd_process_info *)p;
|
||||
|
||||
mutex_lock(&pinfo->lock);
|
||||
WRITE_ONCE(pinfo->block_mmu_notifications, true);
|
||||
mutex_unlock(&pinfo->lock);
|
||||
}
|
||||
|
||||
int amdgpu_amdkfd_criu_resume(void *p)
|
||||
{
|
||||
int ret = 0;
|
||||
struct amdkfd_process_info *pinfo = (struct amdkfd_process_info *)p;
|
||||
|
||||
mutex_lock(&pinfo->lock);
|
||||
pr_debug("scheduling work\n");
|
||||
atomic_inc(&pinfo->evicted_bos);
|
||||
if (!READ_ONCE(pinfo->block_mmu_notifications)) {
|
||||
ret = -EINVAL;
|
||||
goto out_unlock;
|
||||
}
|
||||
WRITE_ONCE(pinfo->block_mmu_notifications, false);
|
||||
schedule_delayed_work(&pinfo->restore_userptr_work, 0);
|
||||
|
||||
out_unlock:
|
||||
mutex_unlock(&pinfo->lock);
|
||||
return ret;
|
||||
}
|
||||
|
||||
int amdgpu_amdkfd_gpuvm_alloc_memory_of_gpu(
|
||||
struct amdgpu_device *adev, uint64_t va, uint64_t size,
|
||||
void *drm_priv, struct kgd_mem **mem,
|
||||
uint64_t *offset, uint32_t flags)
|
||||
uint64_t *offset, uint32_t flags, bool criu_resume)
|
||||
{
|
||||
struct amdgpu_vm *avm = drm_priv_to_vm(drm_priv);
|
||||
enum ttm_bo_type bo_type = ttm_bo_type_device;
|
||||
|
@ -1558,7 +1600,8 @@ int amdgpu_amdkfd_gpuvm_alloc_memory_of_gpu(
|
|||
add_kgd_mem_to_kfd_bo_list(*mem, avm->process_info, user_addr);
|
||||
|
||||
if (user_addr) {
|
||||
ret = init_user_pages(*mem, user_addr);
|
||||
pr_debug("creating userptr BO for user_addr = %llu\n", user_addr);
|
||||
ret = init_user_pages(*mem, user_addr, criu_resume);
|
||||
if (ret)
|
||||
goto allocate_init_user_pages_failed;
|
||||
} else if (flags & (KFD_IOC_ALLOC_MEM_FLAGS_DOORBELL |
|
||||
|
@ -1813,12 +1856,6 @@ int amdgpu_amdkfd_gpuvm_map_memory_to_gpu(
|
|||
true);
|
||||
ret = unreserve_bo_and_vms(&ctx, false, false);
|
||||
|
||||
/* Only apply no TLB flush on Aldebaran to
|
||||
* workaround regressions on other Asics.
|
||||
*/
|
||||
if (table_freed && (adev->asic_type != CHIP_ALDEBARAN))
|
||||
*table_freed = true;
|
||||
|
||||
goto out;
|
||||
|
||||
out_unreserve:
|
||||
|
@ -2068,6 +2105,10 @@ int amdgpu_amdkfd_evict_userptr(struct kgd_mem *mem,
|
|||
int evicted_bos;
|
||||
int r = 0;
|
||||
|
||||
/* Do not process MMU notifications until stage-4 IOCTL is received */
|
||||
if (READ_ONCE(process_info->block_mmu_notifications))
|
||||
return 0;
|
||||
|
||||
atomic_inc(&mem->invalid);
|
||||
evicted_bos = atomic_inc_return(&process_info->evicted_bos);
|
||||
if (evicted_bos == 1) {
|
||||
|
@ -2635,3 +2676,14 @@ int amdgpu_amdkfd_get_tile_config(struct amdgpu_device *adev,
|
|||
|
||||
return 0;
|
||||
}
|
||||
|
||||
bool amdgpu_amdkfd_bo_mapped_to_dev(struct amdgpu_device *adev, struct kgd_mem *mem)
|
||||
{
|
||||
struct kfd_mem_attachment *entry;
|
||||
|
||||
list_for_each_entry(entry, &mem->attachments, list) {
|
||||
if (entry->is_mapped && entry->adev == adev)
|
||||
return true;
|
||||
}
|
||||
return false;
|
||||
}
|
||||
|
|
|
@ -1083,6 +1083,7 @@ int amdgpu_atombios_get_clock_dividers(struct amdgpu_device *adev,
|
|||
return 0;
|
||||
}
|
||||
|
||||
#ifdef CONFIG_DRM_AMDGPU_SI
|
||||
int amdgpu_atombios_get_memory_pll_dividers(struct amdgpu_device *adev,
|
||||
u32 clock,
|
||||
bool strobe_mode,
|
||||
|
@ -1503,6 +1504,7 @@ int amdgpu_atombios_init_mc_reg_table(struct amdgpu_device *adev,
|
|||
}
|
||||
return -EINVAL;
|
||||
}
|
||||
#endif
|
||||
|
||||
bool amdgpu_atombios_has_gpu_virtualization_table(struct amdgpu_device *adev)
|
||||
{
|
||||
|
|
|
@ -160,6 +160,7 @@ int amdgpu_atombios_get_clock_dividers(struct amdgpu_device *adev,
|
|||
bool strobe_mode,
|
||||
struct atom_clock_dividers *dividers);
|
||||
|
||||
#ifdef CONFIG_DRM_AMDGPU_SI
|
||||
int amdgpu_atombios_get_memory_pll_dividers(struct amdgpu_device *adev,
|
||||
u32 clock,
|
||||
bool strobe_mode,
|
||||
|
@ -179,6 +180,17 @@ int amdgpu_atombios_get_voltage_table(struct amdgpu_device *adev,
|
|||
int amdgpu_atombios_init_mc_reg_table(struct amdgpu_device *adev,
|
||||
u8 module_index,
|
||||
struct atom_mc_reg_table *reg_table);
|
||||
int amdgpu_atombios_get_max_vddc(struct amdgpu_device *adev, u8 voltage_type,
|
||||
u16 voltage_id, u16 *voltage);
|
||||
int amdgpu_atombios_get_leakage_vddc_based_on_leakage_idx(struct amdgpu_device *adev,
|
||||
u16 *voltage,
|
||||
u16 leakage_idx);
|
||||
void amdgpu_atombios_get_default_voltages(struct amdgpu_device *adev,
|
||||
u16 *vddc, u16 *vddci, u16 *mvdd);
|
||||
int amdgpu_atombios_get_svi2_info(struct amdgpu_device *adev,
|
||||
u8 voltage_type,
|
||||
u8 *svd_gpio_id, u8 *svc_gpio_id);
|
||||
#endif
|
||||
|
||||
bool amdgpu_atombios_has_gpu_virtualization_table(struct amdgpu_device *adev);
|
||||
|
||||
|
@ -190,21 +202,11 @@ void amdgpu_atombios_scratch_regs_set_backlight_level(struct amdgpu_device *adev
|
|||
bool amdgpu_atombios_scratch_need_asic_init(struct amdgpu_device *adev);
|
||||
|
||||
void amdgpu_atombios_copy_swap(u8 *dst, u8 *src, u8 num_bytes, bool to_le);
|
||||
int amdgpu_atombios_get_max_vddc(struct amdgpu_device *adev, u8 voltage_type,
|
||||
u16 voltage_id, u16 *voltage);
|
||||
int amdgpu_atombios_get_leakage_vddc_based_on_leakage_idx(struct amdgpu_device *adev,
|
||||
u16 *voltage,
|
||||
u16 leakage_idx);
|
||||
void amdgpu_atombios_get_default_voltages(struct amdgpu_device *adev,
|
||||
u16 *vddc, u16 *vddci, u16 *mvdd);
|
||||
int amdgpu_atombios_get_clock_dividers(struct amdgpu_device *adev,
|
||||
u8 clock_type,
|
||||
u32 clock,
|
||||
bool strobe_mode,
|
||||
struct atom_clock_dividers *dividers);
|
||||
int amdgpu_atombios_get_svi2_info(struct amdgpu_device *adev,
|
||||
u8 voltage_type,
|
||||
u8 *svd_gpio_id, u8 *svc_gpio_id);
|
||||
|
||||
int amdgpu_atombios_get_data_table(struct amdgpu_device *adev,
|
||||
uint32_t table,
|
||||
|
|
|
@ -29,14 +29,13 @@
|
|||
#define AMDGPU_BENCHMARK_COMMON_MODES_N 17
|
||||
|
||||
static int amdgpu_benchmark_do_move(struct amdgpu_device *adev, unsigned size,
|
||||
uint64_t saddr, uint64_t daddr, int n)
|
||||
uint64_t saddr, uint64_t daddr, int n, s64 *time_ms)
|
||||
{
|
||||
unsigned long start_jiffies;
|
||||
unsigned long end_jiffies;
|
||||
ktime_t stime, etime;
|
||||
struct dma_fence *fence;
|
||||
int i, r;
|
||||
|
||||
start_jiffies = jiffies;
|
||||
stime = ktime_get();
|
||||
for (i = 0; i < n; i++) {
|
||||
struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
|
||||
r = amdgpu_copy_buffer(ring, saddr, daddr, size, NULL, &fence,
|
||||
|
@ -48,120 +47,81 @@ static int amdgpu_benchmark_do_move(struct amdgpu_device *adev, unsigned size,
|
|||
if (r)
|
||||
goto exit_do_move;
|
||||
}
|
||||
end_jiffies = jiffies;
|
||||
r = jiffies_to_msecs(end_jiffies - start_jiffies);
|
||||
|
||||
exit_do_move:
|
||||
etime = ktime_get();
|
||||
*time_ms = ktime_ms_delta(etime, stime);
|
||||
|
||||
return r;
|
||||
}
|
||||
|
||||
|
||||
static void amdgpu_benchmark_log_results(int n, unsigned size,
|
||||
unsigned int time,
|
||||
static void amdgpu_benchmark_log_results(struct amdgpu_device *adev,
|
||||
int n, unsigned size,
|
||||
s64 time_ms,
|
||||
unsigned sdomain, unsigned ddomain,
|
||||
char *kind)
|
||||
{
|
||||
unsigned int throughput = (n * (size >> 10)) / time;
|
||||
DRM_INFO("amdgpu: %s %u bo moves of %u kB from"
|
||||
" %d to %d in %u ms, throughput: %u Mb/s or %u MB/s\n",
|
||||
kind, n, size >> 10, sdomain, ddomain, time,
|
||||
s64 throughput = (n * (size >> 10));
|
||||
|
||||
throughput = div64_s64(throughput, time_ms);
|
||||
|
||||
dev_info(adev->dev, "amdgpu: %s %u bo moves of %u kB from"
|
||||
" %d to %d in %lld ms, throughput: %lld Mb/s or %lld MB/s\n",
|
||||
kind, n, size >> 10, sdomain, ddomain, time_ms,
|
||||
throughput * 8, throughput);
|
||||
}
|
||||
|
||||
static void amdgpu_benchmark_move(struct amdgpu_device *adev, unsigned size,
|
||||
unsigned sdomain, unsigned ddomain)
|
||||
static int amdgpu_benchmark_move(struct amdgpu_device *adev, unsigned size,
|
||||
unsigned sdomain, unsigned ddomain)
|
||||
{
|
||||
struct amdgpu_bo *dobj = NULL;
|
||||
struct amdgpu_bo *sobj = NULL;
|
||||
struct amdgpu_bo_param bp;
|
||||
uint64_t saddr, daddr;
|
||||
s64 time_ms;
|
||||
int r, n;
|
||||
int time;
|
||||
|
||||
memset(&bp, 0, sizeof(bp));
|
||||
bp.size = size;
|
||||
bp.byte_align = PAGE_SIZE;
|
||||
bp.domain = sdomain;
|
||||
bp.flags = 0;
|
||||
bp.type = ttm_bo_type_kernel;
|
||||
bp.resv = NULL;
|
||||
bp.bo_ptr_size = sizeof(struct amdgpu_bo);
|
||||
|
||||
n = AMDGPU_BENCHMARK_ITERATIONS;
|
||||
r = amdgpu_bo_create(adev, &bp, &sobj);
|
||||
if (r) {
|
||||
|
||||
r = amdgpu_bo_create_kernel(adev, size,
|
||||
PAGE_SIZE, sdomain,
|
||||
&sobj,
|
||||
&saddr,
|
||||
NULL);
|
||||
if (r)
|
||||
goto out_cleanup;
|
||||
}
|
||||
r = amdgpu_bo_reserve(sobj, false);
|
||||
if (unlikely(r != 0))
|
||||
r = amdgpu_bo_create_kernel(adev, size,
|
||||
PAGE_SIZE, ddomain,
|
||||
&dobj,
|
||||
&daddr,
|
||||
NULL);
|
||||
if (r)
|
||||
goto out_cleanup;
|
||||
r = amdgpu_bo_pin(sobj, sdomain);
|
||||
if (r) {
|
||||
amdgpu_bo_unreserve(sobj);
|
||||
goto out_cleanup;
|
||||
}
|
||||
r = amdgpu_ttm_alloc_gart(&sobj->tbo);
|
||||
amdgpu_bo_unreserve(sobj);
|
||||
if (r) {
|
||||
goto out_cleanup;
|
||||
}
|
||||
saddr = amdgpu_bo_gpu_offset(sobj);
|
||||
bp.domain = ddomain;
|
||||
r = amdgpu_bo_create(adev, &bp, &dobj);
|
||||
if (r) {
|
||||
goto out_cleanup;
|
||||
}
|
||||
r = amdgpu_bo_reserve(dobj, false);
|
||||
if (unlikely(r != 0))
|
||||
goto out_cleanup;
|
||||
r = amdgpu_bo_pin(dobj, ddomain);
|
||||
if (r) {
|
||||
amdgpu_bo_unreserve(sobj);
|
||||
goto out_cleanup;
|
||||
}
|
||||
r = amdgpu_ttm_alloc_gart(&dobj->tbo);
|
||||
amdgpu_bo_unreserve(dobj);
|
||||
if (r) {
|
||||
goto out_cleanup;
|
||||
}
|
||||
daddr = amdgpu_bo_gpu_offset(dobj);
|
||||
|
||||
if (adev->mman.buffer_funcs) {
|
||||
time = amdgpu_benchmark_do_move(adev, size, saddr, daddr, n);
|
||||
if (time < 0)
|
||||
r = amdgpu_benchmark_do_move(adev, size, saddr, daddr, n, &time_ms);
|
||||
if (r)
|
||||
goto out_cleanup;
|
||||
if (time > 0)
|
||||
amdgpu_benchmark_log_results(n, size, time,
|
||||
else
|
||||
amdgpu_benchmark_log_results(adev, n, size, time_ms,
|
||||
sdomain, ddomain, "dma");
|
||||
}
|
||||
|
||||
out_cleanup:
|
||||
/* Check error value now. The value can be overwritten when clean up.*/
|
||||
if (r) {
|
||||
DRM_ERROR("Error while benchmarking BO move.\n");
|
||||
}
|
||||
if (r < 0)
|
||||
dev_info(adev->dev, "Error while benchmarking BO move.\n");
|
||||
|
||||
if (sobj) {
|
||||
r = amdgpu_bo_reserve(sobj, true);
|
||||
if (likely(r == 0)) {
|
||||
amdgpu_bo_unpin(sobj);
|
||||
amdgpu_bo_unreserve(sobj);
|
||||
}
|
||||
amdgpu_bo_unref(&sobj);
|
||||
}
|
||||
if (dobj) {
|
||||
r = amdgpu_bo_reserve(dobj, true);
|
||||
if (likely(r == 0)) {
|
||||
amdgpu_bo_unpin(dobj);
|
||||
amdgpu_bo_unreserve(dobj);
|
||||
}
|
||||
amdgpu_bo_unref(&dobj);
|
||||
}
|
||||
if (sobj)
|
||||
amdgpu_bo_free_kernel(&sobj, &saddr, NULL);
|
||||
if (dobj)
|
||||
amdgpu_bo_free_kernel(&dobj, &daddr, NULL);
|
||||
return r;
|
||||
}
|
||||
|
||||
void amdgpu_benchmark(struct amdgpu_device *adev, int test_number)
|
||||
int amdgpu_benchmark(struct amdgpu_device *adev, int test_number)
|
||||
{
|
||||
int i;
|
||||
int i, r;
|
||||
static const int common_modes[AMDGPU_BENCHMARK_COMMON_MODES_N] = {
|
||||
640 * 480 * 4,
|
||||
720 * 480 * 4,
|
||||
|
@ -182,63 +142,119 @@ void amdgpu_benchmark(struct amdgpu_device *adev, int test_number)
|
|||
1920 * 1200 * 4
|
||||
};
|
||||
|
||||
mutex_lock(&adev->benchmark_mutex);
|
||||
switch (test_number) {
|
||||
case 1:
|
||||
dev_info(adev->dev,
|
||||
"benchmark test: %d (simple test, VRAM to GTT and GTT to VRAM)\n",
|
||||
test_number);
|
||||
/* simple test, VRAM to GTT and GTT to VRAM */
|
||||
amdgpu_benchmark_move(adev, 1024*1024, AMDGPU_GEM_DOMAIN_GTT,
|
||||
AMDGPU_GEM_DOMAIN_VRAM);
|
||||
amdgpu_benchmark_move(adev, 1024*1024, AMDGPU_GEM_DOMAIN_VRAM,
|
||||
AMDGPU_GEM_DOMAIN_GTT);
|
||||
r = amdgpu_benchmark_move(adev, 1024*1024, AMDGPU_GEM_DOMAIN_GTT,
|
||||
AMDGPU_GEM_DOMAIN_VRAM);
|
||||
if (r)
|
||||
goto done;
|
||||
r = amdgpu_benchmark_move(adev, 1024*1024, AMDGPU_GEM_DOMAIN_VRAM,
|
||||
AMDGPU_GEM_DOMAIN_GTT);
|
||||
if (r)
|
||||
goto done;
|
||||
break;
|
||||
case 2:
|
||||
dev_info(adev->dev,
|
||||
"benchmark test: %d (simple test, VRAM to VRAM)\n",
|
||||
test_number);
|
||||
/* simple test, VRAM to VRAM */
|
||||
amdgpu_benchmark_move(adev, 1024*1024, AMDGPU_GEM_DOMAIN_VRAM,
|
||||
AMDGPU_GEM_DOMAIN_VRAM);
|
||||
r = amdgpu_benchmark_move(adev, 1024*1024, AMDGPU_GEM_DOMAIN_VRAM,
|
||||
AMDGPU_GEM_DOMAIN_VRAM);
|
||||
if (r)
|
||||
goto done;
|
||||
break;
|
||||
case 3:
|
||||
dev_info(adev->dev,
|
||||
"benchmark test: %d (GTT to VRAM, buffer size sweep, powers of 2)\n",
|
||||
test_number);
|
||||
/* GTT to VRAM, buffer size sweep, powers of 2 */
|
||||
for (i = 1; i <= 16384; i <<= 1)
|
||||
amdgpu_benchmark_move(adev, i * AMDGPU_GPU_PAGE_SIZE,
|
||||
AMDGPU_GEM_DOMAIN_GTT,
|
||||
AMDGPU_GEM_DOMAIN_VRAM);
|
||||
for (i = 1; i <= 16384; i <<= 1) {
|
||||
r = amdgpu_benchmark_move(adev, i * AMDGPU_GPU_PAGE_SIZE,
|
||||
AMDGPU_GEM_DOMAIN_GTT,
|
||||
AMDGPU_GEM_DOMAIN_VRAM);
|
||||
if (r)
|
||||
goto done;
|
||||
}
|
||||
break;
|
||||
case 4:
|
||||
dev_info(adev->dev,
|
||||
"benchmark test: %d (VRAM to GTT, buffer size sweep, powers of 2)\n",
|
||||
test_number);
|
||||
/* VRAM to GTT, buffer size sweep, powers of 2 */
|
||||
for (i = 1; i <= 16384; i <<= 1)
|
||||
amdgpu_benchmark_move(adev, i * AMDGPU_GPU_PAGE_SIZE,
|
||||
AMDGPU_GEM_DOMAIN_VRAM,
|
||||
AMDGPU_GEM_DOMAIN_GTT);
|
||||
for (i = 1; i <= 16384; i <<= 1) {
|
||||
r = amdgpu_benchmark_move(adev, i * AMDGPU_GPU_PAGE_SIZE,
|
||||
AMDGPU_GEM_DOMAIN_VRAM,
|
||||
AMDGPU_GEM_DOMAIN_GTT);
|
||||
if (r)
|
||||
goto done;
|
||||
}
|
||||
break;
|
||||
case 5:
|
||||
dev_info(adev->dev,
|
||||
"benchmark test: %d (VRAM to VRAM, buffer size sweep, powers of 2)\n",
|
||||
test_number);
|
||||
/* VRAM to VRAM, buffer size sweep, powers of 2 */
|
||||
for (i = 1; i <= 16384; i <<= 1)
|
||||
amdgpu_benchmark_move(adev, i * AMDGPU_GPU_PAGE_SIZE,
|
||||
AMDGPU_GEM_DOMAIN_VRAM,
|
||||
AMDGPU_GEM_DOMAIN_VRAM);
|
||||
for (i = 1; i <= 16384; i <<= 1) {
|
||||
r = amdgpu_benchmark_move(adev, i * AMDGPU_GPU_PAGE_SIZE,
|
||||
AMDGPU_GEM_DOMAIN_VRAM,
|
||||
AMDGPU_GEM_DOMAIN_VRAM);
|
||||
if (r)
|
||||
goto done;
|
||||
}
|
||||
break;
|
||||
case 6:
|
||||
dev_info(adev->dev,
|
||||
"benchmark test: %d (GTT to VRAM, buffer size sweep, common modes)\n",
|
||||
test_number);
|
||||
/* GTT to VRAM, buffer size sweep, common modes */
|
||||
for (i = 0; i < AMDGPU_BENCHMARK_COMMON_MODES_N; i++)
|
||||
amdgpu_benchmark_move(adev, common_modes[i],
|
||||
AMDGPU_GEM_DOMAIN_GTT,
|
||||
AMDGPU_GEM_DOMAIN_VRAM);
|
||||
for (i = 0; i < AMDGPU_BENCHMARK_COMMON_MODES_N; i++) {
|
||||
r = amdgpu_benchmark_move(adev, common_modes[i],
|
||||
AMDGPU_GEM_DOMAIN_GTT,
|
||||
AMDGPU_GEM_DOMAIN_VRAM);
|
||||
if (r)
|
||||
goto done;
|
||||
}
|
||||
break;
|
||||
case 7:
|
||||
dev_info(adev->dev,
|
||||
"benchmark test: %d (VRAM to GTT, buffer size sweep, common modes)\n",
|
||||
test_number);
|
||||
/* VRAM to GTT, buffer size sweep, common modes */
|
||||
for (i = 0; i < AMDGPU_BENCHMARK_COMMON_MODES_N; i++)
|
||||
amdgpu_benchmark_move(adev, common_modes[i],
|
||||
AMDGPU_GEM_DOMAIN_VRAM,
|
||||
AMDGPU_GEM_DOMAIN_GTT);
|
||||
for (i = 0; i < AMDGPU_BENCHMARK_COMMON_MODES_N; i++) {
|
||||
r = amdgpu_benchmark_move(adev, common_modes[i],
|
||||
AMDGPU_GEM_DOMAIN_VRAM,
|
||||
AMDGPU_GEM_DOMAIN_GTT);
|
||||
if (r)
|
||||
goto done;
|
||||
}
|
||||
break;
|
||||
case 8:
|
||||
dev_info(adev->dev,
|
||||
"benchmark test: %d (VRAM to VRAM, buffer size sweep, common modes)\n",
|
||||
test_number);
|
||||
/* VRAM to VRAM, buffer size sweep, common modes */
|
||||
for (i = 0; i < AMDGPU_BENCHMARK_COMMON_MODES_N; i++)
|
||||
amdgpu_benchmark_move(adev, common_modes[i],
|
||||
for (i = 0; i < AMDGPU_BENCHMARK_COMMON_MODES_N; i++) {
|
||||
r = amdgpu_benchmark_move(adev, common_modes[i],
|
||||
AMDGPU_GEM_DOMAIN_VRAM,
|
||||
AMDGPU_GEM_DOMAIN_VRAM);
|
||||
if (r)
|
||||
goto done;
|
||||
}
|
||||
break;
|
||||
|
||||
default:
|
||||
DRM_ERROR("Unknown benchmark\n");
|
||||
dev_info(adev->dev, "Unknown benchmark %d\n", test_number);
|
||||
r = -EINVAL;
|
||||
break;
|
||||
}
|
||||
|
||||
done:
|
||||
mutex_unlock(&adev->benchmark_mutex);
|
||||
|
||||
return r;
|
||||
}
|
||||
|
|
|
@ -464,3 +464,41 @@ success:
|
|||
adev->is_atom_fw = (adev->asic_type >= CHIP_VEGA10) ? true : false;
|
||||
return true;
|
||||
}
|
||||
|
||||
/* helper function for soc15 and onwards to read bios from rom */
|
||||
bool amdgpu_soc15_read_bios_from_rom(struct amdgpu_device *adev,
|
||||
u8 *bios, u32 length_bytes)
|
||||
{
|
||||
u32 *dw_ptr;
|
||||
u32 i, length_dw;
|
||||
u32 rom_index_offset;
|
||||
u32 rom_data_offset;
|
||||
|
||||
if (bios == NULL)
|
||||
return false;
|
||||
if (length_bytes == 0)
|
||||
return false;
|
||||
/* APU vbios image is part of sbios image */
|
||||
if (adev->flags & AMD_IS_APU)
|
||||
return false;
|
||||
if (!adev->smuio.funcs ||
|
||||
!adev->smuio.funcs->get_rom_index_offset ||
|
||||
!adev->smuio.funcs->get_rom_data_offset)
|
||||
return false;
|
||||
|
||||
dw_ptr = (u32 *)bios;
|
||||
length_dw = ALIGN(length_bytes, 4) / 4;
|
||||
|
||||
rom_index_offset =
|
||||
adev->smuio.funcs->get_rom_index_offset(adev);
|
||||
rom_data_offset =
|
||||
adev->smuio.funcs->get_rom_data_offset(adev);
|
||||
|
||||
/* set rom index to 0 */
|
||||
WREG32(rom_index_offset, 0);
|
||||
/* read out the rom data */
|
||||
for (i = 0; i < length_dw; i++)
|
||||
dw_ptr[i] = RREG32(rom_data_offset);
|
||||
|
||||
return true;
|
||||
}
|
||||
|
|
|
@ -26,7 +26,7 @@
|
|||
|
||||
#include <drm/drm_edid.h>
|
||||
#include <drm/drm_fb_helper.h>
|
||||
#include <drm/drm_dp_helper.h>
|
||||
#include <drm/dp/drm_dp_helper.h>
|
||||
#include <drm/drm_probe_helper.h>
|
||||
#include <drm/amdgpu_drm.h>
|
||||
#include "amdgpu.h"
|
||||
|
@@ -175,7 +175,7 @@ int amdgpu_connector_get_monitor_bpc(struct drm_connector *connector)

/* Check if bpc is within clock limit. Try to degrade gracefully otherwise */
if ((bpc == 12) && (mode_clock * 3/2 > max_tmds_clock)) {
if ((connector->display_info.edid_hdmi_dc_modes & DRM_EDID_HDMI_DC_30) &&
if ((connector->display_info.edid_hdmi_rgb444_dc_modes & DRM_EDID_HDMI_DC_30) &&
(mode_clock * 5/4 <= max_tmds_clock))
bpc = 10;
else
@ -626,7 +626,7 @@ amdgpu_connector_fixup_lcd_native_mode(struct drm_encoder *encoder,
|
|||
if (mode->type & DRM_MODE_TYPE_PREFERRED) {
|
||||
if (mode->hdisplay != native_mode->hdisplay ||
|
||||
mode->vdisplay != native_mode->vdisplay)
|
||||
memcpy(native_mode, mode, sizeof(*mode));
|
||||
drm_mode_copy(native_mode, mode);
|
||||
}
|
||||
}
|
||||
|
||||
|
@@ -635,7 +635,7 @@ amdgpu_connector_fixup_lcd_native_mode(struct drm_encoder *encoder,
list_for_each_entry_safe(mode, t, &connector->probed_modes, head) {
if (mode->hdisplay == native_mode->hdisplay &&
mode->vdisplay == native_mode->vdisplay) {
*native_mode = *mode;
drm_mode_copy(native_mode, mode);
drm_mode_set_crtcinfo(native_mode, CRTC_INTERLACE_HALVE_V);
DRM_DEBUG_KMS("Determined LVDS native mode details from EDID\n");
break;

@ -32,6 +32,7 @@
|
|||
|
||||
#include <drm/amdgpu_drm.h>
|
||||
#include <drm/drm_syncobj.h>
|
||||
#include "amdgpu_cs.h"
|
||||
#include "amdgpu.h"
|
||||
#include "amdgpu_trace.h"
|
||||
#include "amdgpu_gmc.h"
|
||||
|
@ -127,8 +128,6 @@ static int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, union drm_amdgpu_cs
|
|||
goto free_chunk;
|
||||
}
|
||||
|
||||
mutex_lock(&p->ctx->lock);
|
||||
|
||||
/* skip guilty context job */
|
||||
if (atomic_read(&p->ctx->guilty) == 1) {
|
||||
ret = -ECANCELED;
|
||||
|
@ -314,7 +313,7 @@ static void amdgpu_cs_get_threshold_for_moves(struct amdgpu_device *adev,
|
|||
}
|
||||
|
||||
total_vram = adev->gmc.real_vram_size - atomic64_read(&adev->vram_pin_size);
|
||||
used_vram = amdgpu_vram_mgr_usage(&adev->mman.vram_mgr);
|
||||
used_vram = ttm_resource_manager_usage(&adev->mman.vram_mgr.manager);
|
||||
free_vram = used_vram >= total_vram ? 0 : total_vram - used_vram;
|
||||
|
||||
spin_lock(&adev->mm_stats.lock);
|
||||
|
@@ -341,7 +340,7 @@ static void amdgpu_cs_get_threshold_for_moves(struct amdgpu_device *adev,
if (free_vram >= 128 * 1024 * 1024 || free_vram >= total_vram / 8) {
s64 min_us;

/* Be more aggresive on dGPUs. Try to fill a portion of free
/* Be more aggressive on dGPUs. Try to fill a portion of free
* VRAM now.
*/
if (!(adev->flags & AMD_IS_APU))
@ -585,6 +584,16 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
|
|||
}
|
||||
}
|
||||
|
||||
/* Move fence waiting after getting reservation lock of
|
||||
* PD root. Then there is no need on a ctx mutex lock.
|
||||
*/
|
||||
r = amdgpu_ctx_wait_prev_fence(p->ctx, p->entity);
|
||||
if (unlikely(r != 0)) {
|
||||
if (r != -ERESTARTSYS)
|
||||
DRM_ERROR("amdgpu_ctx_wait_prev_fence failed.\n");
|
||||
goto error_validate;
|
||||
}
|
||||
|
||||
amdgpu_cs_get_threshold_for_moves(p->adev, &p->bytes_moved_threshold,
|
||||
&p->bytes_moved_vis_threshold);
|
||||
p->bytes_moved = 0;
|
||||
|
@ -700,7 +709,6 @@ static void amdgpu_cs_parser_fini(struct amdgpu_cs_parser *parser, int error,
|
|||
dma_fence_put(parser->fence);
|
||||
|
||||
if (parser->ctx) {
|
||||
mutex_unlock(&parser->ctx->lock);
|
||||
amdgpu_ctx_put(parser->ctx);
|
||||
}
|
||||
if (parser->bo_list)
|
||||
|
@ -775,12 +783,12 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
|
|||
memcpy(ib->ptr, kptr, chunk_ib->ib_bytes);
|
||||
amdgpu_bo_kunmap(aobj);
|
||||
|
||||
r = amdgpu_ring_parse_cs(ring, p, j);
|
||||
r = amdgpu_ring_parse_cs(ring, p, p->job, ib);
|
||||
if (r)
|
||||
return r;
|
||||
} else {
|
||||
ib->ptr = (uint32_t *)kptr;
|
||||
r = amdgpu_ring_patch_cs_in_place(ring, p, j);
|
||||
r = amdgpu_ring_patch_cs_in_place(ring, p, p->job, ib);
|
||||
amdgpu_bo_kunmap(aobj);
|
||||
if (r)
|
||||
return r;
|
||||
|
@ -944,7 +952,7 @@ static int amdgpu_cs_ib_fill(struct amdgpu_device *adev,
|
|||
if (parser->job->uf_addr && ring->funcs->no_user_fence)
|
||||
return -EINVAL;
|
||||
|
||||
return amdgpu_ctx_wait_prev_fence(parser->ctx, parser->entity);
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int amdgpu_cs_process_fence_dep(struct amdgpu_cs_parser *p,
|
||||
|
@@ -1272,16 +1280,13 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
continue;

/*
* Work around dma_resv shortcommings by wrapping up the
* Work around dma_resv shortcomings by wrapping up the
* submission in a dma_fence_chain and add it as exclusive
* fence, but first add the submission as shared fence to make
* sure that shared fences never signal before the exclusive
* one.
* fence.
*/
dma_fence_chain_init(chain, dma_resv_excl_fence(resv),
dma_fence_get(p->fence), 1);

dma_resv_add_shared_fence(resv, p->fence);
rcu_assign_pointer(resv->fence_excl, &chain->base);
e->chain = NULL;
}
@ -1363,7 +1368,6 @@ int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
|
|||
goto out;
|
||||
|
||||
r = amdgpu_cs_submit(&parser, cs);
|
||||
|
||||
out:
|
||||
amdgpu_cs_parser_fini(&parser, r, reserved_buffers);
|
||||
|
||||
|
@ -1509,6 +1513,7 @@ int amdgpu_cs_fence_to_handle_ioctl(struct drm_device *dev, void *data,
|
|||
return 0;
|
||||
|
||||
default:
|
||||
dma_fence_put(fence);
|
||||
return -EINVAL;
|
||||
}
|
||||
}
|
||||
|
|
|
@ -0,0 +1,80 @@
|
|||
/*
|
||||
* Copyright 2022 Advanced Micro Devices, Inc.
|
||||
*
|
||||
* Permission is hereby granted, free of charge, to any person obtaining a
|
||||
* copy of this software and associated documentation files (the "Software"),
|
||||
* to deal in the Software without restriction, including without limitation
|
||||
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
|
||||
* and/or sell copies of the Software, and to permit persons to whom the
|
||||
* Software is furnished to do so, subject to the following conditions:
|
||||
*
|
||||
* The above copyright notice and this permission notice shall be included in
|
||||
* all copies or substantial portions of the Software.
|
||||
*
|
||||
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
|
||||
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
|
||||
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
|
||||
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
|
||||
* OTHER DEALINGS IN THE SOFTWARE.
|
||||
*
|
||||
*/
|
||||
#ifndef __AMDGPU_CS_H__
|
||||
#define __AMDGPU_CS_H__
|
||||
|
||||
#include "amdgpu_job.h"
|
||||
#include "amdgpu_bo_list.h"
|
||||
#include "amdgpu_ring.h"
|
||||
|
||||
struct amdgpu_bo_va_mapping;
|
||||
|
||||
struct amdgpu_cs_chunk {
|
||||
uint32_t chunk_id;
|
||||
uint32_t length_dw;
|
||||
void *kdata;
|
||||
};
|
||||
|
||||
struct amdgpu_cs_post_dep {
|
||||
struct drm_syncobj *syncobj;
|
||||
struct dma_fence_chain *chain;
|
||||
u64 point;
|
||||
};
|
||||
|
||||
struct amdgpu_cs_parser {
|
||||
struct amdgpu_device *adev;
|
||||
struct drm_file *filp;
|
||||
struct amdgpu_ctx *ctx;
|
||||
|
||||
/* chunks */
|
||||
unsigned nchunks;
|
||||
struct amdgpu_cs_chunk *chunks;
|
||||
|
||||
/* scheduler job object */
|
||||
struct amdgpu_job *job;
|
||||
struct drm_sched_entity *entity;
|
||||
|
||||
/* buffer objects */
|
||||
struct ww_acquire_ctx ticket;
|
||||
struct amdgpu_bo_list *bo_list;
|
||||
struct amdgpu_mn *mn;
|
||||
struct amdgpu_bo_list_entry vm_pd;
|
||||
struct list_head validated;
|
||||
struct dma_fence *fence;
|
||||
uint64_t bytes_moved_threshold;
|
||||
uint64_t bytes_moved_vis_threshold;
|
||||
uint64_t bytes_moved;
|
||||
uint64_t bytes_moved_vis;
|
||||
|
||||
/* user fence */
|
||||
struct amdgpu_bo_list_entry uf_entry;
|
||||
|
||||
unsigned num_post_deps;
|
||||
struct amdgpu_cs_post_dep *post_deps;
|
||||
};
|
||||
|
||||
int amdgpu_cs_find_mapping(struct amdgpu_cs_parser *parser,
|
||||
uint64_t addr, struct amdgpu_bo **bo,
|
||||
struct amdgpu_bo_va_mapping **mapping);
|
||||
|
||||
#endif
|
|
@ -98,7 +98,7 @@ int amdgpu_map_static_csa(struct amdgpu_device *adev, struct amdgpu_vm *vm,
|
|||
|
||||
if (r) {
|
||||
DRM_ERROR("failed to do bo_map on static CSA, err=%d\n", r);
|
||||
amdgpu_vm_bo_rmv(adev, *bo_va);
|
||||
amdgpu_vm_bo_del(adev, *bo_va);
|
||||
ttm_eu_backoff_reservation(&ticket, &list);
|
||||
return r;
|
||||
}
|
||||
|
|
|
@ -23,6 +23,7 @@
|
|||
*/
|
||||
|
||||
#include <drm/drm_auth.h>
|
||||
#include <drm/drm_drv.h>
|
||||
#include "amdgpu.h"
|
||||
#include "amdgpu_sched.h"
|
||||
#include "amdgpu_ras.h"
|
||||
|
@ -204,9 +205,15 @@ static int amdgpu_ctx_init_entity(struct amdgpu_ctx *ctx, u32 hw_ip,
|
|||
if (r)
|
||||
goto error_free_entity;
|
||||
|
||||
ctx->entities[hw_ip][ring] = entity;
|
||||
/* It's not an error if we fail to install the new entity */
|
||||
if (cmpxchg(&ctx->entities[hw_ip][ring], NULL, entity))
|
||||
goto cleanup_entity;
|
||||
|
||||
return 0;
|
||||
|
||||
cleanup_entity:
|
||||
drm_sched_entity_fini(&entity->entity);
|
||||
|
||||
error_free_entity:
|
||||
kfree(entity);
|
||||
|
||||
|
@ -230,13 +237,13 @@ static int amdgpu_ctx_init(struct amdgpu_device *adev,
|
|||
|
||||
kref_init(&ctx->refcount);
|
||||
spin_lock_init(&ctx->ring_lock);
|
||||
mutex_init(&ctx->lock);
|
||||
|
||||
ctx->reset_counter = atomic_read(&adev->gpu_reset_counter);
|
||||
ctx->reset_counter_query = ctx->reset_counter;
|
||||
ctx->vram_lost_counter = atomic_read(&adev->vram_lost_counter);
|
||||
ctx->init_priority = priority;
|
||||
ctx->override_priority = AMDGPU_CTX_PRIORITY_UNSET;
|
||||
ctx->stable_pstate = AMDGPU_CTX_STABLE_PSTATE_NONE;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
@ -255,11 +262,85 @@ static void amdgpu_ctx_fini_entity(struct amdgpu_ctx_entity *entity)
|
|||
kfree(entity);
|
||||
}
|
||||
|
||||
static int amdgpu_ctx_get_stable_pstate(struct amdgpu_ctx *ctx,
|
||||
u32 *stable_pstate)
|
||||
{
|
||||
struct amdgpu_device *adev = ctx->adev;
|
||||
enum amd_dpm_forced_level current_level;
|
||||
|
||||
current_level = amdgpu_dpm_get_performance_level(adev);
|
||||
|
||||
switch (current_level) {
|
||||
case AMD_DPM_FORCED_LEVEL_PROFILE_STANDARD:
|
||||
*stable_pstate = AMDGPU_CTX_STABLE_PSTATE_STANDARD;
|
||||
break;
|
||||
case AMD_DPM_FORCED_LEVEL_PROFILE_MIN_SCLK:
|
||||
*stable_pstate = AMDGPU_CTX_STABLE_PSTATE_MIN_SCLK;
|
||||
break;
|
||||
case AMD_DPM_FORCED_LEVEL_PROFILE_MIN_MCLK:
|
||||
*stable_pstate = AMDGPU_CTX_STABLE_PSTATE_MIN_MCLK;
|
||||
break;
|
||||
case AMD_DPM_FORCED_LEVEL_PROFILE_PEAK:
|
||||
*stable_pstate = AMDGPU_CTX_STABLE_PSTATE_PEAK;
|
||||
break;
|
||||
default:
|
||||
*stable_pstate = AMDGPU_CTX_STABLE_PSTATE_NONE;
|
||||
break;
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int amdgpu_ctx_set_stable_pstate(struct amdgpu_ctx *ctx,
|
||||
u32 stable_pstate)
|
||||
{
|
||||
struct amdgpu_device *adev = ctx->adev;
|
||||
enum amd_dpm_forced_level level;
|
||||
int r;
|
||||
|
||||
mutex_lock(&adev->pm.stable_pstate_ctx_lock);
|
||||
if (adev->pm.stable_pstate_ctx && adev->pm.stable_pstate_ctx != ctx) {
|
||||
r = -EBUSY;
|
||||
goto done;
|
||||
}
|
||||
|
||||
switch (stable_pstate) {
|
||||
case AMDGPU_CTX_STABLE_PSTATE_NONE:
|
||||
level = AMD_DPM_FORCED_LEVEL_AUTO;
|
||||
break;
|
||||
case AMDGPU_CTX_STABLE_PSTATE_STANDARD:
|
||||
level = AMD_DPM_FORCED_LEVEL_PROFILE_STANDARD;
|
||||
break;
|
||||
case AMDGPU_CTX_STABLE_PSTATE_MIN_SCLK:
|
||||
level = AMD_DPM_FORCED_LEVEL_PROFILE_MIN_SCLK;
|
||||
break;
|
||||
case AMDGPU_CTX_STABLE_PSTATE_MIN_MCLK:
|
||||
level = AMD_DPM_FORCED_LEVEL_PROFILE_MIN_MCLK;
|
||||
break;
|
||||
case AMDGPU_CTX_STABLE_PSTATE_PEAK:
|
||||
level = AMD_DPM_FORCED_LEVEL_PROFILE_PEAK;
|
||||
break;
|
||||
default:
|
||||
r = -EINVAL;
|
||||
goto done;
|
||||
}
|
||||
|
||||
r = amdgpu_dpm_force_performance_level(adev, level);
|
||||
|
||||
if (level == AMD_DPM_FORCED_LEVEL_AUTO)
|
||||
adev->pm.stable_pstate_ctx = NULL;
|
||||
else
|
||||
adev->pm.stable_pstate_ctx = ctx;
|
||||
done:
|
||||
mutex_unlock(&adev->pm.stable_pstate_ctx_lock);
|
||||
|
||||
return r;
|
||||
}
|
||||
|
||||
static void amdgpu_ctx_fini(struct kref *ref)
|
||||
{
|
||||
struct amdgpu_ctx *ctx = container_of(ref, struct amdgpu_ctx, refcount);
|
||||
struct amdgpu_device *adev = ctx->adev;
|
||||
unsigned i, j;
|
||||
unsigned i, j, idx;
|
||||
|
||||
if (!adev)
|
||||
return;
|
||||
|
@ -271,7 +352,11 @@ static void amdgpu_ctx_fini(struct kref *ref)
|
|||
}
|
||||
}
|
||||
|
||||
mutex_destroy(&ctx->lock);
|
||||
if (drm_dev_enter(&adev->ddev, &idx)) {
|
||||
amdgpu_ctx_set_stable_pstate(ctx, AMDGPU_CTX_STABLE_PSTATE_NONE);
|
||||
drm_dev_exit(idx);
|
||||
}
|
||||
|
||||
kfree(ctx);
|
||||
}
|
||||
|
||||
|
@ -467,11 +552,41 @@ static int amdgpu_ctx_query2(struct amdgpu_device *adev,
|
|||
return 0;
|
||||
}
|
||||
|
||||
|
||||
|
||||
static int amdgpu_ctx_stable_pstate(struct amdgpu_device *adev,
|
||||
struct amdgpu_fpriv *fpriv, uint32_t id,
|
||||
bool set, u32 *stable_pstate)
|
||||
{
|
||||
struct amdgpu_ctx *ctx;
|
||||
struct amdgpu_ctx_mgr *mgr;
|
||||
int r;
|
||||
|
||||
if (!fpriv)
|
||||
return -EINVAL;
|
||||
|
||||
mgr = &fpriv->ctx_mgr;
|
||||
mutex_lock(&mgr->lock);
|
||||
ctx = idr_find(&mgr->ctx_handles, id);
|
||||
if (!ctx) {
|
||||
mutex_unlock(&mgr->lock);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
if (set)
|
||||
r = amdgpu_ctx_set_stable_pstate(ctx, *stable_pstate);
|
||||
else
|
||||
r = amdgpu_ctx_get_stable_pstate(ctx, stable_pstate);
|
||||
|
||||
mutex_unlock(&mgr->lock);
|
||||
return r;
|
||||
}
|
||||
|
||||
int amdgpu_ctx_ioctl(struct drm_device *dev, void *data,
|
||||
struct drm_file *filp)
|
||||
{
|
||||
int r;
|
||||
uint32_t id;
|
||||
uint32_t id, stable_pstate;
|
||||
int32_t priority;
|
||||
|
||||
union drm_amdgpu_ctx *args = data;
|
||||
|
@ -500,6 +615,21 @@ int amdgpu_ctx_ioctl(struct drm_device *dev, void *data,
|
|||
case AMDGPU_CTX_OP_QUERY_STATE2:
|
||||
r = amdgpu_ctx_query2(adev, fpriv, id, &args->out);
|
||||
break;
|
||||
case AMDGPU_CTX_OP_GET_STABLE_PSTATE:
|
||||
if (args->in.flags)
|
||||
return -EINVAL;
|
||||
r = amdgpu_ctx_stable_pstate(adev, fpriv, id, false, &stable_pstate);
|
||||
if (!r)
|
||||
args->out.pstate.flags = stable_pstate;
|
||||
break;
|
||||
case AMDGPU_CTX_OP_SET_STABLE_PSTATE:
|
||||
if (args->in.flags & ~AMDGPU_CTX_STABLE_PSTATE_FLAGS_MASK)
|
||||
return -EINVAL;
|
||||
stable_pstate = args->in.flags & AMDGPU_CTX_STABLE_PSTATE_FLAGS_MASK;
|
||||
if (stable_pstate > AMDGPU_CTX_STABLE_PSTATE_PEAK)
|
||||
return -EINVAL;
|
||||
r = amdgpu_ctx_stable_pstate(adev, fpriv, id, true, &stable_pstate);
|
||||
break;
|
||||
default:
|
||||
return -EINVAL;
|
||||
}
|
||||
|
|
|
@ -49,10 +49,10 @@ struct amdgpu_ctx {
|
|||
bool preamble_presented;
|
||||
int32_t init_priority;
|
||||
int32_t override_priority;
|
||||
struct mutex lock;
|
||||
atomic_t guilty;
|
||||
unsigned long ras_counter_ce;
|
||||
unsigned long ras_counter_ue;
|
||||
uint32_t stable_pstate;
|
||||
};
|
||||
|
||||
struct amdgpu_ctx_mgr {
|
||||
|
|
|
@ -37,6 +37,8 @@
|
|||
#include "amdgpu_fw_attestation.h"
|
||||
#include "amdgpu_umr.h"
|
||||
|
||||
#include "amdgpu_reset.h"
|
||||
|
||||
#if defined(CONFIG_DEBUG_FS)
|
||||
|
||||
/**
|
||||
|
@ -728,7 +730,7 @@ static ssize_t amdgpu_debugfs_gca_config_read(struct file *f, char __user *buf,
|
|||
return -ENOMEM;
|
||||
|
||||
/* version, increment each time something is added */
|
||||
config[no_regs++] = 3;
|
||||
config[no_regs++] = 4;
|
||||
config[no_regs++] = adev->gfx.config.max_shader_engines;
|
||||
config[no_regs++] = adev->gfx.config.max_tile_pipes;
|
||||
config[no_regs++] = adev->gfx.config.max_cu_per_sh;
|
||||
|
@ -768,6 +770,9 @@ static ssize_t amdgpu_debugfs_gca_config_read(struct file *f, char __user *buf,
|
|||
config[no_regs++] = adev->pdev->subsystem_device;
|
||||
config[no_regs++] = adev->pdev->subsystem_vendor;
|
||||
|
||||
/* rev==4 APU flag */
|
||||
config[no_regs++] = adev->flags & AMD_IS_APU ? 1 : 0;
|
||||
|
||||
while (size && (*pos < no_regs * 4)) {
|
||||
uint32_t value;
|
||||
|
||||
|
@ -1120,8 +1125,10 @@ static ssize_t amdgpu_debugfs_gfxoff_read(struct file *f, char __user *buf,
|
|||
return -EINVAL;
|
||||
|
||||
r = pm_runtime_get_sync(adev_to_drm(adev)->dev);
|
||||
if (r < 0)
|
||||
if (r < 0) {
|
||||
pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
|
||||
return r;
|
||||
}
|
||||
|
||||
while (size) {
|
||||
uint32_t value;
|
||||
|
@ -1279,7 +1286,7 @@ static int amdgpu_debugfs_test_ib_show(struct seq_file *m, void *unused)
|
|||
}
|
||||
|
||||
/* Avoid accidently unparking the sched thread during GPU reset */
|
||||
r = down_write_killable(&adev->reset_sem);
|
||||
r = down_write_killable(&adev->reset_domain->sem);
|
||||
if (r)
|
||||
return r;
|
||||
|
||||
|
@ -1308,7 +1315,7 @@ static int amdgpu_debugfs_test_ib_show(struct seq_file *m, void *unused)
|
|||
kthread_unpark(ring->sched.thread);
|
||||
}
|
||||
|
||||
up_write(&adev->reset_sem);
|
||||
up_write(&adev->reset_domain->sem);
|
||||
|
||||
pm_runtime_mark_last_busy(dev->dev);
|
||||
pm_runtime_put_autosuspend(dev->dev);
|
||||
|
@ -1357,6 +1364,25 @@ static int amdgpu_debugfs_evict_gtt(void *data, u64 *val)
|
|||
return 0;
|
||||
}
|
||||
|
||||
static int amdgpu_debugfs_benchmark(void *data, u64 val)
|
||||
{
|
||||
struct amdgpu_device *adev = (struct amdgpu_device *)data;
|
||||
struct drm_device *dev = adev_to_drm(adev);
|
||||
int r;
|
||||
|
||||
r = pm_runtime_get_sync(dev->dev);
|
||||
if (r < 0) {
|
||||
pm_runtime_put_autosuspend(dev->dev);
|
||||
return r;
|
||||
}
|
||||
|
||||
r = amdgpu_benchmark(adev, val);
|
||||
|
||||
pm_runtime_mark_last_busy(dev->dev);
|
||||
pm_runtime_put_autosuspend(dev->dev);
|
||||
|
||||
return r;
|
||||
}
|
||||
|
||||
static int amdgpu_debugfs_vm_info_show(struct seq_file *m, void *unused)
|
||||
{
|
||||
|
@ -1393,6 +1419,8 @@ DEFINE_DEBUGFS_ATTRIBUTE(amdgpu_evict_vram_fops, amdgpu_debugfs_evict_vram,
|
|||
NULL, "%lld\n");
|
||||
DEFINE_DEBUGFS_ATTRIBUTE(amdgpu_evict_gtt_fops, amdgpu_debugfs_evict_gtt,
|
||||
NULL, "%lld\n");
|
||||
DEFINE_DEBUGFS_ATTRIBUTE(amdgpu_benchmark_fops, NULL, amdgpu_debugfs_benchmark,
|
||||
"%lld\n");
|
||||
|
||||
static void amdgpu_ib_preempt_fences_swap(struct amdgpu_ring *ring,
|
||||
struct dma_fence **fences)
|
||||
|
@ -1517,7 +1545,7 @@ static int amdgpu_debugfs_ib_preempt(void *data, u64 val)
|
|||
return -ENOMEM;
|
||||
|
||||
/* Avoid accidently unparking the sched thread during GPU reset */
|
||||
r = down_read_killable(&adev->reset_sem);
|
||||
r = down_read_killable(&adev->reset_domain->sem);
|
||||
if (r)
|
||||
goto pro_end;
|
||||
|
||||
|
@ -1560,7 +1588,7 @@ failure:
|
|||
/* restart the scheduler */
|
||||
kthread_unpark(ring->sched.thread);
|
||||
|
||||
up_read(&adev->reset_sem);
|
||||
up_read(&adev->reset_domain->sem);
|
||||
|
||||
ttm_bo_unlock_delayed_workqueue(&adev->mman.bdev, resched);
|
||||
|
||||
|
@ -1585,22 +1613,25 @@ static int amdgpu_debugfs_sclk_set(void *data, u64 val)
|
|||
return ret;
|
||||
}
|
||||
|
||||
if (is_support_sw_smu(adev)) {
|
||||
ret = smu_get_dpm_freq_range(&adev->smu, SMU_SCLK, &min_freq, &max_freq);
|
||||
if (ret || val > max_freq || val < min_freq)
|
||||
return -EINVAL;
|
||||
ret = smu_set_soft_freq_range(&adev->smu, SMU_SCLK, (uint32_t)val, (uint32_t)val);
|
||||
} else {
|
||||
return 0;
|
||||
ret = amdgpu_dpm_get_dpm_freq_range(adev, PP_SCLK, &min_freq, &max_freq);
|
||||
if (ret == -EOPNOTSUPP) {
|
||||
ret = 0;
|
||||
goto out;
|
||||
}
|
||||
if (ret || val > max_freq || val < min_freq) {
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
|
||||
ret = amdgpu_dpm_set_soft_freq_range(adev, PP_SCLK, (uint32_t)val, (uint32_t)val);
|
||||
if (ret)
|
||||
ret = -EINVAL;
|
||||
|
||||
out:
|
||||
pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
|
||||
pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
|
||||
|
||||
if (ret)
|
||||
return -EINVAL;
|
||||
|
||||
return 0;
|
||||
return ret;
|
||||
}
|
||||
|
||||
DEFINE_DEBUGFS_ATTRIBUTE(fops_ib_preempt, NULL,
|
||||
|
@ -1609,6 +1640,91 @@ DEFINE_DEBUGFS_ATTRIBUTE(fops_ib_preempt, NULL,
|
|||
DEFINE_DEBUGFS_ATTRIBUTE(fops_sclk_set, NULL,
|
||||
amdgpu_debugfs_sclk_set, "%llu\n");
|
||||
|
||||
static ssize_t amdgpu_reset_dump_register_list_read(struct file *f,
|
||||
char __user *buf, size_t size, loff_t *pos)
|
||||
{
|
||||
struct amdgpu_device *adev = (struct amdgpu_device *)file_inode(f)->i_private;
|
||||
char reg_offset[12];
|
||||
int i, ret, len = 0;
|
||||
|
||||
if (*pos)
|
||||
return 0;
|
||||
|
||||
memset(reg_offset, 0, 12);
|
||||
ret = down_read_killable(&adev->reset_domain->sem);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
for (i = 0; i < adev->num_regs; i++) {
|
||||
sprintf(reg_offset, "0x%x\n", adev->reset_dump_reg_list[i]);
|
||||
up_read(&adev->reset_domain->sem);
|
||||
if (copy_to_user(buf + len, reg_offset, strlen(reg_offset)))
|
||||
return -EFAULT;
|
||||
|
||||
len += strlen(reg_offset);
|
||||
ret = down_read_killable(&adev->reset_domain->sem);
|
||||
if (ret)
|
||||
return ret;
|
||||
}
|
||||
|
||||
up_read(&adev->reset_domain->sem);
|
||||
*pos += len;
|
||||
|
||||
return len;
|
||||
}
|
||||
|
||||
static ssize_t amdgpu_reset_dump_register_list_write(struct file *f,
|
||||
const char __user *buf, size_t size, loff_t *pos)
|
||||
{
|
||||
struct amdgpu_device *adev = (struct amdgpu_device *)file_inode(f)->i_private;
|
||||
char reg_offset[11];
|
||||
uint32_t *new, *tmp = NULL;
|
||||
int ret, i = 0, len = 0;
|
||||
|
||||
do {
|
||||
memset(reg_offset, 0, 11);
|
||||
if (copy_from_user(reg_offset, buf + len,
|
||||
min(10, ((int)size-len)))) {
|
||||
ret = -EFAULT;
|
||||
goto error_free;
|
||||
}
|
||||
|
||||
new = krealloc_array(tmp, i + 1, sizeof(uint32_t), GFP_KERNEL);
|
||||
if (!new) {
|
||||
ret = -ENOMEM;
|
||||
goto error_free;
|
||||
}
|
||||
tmp = new;
|
||||
if (sscanf(reg_offset, "%X %n", &tmp[i], &ret) != 1) {
|
||||
ret = -EINVAL;
|
||||
goto error_free;
|
||||
}
|
||||
|
||||
len += ret;
|
||||
i++;
|
||||
} while (len < size);
|
||||
|
||||
ret = down_write_killable(&adev->reset_domain->sem);
|
||||
if (ret)
|
||||
goto error_free;
|
||||
|
||||
swap(adev->reset_dump_reg_list, tmp);
|
||||
adev->num_regs = i;
|
||||
up_write(&adev->reset_domain->sem);
|
||||
ret = size;
|
||||
|
||||
error_free:
|
||||
kfree(tmp);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static const struct file_operations amdgpu_reset_dump_register_list = {
|
||||
.owner = THIS_MODULE,
|
||||
.read = amdgpu_reset_dump_register_list_read,
|
||||
.write = amdgpu_reset_dump_register_list_write,
|
||||
.llseek = default_llseek
|
||||
};
|
||||
|
||||
int amdgpu_debugfs_init(struct amdgpu_device *adev)
|
||||
{
|
||||
struct dentry *root = adev_to_drm(adev)->primary->debugfs_root;
|
||||
|
@ -1662,6 +1778,16 @@ int amdgpu_debugfs_init(struct amdgpu_device *adev)
|
|||
amdgpu_debugfs_ring_init(adev, ring);
|
||||
}
|
||||
|
||||
for ( i = 0; i < adev->vcn.num_vcn_inst; i++) {
|
||||
if (!amdgpu_vcnfw_log)
|
||||
break;
|
||||
|
||||
if (adev->vcn.harvest_config & (1 << i))
|
||||
continue;
|
||||
|
||||
amdgpu_debugfs_vcn_fwlog_init(adev, i, &adev->vcn.inst[i]);
|
||||
}
|
||||
|
||||
amdgpu_ras_debugfs_create_all(adev);
|
||||
amdgpu_rap_debugfs_init(adev);
|
||||
amdgpu_securedisplay_debugfs_init(adev);
|
||||
|
@ -1675,6 +1801,10 @@ int amdgpu_debugfs_init(struct amdgpu_device *adev)
|
|||
&amdgpu_debugfs_test_ib_fops);
|
||||
debugfs_create_file("amdgpu_vm_info", 0444, root, adev,
|
||||
&amdgpu_debugfs_vm_info_fops);
|
||||
debugfs_create_file("amdgpu_benchmark", 0200, root, adev,
|
||||
&amdgpu_benchmark_fops);
|
||||
debugfs_create_file("amdgpu_reset_dump_register_list", 0644, root, adev,
|
||||
&amdgpu_reset_dump_register_list);
|
||||
|
||||
adev->debugfs_vbios_blob.data = adev->bios;
|
||||
adev->debugfs_vbios_blob.size = adev->bios_size;
|
||||
|
|
|
@ -31,6 +31,7 @@
|
|||
#include <linux/console.h>
|
||||
#include <linux/slab.h>
|
||||
#include <linux/iommu.h>
|
||||
#include <linux/pci.h>
|
||||
|
||||
#include <drm/drm_atomic_helper.h>
|
||||
#include <drm/drm_probe_helper.h>
|
||||
|
@ -55,7 +56,6 @@
|
|||
#include "soc15.h"
|
||||
#include "nv.h"
|
||||
#include "bif/bif_4_1_d.h"
|
||||
#include <linux/pci.h>
|
||||
#include <linux/firmware.h>
|
||||
#include "amdgpu_vf_error.h"
|
||||
|
||||
|
@ -80,14 +80,11 @@ MODULE_FIRMWARE("amdgpu/raven_gpu_info.bin");
|
|||
MODULE_FIRMWARE("amdgpu/picasso_gpu_info.bin");
|
||||
MODULE_FIRMWARE("amdgpu/raven2_gpu_info.bin");
|
||||
MODULE_FIRMWARE("amdgpu/arcturus_gpu_info.bin");
|
||||
MODULE_FIRMWARE("amdgpu/renoir_gpu_info.bin");
|
||||
MODULE_FIRMWARE("amdgpu/navi10_gpu_info.bin");
|
||||
MODULE_FIRMWARE("amdgpu/navi14_gpu_info.bin");
|
||||
MODULE_FIRMWARE("amdgpu/navi12_gpu_info.bin");
|
||||
MODULE_FIRMWARE("amdgpu/vangogh_gpu_info.bin");
|
||||
MODULE_FIRMWARE("amdgpu/yellow_carp_gpu_info.bin");
|
||||
|
||||
#define AMDGPU_RESUME_MS 2000
|
||||
#define AMDGPU_MAX_RETRY_LIMIT 2
|
||||
#define AMDGPU_RETRY_SRIOV_RESET(r) ((r) == -EBUSY || (r) == -ETIMEDOUT || (r) == -EINVAL)
|
||||
|
||||
const char *amdgpu_asic_name[] = {
|
||||
"TAHITI",
|
||||
|
@ -424,10 +421,10 @@ bool amdgpu_device_skip_hw_access(struct amdgpu_device *adev)
|
|||
* the lock.
|
||||
*/
|
||||
if (in_task()) {
|
||||
if (down_read_trylock(&adev->reset_sem))
|
||||
up_read(&adev->reset_sem);
|
||||
if (down_read_trylock(&adev->reset_domain->sem))
|
||||
up_read(&adev->reset_domain->sem);
|
||||
else
|
||||
lockdep_assert_held(&adev->reset_sem);
|
||||
lockdep_assert_held(&adev->reset_domain->sem);
|
||||
}
|
||||
#endif
|
||||
return false;
|
||||
|
@ -453,9 +450,9 @@ uint32_t amdgpu_device_rreg(struct amdgpu_device *adev,
|
|||
if ((reg * 4) < adev->rmmio_size) {
|
||||
if (!(acc_flags & AMDGPU_REGS_NO_KIQ) &&
|
||||
amdgpu_sriov_runtime(adev) &&
|
||||
down_read_trylock(&adev->reset_sem)) {
|
||||
down_read_trylock(&adev->reset_domain->sem)) {
|
||||
ret = amdgpu_kiq_rreg(adev, reg);
|
||||
up_read(&adev->reset_sem);
|
||||
up_read(&adev->reset_domain->sem);
|
||||
} else {
|
||||
ret = readl(((void __iomem *)adev->rmmio) + (reg * 4));
|
||||
}
|
||||
|
@ -538,9 +535,9 @@ void amdgpu_device_wreg(struct amdgpu_device *adev,
|
|||
if ((reg * 4) < adev->rmmio_size) {
|
||||
if (!(acc_flags & AMDGPU_REGS_NO_KIQ) &&
|
||||
amdgpu_sriov_runtime(adev) &&
|
||||
down_read_trylock(&adev->reset_sem)) {
|
||||
down_read_trylock(&adev->reset_domain->sem)) {
|
||||
amdgpu_kiq_wreg(adev, reg, v);
|
||||
up_read(&adev->reset_sem);
|
||||
up_read(&adev->reset_domain->sem);
|
||||
} else {
|
||||
writel(v, ((void __iomem *)adev->rmmio) + (reg * 4));
|
||||
}
|
||||
|
@ -554,7 +551,11 @@ void amdgpu_device_wreg(struct amdgpu_device *adev,
|
|||
/**
|
||||
* amdgpu_mm_wreg_mmio_rlc - write register either with direct/indirect mmio or with RLC path if in range
|
||||
*
|
||||
* this function is invoked only the debugfs register access
|
||||
* @adev: amdgpu_device pointer
|
||||
* @reg: mmio/rlc register
|
||||
* @v: value to write
|
||||
*
|
||||
* this function is invoked only for the debugfs register access
|
||||
*/
|
||||
void amdgpu_mm_wreg_mmio_rlc(struct amdgpu_device *adev,
|
||||
uint32_t reg, uint32_t v)
|
||||
|
@ -566,7 +567,7 @@ void amdgpu_mm_wreg_mmio_rlc(struct amdgpu_device *adev,
|
|||
adev->gfx.rlc.funcs &&
|
||||
adev->gfx.rlc.funcs->is_rlcg_access_range) {
|
||||
if (adev->gfx.rlc.funcs->is_rlcg_access_range(adev, reg))
|
||||
return adev->gfx.rlc.funcs->sriov_wreg(adev, reg, v, 0, 0);
|
||||
return amdgpu_sriov_wreg(adev, reg, v, 0, 0);
|
||||
} else if ((reg * 4) >= adev->rmmio_size) {
|
||||
adev->pcie_wreg(adev, reg * 4, v);
|
||||
} else {
|
||||
|
@@ -1312,6 +1313,31 @@ bool amdgpu_device_need_post(struct amdgpu_device *adev)
return true;
}

/**
* amdgpu_device_should_use_aspm - check if the device should program ASPM
*
* @adev: amdgpu_device pointer
*
* Confirm whether the module parameter and pcie bridge agree that ASPM should
* be set for this device.
*
* Returns true if it should be used or false if not.
*/
bool amdgpu_device_should_use_aspm(struct amdgpu_device *adev)
{
switch (amdgpu_aspm) {
case -1:
break;
case 0:
return false;
case 1:
return true;
default:
return false;
}
return pcie_aspm_enabled(adev->pdev);
}

/* if we get transitioned to only one device, take VGA back */
/**
* amdgpu_device_vga_set_decode - enable/disable vga decode
@ -1446,7 +1472,8 @@ static int amdgpu_device_init_apu_flags(struct amdgpu_device *adev)
|
|||
case CHIP_YELLOW_CARP:
|
||||
break;
|
||||
case CHIP_CYAN_SKILLFISH:
|
||||
if (adev->pdev->device == 0x13FE)
|
||||
if ((adev->pdev->device == 0x13FE) ||
|
||||
(adev->pdev->device == 0x143F))
|
||||
adev->apu_flags |= AMD_APU_IS_CYAN_SKILLFISH2;
|
||||
break;
|
||||
default:
|
||||
|
@ -1507,6 +1534,11 @@ static int amdgpu_device_check_arguments(struct amdgpu_device *adev)
|
|||
amdgpu_sched_hw_submission = roundup_pow_of_two(amdgpu_sched_hw_submission);
|
||||
}
|
||||
|
||||
if (amdgpu_reset_method < -1 || amdgpu_reset_method > 4) {
|
||||
dev_warn(adev->dev, "invalid option for reset method, reverting to default\n");
|
||||
amdgpu_reset_method = -1;
|
||||
}
|
||||
|
||||
amdgpu_device_check_smu_prv_buffer_size(adev);
|
||||
|
||||
amdgpu_device_check_vm_size(adev);
|
||||
|
@ -1517,7 +1549,6 @@ static int amdgpu_device_check_arguments(struct amdgpu_device *adev)
|
|||
|
||||
amdgpu_gmc_tmz_set(adev);
|
||||
|
||||
amdgpu_gmc_noretry_set(adev);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
@ -1955,27 +1986,9 @@ static int amdgpu_device_parse_gpu_info_fw(struct amdgpu_device *adev)
|
|||
case CHIP_ARCTURUS:
|
||||
chip_name = "arcturus";
|
||||
break;
|
||||
case CHIP_RENOIR:
|
||||
if (adev->apu_flags & AMD_APU_IS_RENOIR)
|
||||
chip_name = "renoir";
|
||||
else
|
||||
chip_name = "green_sardine";
|
||||
break;
|
||||
case CHIP_NAVI10:
|
||||
chip_name = "navi10";
|
||||
break;
|
||||
case CHIP_NAVI14:
|
||||
chip_name = "navi14";
|
||||
break;
|
||||
case CHIP_NAVI12:
|
||||
chip_name = "navi12";
|
||||
break;
|
||||
case CHIP_VANGOGH:
|
||||
chip_name = "vangogh";
|
||||
break;
|
||||
case CHIP_YELLOW_CARP:
|
||||
chip_name = "yellow_carp";
|
||||
break;
|
||||
}
|
||||
|
||||
snprintf(fw_name, sizeof(fw_name), "amdgpu/%s_gpu_info.bin", chip_name);
|
||||
|
@ -2073,6 +2086,8 @@ out:
|
|||
*/
|
||||
static int amdgpu_device_ip_early_init(struct amdgpu_device *adev)
|
||||
{
|
||||
struct drm_device *dev = adev_to_drm(adev);
|
||||
struct pci_dev *parent;
|
||||
int i, r;
|
||||
|
||||
amdgpu_device_enable_virtual_display(adev);
|
||||
|
@ -2137,6 +2152,18 @@ static int amdgpu_device_ip_early_init(struct amdgpu_device *adev)
|
|||
break;
|
||||
}
|
||||
|
||||
if (amdgpu_has_atpx() &&
|
||||
(amdgpu_is_atpx_hybrid() ||
|
||||
amdgpu_has_atpx_dgpu_power_cntl()) &&
|
||||
((adev->flags & AMD_IS_APU) == 0) &&
|
||||
!pci_is_thunderbolt_attached(to_pci_dev(dev->dev)))
|
||||
adev->flags |= AMD_IS_PX;
|
||||
|
||||
if (!(adev->flags & AMD_IS_APU)) {
|
||||
parent = pci_upstream_bridge(adev->pdev);
|
||||
adev->has_pr3 = parent ? pci_pr3_present(parent) : false;
|
||||
}
|
||||
|
||||
amdgpu_amdkfd_device_probe(adev);
|
||||
|
||||
adev->pm.pp_feature = amdgpu_pp_feature_mask;
|
||||
|
@ -2287,6 +2314,49 @@ static int amdgpu_device_fw_loading(struct amdgpu_device *adev)
|
|||
return r;
|
||||
}
|
||||
|
||||
static int amdgpu_device_init_schedulers(struct amdgpu_device *adev)
|
||||
{
|
||||
long timeout;
|
||||
int r, i;
|
||||
|
||||
for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
|
||||
struct amdgpu_ring *ring = adev->rings[i];
|
||||
|
||||
/* No need to setup the GPU scheduler for rings that don't need it */
|
||||
if (!ring || ring->no_scheduler)
|
||||
continue;
|
||||
|
||||
switch (ring->funcs->type) {
|
||||
case AMDGPU_RING_TYPE_GFX:
|
||||
timeout = adev->gfx_timeout;
|
||||
break;
|
||||
case AMDGPU_RING_TYPE_COMPUTE:
|
||||
timeout = adev->compute_timeout;
|
||||
break;
|
||||
case AMDGPU_RING_TYPE_SDMA:
|
||||
timeout = adev->sdma_timeout;
|
||||
break;
|
||||
default:
|
||||
timeout = adev->video_timeout;
|
||||
break;
|
||||
}
|
||||
|
||||
r = drm_sched_init(&ring->sched, &amdgpu_sched_ops,
|
||||
ring->num_hw_submission, amdgpu_job_hang_limit,
|
||||
timeout, adev->reset_domain->wq,
|
||||
ring->sched_score, ring->name,
|
||||
adev->dev);
|
||||
if (r) {
|
||||
DRM_ERROR("Failed to create scheduler on ring %s.\n",
|
||||
ring->name);
|
||||
return r;
|
||||
}
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
||||
/**
|
||||
* amdgpu_device_ip_init - run init for hardware IPs
|
||||
*
|
||||
|
@@ -2398,8 +2468,28 @@ static int amdgpu_device_ip_init(struct amdgpu_device *adev)
        if (r)
                goto init_failed;

        if (adev->gmc.xgmi.num_physical_nodes > 1)
                amdgpu_xgmi_add_device(adev);
        /**
         * In case of XGMI grab extra reference for reset domain for this device
         */
        if (adev->gmc.xgmi.num_physical_nodes > 1) {
                if (amdgpu_xgmi_add_device(adev) == 0) {
                        struct amdgpu_hive_info *hive = amdgpu_get_xgmi_hive(adev);

                        if (!hive->reset_domain ||
                            !amdgpu_reset_get_reset_domain(hive->reset_domain)) {
                                r = -ENOENT;
                                goto init_failed;
                        }

                        /* Drop the early temporary reset domain we created for device */
                        amdgpu_reset_put_reset_domain(adev->reset_domain);
                        adev->reset_domain = hive->reset_domain;
                }
        }

        r = amdgpu_device_init_schedulers(adev);
        if (r)
                goto init_failed;

        /* Don't init kfd if whole hive need to be reset during init */
        if (!adev->gmc.xgmi.pending_reset)
@@ -2610,6 +2700,12 @@ static int amdgpu_device_ip_late_init(struct amdgpu_device *adev)
                adev->ip_blocks[i].status.late_initialized = true;
        }

        r = amdgpu_ras_late_init(adev);
        if (r) {
                DRM_ERROR("amdgpu_ras_late_init failed %d", r);
                return r;
        }

        amdgpu_ras_set_error_query_ready(adev, true);

        amdgpu_device_set_cg_state(adev, AMD_CG_STATE_GATE);

@@ -2624,7 +2720,7 @@ static int amdgpu_device_ip_late_init(struct amdgpu_device *adev)
        /* For passthrough configuration on arcturus and aldebaran, enable special handling SBR */
        if (amdgpu_passthrough(adev) && ((adev->asic_type == CHIP_ARCTURUS && adev->gmc.xgmi.num_physical_nodes > 1) ||
            adev->asic_type == CHIP_ALDEBARAN))
                smu_handle_passthrough_sbr(&adev->smu, true);
                amdgpu_dpm_handle_passthrough_sbr(adev, true);

        if (adev->gmc.xgmi.num_physical_nodes > 1) {
                mutex_lock(&mgpu_info.mutex);
@@ -2708,11 +2804,11 @@ static int amdgpu_device_ip_fini_early(struct amdgpu_device *adev)
                }
        }

        amdgpu_amdkfd_suspend(adev, false);

        amdgpu_device_set_pg_state(adev, AMD_PG_STATE_UNGATE);
        amdgpu_device_set_cg_state(adev, AMD_CG_STATE_UNGATE);

        amdgpu_amdkfd_suspend(adev, false);

        /* Workaround for ASICs that need to disable SMC first */
        amdgpu_device_smu_fini_early(adev);

@ -2881,7 +2977,7 @@ static int amdgpu_device_ip_suspend_phase2(struct amdgpu_device *adev)
|
|||
int i, r;
|
||||
|
||||
if (adev->in_s0ix)
|
||||
amdgpu_gfx_state_change_set(adev, sGpuChangeState_D3Entry);
|
||||
amdgpu_dpm_gfx_state_change(adev, sGpuChangeState_D3Entry);
|
||||
|
||||
for (i = adev->num_ip_blocks - 1; i >= 0; i--) {
|
||||
if (!adev->ip_blocks[i].status.valid)
|
||||
|
@ -3307,9 +3403,9 @@ static void amdgpu_device_xgmi_reset_func(struct work_struct *__work)
|
|||
if (adev->asic_reset_res)
|
||||
goto fail;
|
||||
|
||||
if (adev->mmhub.ras_funcs &&
|
||||
adev->mmhub.ras_funcs->reset_ras_error_count)
|
||||
adev->mmhub.ras_funcs->reset_ras_error_count(adev);
|
||||
if (adev->mmhub.ras && adev->mmhub.ras->ras_block.hw_ops &&
|
||||
adev->mmhub.ras->ras_block.hw_ops->reset_ras_error_count)
|
||||
adev->mmhub.ras->ras_block.hw_ops->reset_ras_error_count(adev);
|
||||
} else {
|
||||
|
||||
task_barrier_full(&hive->tb);
|
||||
|
@ -3493,12 +3589,12 @@ int amdgpu_device_init(struct amdgpu_device *adev,
|
|||
mutex_init(&adev->mn_lock);
|
||||
mutex_init(&adev->virt.vf_errors.lock);
|
||||
hash_init(adev->mn_hash);
|
||||
atomic_set(&adev->in_gpu_reset, 0);
|
||||
init_rwsem(&adev->reset_sem);
|
||||
mutex_init(&adev->psp.mutex);
|
||||
mutex_init(&adev->notifier_lock);
|
||||
mutex_init(&adev->pm.stable_pstate_ctx_lock);
|
||||
mutex_init(&adev->benchmark_mutex);
|
||||
|
||||
amdgpu_device_init_apu_flags(adev);
|
||||
amdgpu_device_init_apu_flags(adev);
|
||||
|
||||
r = amdgpu_device_check_arguments(adev);
|
||||
if (r)
|
||||
|
@ -3519,6 +3615,8 @@ int amdgpu_device_init(struct amdgpu_device *adev,
|
|||
|
||||
INIT_LIST_HEAD(&adev->reset_list);
|
||||
|
||||
INIT_LIST_HEAD(&adev->ras_list);
|
||||
|
||||
INIT_DELAYED_WORK(&adev->delayed_init_work,
|
||||
amdgpu_device_delayed_init_work_handler);
|
||||
INIT_DELAYED_WORK(&adev->gfx.gfx_off_delay_work,
|
||||
|
@@ -3568,6 +3666,15 @@ int amdgpu_device_init(struct amdgpu_device *adev,
        if (amdgpu_mes && adev->asic_type >= CHIP_NAVI10)
                adev->enable_mes = true;

        /*
         * Reset domain needs to be present early, before the XGMI hive is
         * discovered (if any) and initialized, so the reset sem and in_gpu
         * reset flag can be used early on during init and before calling
         * RREG32.
         */
        adev->reset_domain = amdgpu_reset_create_reset_domain(SINGLE_DEVICE, "amdgpu-reset-dev");
        if (!adev->reset_domain)
                return -ENOMEM;

        /* detect hw virtualization here */
        amdgpu_detect_virtualization(adev);

@@ -3582,6 +3689,7 @@ int amdgpu_device_init(struct amdgpu_device *adev,
        if (r)
                return r;

        amdgpu_gmc_noretry_set(adev);
        /* Need to get xgmi info early to decide the reset behavior */
        if (adev->gmc.xgmi.supported) {
                r = adev->gfxhub.funcs->get_xgmi_info(adev);
@@ -3749,19 +3857,6 @@ fence_driver_init:
        } else
                adev->ucode_sysfs_en = true;

        if ((amdgpu_testing & 1)) {
                if (adev->accel_working)
                        amdgpu_test_moves(adev);
                else
                        DRM_INFO("amdgpu: acceleration disabled, skipping move tests\n");
        }
        if (amdgpu_benchmarking) {
                if (adev->accel_working)
                        amdgpu_benchmark(adev, amdgpu_benchmarking);
                else
                        DRM_INFO("amdgpu: acceleration disabled, skipping benchmarks\n");
        }

        /*
         * Register gpu instance before amdgpu_device_enable_mgpu_fan_boost.
         * Otherwise the mgpu fan boost feature will be skipped due to the
@ -3953,6 +4048,9 @@ void amdgpu_device_fini_sw(struct amdgpu_device *adev)
|
|||
if (adev->mman.discovery_bin)
|
||||
amdgpu_discovery_fini(adev);
|
||||
|
||||
amdgpu_reset_put_reset_domain(adev->reset_domain);
|
||||
adev->reset_domain = NULL;
|
||||
|
||||
kfree(adev->pci_state);
|
||||
|
||||
}
|
||||
|
@ -4044,7 +4142,7 @@ int amdgpu_device_resume(struct drm_device *dev, bool fbcon)
|
|||
return 0;
|
||||
|
||||
if (adev->in_s0ix)
|
||||
amdgpu_gfx_state_change_set(adev, sGpuChangeState_D0Entry);
|
||||
amdgpu_dpm_gfx_state_change(adev, sGpuChangeState_D0Entry);
|
||||
|
||||
/* post card */
|
||||
if (amdgpu_device_need_post(adev)) {
|
||||
|
@ -4347,7 +4445,9 @@ static int amdgpu_device_reset_sriov(struct amdgpu_device *adev,
|
|||
{
|
||||
int r;
|
||||
struct amdgpu_hive_info *hive = NULL;
|
||||
int retry_limit = 0;
|
||||
|
||||
retry:
|
||||
amdgpu_amdkfd_pre_reset(adev);
|
||||
|
||||
amdgpu_amdkfd_pre_reset(adev);
|
||||
|
@ -4396,6 +4496,14 @@ error:
|
|||
}
|
||||
amdgpu_virt_release_full_gpu(adev, true);
|
||||
|
||||
if (AMDGPU_RETRY_SRIOV_RESET(r)) {
|
||||
if (retry_limit < AMDGPU_MAX_RETRY_LIMIT) {
|
||||
retry_limit++;
|
||||
goto retry;
|
||||
} else
|
||||
DRM_ERROR("GPU reset retry is beyond the retry limit\n");
|
||||
}
|
||||
|
||||
return r;
|
||||
}
|
||||
|
||||
|
@@ -4587,6 +4695,22 @@ int amdgpu_device_pre_asic_reset(struct amdgpu_device *adev,
        return r;
}

static int amdgpu_reset_reg_dumps(struct amdgpu_device *adev)
{
        uint32_t reg_value;
        int i;

        lockdep_assert_held(&adev->reset_domain->sem);
        dump_stack();

        for (i = 0; i < adev->num_regs; i++) {
                reg_value = RREG32(adev->reset_dump_reg_list[i]);
                trace_amdgpu_reset_reg_dumps(adev->reset_dump_reg_list[i], reg_value);
        }

        return 0;
}

int amdgpu_do_asic_reset(struct list_head *device_list_handle,
                         struct amdgpu_reset_context *reset_context)
{

@@ -4597,6 +4721,7 @@ int amdgpu_do_asic_reset(struct list_head *device_list_handle,
        /* Try reset handler method first */
        tmp_adev = list_first_entry(device_list_handle, struct amdgpu_device,
                                    reset_list);
        amdgpu_reset_reg_dumps(tmp_adev);
        r = amdgpu_reset_perform_reset(tmp_adev, reset_context);
        /* If reset handler not implemented, continue; otherwise return */
        if (r == -ENOSYS)
@ -4645,9 +4770,9 @@ int amdgpu_do_asic_reset(struct list_head *device_list_handle,
|
|||
|
||||
if (!r && amdgpu_ras_intr_triggered()) {
|
||||
list_for_each_entry(tmp_adev, device_list_handle, reset_list) {
|
||||
if (tmp_adev->mmhub.ras_funcs &&
|
||||
tmp_adev->mmhub.ras_funcs->reset_ras_error_count)
|
||||
tmp_adev->mmhub.ras_funcs->reset_ras_error_count(tmp_adev);
|
||||
if (tmp_adev->mmhub.ras && tmp_adev->mmhub.ras->ras_block.hw_ops &&
|
||||
tmp_adev->mmhub.ras->ras_block.hw_ops->reset_ras_error_count)
|
||||
tmp_adev->mmhub.ras->ras_block.hw_ops->reset_ras_error_count(tmp_adev);
|
||||
}
|
||||
|
||||
amdgpu_ras_intr_cleared();
|
||||
|
@@ -4754,17 +4879,8 @@ end:
        return r;
}

static bool amdgpu_device_lock_adev(struct amdgpu_device *adev,
                                    struct amdgpu_hive_info *hive)
static void amdgpu_device_set_mp1_state(struct amdgpu_device *adev)
{
        if (atomic_cmpxchg(&adev->in_gpu_reset, 0, 1) != 0)
                return false;

        if (hive) {
                down_write_nest_lock(&adev->reset_sem, &hive->hive_lock);
        } else {
                down_write(&adev->reset_sem);
        }

        switch (amdgpu_asic_reset_method(adev)) {
        case AMD_RESET_METHOD_MODE1:

@@ -4777,56 +4893,12 @@ static bool amdgpu_device_lock_adev(struct amdgpu_device *adev,
                adev->mp1_state = PP_MP1_STATE_NONE;
                break;
        }

        return true;
}

static void amdgpu_device_unlock_adev(struct amdgpu_device *adev)
static void amdgpu_device_unset_mp1_state(struct amdgpu_device *adev)
{
        amdgpu_vf_error_trans_all(adev);
        adev->mp1_state = PP_MP1_STATE_NONE;
        atomic_set(&adev->in_gpu_reset, 0);
        up_write(&adev->reset_sem);
}

/*
|
||||
* to lockup a list of amdgpu devices in a hive safely, if not a hive
|
||||
* with multiple nodes, it will be similar as amdgpu_device_lock_adev.
|
||||
*
|
||||
* unlock won't require roll back.
|
||||
*/
|
||||
static int amdgpu_device_lock_hive_adev(struct amdgpu_device *adev, struct amdgpu_hive_info *hive)
|
||||
{
|
||||
struct amdgpu_device *tmp_adev = NULL;
|
||||
|
||||
if (!amdgpu_sriov_vf(adev) && (adev->gmc.xgmi.num_physical_nodes > 1)) {
|
||||
if (!hive) {
|
||||
dev_err(adev->dev, "Hive is NULL while device has multiple xgmi nodes");
|
||||
return -ENODEV;
|
||||
}
|
||||
list_for_each_entry(tmp_adev, &hive->device_list, gmc.xgmi.head) {
|
||||
if (!amdgpu_device_lock_adev(tmp_adev, hive))
|
||||
goto roll_back;
|
||||
}
|
||||
} else if (!amdgpu_device_lock_adev(adev, hive))
|
||||
return -EAGAIN;
|
||||
|
||||
return 0;
|
||||
roll_back:
|
||||
if (!list_is_first(&tmp_adev->gmc.xgmi.head, &hive->device_list)) {
|
||||
/*
|
||||
* if the lockup iteration break in the middle of a hive,
|
||||
* it may means there may has a race issue,
|
||||
* or a hive device locked up independently.
|
||||
* we may be in trouble and may not, so will try to roll back
|
||||
* the lock and give out a warnning.
|
||||
*/
|
||||
dev_warn(tmp_adev->dev, "Hive lock iteration broke in the middle. Rolling back to unlock");
|
||||
list_for_each_entry_continue_reverse(tmp_adev, &hive->device_list, gmc.xgmi.head) {
|
||||
amdgpu_device_unlock_adev(tmp_adev);
|
||||
}
|
||||
}
|
||||
return -EAGAIN;
|
||||
}
|
||||
|
||||
static void amdgpu_device_resume_display_audio(struct amdgpu_device *adev)
|
||||
|
@ -4960,7 +5032,7 @@ retry:
|
|||
}
|
||||
|
||||
/**
|
||||
* amdgpu_device_gpu_recover - reset the asic and recover scheduler
|
||||
* amdgpu_device_gpu_recover_imp - reset the asic and recover scheduler
|
||||
*
|
||||
* @adev: amdgpu_device pointer
|
||||
* @job: which job trigger hang
|
||||
|
@ -4970,7 +5042,7 @@ retry:
|
|||
* Returns 0 for success or an error on failure.
|
||||
*/
|
||||
|
||||
int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
|
||||
int amdgpu_device_gpu_recover_imp(struct amdgpu_device *adev,
|
||||
struct amdgpu_job *job)
|
||||
{
|
||||
struct list_head device_list, *device_list_handle = NULL;
|
||||
|
@ -5004,26 +5076,10 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
|
|||
dev_info(adev->dev, "GPU %s begin!\n",
|
||||
need_emergency_restart ? "jobs stop":"reset");
|
||||
|
||||
/*
|
||||
* Here we trylock to avoid chain of resets executing from
|
||||
* either trigger by jobs on different adevs in XGMI hive or jobs on
|
||||
* different schedulers for same device while this TO handler is running.
|
||||
* We always reset all schedulers for device and all devices for XGMI
|
||||
* hive so that should take care of them too.
|
||||
*/
|
||||
if (!amdgpu_sriov_vf(adev))
|
||||
hive = amdgpu_get_xgmi_hive(adev);
|
||||
if (hive) {
|
||||
if (atomic_cmpxchg(&hive->in_reset, 0, 1) != 0) {
|
||||
DRM_INFO("Bailing on TDR for s_job:%llx, hive: %llx as another already in progress",
|
||||
job ? job->base.id : -1, hive->hive_id);
|
||||
amdgpu_put_xgmi_hive(hive);
|
||||
if (job && job->vm)
|
||||
drm_sched_increase_karma(&job->base);
|
||||
return 0;
|
||||
}
|
||||
if (hive)
|
||||
mutex_lock(&hive->hive_lock);
|
||||
}
|
||||
|
||||
reset_context.method = AMD_RESET_METHOD_NONE;
|
||||
reset_context.reset_req_dev = adev;
|
||||
|
@ -5031,22 +5087,6 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
|
|||
reset_context.hive = hive;
|
||||
clear_bit(AMDGPU_NEED_FULL_RESET, &reset_context.flags);
|
||||
|
||||
/*
|
||||
* lock the device before we try to operate the linked list
|
||||
* if didn't get the device lock, don't touch the linked list since
|
||||
* others may iterating it.
|
||||
*/
|
||||
r = amdgpu_device_lock_hive_adev(adev, hive);
|
||||
if (r) {
|
||||
dev_info(adev->dev, "Bailing on TDR for s_job:%llx, as another already in progress",
|
||||
job ? job->base.id : -1);
|
||||
|
||||
/* even we skipped this reset, still need to set the job to guilty */
|
||||
if (job && job->vm)
|
||||
drm_sched_increase_karma(&job->base);
|
||||
goto skip_recovery;
|
||||
}
|
||||
|
||||
/*
|
||||
* Build list of devices to reset.
|
||||
* In case we are in XGMI hive mode, resort the device list
|
||||
|
@ -5064,8 +5104,16 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
|
|||
device_list_handle = &device_list;
|
||||
}
|
||||
|
||||
/* We need to lock reset domain only once both for XGMI and single device */
|
||||
tmp_adev = list_first_entry(device_list_handle, struct amdgpu_device,
|
||||
reset_list);
|
||||
amdgpu_device_lock_reset_domain(tmp_adev->reset_domain);
|
||||
|
||||
/* block all schedulers and reset given job's ring */
|
||||
list_for_each_entry(tmp_adev, device_list_handle, reset_list) {
|
||||
|
||||
amdgpu_device_set_mp1_state(tmp_adev);
|
||||
|
||||
/*
|
||||
* Try to put the audio codec into suspend state
|
||||
* before gpu reset started.
|
||||
|
@ -5187,6 +5235,9 @@ skip_hw_reset:
|
|||
drm_helper_resume_force_mode(adev_to_drm(tmp_adev));
|
||||
}
|
||||
|
||||
if (tmp_adev->asic_reset_res)
|
||||
r = tmp_adev->asic_reset_res;
|
||||
|
||||
tmp_adev->asic_reset_res = 0;
|
||||
|
||||
if (r) {
|
||||
|
@@ -5214,21 +5265,55 @@ skip_sched_resume:

                if (audio_suspended)
                        amdgpu_device_resume_display_audio(tmp_adev);
                amdgpu_device_unlock_adev(tmp_adev);

                amdgpu_device_unset_mp1_state(tmp_adev);
        }

skip_recovery:
        tmp_adev = list_first_entry(device_list_handle, struct amdgpu_device,
                                    reset_list);
        amdgpu_device_unlock_reset_domain(tmp_adev->reset_domain);

        if (hive) {
                atomic_set(&hive->in_reset, 0);
                mutex_unlock(&hive->hive_lock);
                amdgpu_put_xgmi_hive(hive);
        }

        if (r && r != -EAGAIN)
        if (r)
                dev_info(adev->dev, "GPU reset end with ret = %d\n", r);
        return r;
}

struct amdgpu_recover_work_struct {
        struct work_struct base;
        struct amdgpu_device *adev;
        struct amdgpu_job *job;
        int ret;
};

static void amdgpu_device_queue_gpu_recover_work(struct work_struct *work)
{
        struct amdgpu_recover_work_struct *recover_work = container_of(work, struct amdgpu_recover_work_struct, base);

        recover_work->ret = amdgpu_device_gpu_recover_imp(recover_work->adev, recover_work->job);
}
/*
 * Serialize gpu recover into reset domain single threaded wq
 */
int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
                              struct amdgpu_job *job)
{
        struct amdgpu_recover_work_struct work = {.adev = adev, .job = job};

        INIT_WORK(&work.base, amdgpu_device_queue_gpu_recover_work);

        if (!amdgpu_reset_domain_schedule(adev->reset_domain, &work.base))
                return -EAGAIN;

        flush_work(&work.base);

        return work.ret;
}
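Editor's note: the new amdgpu_device_gpu_recover() wrapper above funnels every recovery request through the reset domain's single-threaded workqueue and then blocks on flush_work(), so concurrent timeout handlers serialize instead of racing. A minimal, self-contained sketch of the same pattern with generic names (not the amdgpu code; only the core workqueue API is assumed):

#include <linux/kernel.h>
#include <linux/workqueue.h>

struct my_request {
        struct work_struct base;
        int arg;
        int ret;
};

/* created elsewhere with alloc_ordered_workqueue("my-domain", 0) */
static struct workqueue_struct *my_wq;

static void my_request_func(struct work_struct *work)
{
        struct my_request *req = container_of(work, struct my_request, base);

        /* the actual operation runs here, one request at a time */
        req->ret = req->arg ? 0 : -EINVAL;
}

static int my_do_serialized(int arg)
{
        struct my_request req = { .arg = arg };

        INIT_WORK(&req.base, my_request_func);

        /* queue_work() returns false if the item is already pending */
        if (!queue_work(my_wq, &req.base))
                return -EAGAIN;

        /* req lives on this stack, so wait for the worker before returning */
        flush_work(&req.base);
        return req.ret;
}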
|
||||
|
||||
/**
|
||||
* amdgpu_device_get_pcie_info - fetch pcie info about the PCIE slot
|
||||
*
|
||||
|
@ -5416,20 +5501,6 @@ int amdgpu_device_baco_exit(struct drm_device *dev)
|
|||
return 0;
|
||||
}
|
||||
|
||||
static void amdgpu_cancel_all_tdr(struct amdgpu_device *adev)
|
||||
{
|
||||
int i;
|
||||
|
||||
for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
|
||||
struct amdgpu_ring *ring = adev->rings[i];
|
||||
|
||||
if (!ring || !ring->sched.thread)
|
||||
continue;
|
||||
|
||||
cancel_delayed_work_sync(&ring->sched.work_tdr);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* amdgpu_pci_error_detected - Called when a PCI error is detected.
|
||||
* @pdev: PCI device struct
|
||||
|
@@ -5460,14 +5531,11 @@ pci_ers_result_t amdgpu_pci_error_detected(struct pci_dev *pdev, pci_channel_sta
        /* Fatal error, prepare for slot reset */
        case pci_channel_io_frozen:
                /*
                 * Cancel and wait for all TDRs in progress if failing to
                 * set adev->in_gpu_reset in amdgpu_device_lock_adev
                 *
                 * Locking adev->reset_sem will prevent any external access
                 * Locking adev->reset_domain->sem will prevent any external access
                 * to GPU during PCI error recovery
                 */
                while (!amdgpu_device_lock_adev(adev, NULL))
                        amdgpu_cancel_all_tdr(adev);
                amdgpu_device_lock_reset_domain(adev->reset_domain);
                amdgpu_device_set_mp1_state(adev);

                /*
                 * Block any work scheduling as we do for regular GPU reset
@ -5574,7 +5642,8 @@ out:
|
|||
DRM_INFO("PCIe error recovery succeeded\n");
|
||||
} else {
|
||||
DRM_ERROR("PCIe error recovery failed, err:%d", r);
|
||||
amdgpu_device_unlock_adev(adev);
|
||||
amdgpu_device_unset_mp1_state(adev);
|
||||
amdgpu_device_unlock_reset_domain(adev->reset_domain);
|
||||
}
|
||||
|
||||
return r ? PCI_ERS_RESULT_DISCONNECT : PCI_ERS_RESULT_RECOVERED;
|
||||
|
@ -5611,7 +5680,8 @@ void amdgpu_pci_resume(struct pci_dev *pdev)
|
|||
drm_sched_start(&ring->sched, true);
|
||||
}
|
||||
|
||||
amdgpu_device_unlock_adev(adev);
|
||||
amdgpu_device_unset_mp1_state(adev);
|
||||
amdgpu_device_unlock_reset_domain(adev->reset_domain);
|
||||
}
|
||||
|
||||
bool amdgpu_device_cache_pci_state(struct pci_dev *pdev)
|
||||
|
@ -5688,6 +5758,11 @@ void amdgpu_device_invalidate_hdp(struct amdgpu_device *adev,
|
|||
amdgpu_asic_invalidate_hdp(adev, ring);
|
||||
}
|
||||
|
||||
int amdgpu_in_reset(struct amdgpu_device *adev)
|
||||
{
|
||||
return atomic_read(&adev->reset_domain->in_gpu_reset);
|
||||
}
|
||||
|
||||
/**
|
||||
* amdgpu_device_halt() - bring hardware to some kind of halt state
|
||||
*
|
||||
|
@@ -5726,3 +5801,36 @@ void amdgpu_device_halt(struct amdgpu_device *adev)
        pci_disable_device(pdev);
        pci_wait_for_pending_transaction(pdev);
}

u32 amdgpu_device_pcie_port_rreg(struct amdgpu_device *adev,
                                 u32 reg)
{
        unsigned long flags, address, data;
        u32 r;

        address = adev->nbio.funcs->get_pcie_port_index_offset(adev);
        data = adev->nbio.funcs->get_pcie_port_data_offset(adev);

        spin_lock_irqsave(&adev->pcie_idx_lock, flags);
        WREG32(address, reg * 4);
        (void)RREG32(address);
        r = RREG32(data);
        spin_unlock_irqrestore(&adev->pcie_idx_lock, flags);
        return r;
}

void amdgpu_device_pcie_port_wreg(struct amdgpu_device *adev,
                                  u32 reg, u32 v)
{
        unsigned long flags, address, data;

        address = adev->nbio.funcs->get_pcie_port_index_offset(adev);
        data = adev->nbio.funcs->get_pcie_port_data_offset(adev);

        spin_lock_irqsave(&adev->pcie_idx_lock, flags);
        WREG32(address, reg * 4);
        (void)RREG32(address);
        WREG32(data, v);
        (void)RREG32(data);
        spin_unlock_irqrestore(&adev->pcie_idx_lock, flags);
}
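Editor's note: the two helpers above use the classic index/data pair for registers that are not directly mapped: the register index is written to the nbio-provided index offset, read back, and the data offset is then read or written, all under pcie_idx_lock so the pair cannot be interleaved. A hedged sketch of a read-modify-write through them; the register offset and bit below are placeholders, not real hardware definitions:

/* Placeholders for illustration only */
#define PCIE_PORT_EXAMPLE_REG   0x10
#define EXAMPLE_FIELD_MASK      0x1

static void example_toggle_port_field(struct amdgpu_device *adev)
{
        u32 v = amdgpu_device_pcie_port_rreg(adev, PCIE_PORT_EXAMPLE_REG);

        v |= EXAMPLE_FIELD_MASK;
        amdgpu_device_pcie_port_wreg(adev, PCIE_PORT_EXAMPLE_REG, v);
}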
@@ -360,8 +360,11 @@ out:
        return r;
}

static void amdgpu_discovery_sysfs_fini(struct amdgpu_device *adev);

void amdgpu_discovery_fini(struct amdgpu_device *adev)
{
        amdgpu_discovery_sysfs_fini(adev);
        kfree(adev->mman.discovery_bin);
        adev->mman.discovery_bin = NULL;
}

@ -382,6 +385,578 @@ static int amdgpu_discovery_validate_ip(const struct ip *ip)
|
|||
return 0;
|
||||
}
|
||||
|
||||
static void amdgpu_discovery_read_harvest_bit_per_ip(struct amdgpu_device *adev,
|
||||
uint32_t *vcn_harvest_count)
|
||||
{
|
||||
struct binary_header *bhdr;
|
||||
struct ip_discovery_header *ihdr;
|
||||
struct die_header *dhdr;
|
||||
struct ip *ip;
|
||||
uint16_t die_offset, ip_offset, num_dies, num_ips;
|
||||
int i, j;
|
||||
|
||||
bhdr = (struct binary_header *)adev->mman.discovery_bin;
|
||||
ihdr = (struct ip_discovery_header *)(adev->mman.discovery_bin +
|
||||
le16_to_cpu(bhdr->table_list[IP_DISCOVERY].offset));
|
||||
num_dies = le16_to_cpu(ihdr->num_dies);
|
||||
|
||||
/* scan harvest bit of all IP data structures */
|
||||
for (i = 0; i < num_dies; i++) {
|
||||
die_offset = le16_to_cpu(ihdr->die_info[i].die_offset);
|
||||
dhdr = (struct die_header *)(adev->mman.discovery_bin + die_offset);
|
||||
num_ips = le16_to_cpu(dhdr->num_ips);
|
||||
ip_offset = die_offset + sizeof(*dhdr);
|
||||
|
||||
for (j = 0; j < num_ips; j++) {
|
||||
ip = (struct ip *)(adev->mman.discovery_bin + ip_offset);
|
||||
|
||||
if (amdgpu_discovery_validate_ip(ip))
|
||||
goto next_ip;
|
||||
|
||||
if (le16_to_cpu(ip->harvest) == 1) {
|
||||
switch (le16_to_cpu(ip->hw_id)) {
|
||||
case VCN_HWID:
|
||||
(*vcn_harvest_count)++;
|
||||
if (ip->number_instance == 0)
|
||||
adev->vcn.harvest_config |= AMDGPU_VCN_HARVEST_VCN0;
|
||||
else
|
||||
adev->vcn.harvest_config |= AMDGPU_VCN_HARVEST_VCN1;
|
||||
break;
|
||||
case DMU_HWID:
|
||||
adev->harvest_ip_mask |= AMD_HARVEST_IP_DMU_MASK;
|
||||
break;
|
||||
default:
|
||||
break;
|
||||
}
|
||||
}
|
||||
next_ip:
|
||||
ip_offset += sizeof(*ip) + 4 * (ip->num_base_address - 1);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
static void amdgpu_discovery_read_from_harvest_table(struct amdgpu_device *adev,
|
||||
uint32_t *vcn_harvest_count)
|
||||
{
|
||||
struct binary_header *bhdr;
|
||||
struct harvest_table *harvest_info;
|
||||
int i;
|
||||
|
||||
bhdr = (struct binary_header *)adev->mman.discovery_bin;
|
||||
harvest_info = (struct harvest_table *)(adev->mman.discovery_bin +
|
||||
le16_to_cpu(bhdr->table_list[HARVEST_INFO].offset));
|
||||
for (i = 0; i < 32; i++) {
|
||||
if (le16_to_cpu(harvest_info->list[i].hw_id) == 0)
|
||||
break;
|
||||
|
||||
switch (le16_to_cpu(harvest_info->list[i].hw_id)) {
|
||||
case VCN_HWID:
|
||||
(*vcn_harvest_count)++;
|
||||
if (harvest_info->list[i].number_instance == 0)
|
||||
adev->vcn.harvest_config |= AMDGPU_VCN_HARVEST_VCN0;
|
||||
else
|
||||
adev->vcn.harvest_config |= AMDGPU_VCN_HARVEST_VCN1;
|
||||
break;
|
||||
case DMU_HWID:
|
||||
adev->harvest_ip_mask |= AMD_HARVEST_IP_DMU_MASK;
|
||||
break;
|
||||
default:
|
||||
break;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/* ================================================== */

struct ip_hw_instance {
        struct kobject kobj; /* ip_discovery/die/#die/#hw_id/#instance/<attrs...> */

        int hw_id;
        u8 num_instance;
        u8 major, minor, revision;
        u8 harvest;

        int num_base_addresses;
        u32 base_addr[];
};

struct ip_hw_id {
        struct kset hw_id_kset;  /* ip_discovery/die/#die/#hw_id/, contains ip_hw_instance */
        int hw_id;
};

struct ip_die_entry {
        struct kset ip_kset;     /* ip_discovery/die/#die/, contains ip_hw_id */
        u16 num_ips;
};

/* -------------------------------------------------- */
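Editor's note: the three structures above hang the parsed IP discovery table off a kobject/kset tree, so each die, hw_id and instance shows up as a sysfs directory under the device (ip_discovery/die/<die>/<hw_id>/<instance>/ with hw_id, num_instance, major, minor, revision, harvest, num_base_addresses and base_addr attributes). A small user-space sketch that reads one of those attributes; the path is passed in because the exact directory depends on the GPU and its PCI address:

#include <stdio.h>

int main(int argc, char **argv)
{
        char buf[32];
        FILE *f;

        if (argc < 2)
                return 1;
        /* e.g. /sys/bus/pci/devices/<bdf>/ip_discovery/die/0/<hw_id>/0/major */
        f = fopen(argv[1], "r");
        if (!f)
                return 1;
        if (fgets(buf, sizeof(buf), f))
                printf("%s", buf);
        fclose(f);
        return 0;
}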
|
||||
|
||||
struct ip_hw_instance_attr {
|
||||
struct attribute attr;
|
||||
ssize_t (*show)(struct ip_hw_instance *ip_hw_instance, char *buf);
|
||||
};
|
||||
|
||||
static ssize_t hw_id_show(struct ip_hw_instance *ip_hw_instance, char *buf)
|
||||
{
|
||||
return sysfs_emit(buf, "%d\n", ip_hw_instance->hw_id);
|
||||
}
|
||||
|
||||
static ssize_t num_instance_show(struct ip_hw_instance *ip_hw_instance, char *buf)
|
||||
{
|
||||
return sysfs_emit(buf, "%d\n", ip_hw_instance->num_instance);
|
||||
}
|
||||
|
||||
static ssize_t major_show(struct ip_hw_instance *ip_hw_instance, char *buf)
|
||||
{
|
||||
return sysfs_emit(buf, "%d\n", ip_hw_instance->major);
|
||||
}
|
||||
|
||||
static ssize_t minor_show(struct ip_hw_instance *ip_hw_instance, char *buf)
|
||||
{
|
||||
return sysfs_emit(buf, "%d\n", ip_hw_instance->minor);
|
||||
}
|
||||
|
||||
static ssize_t revision_show(struct ip_hw_instance *ip_hw_instance, char *buf)
|
||||
{
|
||||
return sysfs_emit(buf, "%d\n", ip_hw_instance->revision);
|
||||
}
|
||||
|
||||
static ssize_t harvest_show(struct ip_hw_instance *ip_hw_instance, char *buf)
|
||||
{
|
||||
return sysfs_emit(buf, "0x%01X\n", ip_hw_instance->harvest);
|
||||
}
|
||||
|
||||
static ssize_t num_base_addresses_show(struct ip_hw_instance *ip_hw_instance, char *buf)
|
||||
{
|
||||
return sysfs_emit(buf, "%d\n", ip_hw_instance->num_base_addresses);
|
||||
}
|
||||
|
||||
static ssize_t base_addr_show(struct ip_hw_instance *ip_hw_instance, char *buf)
|
||||
{
|
||||
ssize_t res, at;
|
||||
int ii;
|
||||
|
||||
for (res = at = ii = 0; ii < ip_hw_instance->num_base_addresses; ii++) {
|
||||
/* Here we satisfy the condition that, at + size <= PAGE_SIZE.
|
||||
*/
|
||||
if (at + 12 > PAGE_SIZE)
|
||||
break;
|
||||
res = sysfs_emit_at(buf, at, "0x%08X\n",
|
||||
ip_hw_instance->base_addr[ii]);
|
||||
if (res <= 0)
|
||||
break;
|
||||
at += res;
|
||||
}
|
||||
|
||||
return res < 0 ? res : at;
|
||||
}
|
||||
|
||||
static struct ip_hw_instance_attr ip_hw_attr[] = {
|
||||
__ATTR_RO(hw_id),
|
||||
__ATTR_RO(num_instance),
|
||||
__ATTR_RO(major),
|
||||
__ATTR_RO(minor),
|
||||
__ATTR_RO(revision),
|
||||
__ATTR_RO(harvest),
|
||||
__ATTR_RO(num_base_addresses),
|
||||
__ATTR_RO(base_addr),
|
||||
};
|
||||
|
||||
static struct attribute *ip_hw_instance_attrs[ARRAY_SIZE(ip_hw_attr) + 1];
|
||||
ATTRIBUTE_GROUPS(ip_hw_instance);
|
||||
|
||||
#define to_ip_hw_instance(x) container_of(x, struct ip_hw_instance, kobj)
|
||||
#define to_ip_hw_instance_attr(x) container_of(x, struct ip_hw_instance_attr, attr)
|
||||
|
||||
static ssize_t ip_hw_instance_attr_show(struct kobject *kobj,
|
||||
struct attribute *attr,
|
||||
char *buf)
|
||||
{
|
||||
struct ip_hw_instance *ip_hw_instance = to_ip_hw_instance(kobj);
|
||||
struct ip_hw_instance_attr *ip_hw_attr = to_ip_hw_instance_attr(attr);
|
||||
|
||||
if (!ip_hw_attr->show)
|
||||
return -EIO;
|
||||
|
||||
return ip_hw_attr->show(ip_hw_instance, buf);
|
||||
}
|
||||
|
||||
static const struct sysfs_ops ip_hw_instance_sysfs_ops = {
|
||||
.show = ip_hw_instance_attr_show,
|
||||
};
|
||||
|
||||
static void ip_hw_instance_release(struct kobject *kobj)
|
||||
{
|
||||
struct ip_hw_instance *ip_hw_instance = to_ip_hw_instance(kobj);
|
||||
|
||||
kfree(ip_hw_instance);
|
||||
}
|
||||
|
||||
static struct kobj_type ip_hw_instance_ktype = {
|
||||
.release = ip_hw_instance_release,
|
||||
.sysfs_ops = &ip_hw_instance_sysfs_ops,
|
||||
.default_groups = ip_hw_instance_groups,
|
||||
};
|
||||
|
||||
/* -------------------------------------------------- */
|
||||
|
||||
#define to_ip_hw_id(x) container_of(to_kset(x), struct ip_hw_id, hw_id_kset)
|
||||
|
||||
static void ip_hw_id_release(struct kobject *kobj)
|
||||
{
|
||||
struct ip_hw_id *ip_hw_id = to_ip_hw_id(kobj);
|
||||
|
||||
if (!list_empty(&ip_hw_id->hw_id_kset.list))
|
||||
DRM_ERROR("ip_hw_id->hw_id_kset is not empty");
|
||||
kfree(ip_hw_id);
|
||||
}
|
||||
|
||||
static struct kobj_type ip_hw_id_ktype = {
|
||||
.release = ip_hw_id_release,
|
||||
.sysfs_ops = &kobj_sysfs_ops,
|
||||
};
|
||||
|
||||
/* -------------------------------------------------- */
|
||||
|
||||
static void die_kobj_release(struct kobject *kobj);
|
||||
static void ip_disc_release(struct kobject *kobj);
|
||||
|
||||
struct ip_die_entry_attribute {
|
||||
struct attribute attr;
|
||||
ssize_t (*show)(struct ip_die_entry *ip_die_entry, char *buf);
|
||||
};
|
||||
|
||||
#define to_ip_die_entry_attr(x) container_of(x, struct ip_die_entry_attribute, attr)
|
||||
|
||||
static ssize_t num_ips_show(struct ip_die_entry *ip_die_entry, char *buf)
|
||||
{
|
||||
return sysfs_emit(buf, "%d\n", ip_die_entry->num_ips);
|
||||
}
|
||||
|
||||
/* If there are more ip_die_entry attrs, other than the number of IPs,
|
||||
* we can make this intro an array of attrs, and then initialize
|
||||
* ip_die_entry_attrs in a loop.
|
||||
*/
|
||||
static struct ip_die_entry_attribute num_ips_attr =
|
||||
__ATTR_RO(num_ips);
|
||||
|
||||
static struct attribute *ip_die_entry_attrs[] = {
|
||||
&num_ips_attr.attr,
|
||||
NULL,
|
||||
};
|
||||
ATTRIBUTE_GROUPS(ip_die_entry); /* ip_die_entry_groups */
|
||||
|
||||
#define to_ip_die_entry(x) container_of(to_kset(x), struct ip_die_entry, ip_kset)
|
||||
|
||||
static ssize_t ip_die_entry_attr_show(struct kobject *kobj,
|
||||
struct attribute *attr,
|
||||
char *buf)
|
||||
{
|
||||
struct ip_die_entry_attribute *ip_die_entry_attr = to_ip_die_entry_attr(attr);
|
||||
struct ip_die_entry *ip_die_entry = to_ip_die_entry(kobj);
|
||||
|
||||
if (!ip_die_entry_attr->show)
|
||||
return -EIO;
|
||||
|
||||
return ip_die_entry_attr->show(ip_die_entry, buf);
|
||||
}
|
||||
|
||||
static void ip_die_entry_release(struct kobject *kobj)
|
||||
{
|
||||
struct ip_die_entry *ip_die_entry = to_ip_die_entry(kobj);
|
||||
|
||||
if (!list_empty(&ip_die_entry->ip_kset.list))
|
||||
DRM_ERROR("ip_die_entry->ip_kset is not empty");
|
||||
kfree(ip_die_entry);
|
||||
}
|
||||
|
||||
static const struct sysfs_ops ip_die_entry_sysfs_ops = {
|
||||
.show = ip_die_entry_attr_show,
|
||||
};
|
||||
|
||||
static struct kobj_type ip_die_entry_ktype = {
|
||||
.release = ip_die_entry_release,
|
||||
.sysfs_ops = &ip_die_entry_sysfs_ops,
|
||||
.default_groups = ip_die_entry_groups,
|
||||
};
|
||||
|
||||
static struct kobj_type die_kobj_ktype = {
|
||||
.release = die_kobj_release,
|
||||
.sysfs_ops = &kobj_sysfs_ops,
|
||||
};
|
||||
|
||||
static struct kobj_type ip_discovery_ktype = {
|
||||
.release = ip_disc_release,
|
||||
.sysfs_ops = &kobj_sysfs_ops,
|
||||
};
|
||||
|
||||
struct ip_discovery_top {
|
||||
struct kobject kobj; /* ip_discovery/ */
|
||||
struct kset die_kset; /* ip_discovery/die/, contains ip_die_entry */
|
||||
struct amdgpu_device *adev;
|
||||
};
|
||||
|
||||
static void die_kobj_release(struct kobject *kobj)
|
||||
{
|
||||
struct ip_discovery_top *ip_top = container_of(to_kset(kobj),
|
||||
struct ip_discovery_top,
|
||||
die_kset);
|
||||
if (!list_empty(&ip_top->die_kset.list))
|
||||
DRM_ERROR("ip_top->die_kset is not empty");
|
||||
}
|
||||
|
||||
static void ip_disc_release(struct kobject *kobj)
|
||||
{
|
||||
struct ip_discovery_top *ip_top = container_of(kobj, struct ip_discovery_top,
|
||||
kobj);
|
||||
struct amdgpu_device *adev = ip_top->adev;
|
||||
|
||||
adev->ip_top = NULL;
|
||||
kfree(ip_top);
|
||||
}
|
||||
|
||||
static int amdgpu_discovery_sysfs_ips(struct amdgpu_device *adev,
|
||||
struct ip_die_entry *ip_die_entry,
|
||||
const size_t _ip_offset, const int num_ips)
|
||||
{
|
||||
int ii, jj, kk, res;
|
||||
|
||||
DRM_DEBUG("num_ips:%d", num_ips);
|
||||
|
||||
/* Find all IPs of a given HW ID, and add their instance to
|
||||
* #die/#hw_id/#instance/<attributes>
|
||||
*/
|
||||
for (ii = 0; ii < HW_ID_MAX; ii++) {
|
||||
struct ip_hw_id *ip_hw_id = NULL;
|
||||
size_t ip_offset = _ip_offset;
|
||||
|
||||
for (jj = 0; jj < num_ips; jj++) {
|
||||
struct ip *ip;
|
||||
struct ip_hw_instance *ip_hw_instance;
|
||||
|
||||
ip = (struct ip *)(adev->mman.discovery_bin + ip_offset);
|
||||
if (amdgpu_discovery_validate_ip(ip) ||
|
||||
le16_to_cpu(ip->hw_id) != ii)
|
||||
goto next_ip;
|
||||
|
||||
DRM_DEBUG("match:%d @ ip_offset:%zu", ii, ip_offset);
|
||||
|
||||
/* We have a hw_id match; register the hw
|
||||
* block if not yet registered.
|
||||
*/
|
||||
if (!ip_hw_id) {
|
||||
ip_hw_id = kzalloc(sizeof(*ip_hw_id), GFP_KERNEL);
|
||||
if (!ip_hw_id)
|
||||
return -ENOMEM;
|
||||
ip_hw_id->hw_id = ii;
|
||||
|
||||
kobject_set_name(&ip_hw_id->hw_id_kset.kobj, "%d", ii);
|
||||
ip_hw_id->hw_id_kset.kobj.kset = &ip_die_entry->ip_kset;
|
||||
ip_hw_id->hw_id_kset.kobj.ktype = &ip_hw_id_ktype;
|
||||
res = kset_register(&ip_hw_id->hw_id_kset);
|
||||
if (res) {
|
||||
DRM_ERROR("Couldn't register ip_hw_id kset");
|
||||
kfree(ip_hw_id);
|
||||
return res;
|
||||
}
|
||||
if (hw_id_names[ii]) {
|
||||
res = sysfs_create_link(&ip_die_entry->ip_kset.kobj,
|
||||
&ip_hw_id->hw_id_kset.kobj,
|
||||
hw_id_names[ii]);
|
||||
if (res) {
|
||||
DRM_ERROR("Couldn't create IP link %s in IP Die:%s\n",
|
||||
hw_id_names[ii],
|
||||
kobject_name(&ip_die_entry->ip_kset.kobj));
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/* Now register its instance.
|
||||
*/
|
||||
ip_hw_instance = kzalloc(struct_size(ip_hw_instance,
|
||||
base_addr,
|
||||
ip->num_base_address),
|
||||
GFP_KERNEL);
|
||||
if (!ip_hw_instance) {
|
||||
DRM_ERROR("no memory for ip_hw_instance");
|
||||
return -ENOMEM;
|
||||
}
|
||||
ip_hw_instance->hw_id = le16_to_cpu(ip->hw_id); /* == ii */
|
||||
ip_hw_instance->num_instance = ip->number_instance;
|
||||
ip_hw_instance->major = ip->major;
|
||||
ip_hw_instance->minor = ip->minor;
|
||||
ip_hw_instance->revision = ip->revision;
|
||||
ip_hw_instance->harvest = ip->harvest;
|
||||
ip_hw_instance->num_base_addresses = ip->num_base_address;
|
||||
|
||||
for (kk = 0; kk < ip_hw_instance->num_base_addresses; kk++)
|
||||
ip_hw_instance->base_addr[kk] = ip->base_address[kk];
|
||||
|
||||
kobject_init(&ip_hw_instance->kobj, &ip_hw_instance_ktype);
|
||||
ip_hw_instance->kobj.kset = &ip_hw_id->hw_id_kset;
|
||||
res = kobject_add(&ip_hw_instance->kobj, NULL,
|
||||
"%d", ip_hw_instance->num_instance);
|
||||
next_ip:
|
||||
ip_offset += sizeof(*ip) + 4 * (ip->num_base_address - 1);
|
||||
}
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int amdgpu_discovery_sysfs_recurse(struct amdgpu_device *adev)
|
||||
{
|
||||
struct binary_header *bhdr;
|
||||
struct ip_discovery_header *ihdr;
|
||||
struct die_header *dhdr;
|
||||
struct kset *die_kset = &adev->ip_top->die_kset;
|
||||
u16 num_dies, die_offset, num_ips;
|
||||
size_t ip_offset;
|
||||
int ii, res;
|
||||
|
||||
bhdr = (struct binary_header *)adev->mman.discovery_bin;
|
||||
ihdr = (struct ip_discovery_header *)(adev->mman.discovery_bin +
|
||||
le16_to_cpu(bhdr->table_list[IP_DISCOVERY].offset));
|
||||
num_dies = le16_to_cpu(ihdr->num_dies);
|
||||
|
||||
DRM_DEBUG("number of dies: %d\n", num_dies);
|
||||
|
||||
for (ii = 0; ii < num_dies; ii++) {
|
||||
struct ip_die_entry *ip_die_entry;
|
||||
|
||||
die_offset = le16_to_cpu(ihdr->die_info[ii].die_offset);
|
||||
dhdr = (struct die_header *)(adev->mman.discovery_bin + die_offset);
|
||||
num_ips = le16_to_cpu(dhdr->num_ips);
|
||||
ip_offset = die_offset + sizeof(*dhdr);
|
||||
|
||||
/* Add the die to the kset.
|
||||
*
|
||||
* dhdr->die_id == ii, which was checked in
|
||||
* amdgpu_discovery_reg_base_init().
|
||||
*/
|
||||
|
||||
ip_die_entry = kzalloc(sizeof(*ip_die_entry), GFP_KERNEL);
|
||||
if (!ip_die_entry)
|
||||
return -ENOMEM;
|
||||
|
||||
ip_die_entry->num_ips = num_ips;
|
||||
|
||||
kobject_set_name(&ip_die_entry->ip_kset.kobj, "%d", le16_to_cpu(dhdr->die_id));
|
||||
ip_die_entry->ip_kset.kobj.kset = die_kset;
|
||||
ip_die_entry->ip_kset.kobj.ktype = &ip_die_entry_ktype;
|
||||
res = kset_register(&ip_die_entry->ip_kset);
|
||||
if (res) {
|
||||
DRM_ERROR("Couldn't register ip_die_entry kset");
|
||||
kfree(ip_die_entry);
|
||||
return res;
|
||||
}
|
||||
|
||||
amdgpu_discovery_sysfs_ips(adev, ip_die_entry, ip_offset, num_ips);
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int amdgpu_discovery_sysfs_init(struct amdgpu_device *adev)
|
||||
{
|
||||
struct kset *die_kset;
|
||||
int res, ii;
|
||||
|
||||
adev->ip_top = kzalloc(sizeof(*adev->ip_top), GFP_KERNEL);
|
||||
if (!adev->ip_top)
|
||||
return -ENOMEM;
|
||||
|
||||
adev->ip_top->adev = adev;
|
||||
|
||||
res = kobject_init_and_add(&adev->ip_top->kobj, &ip_discovery_ktype,
|
||||
&adev->dev->kobj, "ip_discovery");
|
||||
if (res) {
|
||||
DRM_ERROR("Couldn't init and add ip_discovery/");
|
||||
goto Err;
|
||||
}
|
||||
|
||||
die_kset = &adev->ip_top->die_kset;
|
||||
kobject_set_name(&die_kset->kobj, "%s", "die");
|
||||
die_kset->kobj.parent = &adev->ip_top->kobj;
|
||||
die_kset->kobj.ktype = &die_kobj_ktype;
|
||||
res = kset_register(&adev->ip_top->die_kset);
|
||||
if (res) {
|
||||
DRM_ERROR("Couldn't register die_kset");
|
||||
goto Err;
|
||||
}
|
||||
|
||||
for (ii = 0; ii < ARRAY_SIZE(ip_hw_attr); ii++)
|
||||
ip_hw_instance_attrs[ii] = &ip_hw_attr[ii].attr;
|
||||
ip_hw_instance_attrs[ii] = NULL;
|
||||
|
||||
res = amdgpu_discovery_sysfs_recurse(adev);
|
||||
|
||||
return res;
|
||||
Err:
|
||||
kobject_put(&adev->ip_top->kobj);
|
||||
return res;
|
||||
}
|
||||
|
||||
/* -------------------------------------------------- */
|
||||
|
||||
#define list_to_kobj(el) container_of(el, struct kobject, entry)
|
||||
|
||||
static void amdgpu_discovery_sysfs_ip_hw_free(struct ip_hw_id *ip_hw_id)
|
||||
{
|
||||
struct list_head *el, *tmp;
|
||||
struct kset *hw_id_kset;
|
||||
|
||||
hw_id_kset = &ip_hw_id->hw_id_kset;
|
||||
spin_lock(&hw_id_kset->list_lock);
|
||||
list_for_each_prev_safe(el, tmp, &hw_id_kset->list) {
|
||||
list_del_init(el);
|
||||
spin_unlock(&hw_id_kset->list_lock);
|
||||
/* kobject is embedded in ip_hw_instance */
|
||||
kobject_put(list_to_kobj(el));
|
||||
spin_lock(&hw_id_kset->list_lock);
|
||||
}
|
||||
spin_unlock(&hw_id_kset->list_lock);
|
||||
kobject_put(&ip_hw_id->hw_id_kset.kobj);
|
||||
}
|
||||
|
||||
static void amdgpu_discovery_sysfs_die_free(struct ip_die_entry *ip_die_entry)
|
||||
{
|
||||
struct list_head *el, *tmp;
|
||||
struct kset *ip_kset;
|
||||
|
||||
ip_kset = &ip_die_entry->ip_kset;
|
||||
spin_lock(&ip_kset->list_lock);
|
||||
list_for_each_prev_safe(el, tmp, &ip_kset->list) {
|
||||
list_del_init(el);
|
||||
spin_unlock(&ip_kset->list_lock);
|
||||
amdgpu_discovery_sysfs_ip_hw_free(to_ip_hw_id(list_to_kobj(el)));
|
||||
spin_lock(&ip_kset->list_lock);
|
||||
}
|
||||
spin_unlock(&ip_kset->list_lock);
|
||||
kobject_put(&ip_die_entry->ip_kset.kobj);
|
||||
}
|
||||
|
||||
static void amdgpu_discovery_sysfs_fini(struct amdgpu_device *adev)
|
||||
{
|
||||
struct list_head *el, *tmp;
|
||||
struct kset *die_kset;
|
||||
|
||||
die_kset = &adev->ip_top->die_kset;
|
||||
spin_lock(&die_kset->list_lock);
|
||||
list_for_each_prev_safe(el, tmp, &die_kset->list) {
|
||||
list_del_init(el);
|
||||
spin_unlock(&die_kset->list_lock);
|
||||
amdgpu_discovery_sysfs_die_free(to_ip_die_entry(list_to_kobj(el)));
|
||||
spin_lock(&die_kset->list_lock);
|
||||
}
|
||||
spin_unlock(&die_kset->list_lock);
|
||||
kobject_put(&adev->ip_top->die_kset.kobj);
|
||||
kobject_put(&adev->ip_top->kobj);
|
||||
}
|
||||
|
||||
/* ================================================== */
|
||||
|
||||
int amdgpu_discovery_reg_base_init(struct amdgpu_device *adev)
|
||||
{
|
||||
struct binary_header *bhdr;
|
||||
|
@ -492,6 +1067,8 @@ next_ip:
|
|||
}
|
||||
}
|
||||
|
||||
amdgpu_discovery_sysfs_init(adev);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@ -545,32 +1122,26 @@ int amdgpu_discovery_get_ip_version(struct amdgpu_device *adev, int hw_id, int n
|
|||
|
||||
void amdgpu_discovery_harvest_ip(struct amdgpu_device *adev)
|
||||
{
|
||||
struct binary_header *bhdr;
|
||||
struct harvest_table *harvest_info;
|
||||
int i, vcn_harvest_count = 0;
|
||||
int vcn_harvest_count = 0;
|
||||
|
||||
bhdr = (struct binary_header *)adev->mman.discovery_bin;
|
||||
harvest_info = (struct harvest_table *)(adev->mman.discovery_bin +
|
||||
le16_to_cpu(bhdr->table_list[HARVEST_INFO].offset));
|
||||
|
||||
for (i = 0; i < 32; i++) {
|
||||
if (le16_to_cpu(harvest_info->list[i].hw_id) == 0)
|
||||
break;
|
||||
|
||||
switch (le16_to_cpu(harvest_info->list[i].hw_id)) {
|
||||
case VCN_HWID:
|
||||
vcn_harvest_count++;
|
||||
if (harvest_info->list[i].number_instance == 0)
|
||||
adev->vcn.harvest_config |= AMDGPU_VCN_HARVEST_VCN0;
|
||||
else
|
||||
adev->vcn.harvest_config |= AMDGPU_VCN_HARVEST_VCN1;
|
||||
break;
|
||||
case DMU_HWID:
|
||||
adev->harvest_ip_mask |= AMD_HARVEST_IP_DMU_MASK;
|
||||
break;
|
||||
default:
|
||||
break;
|
||||
}
|
||||
/*
|
||||
* Harvest table does not fit Navi1x and legacy GPUs,
|
||||
* so read harvest bit per IP data structure to set
|
||||
* harvest configuration.
|
||||
*/
|
||||
if (adev->ip_versions[GC_HWIP][0] < IP_VERSION(10, 2, 0)) {
|
||||
if ((adev->pdev->device == 0x731E &&
|
||||
(adev->pdev->revision == 0xC6 ||
|
||||
adev->pdev->revision == 0xC7)) ||
|
||||
(adev->pdev->device == 0x7340 &&
|
||||
adev->pdev->revision == 0xC9) ||
|
||||
(adev->pdev->device == 0x7360 &&
|
||||
adev->pdev->revision == 0xC7))
|
||||
amdgpu_discovery_read_harvest_bit_per_ip(adev,
|
||||
&vcn_harvest_count);
|
||||
} else {
|
||||
amdgpu_discovery_read_from_harvest_table(adev,
|
||||
&vcn_harvest_count);
|
||||
}
|
||||
|
||||
amdgpu_discovery_harvest_config_quirk(adev);
|
||||
|
@ -674,12 +1245,15 @@ static int amdgpu_discovery_set_common_ip_blocks(struct amdgpu_device *adev)
|
|||
case IP_VERSION(10, 1, 1):
|
||||
case IP_VERSION(10, 1, 2):
|
||||
case IP_VERSION(10, 1, 3):
|
||||
case IP_VERSION(10, 1, 4):
|
||||
case IP_VERSION(10, 3, 0):
|
||||
case IP_VERSION(10, 3, 1):
|
||||
case IP_VERSION(10, 3, 2):
|
||||
case IP_VERSION(10, 3, 3):
|
||||
case IP_VERSION(10, 3, 4):
|
||||
case IP_VERSION(10, 3, 5):
|
||||
case IP_VERSION(10, 3, 6):
|
||||
case IP_VERSION(10, 3, 7):
|
||||
amdgpu_device_ip_block_add(adev, &nv_common_ip_block);
|
||||
break;
|
||||
default:
|
||||
|
@ -709,12 +1283,15 @@ static int amdgpu_discovery_set_gmc_ip_blocks(struct amdgpu_device *adev)
|
|||
case IP_VERSION(10, 1, 1):
|
||||
case IP_VERSION(10, 1, 2):
|
||||
case IP_VERSION(10, 1, 3):
|
||||
case IP_VERSION(10, 1, 4):
|
||||
case IP_VERSION(10, 3, 0):
|
||||
case IP_VERSION(10, 3, 1):
|
||||
case IP_VERSION(10, 3, 2):
|
||||
case IP_VERSION(10, 3, 3):
|
||||
case IP_VERSION(10, 3, 4):
|
||||
case IP_VERSION(10, 3, 5):
|
||||
case IP_VERSION(10, 3, 6):
|
||||
case IP_VERSION(10, 3, 7):
|
||||
amdgpu_device_ip_block_add(adev, &gmc_v10_0_ip_block);
|
||||
break;
|
||||
default:
|
||||
|
@ -790,6 +1367,8 @@ static int amdgpu_discovery_set_psp_ip_blocks(struct amdgpu_device *adev)
|
|||
case IP_VERSION(13, 0, 1):
|
||||
case IP_VERSION(13, 0, 2):
|
||||
case IP_VERSION(13, 0, 3):
|
||||
case IP_VERSION(13, 0, 5):
|
||||
case IP_VERSION(13, 0, 8):
|
||||
amdgpu_device_ip_block_add(adev, &psp_v13_0_ip_block);
|
||||
break;
|
||||
default:
|
||||
|
@ -831,6 +1410,8 @@ static int amdgpu_discovery_set_smu_ip_blocks(struct amdgpu_device *adev)
|
|||
case IP_VERSION(13, 0, 1):
|
||||
case IP_VERSION(13, 0, 2):
|
||||
case IP_VERSION(13, 0, 3):
|
||||
case IP_VERSION(13, 0, 5):
|
||||
case IP_VERSION(13, 0, 8):
|
||||
amdgpu_device_ip_block_add(adev, &smu_v13_0_ip_block);
|
||||
break;
|
||||
default:
|
||||
|
@ -846,8 +1427,14 @@ static int amdgpu_discovery_set_display_ip_blocks(struct amdgpu_device *adev)
|
|||
{
|
||||
if (adev->enable_virtual_display || amdgpu_sriov_vf(adev)) {
|
||||
amdgpu_device_ip_block_add(adev, &amdgpu_vkms_ip_block);
|
||||
return 0;
|
||||
}
|
||||
|
||||
if (!amdgpu_device_has_dc_support(adev))
|
||||
return 0;
|
||||
|
||||
#if defined(CONFIG_DRM_AMD_DC)
|
||||
} else if (adev->ip_versions[DCE_HWIP][0]) {
|
||||
if (adev->ip_versions[DCE_HWIP][0]) {
|
||||
switch (adev->ip_versions[DCE_HWIP][0]) {
|
||||
case IP_VERSION(1, 0, 0):
|
||||
case IP_VERSION(1, 0, 1):
|
||||
|
@ -861,6 +1448,8 @@ static int amdgpu_discovery_set_display_ip_blocks(struct amdgpu_device *adev)
|
|||
case IP_VERSION(3, 0, 1):
|
||||
case IP_VERSION(3, 1, 2):
|
||||
case IP_VERSION(3, 1, 3):
|
||||
case IP_VERSION(3, 1, 5):
|
||||
case IP_VERSION(3, 1, 6):
|
||||
amdgpu_device_ip_block_add(adev, &dm_ip_block);
|
||||
break;
|
||||
default:
|
||||
|
@ -882,8 +1471,8 @@ static int amdgpu_discovery_set_display_ip_blocks(struct amdgpu_device *adev)
|
|||
adev->ip_versions[DCI_HWIP][0]);
|
||||
return -EINVAL;
|
||||
}
|
||||
#endif
|
||||
}
|
||||
#endif
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@ -904,12 +1493,15 @@ static int amdgpu_discovery_set_gc_ip_blocks(struct amdgpu_device *adev)
|
|||
case IP_VERSION(10, 1, 2):
|
||||
case IP_VERSION(10, 1, 1):
|
||||
case IP_VERSION(10, 1, 3):
|
||||
case IP_VERSION(10, 1, 4):
|
||||
case IP_VERSION(10, 3, 0):
|
||||
case IP_VERSION(10, 3, 2):
|
||||
case IP_VERSION(10, 3, 1):
|
||||
case IP_VERSION(10, 3, 4):
|
||||
case IP_VERSION(10, 3, 5):
|
||||
case IP_VERSION(10, 3, 6):
|
||||
case IP_VERSION(10, 3, 3):
|
||||
case IP_VERSION(10, 3, 7):
|
||||
amdgpu_device_ip_block_add(adev, &gfx_v10_0_ip_block);
|
||||
break;
|
||||
default:
|
||||
|
@ -944,8 +1536,10 @@ static int amdgpu_discovery_set_sdma_ip_blocks(struct amdgpu_device *adev)
|
|||
case IP_VERSION(5, 2, 2):
|
||||
case IP_VERSION(5, 2, 4):
|
||||
case IP_VERSION(5, 2, 5):
|
||||
case IP_VERSION(5, 2, 6):
|
||||
case IP_VERSION(5, 2, 3):
|
||||
case IP_VERSION(5, 2, 1):
|
||||
case IP_VERSION(5, 2, 7):
|
||||
amdgpu_device_ip_block_add(adev, &sdma_v5_2_ip_block);
|
||||
break;
|
||||
default:
|
||||
|
@ -1012,6 +1606,7 @@ static int amdgpu_discovery_set_mm_ip_blocks(struct amdgpu_device *adev)
|
|||
case IP_VERSION(3, 0, 0):
|
||||
case IP_VERSION(3, 0, 16):
|
||||
case IP_VERSION(3, 1, 1):
|
||||
case IP_VERSION(3, 1, 2):
|
||||
case IP_VERSION(3, 0, 2):
|
||||
case IP_VERSION(3, 0, 192):
|
||||
amdgpu_device_ip_block_add(adev, &vcn_v3_0_ip_block);
|
||||
|
@ -1038,12 +1633,14 @@ static int amdgpu_discovery_set_mes_ip_blocks(struct amdgpu_device *adev)
|
|||
case IP_VERSION(10, 1, 1):
|
||||
case IP_VERSION(10, 1, 2):
|
||||
case IP_VERSION(10, 1, 3):
|
||||
case IP_VERSION(10, 1, 4):
|
||||
case IP_VERSION(10, 3, 0):
|
||||
case IP_VERSION(10, 3, 1):
|
||||
case IP_VERSION(10, 3, 2):
|
||||
case IP_VERSION(10, 3, 3):
|
||||
case IP_VERSION(10, 3, 4):
|
||||
case IP_VERSION(10, 3, 5):
|
||||
case IP_VERSION(10, 3, 6):
|
||||
amdgpu_device_ip_block_add(adev, &mes_v10_1_ip_block);
|
||||
break;
|
||||
default:
|
||||
|
@ -1217,11 +1814,6 @@ int amdgpu_discovery_set_ip_blocks(struct amdgpu_device *adev)
|
|||
return -EINVAL;
|
||||
|
||||
amdgpu_discovery_harvest_ip(adev);
|
||||
|
||||
if (!adev->mman.discovery_bin) {
|
||||
DRM_ERROR("ip discovery uninitialized\n");
|
||||
return -EINVAL;
|
||||
}
|
||||
break;
|
||||
}
|
||||
|
||||
|
@ -1242,6 +1834,7 @@ int amdgpu_discovery_set_ip_blocks(struct amdgpu_device *adev)
|
|||
case IP_VERSION(10, 1, 1):
|
||||
case IP_VERSION(10, 1, 2):
|
||||
case IP_VERSION(10, 1, 3):
|
||||
case IP_VERSION(10, 1, 4):
|
||||
case IP_VERSION(10, 3, 0):
|
||||
case IP_VERSION(10, 3, 2):
|
||||
case IP_VERSION(10, 3, 4):
|
||||
|
@ -1254,10 +1847,32 @@ int amdgpu_discovery_set_ip_blocks(struct amdgpu_device *adev)
|
|||
case IP_VERSION(10, 3, 3):
|
||||
adev->family = AMDGPU_FAMILY_YC;
|
||||
break;
|
||||
case IP_VERSION(10, 3, 6):
|
||||
adev->family = AMDGPU_FAMILY_GC_10_3_6;
|
||||
break;
|
||||
case IP_VERSION(10, 3, 7):
|
||||
adev->family = AMDGPU_FAMILY_GC_10_3_7;
|
||||
break;
|
||||
default:
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
switch (adev->ip_versions[GC_HWIP][0]) {
|
||||
case IP_VERSION(9, 1, 0):
|
||||
case IP_VERSION(9, 2, 2):
|
||||
case IP_VERSION(9, 3, 0):
|
||||
case IP_VERSION(10, 1, 3):
|
||||
case IP_VERSION(10, 1, 4):
|
||||
case IP_VERSION(10, 3, 1):
|
||||
case IP_VERSION(10, 3, 3):
|
||||
case IP_VERSION(10, 3, 6):
|
||||
case IP_VERSION(10, 3, 7):
|
||||
adev->flags |= AMD_IS_APU;
|
||||
break;
|
||||
default:
|
||||
break;
|
||||
}
|
||||
|
||||
if (adev->ip_versions[XGMI_HWIP][0] == IP_VERSION(4, 8, 0))
|
||||
adev->gmc.xgmi.supported = true;
|
||||
|
||||
|
@ -1285,7 +1900,9 @@ int amdgpu_discovery_set_ip_blocks(struct amdgpu_device *adev)
|
|||
break;
|
||||
case IP_VERSION(7, 2, 0):
|
||||
case IP_VERSION(7, 2, 1):
|
||||
case IP_VERSION(7, 3, 0):
|
||||
case IP_VERSION(7, 5, 0):
|
||||
case IP_VERSION(7, 5, 1):
|
||||
adev->nbio.funcs = &nbio_v7_2_funcs;
|
||||
adev->nbio.hdp_flush_reg = &nbio_v7_2_hdp_flush_reg;
|
||||
break;
|
||||
|
@ -1368,6 +1985,8 @@ int amdgpu_discovery_set_ip_blocks(struct amdgpu_device *adev)
|
|||
case IP_VERSION(11, 0, 11):
|
||||
case IP_VERSION(11, 5, 0):
|
||||
case IP_VERSION(13, 0, 1):
|
||||
case IP_VERSION(13, 0, 9):
|
||||
case IP_VERSION(13, 0, 10):
|
||||
adev->smuio.funcs = &smuio_v11_0_6_funcs;
|
||||
break;
|
||||
case IP_VERSION(13, 0, 2):
|
||||
|
|
|
@@ -200,8 +200,10 @@ int amdgpu_display_crtc_page_flip_target(struct drm_crtc *crtc,
                goto unpin;
        }

        r = dma_resv_get_fences(new_abo->tbo.base.resv, NULL,
                                &work->shared_count, &work->shared);
        /* TODO: Unify this with other drivers */
        r = dma_resv_get_fences(new_abo->tbo.base.resv, true,
                                &work->shared_count,
                                &work->shared);
        if (unlikely(r != 0)) {
                DRM_ERROR("failed to get fences for buffer\n");
                goto unpin;
|
||||
|
@ -504,28 +506,9 @@ uint32_t amdgpu_display_supported_domains(struct amdgpu_device *adev,
|
|||
*/
|
||||
if ((bo_flags & AMDGPU_GEM_CREATE_CPU_GTT_USWC) &&
|
||||
amdgpu_bo_support_uswc(bo_flags) &&
|
||||
amdgpu_device_asic_has_dc_support(adev->asic_type)) {
|
||||
switch (adev->asic_type) {
|
||||
case CHIP_CARRIZO:
|
||||
case CHIP_STONEY:
|
||||
domain |= AMDGPU_GEM_DOMAIN_GTT;
|
||||
break;
|
||||
case CHIP_RAVEN:
|
||||
/* enable S/G on PCO and RV2 */
|
||||
if ((adev->apu_flags & AMD_APU_IS_RAVEN2) ||
|
||||
(adev->apu_flags & AMD_APU_IS_PICASSO))
|
||||
domain |= AMDGPU_GEM_DOMAIN_GTT;
|
||||
break;
|
||||
case CHIP_RENOIR:
|
||||
case CHIP_VANGOGH:
|
||||
case CHIP_YELLOW_CARP:
|
||||
domain |= AMDGPU_GEM_DOMAIN_GTT;
|
||||
break;
|
||||
|
||||
default:
|
||||
break;
|
||||
}
|
||||
}
|
||||
amdgpu_device_asic_has_dc_support(adev->asic_type) &&
|
||||
adev->mode_info.gpu_vm_support)
|
||||
domain |= AMDGPU_GEM_DOMAIN_GTT;
|
||||
#endif
|
||||
|
||||
return domain;
|
||||
|
@ -708,9 +691,9 @@ static int convert_tiling_flags_to_modifier(struct amdgpu_framebuffer *afb)
|
|||
return -EINVAL;
|
||||
}
|
||||
|
||||
if (adev->asic_type >= CHIP_SIENNA_CICHLID)
|
||||
if (adev->ip_versions[GC_HWIP][0] >= IP_VERSION(10, 3, 0))
|
||||
version = AMD_FMT_MOD_TILE_VER_GFX10_RBPLUS;
|
||||
else if (adev->family == AMDGPU_FAMILY_NV)
|
||||
else if (adev->ip_versions[GC_HWIP][0] >= IP_VERSION(10, 0, 0))
|
||||
version = AMD_FMT_MOD_TILE_VER_GFX10;
|
||||
else
|
||||
version = AMD_FMT_MOD_TILE_VER_GFX9;
|
||||
|
@ -804,7 +787,7 @@ static int convert_tiling_flags_to_modifier(struct amdgpu_framebuffer *afb)
|
|||
if (adev->family >= AMDGPU_FAMILY_NV) {
|
||||
int extra_pipe = 0;
|
||||
|
||||
if (adev->asic_type >= CHIP_SIENNA_CICHLID &&
|
||||
if ((adev->ip_versions[GC_HWIP][0] >= IP_VERSION(10, 3, 0)) &&
|
||||
pipes == packers && pipes > 1)
|
||||
extra_pipe = 1;
|
||||
|
||||
|
@ -954,7 +937,7 @@ static int amdgpu_display_verify_sizes(struct amdgpu_framebuffer *rfb)
|
|||
int ret;
|
||||
unsigned int i, block_width, block_height, block_size_log2;
|
||||
|
||||
if (!rfb->base.dev->mode_config.allow_fb_modifiers)
|
||||
if (rfb->base.dev->mode_config.fb_modifiers_not_supported)
|
||||
return 0;
|
||||
|
||||
for (i = 0; i < format_info->num_planes; ++i) {
|
||||
|
@ -1141,7 +1124,7 @@ int amdgpu_display_framebuffer_init(struct drm_device *dev,
|
|||
if (ret)
|
||||
return ret;
|
||||
|
||||
if (!dev->mode_config.allow_fb_modifiers && !adev->enable_virtual_display) {
|
||||
if (dev->mode_config.fb_modifiers_not_supported && !adev->enable_virtual_display) {
|
||||
drm_WARN_ONCE(dev, adev->family >= AMDGPU_FAMILY_AI,
|
||||
"GFX9+ requires FB check based on format modifier\n");
|
||||
ret = check_tiling_flags_gfx6(rfb);
|
||||
|
@ -1149,7 +1132,7 @@ int amdgpu_display_framebuffer_init(struct drm_device *dev,
|
|||
return ret;
|
||||
}
|
||||
|
||||
if (dev->mode_config.allow_fb_modifiers &&
|
||||
if (!dev->mode_config.fb_modifiers_not_supported &&
|
||||
!(rfb->base.flags & DRM_MODE_FB_MODIFIERS)) {
|
||||
ret = convert_tiling_flags_to_modifier(rfb);
|
||||
if (ret) {
|
||||
|
|
|
@@ -99,9 +99,11 @@
 * - 3.42.0 - Add 16bpc fixed point display support
 * - 3.43.0 - Add device hot plug/unplug support
 * - 3.44.0 - DCN3 supports DCC independent block settings: !64B && 128B, 64B && 128B
 * - 3.45.0 - Add context ioctl stable pstate interface
 * - 3.46.0 - To enable hot plug amdgpu tests in libdrm
 */
#define KMS_DRIVER_MAJOR        3
#define KMS_DRIVER_MINOR        44
#define KMS_DRIVER_MINOR        46
#define KMS_DRIVER_PATCHLEVEL   0
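Editor's note: the bump to KMS_DRIVER_MINOR 46 is what userspace keys on; the 3.46.0 entry above ties it to libdrm's hot plug amdgpu tests. A hedged user-space sketch that gates on it via libdrm's drmGetVersion(); the render node path is a placeholder:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <xf86drm.h>

int main(void)
{
        int fd = open("/dev/dri/renderD128", O_RDWR);
        drmVersionPtr v;

        if (fd < 0)
                return 1;
        v = drmGetVersion(fd);
        if (v) {
                if (v->version_major > 3 ||
                    (v->version_major == 3 && v->version_minor >= 46))
                        printf("hot plug test interface available\n");
                drmFreeVersion(v);
        }
        close(fd);
        return 0;
}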
|
||||
|
||||
int amdgpu_vram_limit;
|
||||
|
@ -109,8 +111,6 @@ int amdgpu_vis_vram_limit;
|
|||
int amdgpu_gart_size = -1; /* auto */
|
||||
int amdgpu_gtt_size = -1; /* auto */
|
||||
int amdgpu_moverate = -1; /* auto */
|
||||
int amdgpu_benchmarking;
|
||||
int amdgpu_testing;
|
||||
int amdgpu_audio = -1;
|
||||
int amdgpu_disp_priority;
|
||||
int amdgpu_hw_i2c;
|
||||
|
@ -174,10 +174,11 @@ int amdgpu_mes;
|
|||
int amdgpu_noretry = -1;
|
||||
int amdgpu_force_asic_type = -1;
|
||||
int amdgpu_tmz = -1; /* auto */
|
||||
uint amdgpu_freesync_vid_mode;
|
||||
int amdgpu_reset_method = -1; /* auto */
|
||||
int amdgpu_num_kcq = -1;
|
||||
int amdgpu_smartshift_bias;
|
||||
int amdgpu_use_xgmi_p2p = 1;
|
||||
int amdgpu_vcnfw_log;
|
||||
|
||||
static void amdgpu_drv_delayed_reset_work_handler(struct work_struct *work);
|
||||
|
||||
|
@ -231,20 +232,6 @@ module_param_named(gttsize, amdgpu_gtt_size, int, 0600);
|
|||
MODULE_PARM_DESC(moverate, "Maximum buffer migration rate in MB/s. (32, 64, etc., -1=auto, 0=1=disabled)");
|
||||
module_param_named(moverate, amdgpu_moverate, int, 0600);
|
||||
|
||||
/**
|
||||
* DOC: benchmark (int)
|
||||
* Run benchmarks. The default is 0 (Skip benchmarks).
|
||||
*/
|
||||
MODULE_PARM_DESC(benchmark, "Run benchmark");
|
||||
module_param_named(benchmark, amdgpu_benchmarking, int, 0444);
|
||||
|
||||
/**
|
||||
* DOC: test (int)
|
||||
* Test BO GTT->VRAM and VRAM->GTT GPU copies. The default is 0 (Skip test, only set 1 to run test).
|
||||
*/
|
||||
MODULE_PARM_DESC(test, "Run tests");
|
||||
module_param_named(test, amdgpu_testing, int, 0444);
|
||||
|
||||
/**
|
||||
* DOC: audio (int)
|
||||
* Set HDMI/DPAudio. Only affects non-DC display handling. The default is -1 (Enabled), set 0 to disabled it.
|
||||
|
@ -667,6 +654,13 @@ MODULE_PARM_DESC(force_asic_type,
|
|||
"A non negative value used to specify the asic type for all supported GPUs");
|
||||
module_param_named(force_asic_type, amdgpu_force_asic_type, int, 0444);
|
||||
|
||||
/**
|
||||
* DOC: use_xgmi_p2p (int)
|
||||
* Enables/disables XGMI P2P interface (0 = disable, 1 = enable).
|
||||
*/
|
||||
MODULE_PARM_DESC(use_xgmi_p2p,
|
||||
"Enable XGMI P2P interface (0 = disable; 1 = enable (default))");
|
||||
module_param_named(use_xgmi_p2p, amdgpu_use_xgmi_p2p, int, 0444);
|
||||
|
||||
|
||||
#ifdef CONFIG_HSA_AMD
|
||||
|
@@ -740,7 +734,7 @@ MODULE_PARM_DESC(debug_largebar,
 * systems with a broken CRAT table.
 *
 * Default is auto (according to asic type, iommu_v2, and crat table, to decide
 * whehter use CRAT)
 * whether use CRAT)
 */
int ignore_crat;
module_param(ignore_crat, int, 0444);
|
||||
|
@@ -843,37 +837,11 @@ module_param_named(backlight, amdgpu_backlight, bint, 0444);
 MODULE_PARM_DESC(tmz, "Enable TMZ feature (-1 = auto (default), 0 = off, 1 = on)");
 module_param_named(tmz, amdgpu_tmz, int, 0444);

-/**
- * DOC: freesync_video (uint)
- * Enable the optimization to adjust front porch timing to achieve seamless
- * mode change experience when setting a freesync supported mode for which full
- * modeset is not needed.
- *
- * The Display Core will add a set of modes derived from the base FreeSync
- * video mode into the corresponding connector's mode list based on commonly
- * used refresh rates and VRR range of the connected display, when users enable
- * this feature. From the userspace perspective, they can see a seamless mode
- * change experience when the change between different refresh rates under the
- * same resolution. Additionally, userspace applications such as Video playback
- * can read this modeset list and change the refresh rate based on the video
- * frame rate. Finally, the userspace can also derive an appropriate mode for a
- * particular refresh rate based on the FreeSync Mode and add it to the
- * connector's mode list.
- *
- * Note: This is an experimental feature.
- *
- * The default value: 0 (off).
- */
-MODULE_PARM_DESC(
-	freesync_video,
-	"Enable freesync modesetting optimization feature (0 = off (default), 1 = on)");
-module_param_named(freesync_video, amdgpu_freesync_vid_mode, uint, 0444);
-
 /**
  * DOC: reset_method (int)
- * GPU reset method (-1 = auto (default), 0 = legacy, 1 = mode0, 2 = mode1, 3 = mode2, 4 = baco, 5 = pci)
+ * GPU reset method (-1 = auto (default), 0 = legacy, 1 = mode0, 2 = mode1, 3 = mode2, 4 = baco)
  */
-MODULE_PARM_DESC(reset_method, "GPU reset method (-1 = auto (default), 0 = legacy, 1 = mode0, 2 = mode1, 3 = mode2, 4 = baco/bamaco, 5 = pci)");
+MODULE_PARM_DESC(reset_method, "GPU reset method (-1 = auto (default), 0 = legacy, 1 = mode0, 2 = mode1, 3 = mode2, 4 = baco/bamaco)");
 module_param_named(reset_method, amdgpu_reset_method, int, 0444);

 /**
@@ -888,6 +856,13 @@ module_param_named(bad_page_threshold, amdgpu_bad_page_threshold, int, 0444);
 MODULE_PARM_DESC(num_kcq, "number of kernel compute queue user want to setup (8 if set to greater than 8 or less than 0, only affect gfx 8+)");
 module_param_named(num_kcq, amdgpu_num_kcq, int, 0444);

+/**
+ * DOC: vcnfw_log (int)
+ * Enable vcnfw log output for debugging, the default is disabled.
+ */
+MODULE_PARM_DESC(vcnfw_log, "Enable vcnfw log(0 = disable (default value), 1 = enable)");
+module_param_named(vcnfw_log, amdgpu_vcnfw_log, int, 0444);
+
 /**
  * DOC: smu_pptable_id (int)
  * Used to override pptable id. id = 0 use VBIOS pptable.
@@ -1942,13 +1917,14 @@ static const struct pci_device_id pciidlist[] = {
	{0x1002, 0x73FF, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_DIMGREY_CAVEFISH},

	/* Aldebaran */
-	{0x1002, 0x7408, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ALDEBARAN|AMD_EXP_HW_SUPPORT},
-	{0x1002, 0x740C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ALDEBARAN|AMD_EXP_HW_SUPPORT},
-	{0x1002, 0x740F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ALDEBARAN|AMD_EXP_HW_SUPPORT},
-	{0x1002, 0x7410, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ALDEBARAN|AMD_EXP_HW_SUPPORT},
+	{0x1002, 0x7408, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ALDEBARAN},
+	{0x1002, 0x740C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ALDEBARAN},
+	{0x1002, 0x740F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ALDEBARAN},
+	{0x1002, 0x7410, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ALDEBARAN},

	/* CYAN_SKILLFISH */
	{0x1002, 0x13FE, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_CYAN_SKILLFISH|AMD_IS_APU},
+	{0x1002, 0x143F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_CYAN_SKILLFISH|AMD_IS_APU},

	/* BEIGE_GOBY */
	{0x1002, 0x7420, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BEIGE_GOBY},
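Each table entry's final field (the CHIP_* value, optionally OR'd with flags such as AMD_IS_APU) is the driver_data the probe callback reads back to pick the ASIC-specific setup. A generic sketch of how such a pci_device_id table is wired to a PCI driver; the names, the dummy driver_data value and the trivial probe body are illustrative, not amdgpu code:

#include <linux/module.h>
#include <linux/pci.h>

static const struct pci_device_id demo_ids[] = {
	/* vendor, device, subvendor, subdevice, class, class_mask, driver_data */
	{ 0x1002, 0x7408, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 42 },
	{ }	/* sentinel */
};
MODULE_DEVICE_TABLE(pci, demo_ids);

static int demo_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
{
	/* ent->driver_data carries the per-chip value stored in the table */
	dev_info(&pdev->dev, "probing chip type %lu\n", ent->driver_data);
	return 0;
}

static struct pci_driver demo_pci_driver = {
	.name     = "demo",
	.id_table = demo_ids,
	.probe    = demo_probe,
};
module_pci_driver(demo_pci_driver);
MODULE_LICENSE("GPL");
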
@@ -1994,6 +1970,28 @@ static bool amdgpu_is_fw_framebuffer(resource_size_t base,
	return found;
 }

+static void amdgpu_get_secondary_funcs(struct amdgpu_device *adev)
+{
+	struct pci_dev *p = NULL;
+	int i;
+
+	/* 0 - GPU
+	 * 1 - audio
+	 * 2 - USB
+	 * 3 - UCSI
+	 */
+	for (i = 1; i < 4; i++) {
+		p = pci_get_domain_bus_and_slot(pci_domain_nr(adev->pdev->bus),
+						adev->pdev->bus->number, i);
+		if (p) {
+			pm_runtime_get_sync(&p->dev);
+			pm_runtime_mark_last_busy(&p->dev);
+			pm_runtime_put_autosuspend(&p->dev);
+			pci_dev_put(p);
+		}
+	}
+}
+
 static int amdgpu_pci_probe(struct pci_dev *pdev,
			    const struct pci_device_id *ent)
 {
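The third argument of pci_get_domain_bus_and_slot() is a devfn (slot and function packed together), so passing the bare loop index assumes the GPU sits at slot 0 of its bus. A hedged sketch of the same lookup with the slot made explicit via the PCI_SLOT()/PCI_DEVFN() helpers; the function name is illustrative and not part of this patch:

#include <linux/pci.h>

/* Look up the PCI function 'fn' that shares domain/bus/slot with 'gpu'. */
static struct pci_dev *demo_get_peer_function(struct pci_dev *gpu, unsigned int fn)
{
	unsigned int devfn = PCI_DEVFN(PCI_SLOT(gpu->devfn), fn);

	return pci_get_domain_bus_and_slot(pci_domain_nr(gpu->bus),
					   gpu->bus->number, devfn);
}
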
@@ -2129,6 +2127,48 @@ retry_init:
	if (ret)
		DRM_ERROR("Creating debugfs files failed (%d).\n", ret);

+	if (adev->runpm) {
+		/* only need to skip on ATPX */
+		if (amdgpu_device_supports_px(ddev))
+			dev_pm_set_driver_flags(ddev->dev, DPM_FLAG_NO_DIRECT_COMPLETE);
+		/* we want direct complete for BOCO */
+		if (amdgpu_device_supports_boco(ddev))
+			dev_pm_set_driver_flags(ddev->dev, DPM_FLAG_SMART_PREPARE |
+						DPM_FLAG_SMART_SUSPEND |
+						DPM_FLAG_MAY_SKIP_RESUME);
+		pm_runtime_use_autosuspend(ddev->dev);
+		pm_runtime_set_autosuspend_delay(ddev->dev, 5000);
+
+		pm_runtime_allow(ddev->dev);
+
+		pm_runtime_mark_last_busy(ddev->dev);
+		pm_runtime_put_autosuspend(ddev->dev);
+
+		/*
+		 * For runpm implemented via BACO, PMFW will handle the
+		 * timing for BACO in and out:
+		 *   - put ASIC into BACO state only when both video and
+		 *     audio functions are in D3 state.
+		 *   - pull ASIC out of BACO state when either video or
+		 *     audio function is in D0 state.
+		 * Also, at startup, PMFW assumes both functions are in
+		 * D0 state.
+		 *
+		 * So if snd driver was loaded prior to amdgpu driver
+		 * and audio function was put into D3 state, there will
+		 * be no PMFW-aware D-state transition(D0->D3) on runpm
+		 * suspend. Thus the BACO will be not correctly kicked in.
+		 *
+		 * Via amdgpu_get_secondary_funcs(), the audio dev is put
+		 * into D0 state. Then there will be a PMFW-aware D-state
+		 * transition(D0->D3) on runpm suspend.
+		 */
+		if (amdgpu_device_supports_baco(ddev) &&
+		    !(adev->flags & AMD_IS_APU) &&
+		    (adev->asic_type >= CHIP_NAVI10))
+			amdgpu_get_secondary_funcs(adev);
+	}
+
	return 0;

err_pci:
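With autosuspend armed this way, any path that has to touch the hardware brackets the access with a runtime PM reference. The usual pattern, as a generic sketch rather than a function from this patch:

#include <linux/pm_runtime.h>

/* Sketch: keep the device awake for the duration of a register access. */
static int demo_touch_hw(struct device *dev)
{
	int ret;

	ret = pm_runtime_get_sync(dev);	/* resume the device if suspended */
	if (ret < 0) {
		/* the usage count is raised even on failure, so drop it */
		pm_runtime_put_autosuspend(dev);
		return ret;
	}

	/* ... access the hardware here ... */

	pm_runtime_mark_last_busy(dev);		/* restart the autosuspend timer */
	pm_runtime_put_autosuspend(dev);	/* drop the reference */
	return 0;
}
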
@@ -2140,8 +2180,15 @@ static void
 amdgpu_pci_remove(struct pci_dev *pdev)
 {
	struct drm_device *dev = pci_get_drvdata(pdev);
+	struct amdgpu_device *adev = drm_to_adev(dev);

	drm_dev_unplug(dev);
+
+	if (adev->runpm) {
+		pm_runtime_get_sync(dev->dev);
+		pm_runtime_forbid(dev->dev);
+	}
+
	amdgpu_driver_unload_kms(dev);

	/*
@@ -446,24 +446,18 @@ int amdgpu_fence_driver_start_ring(struct amdgpu_ring *ring,
  * for the requested ring.
  *
  * @ring: ring to init the fence driver on
- * @num_hw_submission: number of entries on the hardware queue
- * @sched_score: optional score atomic shared with other schedulers
  *
  * Init the fence driver for the requested ring (all asics).
  * Helper function for amdgpu_fence_driver_init().
  */
-int amdgpu_fence_driver_init_ring(struct amdgpu_ring *ring,
-				  unsigned num_hw_submission,
-				  atomic_t *sched_score)
+int amdgpu_fence_driver_init_ring(struct amdgpu_ring *ring)
 {
	struct amdgpu_device *adev = ring->adev;
-	long timeout;
-	int r;

	if (!adev)
		return -EINVAL;

-	if (!is_power_of_2(num_hw_submission))
+	if (!is_power_of_2(ring->num_hw_submission))
		return -EINVAL;

	ring->fence_drv.cpu_addr = NULL;
@@ -474,41 +468,14 @@ int amdgpu_fence_driver_init_ring(struct amdgpu_ring *ring,

	timer_setup(&ring->fence_drv.fallback_timer, amdgpu_fence_fallback, 0);

-	ring->fence_drv.num_fences_mask = num_hw_submission * 2 - 1;
+	ring->fence_drv.num_fences_mask = ring->num_hw_submission * 2 - 1;
	spin_lock_init(&ring->fence_drv.lock);
-	ring->fence_drv.fences = kcalloc(num_hw_submission * 2, sizeof(void *),
+	ring->fence_drv.fences = kcalloc(ring->num_hw_submission * 2, sizeof(void *),
					 GFP_KERNEL);

	if (!ring->fence_drv.fences)
		return -ENOMEM;

-	/* No need to setup the GPU scheduler for rings that don't need it */
-	if (ring->no_scheduler)
-		return 0;
-
-	switch (ring->funcs->type) {
-	case AMDGPU_RING_TYPE_GFX:
-		timeout = adev->gfx_timeout;
-		break;
-	case AMDGPU_RING_TYPE_COMPUTE:
-		timeout = adev->compute_timeout;
-		break;
-	case AMDGPU_RING_TYPE_SDMA:
-		timeout = adev->sdma_timeout;
-		break;
-	default:
-		timeout = adev->video_timeout;
-		break;
-	}
-
-	r = drm_sched_init(&ring->sched, &amdgpu_sched_ops,
-			   num_hw_submission, amdgpu_job_hang_limit,
-			   timeout, NULL, sched_score, ring->name);
-	if (r) {
-		DRM_ERROR("Failed to create scheduler on ring %s.\n",
-			  ring->name);
-		return r;
-	}
-
	return 0;
 }

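The is_power_of_2() requirement on num_hw_submission exists because num_fences_mask is used to index the fences array with a plain bitwise AND, which only wraps correctly when the array size is a power of two. A standalone illustration of that masking, in plain C rather than driver code:

#include <assert.h>
#include <stdint.h>

int main(void)
{
	/* 2 * num_hw_submission slots, as in the fence driver */
	const uint32_t num_hw_submission = 128;	/* must be a power of two */
	const uint32_t mask = num_hw_submission * 2 - 1;

	/* Consecutive sequence numbers map to consecutive slots and wrap
	 * around at the array size, with no modulo needed.
	 */
	assert((255u & mask) == 255);
	assert((256u & mask) == 0);
	assert((257u & mask) == 1);
	return 0;
}
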
@@ -30,7 +30,6 @@
 #include "amdgpu_eeprom.h"

 #define FRU_EEPROM_MADDR		0x60000
-#define I2C_PRODUCT_INFO_OFFSET	0xC0

 static bool is_fru_eeprom_supported(struct amdgpu_device *adev)
 {
@@ -40,7 +39,13 @@ static bool is_fru_eeprom_supported(struct amdgpu_device *adev)
	 */
	struct atom_context *atom_ctx = adev->mode_info.atom_context;

-	/* VBIOS is of the format ###-DXXXYY-##. For SKU identification,
+	/* The i2c access is blocked on VF
+	 * TODO: Need other way to get the info
+	 */
+	if (amdgpu_sriov_vf(adev))
+		return false;
+
+	/* VBIOS is of the format ###-DXXXYYYY-##. For SKU identification,
	 * we can use just the "DXXX" portion. If there were more models, we
	 * could convert the 3 characters to a hex integer and use a switch
	 * for ease/speed/readability. For now, 2 string comparisons are
@@ -59,17 +64,24 @@ static bool is_fru_eeprom_supported(struct amdgpu_device *adev)
	case CHIP_ALDEBARAN:
		/* All Aldebaran SKUs have the FRU */
		return true;
+	case CHIP_SIENNA_CICHLID:
+		if (strnstr(atom_ctx->vbios_version, "D603",
+			    sizeof(atom_ctx->vbios_version)))
+			return true;
+		else
+			return false;
	default:
		return false;
	}
 }

 static int amdgpu_fru_read_eeprom(struct amdgpu_device *adev, uint32_t addrptr,
-				  unsigned char *buff)
+				  unsigned char *buf, size_t buf_size)
 {
-	int ret, size;
+	int ret;
+	u8 size;

-	ret = amdgpu_eeprom_read(&adev->pm.smu_i2c, addrptr, buff, 1);
+	ret = amdgpu_eeprom_read(adev->pm.fru_eeprom_i2c_bus, addrptr, buf, 1);
	if (ret < 1) {
		DRM_WARN("FRU: Failed to get size field");
		return ret;
@@ -78,9 +90,11 @@ static int amdgpu_fru_read_eeprom(struct amdgpu_device *adev, uint32_t addrptr,
	/* The size returned by the i2c requires subtraction of 0xC0 since the
	 * size apparently always reports as 0xC0+actual size.
	 */
-	size = buff[0] - I2C_PRODUCT_INFO_OFFSET;
+	size = buf[0] & 0x3F;
+	size = min_t(size_t, size, buf_size);

-	ret = amdgpu_eeprom_read(&adev->pm.smu_i2c, addrptr + 1, buff, size);
+	ret = amdgpu_eeprom_read(adev->pm.fru_eeprom_i2c_bus, addrptr + 1,
+				 buf, size);
	if (ret < 1) {
		DRM_WARN("FRU: Failed to get data field");
		return ret;
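The 0xC0 the old code subtracted is the type portion of an IPMI FRU type/length byte: the top two bits encode the field type (0b11 = 8-bit ASCII/Latin-1) and the low six bits the length, so masking with 0x3F extracts the length regardless of the type bits. A quick standalone check of the equivalence; illustrative only, not driver code:

#include <assert.h>

int main(void)
{
	unsigned char type_len = 0xCB;	/* type 0b11 (ASCII), length 11 */

	/* old decoding: correct only for the ASCII field type */
	assert(type_len - 0xC0 == 11);
	/* new decoding: length bits only, independent of the type bits */
	assert((type_len & 0x3F) == 11);
	return 0;
}
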
@@ -91,19 +105,15 @@ static int amdgpu_fru_read_eeprom(struct amdgpu_device *adev, uint32_t addrptr,

 int amdgpu_fru_get_product_info(struct amdgpu_device *adev)
 {
-	unsigned char buff[AMDGPU_PRODUCT_NAME_LEN+2];
+	unsigned char buf[AMDGPU_PRODUCT_NAME_LEN];
	u32 addrptr;
	int size, len;
-	int offset = 2;

	if (!is_fru_eeprom_supported(adev))
		return 0;

-	if (adev->asic_type == CHIP_ALDEBARAN)
-		offset = 0;
-
	/* If algo exists, it means that the i2c_adapter's initialized */
-	if (!adev->pm.smu_i2c.algo) {
+	if (!adev->pm.fru_eeprom_i2c_bus || !adev->pm.fru_eeprom_i2c_bus->algo) {
		DRM_WARN("Cannot access FRU, EEPROM accessor not initialized");
		return -ENODEV;
	}
@@ -121,7 +131,7 @@ int amdgpu_fru_get_product_info(struct amdgpu_device *adev)
	 * and the language field, so just start from 0xb, manufacturer size
	 */
	addrptr = FRU_EEPROM_MADDR + 0xb;
-	size = amdgpu_fru_read_eeprom(adev, addrptr, buff);
+	size = amdgpu_fru_read_eeprom(adev, addrptr, buf, sizeof(buf));
	if (size < 1) {
		DRM_ERROR("Failed to read FRU Manufacturer, ret:%d", size);
		return -EINVAL;

@@ -131,7 +141,7 @@ int amdgpu_fru_get_product_info(struct amdgpu_device *adev)
	 * size field being 1 byte. This pattern continues below.
	 */
	addrptr += size + 1;
-	size = amdgpu_fru_read_eeprom(adev, addrptr, buff);
+	size = amdgpu_fru_read_eeprom(adev, addrptr, buf, sizeof(buf));
	if (size < 1) {
		DRM_ERROR("Failed to read FRU product name, ret:%d", size);
		return -EINVAL;

@@ -143,12 +153,11 @@ int amdgpu_fru_get_product_info(struct amdgpu_device *adev)
			 AMDGPU_PRODUCT_NAME_LEN);
		len = AMDGPU_PRODUCT_NAME_LEN - 1;
	}
-	/* Start at 2 due to buff using fields 0 and 1 for the address */
-	memcpy(adev->product_name, &buff[offset], len);
+	memcpy(adev->product_name, buf, len);
	adev->product_name[len] = '\0';

	addrptr += size + 1;
-	size = amdgpu_fru_read_eeprom(adev, addrptr, buff);
+	size = amdgpu_fru_read_eeprom(adev, addrptr, buf, sizeof(buf));
	if (size < 1) {
		DRM_ERROR("Failed to read FRU product number, ret:%d", size);
		return -EINVAL;

@@ -162,11 +171,11 @@ int amdgpu_fru_get_product_info(struct amdgpu_device *adev)
		DRM_WARN("FRU Product Number is larger than 16 characters. This is likely a mistake");
		len = sizeof(adev->product_number) - 1;
	}
-	memcpy(adev->product_number, &buff[offset], len);
+	memcpy(adev->product_number, buf, len);
	adev->product_number[len] = '\0';

	addrptr += size + 1;
-	size = amdgpu_fru_read_eeprom(adev, addrptr, buff);
+	size = amdgpu_fru_read_eeprom(adev, addrptr, buf, sizeof(buf));

	if (size < 1) {
		DRM_ERROR("Failed to read FRU product version, ret:%d", size);

@@ -174,7 +183,7 @@ int amdgpu_fru_get_product_info(struct amdgpu_device *adev)
	}

	addrptr += size + 1;
-	size = amdgpu_fru_read_eeprom(adev, addrptr, buff);
+	size = amdgpu_fru_read_eeprom(adev, addrptr, buf, sizeof(buf));

	if (size < 1) {
		DRM_ERROR("Failed to read FRU serial number, ret:%d", size);

@@ -189,7 +198,7 @@ int amdgpu_fru_get_product_info(struct amdgpu_device *adev)
		DRM_WARN("FRU Serial Number is larger than 16 characters. This is likely a mistake");
		len = sizeof(adev->serial) - 1;
	}
-	memcpy(adev->serial, &buff[offset], len);
+	memcpy(adev->serial, buf, len);
	adev->serial[len] = '\0';

	return 0;

@@ -150,7 +150,7 @@ void amdgpu_gart_table_vram_free(struct amdgpu_device *adev)
  * replaces them with the dummy page (all asics).
  * Returns 0 for success, -EINVAL for failure.
  */
-int amdgpu_gart_unbind(struct amdgpu_device *adev, uint64_t offset,
+void amdgpu_gart_unbind(struct amdgpu_device *adev, uint64_t offset,
			int pages)
 {
	unsigned t;

@@ -161,13 +161,11 @@ int amdgpu_gart_unbind(struct amdgpu_device *adev, uint64_t offset,
	uint64_t flags = 0;
	int idx;

-	if (!adev->gart.ready) {
-		WARN(1, "trying to unbind memory from uninitialized GART !\n");
-		return -EINVAL;
-	}
+	if (!adev->gart.ptr)
+		return;

	if (!drm_dev_enter(adev_to_drm(adev), &idx))
-		return 0;
+		return;

	t = offset / AMDGPU_GPU_PAGE_SIZE;
	p = t / AMDGPU_GPU_PAGES_IN_CPU_PAGE;

@@ -188,7 +186,6 @@ int amdgpu_gart_unbind(struct amdgpu_device *adev, uint64_t offset,
		amdgpu_gmc_flush_gpu_tlb(adev, 0, i, 0);

	drm_dev_exit(idx);
-	return 0;
 }

 /**

@@ -204,7 +201,7 @@ int amdgpu_gart_unbind(struct amdgpu_device *adev, uint64_t offset,
  * Map the dma_addresses into GART entries (all asics).
  * Returns 0 for success, -EINVAL for failure.
  */
-int amdgpu_gart_map(struct amdgpu_device *adev, uint64_t offset,
+void amdgpu_gart_map(struct amdgpu_device *adev, uint64_t offset,
		    int pages, dma_addr_t *dma_addr, uint64_t flags,
		    void *dst)
 {

@@ -212,13 +209,8 @@ int amdgpu_gart_map(struct amdgpu_device *adev, uint64_t offset,
	unsigned i, j, t;
	int idx;

-	if (!adev->gart.ready) {
-		WARN(1, "trying to bind memory to uninitialized GART !\n");
-		return -EINVAL;
-	}
-
	if (!drm_dev_enter(adev_to_drm(adev), &idx))
-		return 0;
+		return;

	t = offset / AMDGPU_GPU_PAGE_SIZE;


@@ -230,7 +222,6 @@ int amdgpu_gart_map(struct amdgpu_device *adev, uint64_t offset,
		}
	}
	drm_dev_exit(idx);
-	return 0;
 }

 /**

@@ -246,20 +237,14 @@ int amdgpu_gart_map(struct amdgpu_device *adev, uint64_t offset,
  * (all asics).
  * Returns 0 for success, -EINVAL for failure.
  */
-int amdgpu_gart_bind(struct amdgpu_device *adev, uint64_t offset,
+void amdgpu_gart_bind(struct amdgpu_device *adev, uint64_t offset,
		     int pages, dma_addr_t *dma_addr,
		     uint64_t flags)
 {
-	if (!adev->gart.ready) {
-		WARN(1, "trying to bind memory to uninitialized GART !\n");
-		return -EINVAL;
-	}
-
	if (!adev->gart.ptr)
-		return 0;
+		return;

-	return amdgpu_gart_map(adev, offset, pages, dma_addr, flags,
-			       adev->gart.ptr);
+	amdgpu_gart_map(adev, offset, pages, dma_addr, flags, adev->gart.ptr);
 }

 /**

@@ -274,6 +259,9 @@ void amdgpu_gart_invalidate_tlb(struct amdgpu_device *adev)
 {
	int i;

+	if (!adev->gart.ptr)
+		return;
+
	mb();
	amdgpu_device_flush_hdp(adev, NULL);
	for (i = 0; i < adev->num_vmhubs; i++)
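The drm_dev_enter()/drm_dev_exit() pair these helpers now rely on, instead of returning an error, is the standard DRM guard for hardware access that may race with hot unplug: it fails once drm_dev_unplug() has run, so the access is simply skipped. A generic sketch of the pattern; the function and register names are illustrative, not the gart code:

#include <drm/drm_drv.h>
#include <linux/io.h>

/* Sketch: skip a hardware write if the DRM device was already unplugged. */
static void demo_write_reg(struct drm_device *ddev, void __iomem *reg, u32 val)
{
	int idx;

	if (!drm_dev_enter(ddev, &idx))
		return;		/* device gone, nothing to do */

	writel(val, reg);

	drm_dev_exit(idx);
}
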
@@ -46,7 +46,6 @@ struct amdgpu_gart {
	unsigned		num_gpu_pages;
	unsigned		num_cpu_pages;
	unsigned		table_size;
-	bool			ready;

	/* Asic default pte flags */
	uint64_t		gart_pte_flags;

@@ -58,12 +57,12 @@ int amdgpu_gart_table_vram_pin(struct amdgpu_device *adev);
 void amdgpu_gart_table_vram_unpin(struct amdgpu_device *adev);
 int amdgpu_gart_init(struct amdgpu_device *adev);
 void amdgpu_gart_dummy_page_fini(struct amdgpu_device *adev);
-int amdgpu_gart_unbind(struct amdgpu_device *adev, uint64_t offset,
-		       int pages);
-int amdgpu_gart_map(struct amdgpu_device *adev, uint64_t offset,
-		    int pages, dma_addr_t *dma_addr, uint64_t flags,
-		    void *dst);
-int amdgpu_gart_bind(struct amdgpu_device *adev, uint64_t offset,
-		     int pages, dma_addr_t *dma_addr, uint64_t flags);
+void amdgpu_gart_unbind(struct amdgpu_device *adev, uint64_t offset,
+			int pages);
+void amdgpu_gart_map(struct amdgpu_device *adev, uint64_t offset,
+		     int pages, dma_addr_t *dma_addr, uint64_t flags,
+		     void *dst);
+void amdgpu_gart_bind(struct amdgpu_device *adev, uint64_t offset,
+		      int pages, dma_addr_t *dma_addr, uint64_t flags);
 void amdgpu_gart_invalidate_tlb(struct amdgpu_device *adev);
 #endif
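With the GART bind/unbind/map helpers now returning void, callers drop their error plumbing entirely. A before/after sketch of a hypothetical caller, not a function from this patch:

/* before: the return value had to be checked and propagated */
static int demo_bind_old(struct amdgpu_device *adev, uint64_t offset, int pages,
			 dma_addr_t *dma_addr, uint64_t flags)
{
	int r = amdgpu_gart_bind(adev, offset, pages, dma_addr, flags);

	if (r)
		return r;
	return 0;
}

/* after: binding can no longer report failure, so the call stands alone */
static void demo_bind_new(struct amdgpu_device *adev, uint64_t offset, int pages,
			  dma_addr_t *dma_addr, uint64_t flags)
{
	amdgpu_gart_bind(adev, offset, pages, dma_addr, flags);
}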