Merge tag 'drm-intel-next-2019-07-30' of git://anongit.freedesktop.org/drm/drm-intel into drm-next

drm-intel-next-2019-07-30:
- More changes on simplifying locking mechanisms (Chris)
- Selftest fixes and improvements (Chris)
- More work around engine tracking for better handling (Chris, Tvrtko)
- HDCP debug and info improvements (Ram, Ashuman)
- Add DSI properties (Vandita)
- Rework sdvo support for better debuggability before fixing bugs (Ville)
- Display PLL fixes and improvements, especially targeting Ice Lake (Imre, Matt, Ville)
- Perf fixes and improvements (Lionel)
- Enumerate scratch buffers (Lionel)
- Add infra to hold off preemption on a request (Lionel)
- Ice Lake color space fixes (Uma)
- Type-C fixes and improvements (Lucas)
- Fixes and improvements around workarounds (Chris, John, Tvrtko)
- GuC related fixes and improvements (Chris, Daniele, Michal, Tvrtko)
- Fix VLV/CHV display power domain (Ville)
- Improvements around watermarks (Ville)
- Favor intel_ types in intel_atomic functions (Ville)
- Don't pass stack garbage to pcode (Ville)
- Improve display tracepoints (Steven)
- Don't overestimate 4:2:0 link symbol clock (Ville)
- Add support for 4th pipe and transcoder (Lucas)
- Introduce initial support for the Tiger Lake platform (Daniele, Lucas, Mahesh, Jose, Imre, Mika, Vandita, Rodrigo, Michel)
- PPGTT allocation simplification (Chris)
- Standardize function names and suffixes to keep them clean and symmetric, and to keep checkpatch happy (Janusz)
- Skip SINK_COUNT read on CH7511 (Ville)
- Kernel documentation fixes (Chris, Michal)
- Add modular FIA (Anusha, Lucas)
- Fix EHL display (Matt, Vivek)
- Enable hotplug retry (Imre, Jose)
- Disable preemption under GVT (Chris)
- OA: reconfigure context on the fly (Chris)
- Fixes and improvements around engine reset (Chris)
- Small clean-up of the display pipe fault mask (Ville)
- Make sure cdclk is high enough for DP audio on VLV/CHV (Ville)
- Drop some wmb() and improve pwrite flush (Chris)
- Fix critical PSR regression (DK)
- Remove unused variables (YueHaibing)
- Use dev_get_drvdata for simplification (Chunhong)
- Use upstream version of header tests (Jani)

drm-intel-next-2019-07-08:
- Signal fence completion from i915_request_wait (Chris)
- Fixes and improvements around ring pin/unpin (Chris)
- Display uncore prep patches (Daniele)
- Execlists preemption improvements (Chris)
- Selftest fixes and improvements (Chris)
- More Elkhart Lake enabling work (Vandita, Jose, Matt, Vivek)
- Defer address space cleanup to an RCU worker (Chris)
- Implicit dev_priv removal, GT compartmentalization and related follow-ups (Tvrtko, Chris)
- Prevent dereference of engine before NULL check in error capture (Chris)
- GuC related fixes (Daniele, Robert)
- Many changes on active tracking, timelines and locking mechanisms (Chris)
- Disable SAMPLER_STATE prefetching on Gen11 (HW w/a) (Kenneth)
- i915_perf fixes (Lionel)
- Add Ice Lake PCI ID (Mika)
- eDP backlight fix (Lee)
- Fix various gen2 tracepoints (Ville)
- Some irq vfunc clean-ups and improvements (Ville)
- Move OA files to a separate folder (Michal)
- Display self-contained headers clean-up (Jani)
- Preparation for the 4th pipe (Lucas)
- Move atomic commit, watermarks and other places to use more intel_crtc_state (Maarten)
- Many Ice Lake Type-C and Thunderbolt fixes (Imre)
- Fix some Ice Lake hw w/a whitelist regs (Lionel)
- Fix memleak in runtime wakeref tracking (Mika)
- Remove unused private PPAT manager (Michal)
- Don't check PPGTT presence on PPGTT-only platforms (Michal)
- Fix ICL DSI suspend/resume (Chris)
- Fix ICL bandwidth issues (Ville)
- Add N & CTS values for 10/12 bit deep color (Aditya)
- Move more GT related code under the gt folder (Chris)
- Forcewake related fixes (Chris)
- Show support for accurate sw PMU busyness tracking (Chris)
- Handle gtt double alloc failures (Chris)
- Upgrade to new GuC version (Michal)
- Improve w/a debug dumps and pull engine w/a initialization into common code (Chris)
- Look for instdone on all engines at hangcheck (Tvrtko)
- Engine lookup simplification (Chris)
- Many plane color format fixes and improvements (Ville)
- Fix some compilation issues (YueHaibing)
- GTT page directory clean-up and improvements (Mika)

Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190801201314.GA23635@intel.com
commit dce14e36ae
@@ -430,31 +430,31 @@ WOPCM Layout
 GuC
 ===
 
+Firmware Layout
+-------------------
+
+.. kernel-doc:: drivers/gpu/drm/i915/gt/uc/intel_uc_fw_abi.h
+   :doc: Firmware Layout
+
 GuC-specific firmware loader
 ----------------------------
 
-.. kernel-doc:: drivers/gpu/drm/i915/intel_guc_fw.c
+.. kernel-doc:: drivers/gpu/drm/i915/gt/uc/intel_guc_fw.c
    :internal:
 
 GuC-based command submission
 ----------------------------
 
-.. kernel-doc:: drivers/gpu/drm/i915/intel_guc_submission.c
+.. kernel-doc:: drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
    :doc: GuC-based command submission
 
-.. kernel-doc:: drivers/gpu/drm/i915/intel_guc_submission.c
+.. kernel-doc:: drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
    :internal:
 
-GuC Firmware Layout
--------------------
-
-.. kernel-doc:: drivers/gpu/drm/i915/intel_guc_fwif.h
-   :doc: GuC Firmware Layout
-
 GuC Address Space
 -----------------
 
-.. kernel-doc:: drivers/gpu/drm/i915/intel_guc.c
+.. kernel-doc:: drivers/gpu/drm/i915/gt/uc/intel_guc.c
    :doc: GuC Address Space
 
 Tracing
@@ -549,6 +549,7 @@ static const struct pci_device_id intel_early_ids[] __initconst = {
        INTEL_CNL_IDS(&gen9_early_ops),
        INTEL_ICL_11_IDS(&gen11_early_ops),
        INTEL_EHL_IDS(&gen11_early_ops),
+       INTEL_TGL_12_IDS(&gen11_early_ops),
 };
 
 struct resource intel_graphics_stolen_res __ro_after_init = DEFINE_RES_MEM(0, 0);
@@ -7,6 +7,7 @@ config DRM_I915_WERROR
        # We use the dependency on !COMPILE_TEST to not be enabled in
        # allmodconfig or allyesconfig configurations
        depends on !COMPILE_TEST
+       select HEADER_TEST
        default n
        help
          Add -Werror to the build flags for (and only for) i915.ko.

@@ -94,6 +95,20 @@ config DRM_I915_TRACE_GEM
 
          If in doubt, say "N".
 
+config DRM_I915_TRACE_GTT
+       bool "Insert extra ftrace output from the GTT internals"
+       depends on DRM_I915_DEBUG_GEM
+       select TRACING
+       default n
+       help
+         Enable additional and verbose debugging output that will spam
+         ordinary tests, but may be vital for post-mortem debugging when
+         used with /proc/sys/kernel/ftrace_dump_on_oops
+
+         Recommended for driver developers only.
+
+         If in doubt, say "N".
+
 config DRM_I915_SW_FENCE_DEBUG_OBJECTS
        bool "Enable additional driver debugging for fence objects"
        depends on DRM_I915
@@ -32,9 +32,9 @@ subdir-ccflags-y += \
        $(call as-instr,movntdqa (%eax)$(comma)%xmm0,-DCONFIG_AS_MOVNTDQA)
 
 # Extra header tests
-include $(src)/Makefile.header-test
+header-test-pattern-$(CONFIG_DRM_I915_WERROR) := *.h
 
-subdir-ccflags-y += -I$(src)
+subdir-ccflags-y += -I$(srctree)/$(src)
 
 # Please keep these build lists sorted!
 
@@ -73,14 +73,23 @@ gt-y += \
        gt/intel_context.o \
        gt/intel_engine_cs.o \
        gt/intel_engine_pm.o \
+       gt/intel_gt.o \
        gt/intel_gt_pm.o \
        gt/intel_hangcheck.o \
        gt/intel_lrc.o \
+       gt/intel_renderstate.o \
        gt/intel_reset.o \
        gt/intel_ringbuffer.o \
        gt/intel_mocs.o \
        gt/intel_sseu.o \
+       gt/intel_timeline.o \
        gt/intel_workarounds.o
+# autogenerated null render state
+gt-y += \
+       gt/gen6_renderstate.o \
+       gt/gen7_renderstate.o \
+       gt/gen8_renderstate.o \
+       gt/gen9_renderstate.o
 gt-$(CONFIG_DRM_I915_SELFTEST) += \
        gt/mock_engine.o
 i915-y += $(gt-y)
@@ -120,33 +129,26 @@ i915-y += \
        i915_gem_fence_reg.o \
        i915_gem_gtt.o \
        i915_gem.o \
-       i915_gem_render_state.o \
        i915_globals.o \
        i915_query.o \
        i915_request.o \
        i915_scheduler.o \
-       i915_timeline.o \
        i915_trace_points.o \
        i915_vma.o \
        intel_wopcm.o
 
 # general-purpose microcontroller (GuC) support
-i915-y += intel_uc.o \
-         intel_uc_fw.o \
-         intel_guc.o \
-         intel_guc_ads.o \
-         intel_guc_ct.o \
-         intel_guc_fw.o \
-         intel_guc_log.o \
-         intel_guc_submission.o \
-         intel_huc.o \
-         intel_huc_fw.o
-
-# autogenerated null render state
-i915-y += intel_renderstate_gen6.o \
-         intel_renderstate_gen7.o \
-         intel_renderstate_gen8.o \
-         intel_renderstate_gen9.o
+obj-y += gt/uc/
+i915-y += gt/uc/intel_uc.o \
+         gt/uc/intel_uc_fw.o \
+         gt/uc/intel_guc.o \
+         gt/uc/intel_guc_ads.o \
+         gt/uc/intel_guc_ct.o \
+         gt/uc/intel_guc_fw.o \
+         gt/uc/intel_guc_log.o \
+         gt/uc/intel_guc_submission.o \
+         gt/uc/intel_huc.o \
+         gt/uc/intel_huc_fw.o
 
 # modesetting core code
 obj-y += display/
@@ -173,7 +175,8 @@ i915-y += \
        display/intel_overlay.o \
        display/intel_psr.o \
        display/intel_quirks.o \
-       display/intel_sprite.o
+       display/intel_sprite.o \
+       display/intel_tc.o
 i915-$(CONFIG_ACPI) += \
        display/intel_acpi.o \
        display/intel_opregion.o
@@ -210,6 +213,25 @@ i915-y += \
        display/vlv_dsi.o \
        display/vlv_dsi_pll.o
 
+# perf code
+obj-y += oa/
+i915-y += \
+       oa/i915_oa_hsw.o \
+       oa/i915_oa_bdw.o \
+       oa/i915_oa_chv.o \
+       oa/i915_oa_sklgt2.o \
+       oa/i915_oa_sklgt3.o \
+       oa/i915_oa_sklgt4.o \
+       oa/i915_oa_bxt.o \
+       oa/i915_oa_kblgt2.o \
+       oa/i915_oa_kblgt3.o \
+       oa/i915_oa_glk.o \
+       oa/i915_oa_cflgt2.o \
+       oa/i915_oa_cflgt3.o \
+       oa/i915_oa_cnl.o \
+       oa/i915_oa_icl.o
+i915-y += i915_perf.o
+
 # Post-mortem debug and GPU hang state capture
 i915-$(CONFIG_DRM_I915_CAPTURE_ERROR) += i915_gpu_error.o
 i915-$(CONFIG_DRM_I915_SELFTEST) += \
@@ -224,23 +246,6 @@ i915-$(CONFIG_DRM_I915_SELFTEST) += \
 # virtual gpu code
 i915-y += i915_vgpu.o
 
-# perf code
-i915-y += i915_perf.o \
-         i915_oa_hsw.o \
-         i915_oa_bdw.o \
-         i915_oa_chv.o \
-         i915_oa_sklgt2.o \
-         i915_oa_sklgt3.o \
-         i915_oa_sklgt4.o \
-         i915_oa_bxt.o \
-         i915_oa_kblgt2.o \
-         i915_oa_kblgt3.o \
-         i915_oa_glk.o \
-         i915_oa_cflgt2.o \
-         i915_oa_cflgt3.o \
-         i915_oa_cnl.o \
-         i915_oa_icl.o
-
 ifeq ($(CONFIG_DRM_I915_GVT),y)
 i915-y += intel_gvt.o
 include $(src)/gvt/Makefile
@@ -1,22 +0,0 @@
-# SPDX-License-Identifier: MIT
-# Copyright © 2019 Intel Corporation
-
-# Test the headers are compilable as standalone units
-header-test-$(CONFIG_DRM_I915_WERROR) := \
-       i915_active_types.h \
-       i915_debugfs.h \
-       i915_drv.h \
-       i915_irq.h \
-       i915_params.h \
-       i915_priolist_types.h \
-       i915_reg.h \
-       i915_scheduler_types.h \
-       i915_timeline_types.h \
-       i915_utils.h \
-       intel_csr.h \
-       intel_drv.h \
-       intel_pm.h \
-       intel_runtime_pm.h \
-       intel_sideband.h \
-       intel_uncore.h \
-       intel_wakeref.h
@@ -1,2 +1,6 @@
+# For building individual subdir files on the command line
+subdir-ccflags-y += -I$(srctree)/$(src)/..
+
 # Extra header tests
-include $(src)/Makefile.header-test
+header-test-pattern-$(CONFIG_DRM_I915_WERROR) := *.h
+header-test- := intel_vbt_defs.h
@@ -1,16 +0,0 @@
-# SPDX-License-Identifier: MIT
-# Copyright © 2019 Intel Corporation
-
-# Test the headers are compilable as standalone units
-header_test := $(notdir $(filter-out %/intel_vbt_defs.h,$(wildcard $(src)/*.h)))
-
-quiet_cmd_header_test = HDRTEST $@
-      cmd_header_test = echo "\#include \"$(<F)\"" > $@
-
-header_test_%.c: %.h
-       $(call cmd,header_test)
-
-extra-$(CONFIG_DRM_I915_WERROR) += \
-       $(foreach h,$(header_test),$(patsubst %.h,header_test_%.o,$(h)))
-
-clean-files += $(foreach h,$(header_test),$(patsubst %.h,header_test_%.c,$(h)))
@@ -202,63 +202,62 @@ static void dsi_program_swing_and_deemphasis(struct intel_encoder *encoder)
 {
        struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
        struct intel_dsi *intel_dsi = enc_to_intel_dsi(&encoder->base);
-       enum port port;
+       enum phy phy;
        u32 tmp;
        int lane;
 
-       for_each_dsi_port(port, intel_dsi->ports) {
-
+       for_each_dsi_phy(phy, intel_dsi->phys) {
                /*
                 * Program voltage swing and pre-emphasis level values as per
                 * table in BSPEC under DDI buffer programing
                 */
-               tmp = I915_READ(ICL_PORT_TX_DW5_LN0(port));
+               tmp = I915_READ(ICL_PORT_TX_DW5_LN0(phy));
                tmp &= ~(SCALING_MODE_SEL_MASK | RTERM_SELECT_MASK);
                tmp |= SCALING_MODE_SEL(0x2);
                tmp |= TAP2_DISABLE | TAP3_DISABLE;
                tmp |= RTERM_SELECT(0x6);
-               I915_WRITE(ICL_PORT_TX_DW5_GRP(port), tmp);
+               I915_WRITE(ICL_PORT_TX_DW5_GRP(phy), tmp);
 
-               tmp = I915_READ(ICL_PORT_TX_DW5_AUX(port));
+               tmp = I915_READ(ICL_PORT_TX_DW5_AUX(phy));
                tmp &= ~(SCALING_MODE_SEL_MASK | RTERM_SELECT_MASK);
                tmp |= SCALING_MODE_SEL(0x2);
                tmp |= TAP2_DISABLE | TAP3_DISABLE;
                tmp |= RTERM_SELECT(0x6);
-               I915_WRITE(ICL_PORT_TX_DW5_AUX(port), tmp);
+               I915_WRITE(ICL_PORT_TX_DW5_AUX(phy), tmp);
 
-               tmp = I915_READ(ICL_PORT_TX_DW2_LN0(port));
+               tmp = I915_READ(ICL_PORT_TX_DW2_LN0(phy));
                tmp &= ~(SWING_SEL_LOWER_MASK | SWING_SEL_UPPER_MASK |
                         RCOMP_SCALAR_MASK);
                tmp |= SWING_SEL_UPPER(0x2);
                tmp |= SWING_SEL_LOWER(0x2);
                tmp |= RCOMP_SCALAR(0x98);
-               I915_WRITE(ICL_PORT_TX_DW2_GRP(port), tmp);
+               I915_WRITE(ICL_PORT_TX_DW2_GRP(phy), tmp);
 
-               tmp = I915_READ(ICL_PORT_TX_DW2_AUX(port));
+               tmp = I915_READ(ICL_PORT_TX_DW2_AUX(phy));
                tmp &= ~(SWING_SEL_LOWER_MASK | SWING_SEL_UPPER_MASK |
                         RCOMP_SCALAR_MASK);
                tmp |= SWING_SEL_UPPER(0x2);
                tmp |= SWING_SEL_LOWER(0x2);
                tmp |= RCOMP_SCALAR(0x98);
-               I915_WRITE(ICL_PORT_TX_DW2_AUX(port), tmp);
+               I915_WRITE(ICL_PORT_TX_DW2_AUX(phy), tmp);
 
-               tmp = I915_READ(ICL_PORT_TX_DW4_AUX(port));
+               tmp = I915_READ(ICL_PORT_TX_DW4_AUX(phy));
                tmp &= ~(POST_CURSOR_1_MASK | POST_CURSOR_2_MASK |
                         CURSOR_COEFF_MASK);
                tmp |= POST_CURSOR_1(0x0);
                tmp |= POST_CURSOR_2(0x0);
                tmp |= CURSOR_COEFF(0x3f);
-               I915_WRITE(ICL_PORT_TX_DW4_AUX(port), tmp);
+               I915_WRITE(ICL_PORT_TX_DW4_AUX(phy), tmp);
 
                for (lane = 0; lane <= 3; lane++) {
                        /* Bspec: must not use GRP register for write */
-                       tmp = I915_READ(ICL_PORT_TX_DW4_LN(lane, port));
+                       tmp = I915_READ(ICL_PORT_TX_DW4_LN(lane, phy));
                        tmp &= ~(POST_CURSOR_1_MASK | POST_CURSOR_2_MASK |
                                 CURSOR_COEFF_MASK);
                        tmp |= POST_CURSOR_1(0x0);
                        tmp |= POST_CURSOR_2(0x0);
                        tmp |= CURSOR_COEFF(0x3f);
-                       I915_WRITE(ICL_PORT_TX_DW4_LN(lane, port), tmp);
+                       I915_WRITE(ICL_PORT_TX_DW4_LN(lane, phy), tmp);
                }
        }
 }
@@ -364,10 +363,10 @@ static void gen11_dsi_power_up_lanes(struct intel_encoder *encoder)
 {
        struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
        struct intel_dsi *intel_dsi = enc_to_intel_dsi(&encoder->base);
-       enum port port;
+       enum phy phy;
 
-       for_each_dsi_port(port, intel_dsi->ports)
-               intel_combo_phy_power_up_lanes(dev_priv, port, true,
+       for_each_dsi_phy(phy, intel_dsi->phys)
+               intel_combo_phy_power_up_lanes(dev_priv, phy, true,
                                               intel_dsi->lane_count, false);
 }
 
@@ -375,34 +374,47 @@ static void gen11_dsi_config_phy_lanes_sequence(struct intel_encoder *encoder)
 {
        struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
        struct intel_dsi *intel_dsi = enc_to_intel_dsi(&encoder->base);
-       enum port port;
+       enum phy phy;
        u32 tmp;
        int lane;
 
        /* Step 4b(i) set loadgen select for transmit and aux lanes */
-       for_each_dsi_port(port, intel_dsi->ports) {
-               tmp = I915_READ(ICL_PORT_TX_DW4_AUX(port));
+       for_each_dsi_phy(phy, intel_dsi->phys) {
+               tmp = I915_READ(ICL_PORT_TX_DW4_AUX(phy));
                tmp &= ~LOADGEN_SELECT;
-               I915_WRITE(ICL_PORT_TX_DW4_AUX(port), tmp);
+               I915_WRITE(ICL_PORT_TX_DW4_AUX(phy), tmp);
                for (lane = 0; lane <= 3; lane++) {
-                       tmp = I915_READ(ICL_PORT_TX_DW4_LN(lane, port));
+                       tmp = I915_READ(ICL_PORT_TX_DW4_LN(lane, phy));
                        tmp &= ~LOADGEN_SELECT;
                        if (lane != 2)
                                tmp |= LOADGEN_SELECT;
-                       I915_WRITE(ICL_PORT_TX_DW4_LN(lane, port), tmp);
+                       I915_WRITE(ICL_PORT_TX_DW4_LN(lane, phy), tmp);
                }
        }
 
        /* Step 4b(ii) set latency optimization for transmit and aux lanes */
-       for_each_dsi_port(port, intel_dsi->ports) {
-               tmp = I915_READ(ICL_PORT_TX_DW2_AUX(port));
+       for_each_dsi_phy(phy, intel_dsi->phys) {
+               tmp = I915_READ(ICL_PORT_TX_DW2_AUX(phy));
                tmp &= ~FRC_LATENCY_OPTIM_MASK;
                tmp |= FRC_LATENCY_OPTIM_VAL(0x5);
-               I915_WRITE(ICL_PORT_TX_DW2_AUX(port), tmp);
-               tmp = I915_READ(ICL_PORT_TX_DW2_LN0(port));
+               I915_WRITE(ICL_PORT_TX_DW2_AUX(phy), tmp);
+               tmp = I915_READ(ICL_PORT_TX_DW2_LN0(phy));
                tmp &= ~FRC_LATENCY_OPTIM_MASK;
                tmp |= FRC_LATENCY_OPTIM_VAL(0x5);
-               I915_WRITE(ICL_PORT_TX_DW2_GRP(port), tmp);
+               I915_WRITE(ICL_PORT_TX_DW2_GRP(phy), tmp);
+
+               /* For EHL set latency optimization for PCS_DW1 lanes */
+               if (IS_ELKHARTLAKE(dev_priv)) {
+                       tmp = I915_READ(ICL_PORT_PCS_DW1_AUX(phy));
+                       tmp &= ~LATENCY_OPTIM_MASK;
+                       tmp |= LATENCY_OPTIM_VAL(0);
+                       I915_WRITE(ICL_PORT_PCS_DW1_AUX(phy), tmp);
+
+                       tmp = I915_READ(ICL_PORT_PCS_DW1_LN0(phy));
+                       tmp &= ~LATENCY_OPTIM_MASK;
+                       tmp |= LATENCY_OPTIM_VAL(0x1);
+                       I915_WRITE(ICL_PORT_PCS_DW1_GRP(phy), tmp);
+               }
        }
 
 }
@@ -412,16 +424,16 @@ static void gen11_dsi_voltage_swing_program_seq(struct intel_encoder *encoder)
        struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
        struct intel_dsi *intel_dsi = enc_to_intel_dsi(&encoder->base);
        u32 tmp;
-       enum port port;
+       enum phy phy;
 
        /* clear common keeper enable bit */
-       for_each_dsi_port(port, intel_dsi->ports) {
-               tmp = I915_READ(ICL_PORT_PCS_DW1_LN0(port));
+       for_each_dsi_phy(phy, intel_dsi->phys) {
+               tmp = I915_READ(ICL_PORT_PCS_DW1_LN0(phy));
                tmp &= ~COMMON_KEEPER_EN;
-               I915_WRITE(ICL_PORT_PCS_DW1_GRP(port), tmp);
-               tmp = I915_READ(ICL_PORT_PCS_DW1_AUX(port));
+               I915_WRITE(ICL_PORT_PCS_DW1_GRP(phy), tmp);
+               tmp = I915_READ(ICL_PORT_PCS_DW1_AUX(phy));
                tmp &= ~COMMON_KEEPER_EN;
-               I915_WRITE(ICL_PORT_PCS_DW1_AUX(port), tmp);
+               I915_WRITE(ICL_PORT_PCS_DW1_AUX(phy), tmp);
        }
 
        /*
@@ -429,33 +441,33 @@ static void gen11_dsi_voltage_swing_program_seq(struct intel_encoder *encoder)
         * Note: loadgen select program is done
         * as part of lane phy sequence configuration
         */
-       for_each_dsi_port(port, intel_dsi->ports) {
-               tmp = I915_READ(ICL_PORT_CL_DW5(port));
+       for_each_dsi_phy(phy, intel_dsi->phys) {
+               tmp = I915_READ(ICL_PORT_CL_DW5(phy));
                tmp |= SUS_CLOCK_CONFIG;
-               I915_WRITE(ICL_PORT_CL_DW5(port), tmp);
+               I915_WRITE(ICL_PORT_CL_DW5(phy), tmp);
        }
 
        /* Clear training enable to change swing values */
-       for_each_dsi_port(port, intel_dsi->ports) {
-               tmp = I915_READ(ICL_PORT_TX_DW5_LN0(port));
+       for_each_dsi_phy(phy, intel_dsi->phys) {
+               tmp = I915_READ(ICL_PORT_TX_DW5_LN0(phy));
                tmp &= ~TX_TRAINING_EN;
-               I915_WRITE(ICL_PORT_TX_DW5_GRP(port), tmp);
-               tmp = I915_READ(ICL_PORT_TX_DW5_AUX(port));
+               I915_WRITE(ICL_PORT_TX_DW5_GRP(phy), tmp);
+               tmp = I915_READ(ICL_PORT_TX_DW5_AUX(phy));
                tmp &= ~TX_TRAINING_EN;
-               I915_WRITE(ICL_PORT_TX_DW5_AUX(port), tmp);
+               I915_WRITE(ICL_PORT_TX_DW5_AUX(phy), tmp);
        }
 
        /* Program swing and de-emphasis */
        dsi_program_swing_and_deemphasis(encoder);
 
        /* Set training enable to trigger update */
-       for_each_dsi_port(port, intel_dsi->ports) {
-               tmp = I915_READ(ICL_PORT_TX_DW5_LN0(port));
+       for_each_dsi_phy(phy, intel_dsi->phys) {
+               tmp = I915_READ(ICL_PORT_TX_DW5_LN0(phy));
                tmp |= TX_TRAINING_EN;
-               I915_WRITE(ICL_PORT_TX_DW5_GRP(port), tmp);
-               tmp = I915_READ(ICL_PORT_TX_DW5_AUX(port));
+               I915_WRITE(ICL_PORT_TX_DW5_GRP(phy), tmp);
+               tmp = I915_READ(ICL_PORT_TX_DW5_AUX(phy));
                tmp |= TX_TRAINING_EN;
-               I915_WRITE(ICL_PORT_TX_DW5_AUX(port), tmp);
+               I915_WRITE(ICL_PORT_TX_DW5_AUX(phy), tmp);
        }
 }
 
@@ -484,6 +496,7 @@ static void gen11_dsi_setup_dphy_timings(struct intel_encoder *encoder)
        struct intel_dsi *intel_dsi = enc_to_intel_dsi(&encoder->base);
        u32 tmp;
        enum port port;
+       enum phy phy;
 
        /* Program T-INIT master registers */
        for_each_dsi_port(port, intel_dsi->ports) {
@@ -531,6 +544,14 @@ static void gen11_dsi_setup_dphy_timings(struct intel_encoder *encoder)
                I915_WRITE(DSI_TA_TIMING_PARAM(port), tmp);
        }
        }
+
+       if (IS_ELKHARTLAKE(dev_priv)) {
+               for_each_dsi_phy(phy, intel_dsi->phys) {
+                       tmp = I915_READ(ICL_DPHY_CHKN(phy));
+                       tmp |= ICL_DPHY_CHKN_AFE_OVER_PPI_STRAP;
+                       I915_WRITE(ICL_DPHY_CHKN(phy), tmp);
+               }
+       }
 }
 
 static void gen11_dsi_gate_clocks(struct intel_encoder *encoder)
@@ -538,15 +559,14 @@ static void gen11_dsi_gate_clocks(struct intel_encoder *encoder)
        struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
        struct intel_dsi *intel_dsi = enc_to_intel_dsi(&encoder->base);
        u32 tmp;
-       enum port port;
+       enum phy phy;
 
        mutex_lock(&dev_priv->dpll_lock);
-       tmp = I915_READ(DPCLKA_CFGCR0_ICL);
-       for_each_dsi_port(port, intel_dsi->ports) {
-               tmp |= DPCLKA_CFGCR0_DDI_CLK_OFF(port);
-       }
+       tmp = I915_READ(ICL_DPCLKA_CFGCR0);
+       for_each_dsi_phy(phy, intel_dsi->phys)
+               tmp |= ICL_DPCLKA_CFGCR0_DDI_CLK_OFF(phy);
 
-       I915_WRITE(DPCLKA_CFGCR0_ICL, tmp);
+       I915_WRITE(ICL_DPCLKA_CFGCR0, tmp);
        mutex_unlock(&dev_priv->dpll_lock);
 }
 
@@ -555,15 +575,14 @@ static void gen11_dsi_ungate_clocks(struct intel_encoder *encoder)
        struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
        struct intel_dsi *intel_dsi = enc_to_intel_dsi(&encoder->base);
        u32 tmp;
-       enum port port;
+       enum phy phy;
 
        mutex_lock(&dev_priv->dpll_lock);
-       tmp = I915_READ(DPCLKA_CFGCR0_ICL);
-       for_each_dsi_port(port, intel_dsi->ports) {
-               tmp &= ~DPCLKA_CFGCR0_DDI_CLK_OFF(port);
-       }
+       tmp = I915_READ(ICL_DPCLKA_CFGCR0);
+       for_each_dsi_phy(phy, intel_dsi->phys)
+               tmp &= ~ICL_DPCLKA_CFGCR0_DDI_CLK_OFF(phy);
 
-       I915_WRITE(DPCLKA_CFGCR0_ICL, tmp);
+       I915_WRITE(ICL_DPCLKA_CFGCR0, tmp);
        mutex_unlock(&dev_priv->dpll_lock);
 }
 
@@ -573,24 +592,24 @@ static void gen11_dsi_map_pll(struct intel_encoder *encoder,
        struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
        struct intel_dsi *intel_dsi = enc_to_intel_dsi(&encoder->base);
        struct intel_shared_dpll *pll = crtc_state->shared_dpll;
-       enum port port;
+       enum phy phy;
        u32 val;
 
        mutex_lock(&dev_priv->dpll_lock);
 
-       val = I915_READ(DPCLKA_CFGCR0_ICL);
-       for_each_dsi_port(port, intel_dsi->ports) {
-               val &= ~DPCLKA_CFGCR0_DDI_CLK_SEL_MASK(port);
-               val |= DPCLKA_CFGCR0_DDI_CLK_SEL(pll->info->id, port);
+       val = I915_READ(ICL_DPCLKA_CFGCR0);
+       for_each_dsi_phy(phy, intel_dsi->phys) {
+               val &= ~ICL_DPCLKA_CFGCR0_DDI_CLK_SEL_MASK(phy);
+               val |= ICL_DPCLKA_CFGCR0_DDI_CLK_SEL(pll->info->id, phy);
        }
-       I915_WRITE(DPCLKA_CFGCR0_ICL, val);
+       I915_WRITE(ICL_DPCLKA_CFGCR0, val);
 
-       for_each_dsi_port(port, intel_dsi->ports) {
-               val &= ~DPCLKA_CFGCR0_DDI_CLK_OFF(port);
+       for_each_dsi_phy(phy, intel_dsi->phys) {
+               val &= ~ICL_DPCLKA_CFGCR0_DDI_CLK_OFF(phy);
        }
-       I915_WRITE(DPCLKA_CFGCR0_ICL, val);
+       I915_WRITE(ICL_DPCLKA_CFGCR0, val);
 
-       POSTING_READ(DPCLKA_CFGCR0_ICL);
+       POSTING_READ(ICL_DPCLKA_CFGCR0);
 
        mutex_unlock(&dev_priv->dpll_lock);
 }
@@ -744,7 +763,7 @@ gen11_dsi_set_transcoder_timings(struct intel_encoder *encoder,
        enum transcoder dsi_trans;
        /* horizontal timings */
        u16 htotal, hactive, hsync_start, hsync_end, hsync_size;
-       u16 hfront_porch, hback_porch;
+       u16 hback_porch;
        /* vertical timings */
        u16 vtotal, vactive, vsync_start, vsync_end, vsync_shift;
 
@@ -753,8 +772,6 @@ gen11_dsi_set_transcoder_timings(struct intel_encoder *encoder,
        hsync_start = adjusted_mode->crtc_hsync_start;
        hsync_end = adjusted_mode->crtc_hsync_end;
        hsync_size  = hsync_end - hsync_start;
-       hfront_porch = (adjusted_mode->crtc_hsync_start -
-                       adjusted_mode->crtc_hdisplay);
        hback_porch = (adjusted_mode->crtc_htotal -
                       adjusted_mode->crtc_hsync_end);
        vactive = adjusted_mode->crtc_vdisplay;
@@ -1487,6 +1504,26 @@ static void icl_dphy_param_init(struct intel_dsi *intel_dsi)
        intel_dsi_log_params(intel_dsi);
 }
 
+static void icl_dsi_add_properties(struct intel_connector *connector)
+{
+       u32 allowed_scalers;
+
+       allowed_scalers = BIT(DRM_MODE_SCALE_ASPECT) |
+                         BIT(DRM_MODE_SCALE_FULLSCREEN) |
+                         BIT(DRM_MODE_SCALE_CENTER);
+
+       drm_connector_attach_scaling_mode_property(&connector->base,
+                                                  allowed_scalers);
+
+       connector->base.state->scaling_mode = DRM_MODE_SCALE_ASPECT;
+
+       connector->base.display_info.panel_orientation =
+               intel_dsi_get_panel_orientation(connector);
+       drm_connector_init_panel_orientation_property(&connector->base,
+                               connector->panel.fixed_mode->hdisplay,
+                               connector->panel.fixed_mode->vdisplay);
+}
+
 void icl_dsi_init(struct drm_i915_private *dev_priv)
 {
        struct drm_device *dev = &dev_priv->drm;
@@ -1580,6 +1617,8 @@ void icl_dsi_init(struct drm_i915_private *dev_priv)
        }
 
        icl_dphy_param_init(intel_dsi);
+
+       icl_dsi_add_properties(intel_connector);
        return;
 
 err:
@@ -176,33 +176,49 @@ int intel_plane_atomic_check_with_state(const struct intel_crtc_state *old_crtc_
        new_crtc_state->data_rate[plane->id] =
                intel_plane_data_rate(new_crtc_state, new_plane_state);
 
-       return intel_plane_atomic_calc_changes(old_crtc_state,
-                                              &new_crtc_state->base,
-                                              old_plane_state,
-                                              &new_plane_state->base);
+       return intel_plane_atomic_calc_changes(old_crtc_state, new_crtc_state,
+                                              old_plane_state, new_plane_state);
 }
 
-static int intel_plane_atomic_check(struct drm_plane *plane,
-                                   struct drm_plane_state *new_plane_state)
+static struct intel_crtc *
+get_crtc_from_states(const struct intel_plane_state *old_plane_state,
+                    const struct intel_plane_state *new_plane_state)
 {
-       struct drm_atomic_state *state = new_plane_state->state;
-       const struct drm_plane_state *old_plane_state =
-               drm_atomic_get_old_plane_state(state, plane);
-       struct drm_crtc *crtc = new_plane_state->crtc ?: old_plane_state->crtc;
-       const struct drm_crtc_state *old_crtc_state;
-       struct drm_crtc_state *new_crtc_state;
+       if (new_plane_state->base.crtc)
+               return to_intel_crtc(new_plane_state->base.crtc);
 
-       new_plane_state->visible = false;
+       if (old_plane_state->base.crtc)
+               return to_intel_crtc(old_plane_state->base.crtc);
+
+       return NULL;
+}
+
+static int intel_plane_atomic_check(struct drm_plane *_plane,
+                                   struct drm_plane_state *_new_plane_state)
+{
+       struct intel_plane *plane = to_intel_plane(_plane);
+       struct intel_atomic_state *state =
+               to_intel_atomic_state(_new_plane_state->state);
+       struct intel_plane_state *new_plane_state =
+               to_intel_plane_state(_new_plane_state);
+       const struct intel_plane_state *old_plane_state =
+               intel_atomic_get_old_plane_state(state, plane);
+       struct intel_crtc *crtc =
+               get_crtc_from_states(old_plane_state, new_plane_state);
+       const struct intel_crtc_state *old_crtc_state;
+       struct intel_crtc_state *new_crtc_state;
 
+       new_plane_state->base.visible = false;
        if (!crtc)
                return 0;
 
-       old_crtc_state = drm_atomic_get_old_crtc_state(state, crtc);
-       new_crtc_state = drm_atomic_get_new_crtc_state(state, crtc);
+       old_crtc_state = intel_atomic_get_old_crtc_state(state, crtc);
+       new_crtc_state = intel_atomic_get_new_crtc_state(state, crtc);
 
-       return intel_plane_atomic_check_with_state(to_intel_crtc_state(old_crtc_state),
-                                                  to_intel_crtc_state(new_crtc_state),
-                                                  to_intel_plane_state(old_plane_state),
-                                                  to_intel_plane_state(new_plane_state));
+       return intel_plane_atomic_check_with_state(old_crtc_state,
+                                                  new_crtc_state,
+                                                  old_plane_state,
+                                                  new_plane_state);
 }
 
 static struct intel_plane *
@@ -8,7 +8,6 @@
 
 #include <linux/types.h>
 
-struct drm_crtc_state;
 struct drm_plane;
 struct drm_property;
 struct intel_atomic_state;

@@ -43,8 +42,8 @@ int intel_plane_atomic_check_with_state(const struct intel_crtc_state *old_crtc_
                                        const struct intel_plane_state *old_plane_state,
                                        struct intel_plane_state *intel_state);
 int intel_plane_atomic_calc_changes(const struct intel_crtc_state *old_crtc_state,
-                                   struct drm_crtc_state *crtc_state,
+                                   struct intel_crtc_state *crtc_state,
                                    const struct intel_plane_state *old_plane_state,
-                                   struct drm_plane_state *plane_state);
+                                   struct intel_plane_state *plane_state);
 
 #endif /* __INTEL_ATOMIC_PLANE_H__ */
@@ -72,6 +72,13 @@ struct dp_aud_n_m {
        u16 n;
 };
 
+struct hdmi_aud_ncts {
+       int sample_rate;
+       int clock;
+       int n;
+       int cts;
+};
+
 /* Values according to DP 1.4 Table 2-104 */
 static const struct dp_aud_n_m dp_aud_n_m[] = {
        { 32000, LC_162M, 1024, 10125 },

@@ -148,12 +155,7 @@ static const struct {
 #define TMDS_594M 594000
 #define TMDS_593M 593407
 
-static const struct {
-       int sample_rate;
-       int clock;
-       int n;
-       int cts;
-} hdmi_aud_ncts[] = {
+static const struct hdmi_aud_ncts hdmi_aud_ncts_24bpp[] = {
        { 32000, TMDS_296M, 5824, 421875 },
        { 32000, TMDS_297M, 3072, 222750 },
        { 32000, TMDS_593M, 5824, 843750 },

@@ -184,6 +186,49 @@ static const struct {
        { 192000, TMDS_594M, 24576, 594000 },
 };
 
+/* Appendix C - N & CTS values for deep color from HDMI 2.0 spec*/
+/* HDMI N/CTS table for 10 bit deep color(30 bpp)*/
+#define TMDS_371M 371250
+#define TMDS_370M 370878
+
+static const struct hdmi_aud_ncts hdmi_aud_ncts_30bpp[] = {
+       { 32000, TMDS_370M, 5824, 527344 },
+       { 32000, TMDS_371M, 6144, 556875 },
+       { 44100, TMDS_370M, 8918, 585938 },
+       { 44100, TMDS_371M, 4704, 309375 },
+       { 88200, TMDS_370M, 17836, 585938 },
+       { 88200, TMDS_371M, 9408, 309375 },
+       { 176400, TMDS_370M, 35672, 585938 },
+       { 176400, TMDS_371M, 18816, 309375 },
+       { 48000, TMDS_370M, 11648, 703125 },
+       { 48000, TMDS_371M, 5120, 309375 },
+       { 96000, TMDS_370M, 23296, 703125 },
+       { 96000, TMDS_371M, 10240, 309375 },
+       { 192000, TMDS_370M, 46592, 703125 },
+       { 192000, TMDS_371M, 20480, 309375 },
+};
+
+/* HDMI N/CTS table for 12 bit deep color(36 bpp)*/
+#define TMDS_445_5M 445500
+#define TMDS_445M 445054
+
+static const struct hdmi_aud_ncts hdmi_aud_ncts_36bpp[] = {
+       { 32000, TMDS_445M, 5824, 632813 },
+       { 32000, TMDS_445_5M, 4096, 445500 },
+       { 44100, TMDS_445M, 8918, 703125 },
+       { 44100, TMDS_445_5M, 4704, 371250 },
+       { 88200, TMDS_445M, 17836, 703125 },
+       { 88200, TMDS_445_5M, 9408, 371250 },
+       { 176400, TMDS_445M, 35672, 703125 },
+       { 176400, TMDS_445_5M, 18816, 371250 },
+       { 48000, TMDS_445M, 5824, 421875 },
+       { 48000, TMDS_445_5M, 5120, 371250 },
+       { 96000, TMDS_445M, 11648, 421875 },
+       { 96000, TMDS_445_5M, 10240, 371250 },
+       { 192000, TMDS_445M, 23296, 421875 },
+       { 192000, TMDS_445_5M, 20480, 371250 },
+};
+
 /* get AUD_CONFIG_PIXEL_CLOCK_HDMI_* value for mode */
 static u32 audio_config_hdmi_pixel_clock(const struct intel_crtc_state *crtc_state)
 {

@@ -212,14 +257,24 @@ static u32 audio_config_hdmi_pixel_clock(const struct intel_crtc_state *crtc_sta
 static int audio_config_hdmi_get_n(const struct intel_crtc_state *crtc_state,
                                   int rate)
 {
-       const struct drm_display_mode *adjusted_mode =
-               &crtc_state->base.adjusted_mode;
-       int i;
+       const struct hdmi_aud_ncts *hdmi_ncts_table;
+       int i, size;
 
-       for (i = 0; i < ARRAY_SIZE(hdmi_aud_ncts); i++) {
-               if (rate == hdmi_aud_ncts[i].sample_rate &&
-                   adjusted_mode->crtc_clock == hdmi_aud_ncts[i].clock) {
-                       return hdmi_aud_ncts[i].n;
+       if (crtc_state->pipe_bpp == 36) {
+               hdmi_ncts_table = hdmi_aud_ncts_36bpp;
+               size = ARRAY_SIZE(hdmi_aud_ncts_36bpp);
+       } else if (crtc_state->pipe_bpp == 30) {
+               hdmi_ncts_table = hdmi_aud_ncts_30bpp;
+               size = ARRAY_SIZE(hdmi_aud_ncts_30bpp);
+       } else {
+               hdmi_ncts_table = hdmi_aud_ncts_24bpp;
+               size = ARRAY_SIZE(hdmi_aud_ncts_24bpp);
+       }
+
+       for (i = 0; i < size; i++) {
+               if (rate == hdmi_ncts_table[i].sample_rate &&
+                   crtc_state->port_clock == hdmi_ncts_table[i].clock) {
+                       return hdmi_ncts_table[i].n;
                }
        }
        return 0;
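A note on the deep-color change above: with 30 bpp the TMDS clock runs at 1.25x the 24 bpp rate (297 MHz x 1.25 = 371.25 MHz) and with 36 bpp at 1.5x (297 MHz x 1.5 = 445.5 MHz), which is why audio_config_hdmi_get_n() now selects a per-bpp table and matches on the port clock instead of the pixel clock. The following is a minimal standalone C sketch of that lookup, not driver code; the tables are trimmed to one illustrative row each, taken from the hunk above.

/* sketch of the per-bpp N/CTS table selection; values from the patch above */
#include <stdio.h>

struct hdmi_aud_ncts {
        int sample_rate;
        int clock;      /* port (TMDS) clock in kHz */
        int n;
        int cts;
};

static const struct hdmi_aud_ncts ncts_30bpp[] = {
        { 48000, 371250, 5120, 309375 },
};
static const struct hdmi_aud_ncts ncts_36bpp[] = {
        { 48000, 445500, 5120, 371250 },
};

static int get_n(int pipe_bpp, int port_clock, int rate)
{
        const struct hdmi_aud_ncts *t;
        size_t i, size;

        /* pick the table that matches the pipe's deep-color depth */
        if (pipe_bpp == 36) {
                t = ncts_36bpp;
                size = sizeof(ncts_36bpp) / sizeof(t[0]);
        } else {
                t = ncts_30bpp;
                size = sizeof(ncts_30bpp) / sizeof(t[0]);
        }
        for (i = 0; i < size; i++) {
                if (t[i].sample_rate == rate && t[i].clock == port_clock)
                        return t[i].n;
        }
        return 0;       /* no match: let the hardware use its default N */
}

int main(void)
{
        printf("N = %d\n", get_n(30, 371250, 48000));   /* prints N = 5120 */
        return 0;
}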
@@ -28,6 +28,7 @@
 #include <drm/drm_dp_helper.h>
 #include <drm/i915_drm.h>
 
+#include "display/intel_display.h"
 #include "display/intel_gmbus.h"
 
 #include "i915_drv.h"
@@ -1354,12 +1355,27 @@ static const u8 mcc_ddc_pin_map[] = {
        [MCC_DDC_BUS_DDI_C] = GMBUS_PIN_9_TC1_ICP,
 };
 
+static const u8 tgp_ddc_pin_map[] = {
+       [ICL_DDC_BUS_DDI_A] = GMBUS_PIN_1_BXT,
+       [ICL_DDC_BUS_DDI_B] = GMBUS_PIN_2_BXT,
+       [TGL_DDC_BUS_DDI_C] = GMBUS_PIN_3_BXT,
+       [ICL_DDC_BUS_PORT_1] = GMBUS_PIN_9_TC1_ICP,
+       [ICL_DDC_BUS_PORT_2] = GMBUS_PIN_10_TC2_ICP,
+       [ICL_DDC_BUS_PORT_3] = GMBUS_PIN_11_TC3_ICP,
+       [ICL_DDC_BUS_PORT_4] = GMBUS_PIN_12_TC4_ICP,
+       [TGL_DDC_BUS_PORT_5] = GMBUS_PIN_13_TC5_TGP,
+       [TGL_DDC_BUS_PORT_6] = GMBUS_PIN_14_TC6_TGP,
+};
+
 static u8 map_ddc_pin(struct drm_i915_private *dev_priv, u8 vbt_pin)
 {
        const u8 *ddc_pin_map;
        int n_entries;
 
-       if (HAS_PCH_MCC(dev_priv)) {
+       if (HAS_PCH_TGP(dev_priv)) {
+               ddc_pin_map = tgp_ddc_pin_map;
+               n_entries = ARRAY_SIZE(tgp_ddc_pin_map);
+       } else if (HAS_PCH_MCC(dev_priv)) {
                ddc_pin_map = mcc_ddc_pin_map;
                n_entries = ARRAY_SIZE(mcc_ddc_pin_map);
        } else if (HAS_PCH_ICP(dev_priv)) {
@@ -1668,6 +1684,9 @@ parse_general_definitions(struct drm_i915_private *dev_priv,
                if (!child->device_type)
                        continue;
 
+               DRM_DEBUG_KMS("Found VBT child device with type 0x%x\n",
+                             child->device_type);
+
                /*
                 * Copy as much as we know (sizeof) and is available
                 * (child_dev_size) of the child device. Accessing the data must
@@ -1730,12 +1749,13 @@ init_vbt_missing_defaults(struct drm_i915_private *dev_priv)
        for (port = PORT_A; port < I915_MAX_PORTS; port++) {
                struct ddi_vbt_port_info *info =
                        &dev_priv->vbt.ddi_port_info[port];
+               enum phy phy = intel_port_to_phy(dev_priv, port);
 
                /*
                 * VBT has the TypeC mode (native,TBT/USB) and we don't want
                 * to detect it.
                 */
-               if (intel_port_is_tc(dev_priv, port))
+               if (intel_phy_is_tc(dev_priv, phy))
                        continue;
 
                info->supports_dvi = (port != PORT_A && port != PORT_E);
@@ -1888,10 +1908,10 @@ out:
 }
 
 /**
- * intel_bios_cleanup - Free any resources allocated by intel_bios_init()
+ * intel_bios_driver_remove - Free any resources allocated by intel_bios_init()
 * @dev_priv: i915 device instance
 */
-void intel_bios_cleanup(struct drm_i915_private *dev_priv)
+void intel_bios_driver_remove(struct drm_i915_private *dev_priv)
 {
        kfree(dev_priv->vbt.child_dev);
        dev_priv->vbt.child_dev = NULL;
@@ -42,6 +42,7 @@ enum intel_backlight_type {
        INTEL_BACKLIGHT_DISPLAY_DDI,
        INTEL_BACKLIGHT_DSI_DCS,
        INTEL_BACKLIGHT_PANEL_DRIVER_INTERFACE,
+       INTEL_BACKLIGHT_VESA_EDP_AUX_INTERFACE,
 };
 
 struct edp_power_seq {

@@ -227,7 +228,7 @@ struct mipi_pps_data {
 } __packed;
 
 void intel_bios_init(struct drm_i915_private *dev_priv);
-void intel_bios_cleanup(struct drm_i915_private *dev_priv);
+void intel_bios_driver_remove(struct drm_i915_private *dev_priv);
 bool intel_bios_is_valid_vbt(const void *buf, size_t size);
 bool intel_bios_is_tv_present(struct drm_i915_private *dev_priv);
 bool intel_bios_is_lvds_present(struct drm_i915_private *dev_priv, u8 *i2c_pin);
@@ -65,7 +65,7 @@ static int icl_pcode_read_qgv_point_info(struct drm_i915_private *dev_priv,
                                  struct intel_qgv_point *sp,
                                  int point)
 {
-       u32 val = 0, val2;
+       u32 val = 0, val2 = 0;
        int ret;
 
        ret = sandybridge_pcode_read(dev_priv,
@@ -545,10 +545,10 @@ static void vlv_set_cdclk(struct drm_i915_private *dev_priv,
        /* There are cases where we can end up here with power domains
         * off and a CDCLK frequency other than the minimum, like when
         * issuing a modeset without actually changing any display after
-        * a system suspend.  So grab the PIPE-A domain, which covers
+        * a system suspend.  So grab the display core domain, which covers
         * the HW blocks needed for the following programming.
         */
-       wakeref = intel_display_power_get(dev_priv, POWER_DOMAIN_PIPE_A);
+       wakeref = intel_display_power_get(dev_priv, POWER_DOMAIN_DISPLAY_CORE);
 
        vlv_iosf_sb_get(dev_priv,
                        BIT(VLV_IOSF_SB_CCK) |

@@ -606,7 +606,7 @@ static void vlv_set_cdclk(struct drm_i915_private *dev_priv,
 
        vlv_program_pfi_credits(dev_priv);
 
-       intel_display_power_put(dev_priv, POWER_DOMAIN_PIPE_A, wakeref);
+       intel_display_power_put(dev_priv, POWER_DOMAIN_DISPLAY_CORE, wakeref);
 }
 
 static void chv_set_cdclk(struct drm_i915_private *dev_priv,

@@ -631,10 +631,10 @@ static void chv_set_cdclk(struct drm_i915_private *dev_priv,
        /* There are cases where we can end up here with power domains
         * off and a CDCLK frequency other than the minimum, like when
         * issuing a modeset without actually changing any display after
-        * a system suspend.  So grab the PIPE-A domain, which covers
+        * a system suspend.  So grab the display core domain, which covers
         * the HW blocks needed for the following programming.
         */
-       wakeref = intel_display_power_get(dev_priv, POWER_DOMAIN_PIPE_A);
+       wakeref = intel_display_power_get(dev_priv, POWER_DOMAIN_DISPLAY_CORE);
 
        vlv_punit_get(dev_priv);
        val = vlv_punit_read(dev_priv, PUNIT_REG_DSPSSPM);

@@ -653,7 +653,7 @@ static void chv_set_cdclk(struct drm_i915_private *dev_priv,
 
        vlv_program_pfi_credits(dev_priv);
 
-       intel_display_power_put(dev_priv, POWER_DOMAIN_PIPE_A, wakeref);
+       intel_display_power_put(dev_priv, POWER_DOMAIN_DISPLAY_CORE, wakeref);
 }
 
 static int bdw_calc_cdclk(int min_cdclk)
@@ -1756,9 +1756,10 @@ sanitize:
 
 static int icl_calc_cdclk(int min_cdclk, unsigned int ref)
 {
-       int ranges_24[] = { 312000, 552000, 648000 };
-       int ranges_19_38[] = { 307200, 556800, 652800 };
-       int *ranges;
+       static const int ranges_24[] = { 180000, 192000, 312000, 552000, 648000 };
+       static const int ranges_19_38[] = { 172800, 192000, 307200, 556800, 652800 };
+       const int *ranges;
+       int len, i;
 
        switch (ref) {
        default:

@@ -1766,19 +1767,22 @@ static int icl_calc_cdclk(int min_cdclk, unsigned int ref)
                /* fall through */
        case 24000:
                ranges = ranges_24;
+               len = ARRAY_SIZE(ranges_24);
                break;
        case 19200:
        case 38400:
                ranges = ranges_19_38;
+               len = ARRAY_SIZE(ranges_19_38);
                break;
        }
 
-       if (min_cdclk > ranges[1])
-               return ranges[2];
-       else if (min_cdclk > ranges[0])
-               return ranges[1];
-       else
-               return ranges[0];
+       for (i = 0; i < len; i++) {
+               if (min_cdclk <= ranges[i])
+                       return ranges[i];
+       }
+
+       WARN_ON(min_cdclk > ranges[len - 1]);
+       return ranges[len - 1];
 }
 
 static int icl_calc_cdclk_pll_vco(struct drm_i915_private *dev_priv, int cdclk)
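The rework above replaces a hard-coded two-threshold if/else with a scan of a sorted table: return the first entry that satisfies the requested minimum, clamping to the largest entry otherwise. A minimal standalone C sketch of that idiom, using the 24 MHz reference table from the hunk (not driver code):

#include <stdio.h>

/* sorted CDCLK steps for the 24 MHz reference, from the patch above */
static const int ranges_24[] = { 180000, 192000, 312000, 552000, 648000 };

static int pick_cdclk(int min_cdclk)
{
        int i, len = sizeof(ranges_24) / sizeof(ranges_24[0]);

        /* first table entry that is >= the required minimum wins */
        for (i = 0; i < len; i++) {
                if (min_cdclk <= ranges_24[i])
                        return ranges_24[i];
        }
        /* caller asked for more than the hardware supports; clamp */
        return ranges_24[len - 1];
}

int main(void)
{
        /* a mode needing 200 MHz selects the 312 MHz step */
        printf("%d\n", pick_cdclk(200000));
        return 0;
}

Adding entries (as this patch does for EHL's 180 and 192 MHz steps) then only means extending the table, not rewriting the comparison chain.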
@@ -1792,16 +1796,24 @@ static int icl_calc_cdclk_pll_vco(struct drm_i915_private *dev_priv, int cdclk)
        default:
                MISSING_CASE(cdclk);
                /* fall through */
+       case 172800:
        case 307200:
        case 556800:
        case 652800:
                WARN_ON(dev_priv->cdclk.hw.ref != 19200 &&
                        dev_priv->cdclk.hw.ref != 38400);
                break;
+       case 180000:
        case 312000:
        case 552000:
        case 648000:
                WARN_ON(dev_priv->cdclk.hw.ref != 24000);
                break;
+       case 192000:
+               WARN_ON(dev_priv->cdclk.hw.ref != 19200 &&
+                       dev_priv->cdclk.hw.ref != 38400 &&
+                       dev_priv->cdclk.hw.ref != 24000);
+               break;
        }
 
        ratio = cdclk / (dev_priv->cdclk.hw.ref / 2);
@@ -1854,14 +1866,23 @@ static void icl_set_cdclk(struct drm_i915_private *dev_priv,
        dev_priv->cdclk.hw.voltage_level = cdclk_state->voltage_level;
 }
 
-static u8 icl_calc_voltage_level(int cdclk)
+static u8 icl_calc_voltage_level(struct drm_i915_private *dev_priv, int cdclk)
 {
-       if (cdclk > 556800)
-               return 2;
-       else if (cdclk > 312000)
-               return 1;
-       else
-               return 0;
+       if (IS_ELKHARTLAKE(dev_priv)) {
+               if (cdclk > 312000)
+                       return 2;
+               else if (cdclk > 180000)
+                       return 1;
+               else
+                       return 0;
+       } else {
+               if (cdclk > 556800)
+                       return 2;
+               else if (cdclk > 312000)
+                       return 1;
+               else
+                       return 0;
+       }
 }
 
 static void icl_get_cdclk(struct drm_i915_private *dev_priv,
@@ -1912,7 +1933,7 @@ out:
         * at least what the CDCLK frequency requires.
         */
        cdclk_state->voltage_level =
-               icl_calc_voltage_level(cdclk_state->cdclk);
+               icl_calc_voltage_level(dev_priv, cdclk_state->cdclk);
 }
 
 static void icl_init_cdclk(struct drm_i915_private *dev_priv)

@@ -1947,7 +1968,8 @@ sanitize:
        sanitized_state.vco = icl_calc_cdclk_pll_vco(dev_priv,
                                                     sanitized_state.cdclk);
        sanitized_state.voltage_level =
-               icl_calc_voltage_level(sanitized_state.cdclk);
+               icl_calc_voltage_level(dev_priv,
+                                      sanitized_state.cdclk);
 
        icl_set_cdclk(dev_priv, &sanitized_state, INVALID_PIPE);
 }

@@ -1958,7 +1980,8 @@ static void icl_uninit_cdclk(struct drm_i915_private *dev_priv)
 
        cdclk_state.cdclk = cdclk_state.bypass;
        cdclk_state.vco = 0;
-       cdclk_state.voltage_level = icl_calc_voltage_level(cdclk_state.cdclk);
+       cdclk_state.voltage_level = icl_calc_voltage_level(dev_priv,
+                                                          cdclk_state.cdclk);
 
        icl_set_cdclk(dev_priv, &cdclk_state, INVALID_PIPE);
 }

@@ -2560,7 +2583,7 @@ static int icl_modeset_calc_cdclk(struct intel_atomic_state *state)
        state->cdclk.logical.vco = vco;
        state->cdclk.logical.cdclk = cdclk;
        state->cdclk.logical.voltage_level =
-               max(icl_calc_voltage_level(cdclk),
+               max(icl_calc_voltage_level(dev_priv, cdclk),
                    cnl_compute_min_voltage_level(state));
 
        if (!state->active_crtcs) {

@@ -2570,7 +2593,7 @@ static int icl_modeset_calc_cdclk(struct intel_atomic_state *state)
                state->cdclk.actual.vco = vco;
                state->cdclk.actual.cdclk = cdclk;
                state->cdclk.actual.voltage_level =
-                       icl_calc_voltage_level(cdclk);
+                       icl_calc_voltage_level(dev_priv, cdclk);
        } else {
                state->cdclk.actual = state->cdclk.logical;
        }
@@ -2605,7 +2628,12 @@ static int intel_compute_max_dotclk(struct drm_i915_private *dev_priv)
  */
 void intel_update_max_cdclk(struct drm_i915_private *dev_priv)
 {
-       if (INTEL_GEN(dev_priv) >= 11) {
+       if (IS_ELKHARTLAKE(dev_priv)) {
+               if (dev_priv->cdclk.hw.ref == 24000)
+                       dev_priv->max_cdclk_freq = 552000;
+               else
+                       dev_priv->max_cdclk_freq = 556800;
+       } else if (INTEL_GEN(dev_priv) >= 11) {
                if (dev_priv->cdclk.hw.ref == 24000)
                        dev_priv->max_cdclk_freq = 648000;
                else
@@ -6,13 +6,13 @@
 #include "intel_combo_phy.h"
 #include "intel_drv.h"
 
-#define for_each_combo_port(__dev_priv, __port) \
-       for ((__port) = PORT_A; (__port) < I915_MAX_PORTS; (__port)++) \
-               for_each_if(intel_port_is_combophy(__dev_priv, __port))
+#define for_each_combo_phy(__dev_priv, __phy) \
+       for ((__phy) = PHY_A; (__phy) < I915_MAX_PHYS; (__phy)++) \
+               for_each_if(intel_phy_is_combo(__dev_priv, __phy))
 
-#define for_each_combo_port_reverse(__dev_priv, __port) \
-       for ((__port) = I915_MAX_PORTS; (__port)-- > PORT_A;) \
-               for_each_if(intel_port_is_combophy(__dev_priv, __port))
+#define for_each_combo_phy_reverse(__dev_priv, __phy) \
+       for ((__phy) = I915_MAX_PHYS; (__phy)-- > PHY_A;) \
+               for_each_if(intel_phy_is_combo(__dev_priv, __phy))
 
 enum {
        PROCMON_0_85V_DOT_0,
@@ -38,18 +38,17 @@ static const struct cnl_procmon {
 };
 
 /*
- * CNL has just one set of registers, while ICL has two sets: one for port A and
- * the other for port B. The CNL registers are equivalent to the ICL port A
- * registers, that's why we call the ICL macros even though the function has CNL
- * on its name.
+ * CNL has just one set of registers, while gen11 has a set for each combo PHY.
+ * The CNL registers are equivalent to the gen11 PHY A registers, that's why we
+ * call the ICL macros even though the function has CNL on its name.
 */
 static const struct cnl_procmon *
-cnl_get_procmon_ref_values(struct drm_i915_private *dev_priv, enum port port)
+cnl_get_procmon_ref_values(struct drm_i915_private *dev_priv, enum phy phy)
 {
        const struct cnl_procmon *procmon;
        u32 val;
 
-       val = I915_READ(ICL_PORT_COMP_DW3(port));
+       val = I915_READ(ICL_PORT_COMP_DW3(phy));
        switch (val & (PROCESS_INFO_MASK | VOLTAGE_INFO_MASK)) {
        default:
                MISSING_CASE(val);
@@ -75,32 +74,32 @@ cnl_get_procmon_ref_values(struct drm_i915_private *dev_priv, enum port port)
 }
 
 static void cnl_set_procmon_ref_values(struct drm_i915_private *dev_priv,
-                                      enum port port)
+                                      enum phy phy)
 {
        const struct cnl_procmon *procmon;
        u32 val;
 
-       procmon = cnl_get_procmon_ref_values(dev_priv, port);
+       procmon = cnl_get_procmon_ref_values(dev_priv, phy);
 
-       val = I915_READ(ICL_PORT_COMP_DW1(port));
+       val = I915_READ(ICL_PORT_COMP_DW1(phy));
        val &= ~((0xff << 16) | 0xff);
        val |= procmon->dw1;
-       I915_WRITE(ICL_PORT_COMP_DW1(port), val);
+       I915_WRITE(ICL_PORT_COMP_DW1(phy), val);
 
-       I915_WRITE(ICL_PORT_COMP_DW9(port), procmon->dw9);
-       I915_WRITE(ICL_PORT_COMP_DW10(port), procmon->dw10);
+       I915_WRITE(ICL_PORT_COMP_DW9(phy), procmon->dw9);
+       I915_WRITE(ICL_PORT_COMP_DW10(phy), procmon->dw10);
 }
 
 static bool check_phy_reg(struct drm_i915_private *dev_priv,
-                         enum port port, i915_reg_t reg, u32 mask,
+                         enum phy phy, i915_reg_t reg, u32 mask,
                          u32 expected_val)
 {
        u32 val = I915_READ(reg);
 
        if ((val & mask) != expected_val) {
-               DRM_DEBUG_DRIVER("Port %c combo PHY reg %08x state mismatch: "
+               DRM_DEBUG_DRIVER("Combo PHY %c reg %08x state mismatch: "
                                 "current %08x mask %08x expected %08x\n",
-                                port_name(port),
+                                phy_name(phy),
                                 reg.reg, val, mask, expected_val);
                return false;
        }
@@ -109,18 +108,18 @@ static bool check_phy_reg(struct drm_i915_private *dev_priv,
 }
 
 static bool cnl_verify_procmon_ref_values(struct drm_i915_private *dev_priv,
-                                         enum port port)
+                                         enum phy phy)
 {
        const struct cnl_procmon *procmon;
        bool ret;
 
-       procmon = cnl_get_procmon_ref_values(dev_priv, port);
+       procmon = cnl_get_procmon_ref_values(dev_priv, phy);
 
-       ret = check_phy_reg(dev_priv, port, ICL_PORT_COMP_DW1(port),
+       ret = check_phy_reg(dev_priv, phy, ICL_PORT_COMP_DW1(phy),
                            (0xff << 16) | 0xff, procmon->dw1);
-       ret &= check_phy_reg(dev_priv, port, ICL_PORT_COMP_DW9(port),
+       ret &= check_phy_reg(dev_priv, phy, ICL_PORT_COMP_DW9(phy),
                             -1U, procmon->dw9);
-       ret &= check_phy_reg(dev_priv, port, ICL_PORT_COMP_DW10(port),
+       ret &= check_phy_reg(dev_priv, phy, ICL_PORT_COMP_DW10(phy),
                             -1U, procmon->dw10);
 
        return ret;
@@ -134,15 +133,15 @@ static bool cnl_combo_phy_enabled(struct drm_i915_private *dev_priv)
 
 static bool cnl_combo_phy_verify_state(struct drm_i915_private *dev_priv)
 {
-       enum port port = PORT_A;
+       enum phy phy = PHY_A;
        bool ret;
 
        if (!cnl_combo_phy_enabled(dev_priv))
                return false;
 
-       ret = cnl_verify_procmon_ref_values(dev_priv, port);
+       ret = cnl_verify_procmon_ref_values(dev_priv, phy);
 
-       ret &= check_phy_reg(dev_priv, port, CNL_PORT_CL1CM_DW5,
+       ret &= check_phy_reg(dev_priv, phy, CNL_PORT_CL1CM_DW5,
                             CL_POWER_DOWN_ENABLE, CL_POWER_DOWN_ENABLE);
 
        return ret;
@@ -157,7 +156,7 @@ static void cnl_combo_phys_init(struct drm_i915_private *dev_priv)
        I915_WRITE(CHICKEN_MISC_2, val);
 
        /* Dummy PORT_A to get the correct CNL register from the ICL macro */
-       cnl_set_procmon_ref_values(dev_priv, PORT_A);
+       cnl_set_procmon_ref_values(dev_priv, PHY_A);
 
        val = I915_READ(CNL_PORT_COMP_DW0);
        val |= COMP_INIT;
@@ -181,35 +180,39 @@ static void cnl_combo_phys_uninit(struct drm_i915_private *dev_priv)
 }
 
 static bool icl_combo_phy_enabled(struct drm_i915_private *dev_priv,
-                                 enum port port)
+                                 enum phy phy)
 {
-       return !(I915_READ(ICL_PHY_MISC(port)) &
-                ICL_PHY_MISC_DE_IO_COMP_PWR_DOWN) &&
-               (I915_READ(ICL_PORT_COMP_DW0(port)) & COMP_INIT);
+       /* The PHY C added by EHL has no PHY_MISC register */
+       if (IS_ELKHARTLAKE(dev_priv) && phy == PHY_C)
+               return I915_READ(ICL_PORT_COMP_DW0(phy)) & COMP_INIT;
+       else
+               return !(I915_READ(ICL_PHY_MISC(phy)) &
+                        ICL_PHY_MISC_DE_IO_COMP_PWR_DOWN) &&
+                       (I915_READ(ICL_PORT_COMP_DW0(phy)) & COMP_INIT);
 }
 
 static bool icl_combo_phy_verify_state(struct drm_i915_private *dev_priv,
-                                      enum port port)
+                                      enum phy phy)
 {
        bool ret;
 
-       if (!icl_combo_phy_enabled(dev_priv, port))
+       if (!icl_combo_phy_enabled(dev_priv, phy))
                return false;
 
-       ret = cnl_verify_procmon_ref_values(dev_priv, port);
+       ret = cnl_verify_procmon_ref_values(dev_priv, phy);
 
-       if (port == PORT_A)
-               ret &= check_phy_reg(dev_priv, port, ICL_PORT_COMP_DW8(port),
+       if (phy == PHY_A)
+               ret &= check_phy_reg(dev_priv, phy, ICL_PORT_COMP_DW8(phy),
                                     IREFGEN, IREFGEN);
 
-       ret &= check_phy_reg(dev_priv, port, ICL_PORT_CL_DW5(port),
+       ret &= check_phy_reg(dev_priv, phy, ICL_PORT_CL_DW5(phy),
                             CL_POWER_DOWN_ENABLE, CL_POWER_DOWN_ENABLE);
 
        return ret;
 }
 
 void intel_combo_phy_power_up_lanes(struct drm_i915_private *dev_priv,
-                                   enum port port, bool is_dsi,
+                                   enum phy phy, bool is_dsi,
                                    int lane_count, bool lane_reversal)
 {
        u8 lane_mask;
@@ -254,66 +257,120 @@ void intel_combo_phy_power_up_lanes(struct drm_i915_private *dev_priv,
 		}
 	}

-	val = I915_READ(ICL_PORT_CL_DW10(port));
+	val = I915_READ(ICL_PORT_CL_DW10(phy));
 	val &= ~PWR_DOWN_LN_MASK;
 	val |= lane_mask << PWR_DOWN_LN_SHIFT;
-	I915_WRITE(ICL_PORT_CL_DW10(port), val);
+	I915_WRITE(ICL_PORT_CL_DW10(phy), val);
 }

+static u32 ehl_combo_phy_a_mux(struct drm_i915_private *i915, u32 val)
+{
+	bool ddi_a_present = i915->vbt.ddi_port_info[PORT_A].child != NULL;
+	bool ddi_d_present = i915->vbt.ddi_port_info[PORT_D].child != NULL;
+	bool dsi_present = intel_bios_is_dsi_present(i915, NULL);
+
+	/*
+	 * VBT's 'dvo port' field for child devices references the DDI, not
+	 * the PHY. So if combo PHY A is wired up to drive an external
+	 * display, we should see a child device present on PORT_D and
+	 * nothing on PORT_A and no DSI.
+	 */
+	if (ddi_d_present && !ddi_a_present && !dsi_present)
+		return val | ICL_PHY_MISC_MUX_DDID;
+
+	/*
+	 * If we encounter a VBT that claims to have an external display on
+	 * DDI-D _and_ an internal display on DDI-A/DSI leave an error message
+	 * in the log and let the internal display win.
+	 */
+	if (ddi_d_present)
+		DRM_ERROR("VBT claims to have both internal and external displays on PHY A. Configuring for internal.\n");
+
+	return val & ~ICL_PHY_MISC_MUX_DDID;
+}
+
 static void icl_combo_phys_init(struct drm_i915_private *dev_priv)
 {
-	enum port port;
+	enum phy phy;

-	for_each_combo_port(dev_priv, port) {
+	for_each_combo_phy(dev_priv, phy) {
 		u32 val;

-		if (icl_combo_phy_verify_state(dev_priv, port)) {
-			DRM_DEBUG_DRIVER("Port %c combo PHY already enabled, won't reprogram it.\n",
-					 port_name(port));
+		if (icl_combo_phy_verify_state(dev_priv, phy)) {
+			DRM_DEBUG_DRIVER("Combo PHY %c already enabled, won't reprogram it.\n",
+					 phy_name(phy));
 			continue;
 		}

-		val = I915_READ(ICL_PHY_MISC(port));
+		/*
+		 * Although EHL adds a combo PHY C, there's no PHY_MISC
+		 * register for it and no need to program the
+		 * DE_IO_COMP_PWR_DOWN setting on PHY C.
+		 */
+		if (IS_ELKHARTLAKE(dev_priv) && phy == PHY_C)
+			goto skip_phy_misc;
+
+		/*
+		 * EHL's combo PHY A can be hooked up to either an external
+		 * display (via DDI-D) or an internal display (via DDI-A or
+		 * the DSI DPHY). This is a motherboard design decision that
+		 * can't be changed on the fly, so initialize the PHY's mux
+		 * based on whether our VBT indicates the presence of any
+		 * "internal" child devices.
+		 */
+		val = I915_READ(ICL_PHY_MISC(phy));
+		if (IS_ELKHARTLAKE(dev_priv) && phy == PHY_A)
+			val = ehl_combo_phy_a_mux(dev_priv, val);
 		val &= ~ICL_PHY_MISC_DE_IO_COMP_PWR_DOWN;
-		I915_WRITE(ICL_PHY_MISC(port), val);
+		I915_WRITE(ICL_PHY_MISC(phy), val);

-		cnl_set_procmon_ref_values(dev_priv, port);
+skip_phy_misc:
+		cnl_set_procmon_ref_values(dev_priv, phy);

-		if (port == PORT_A) {
-			val = I915_READ(ICL_PORT_COMP_DW8(port));
+		if (phy == PHY_A) {
+			val = I915_READ(ICL_PORT_COMP_DW8(phy));
 			val |= IREFGEN;
-			I915_WRITE(ICL_PORT_COMP_DW8(port), val);
+			I915_WRITE(ICL_PORT_COMP_DW8(phy), val);
 		}

-		val = I915_READ(ICL_PORT_COMP_DW0(port));
+		val = I915_READ(ICL_PORT_COMP_DW0(phy));
 		val |= COMP_INIT;
-		I915_WRITE(ICL_PORT_COMP_DW0(port), val);
+		I915_WRITE(ICL_PORT_COMP_DW0(phy), val);

-		val = I915_READ(ICL_PORT_CL_DW5(port));
+		val = I915_READ(ICL_PORT_CL_DW5(phy));
 		val |= CL_POWER_DOWN_ENABLE;
-		I915_WRITE(ICL_PORT_CL_DW5(port), val);
+		I915_WRITE(ICL_PORT_CL_DW5(phy), val);
 	}
 }

 static void icl_combo_phys_uninit(struct drm_i915_private *dev_priv)
 {
-	enum port port;
+	enum phy phy;

-	for_each_combo_port_reverse(dev_priv, port) {
+	for_each_combo_phy_reverse(dev_priv, phy) {
 		u32 val;

-		if (port == PORT_A &&
-		    !icl_combo_phy_verify_state(dev_priv, port))
-			DRM_WARN("Port %c combo PHY HW state changed unexpectedly\n",
-				 port_name(port));
+		if (phy == PHY_A &&
+		    !icl_combo_phy_verify_state(dev_priv, phy))
+			DRM_WARN("Combo PHY %c HW state changed unexpectedly\n",
+				 phy_name(phy));

-		val = I915_READ(ICL_PHY_MISC(port));
+		/*
+		 * Although EHL adds a combo PHY C, there's no PHY_MISC
+		 * register for it and no need to program the
+		 * DE_IO_COMP_PWR_DOWN setting on PHY C.
+		 */
+		if (IS_ELKHARTLAKE(dev_priv) && phy == PHY_C)
+			goto skip_phy_misc;
+
+		val = I915_READ(ICL_PHY_MISC(phy));
 		val |= ICL_PHY_MISC_DE_IO_COMP_PWR_DOWN;
-		I915_WRITE(ICL_PHY_MISC(port), val);
+		I915_WRITE(ICL_PHY_MISC(phy), val);

-		val = I915_READ(ICL_PORT_COMP_DW0(port));
+skip_phy_misc:
+		val = I915_READ(ICL_PORT_COMP_DW0(phy));
 		val &= ~COMP_INIT;
-		I915_WRITE(ICL_PORT_COMP_DW0(port), val);
+		I915_WRITE(ICL_PORT_COMP_DW0(phy), val);
 	}
 }

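The ehl_combo_phy_a_mux() helper added above boils down to a three-input decision on the VBT-reported child devices. A minimal standalone sketch of that decision table (plain C; the register bit value and the VBT probes are stubbed out and are assumptions, not driver code):

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for the driver's ICL_PHY_MISC_MUX_DDID bit. */
#define PHY_MISC_MUX_DDID (1u << 28)

/* Mirrors ehl_combo_phy_a_mux(): route PHY A to DDI-D only when the VBT
 * shows an external display on DDI-D and nothing internal (DDI-A or DSI);
 * on a conflicting VBT, complain and let the internal display win. */
static unsigned int phy_a_mux(unsigned int val, bool ddi_a, bool ddi_d, bool dsi)
{
	if (ddi_d && !ddi_a && !dsi)
		return val | PHY_MISC_MUX_DDID;

	if (ddi_d)
		fprintf(stderr, "VBT claims both internal and external displays on PHY A\n");

	return val & ~PHY_MISC_MUX_DDID;
}

int main(void)
{
	/* External-only config selects DDI-D; everything else stays internal. */
	printf("%x\n", phy_a_mux(0, false, true, false)); /* MUX_DDID set */
	printf("%x\n", phy_a_mux(0, true, true, false));  /* internal wins */
	return 0;
}
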
@@ -7,14 +7,14 @@
 #define __INTEL_COMBO_PHY_H__

 #include <linux/types.h>
-#include <drm/i915_drm.h>

 struct drm_i915_private;
+enum phy;

 void intel_combo_phy_init(struct drm_i915_private *dev_priv);
 void intel_combo_phy_uninit(struct drm_i915_private *dev_priv);
 void intel_combo_phy_power_up_lanes(struct drm_i915_private *dev_priv,
-				    enum port port, bool is_dsi,
+				    enum phy phy, bool is_dsi,
 				    int lane_count, bool lane_reversal);

 #endif /* __INTEL_COMBO_PHY_H__ */

@@ -118,7 +118,7 @@ int intel_connector_register(struct drm_connector *connector)
 	if (ret)
 		goto err;

-	if (i915_inject_load_failure()) {
+	if (i915_inject_probe_failure()) {
 		ret = -EFAULT;
 		goto err_backlight;
 	}

@@ -45,6 +45,7 @@
 #include "intel_lspcon.h"
 #include "intel_panel.h"
 #include "intel_psr.h"
+#include "intel_tc.h"
 #include "intel_vdsc.h"

 struct ddi_buf_trans {
@@ -846,8 +847,8 @@ cnl_get_buf_trans_edp(struct drm_i915_private *dev_priv, int *n_entries)
 }

 static const struct cnl_ddi_buf_trans *
-icl_get_combo_buf_trans(struct drm_i915_private *dev_priv, enum port port,
-			int type, int rate, int *n_entries)
+icl_get_combo_buf_trans(struct drm_i915_private *dev_priv, int type, int rate,
+			int *n_entries)
 {
 	if (type == INTEL_OUTPUT_HDMI) {
 		*n_entries = ARRAY_SIZE(icl_combo_phy_ddi_translations_hdmi);
@@ -867,12 +868,13 @@ icl_get_combo_buf_trans(struct drm_i915_private *dev_priv, enum port port,
 static int intel_ddi_hdmi_level(struct drm_i915_private *dev_priv, enum port port)
 {
 	int n_entries, level, default_entry;
+	enum phy phy = intel_port_to_phy(dev_priv, port);

 	level = dev_priv->vbt.ddi_port_info[port].hdmi_level_shift;

 	if (INTEL_GEN(dev_priv) >= 11) {
-		if (intel_port_is_combophy(dev_priv, port))
-			icl_get_combo_buf_trans(dev_priv, port, INTEL_OUTPUT_HDMI,
+		if (intel_phy_is_combo(dev_priv, phy))
+			icl_get_combo_buf_trans(dev_priv, INTEL_OUTPUT_HDMI,
 						0, &n_entries);
 		else
 			n_entries = ARRAY_SIZE(icl_mg_phy_ddi_translations);
@@ -1486,9 +1488,10 @@ static void icl_ddi_clock_get(struct intel_encoder *encoder,
 	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
 	struct intel_dpll_hw_state *pll_state = &pipe_config->dpll_hw_state;
 	enum port port = encoder->port;
+	enum phy phy = intel_port_to_phy(dev_priv, port);
 	int link_clock;

-	if (intel_port_is_combophy(dev_priv, port)) {
+	if (intel_phy_is_combo(dev_priv, phy)) {
 		link_clock = cnl_calc_wrpll_link(dev_priv, pll_state);
 	} else {
 		enum intel_dpll_id pll_id = intel_get_shared_dpll_id(dev_priv,
@@ -1770,7 +1773,10 @@ void intel_ddi_enable_transcoder_func(const struct intel_crtc_state *crtc_state)

 	/* Enable TRANS_DDI_FUNC_CTL for the pipe to work in HDMI mode */
 	temp = TRANS_DDI_FUNC_ENABLE;
-	temp |= TRANS_DDI_SELECT_PORT(port);
+	if (INTEL_GEN(dev_priv) >= 12)
+		temp |= TGL_TRANS_DDI_SELECT_PORT(port);
+	else
+		temp |= TRANS_DDI_SELECT_PORT(port);

 	switch (crtc_state->pipe_bpp) {
 	case 18:
@@ -1850,8 +1856,13 @@ void intel_ddi_disable_transcoder_func(const struct intel_crtc_state *crtc_state
 	i915_reg_t reg = TRANS_DDI_FUNC_CTL(cpu_transcoder);
 	u32 val = I915_READ(reg);

-	val &= ~(TRANS_DDI_FUNC_ENABLE | TRANS_DDI_PORT_MASK | TRANS_DDI_DP_VC_PAYLOAD_ALLOC);
-	val |= TRANS_DDI_PORT_NONE;
+	if (INTEL_GEN(dev_priv) >= 12) {
+		val &= ~(TRANS_DDI_FUNC_ENABLE | TGL_TRANS_DDI_PORT_MASK |
+			 TRANS_DDI_DP_VC_PAYLOAD_ALLOC);
+	} else {
+		val &= ~(TRANS_DDI_FUNC_ENABLE | TRANS_DDI_PORT_MASK |
+			 TRANS_DDI_DP_VC_PAYLOAD_ALLOC);
+	}
 	I915_WRITE(reg, val);

 	if (dev_priv->quirks & QUIRK_INCREASE_DDI_DISABLED_TIME &&
@@ -2003,10 +2014,19 @@ static void intel_ddi_get_encoder_pipes(struct intel_encoder *encoder,
 	mst_pipe_mask = 0;
 	for_each_pipe(dev_priv, p) {
 		enum transcoder cpu_transcoder = (enum transcoder)p;
+		unsigned int port_mask, ddi_select;
+
+		if (INTEL_GEN(dev_priv) >= 12) {
+			port_mask = TGL_TRANS_DDI_PORT_MASK;
+			ddi_select = TGL_TRANS_DDI_SELECT_PORT(port);
+		} else {
+			port_mask = TRANS_DDI_PORT_MASK;
+			ddi_select = TRANS_DDI_SELECT_PORT(port);
+		}

 		tmp = I915_READ(TRANS_DDI_FUNC_CTL(cpu_transcoder));

-		if ((tmp & TRANS_DDI_PORT_MASK) != TRANS_DDI_SELECT_PORT(port))
+		if ((tmp & port_mask) != ddi_select)
 			continue;

 		if ((tmp & TRANS_DDI_MODE_SELECT_MASK) ==
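The pattern in the last two hunks is the same: pre-gen12 and gen12 hardware use differently sized port-select fields in TRANS_DDI_FUNC_CTL, so both the mask and the select value must be picked per generation before comparing. A standalone sketch of that comparison (the field positions and the +1 encoding below are placeholders for illustration, not the real i915_reg.h values):

#include <stdio.h>

/* Assumed field layouts: pre-gen12 uses a narrow port-select field,
 * gen12 (TGL) moves to a wider one so additional DDIs fit. */
#define PORT_MASK_PRE12   (0x7u << 28)
#define SELECT_PRE12(p)   ((unsigned int)(p) << 28)
#define PORT_MASK_GEN12   (0xfu << 24)
#define SELECT_GEN12(p)   (((unsigned int)(p) + 1) << 24)

/* Mirrors the hunk above: pick mask and select per generation, then test. */
static int transcoder_drives_port(unsigned int reg, int gen, int port)
{
	unsigned int mask = gen >= 12 ? PORT_MASK_GEN12 : PORT_MASK_PRE12;
	unsigned int sel = gen >= 12 ? SELECT_GEN12(port) : SELECT_PRE12(port);

	return (reg & mask) == sel;
}

int main(void)
{
	unsigned int reg = SELECT_GEN12(2); /* pretend FUNC_CTL selects port C */

	printf("gen12 match: %d\n", transcoder_drives_port(reg, 12, 2));
	printf("gen11 match: %d\n", transcoder_drives_port(reg, 11, 2));
	return 0;
}
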
@@ -2085,6 +2105,7 @@ static void intel_ddi_get_power_domains(struct intel_encoder *encoder,
 {
 	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
 	struct intel_digital_port *dig_port;
+	enum phy phy = intel_port_to_phy(dev_priv, encoder->port);

 	/*
 	 * TODO: Add support for MST encoders. Atm, the following should never
@@ -2102,7 +2123,7 @@ static void intel_ddi_get_power_domains(struct intel_encoder *encoder,
 	 * ports.
 	 */
 	if (intel_crtc_has_dp_encoder(crtc_state) ||
-	    intel_port_is_tc(dev_priv, encoder->port))
+	    intel_phy_is_tc(dev_priv, phy))
 		intel_display_power_get(dev_priv,
 					intel_ddi_main_link_aux_domain(dig_port));

@@ -2122,9 +2143,14 @@ void intel_ddi_enable_pipe_clock(const struct intel_crtc_state *crtc_state)
 	enum port port = encoder->port;
 	enum transcoder cpu_transcoder = crtc_state->cpu_transcoder;

-	if (cpu_transcoder != TRANSCODER_EDP)
-		I915_WRITE(TRANS_CLK_SEL(cpu_transcoder),
-			   TRANS_CLK_SEL_PORT(port));
+	if (cpu_transcoder != TRANSCODER_EDP) {
+		if (INTEL_GEN(dev_priv) >= 12)
+			I915_WRITE(TRANS_CLK_SEL(cpu_transcoder),
+				   TGL_TRANS_CLK_SEL_PORT(port));
+		else
+			I915_WRITE(TRANS_CLK_SEL(cpu_transcoder),
+				   TRANS_CLK_SEL_PORT(port));
+	}
 }

 void intel_ddi_disable_pipe_clock(const struct intel_crtc_state *crtc_state)

@@ -2132,9 +2158,14 @@ void intel_ddi_disable_pipe_clock(const struct intel_crtc_state *crtc_state)
 	struct drm_i915_private *dev_priv = to_i915(crtc_state->base.crtc->dev);
 	enum transcoder cpu_transcoder = crtc_state->cpu_transcoder;

-	if (cpu_transcoder != TRANSCODER_EDP)
-		I915_WRITE(TRANS_CLK_SEL(cpu_transcoder),
-			   TRANS_CLK_SEL_DISABLED);
+	if (cpu_transcoder != TRANSCODER_EDP) {
+		if (INTEL_GEN(dev_priv) >= 12)
+			I915_WRITE(TRANS_CLK_SEL(cpu_transcoder),
+				   TGL_TRANS_CLK_SEL_DISABLED);
+		else
+			I915_WRITE(TRANS_CLK_SEL(cpu_transcoder),
+				   TRANS_CLK_SEL_DISABLED);
+	}
 }

 static void _skl_ddi_set_iboost(struct drm_i915_private *dev_priv,
@@ -2227,11 +2258,12 @@ u8 intel_ddi_dp_voltage_max(struct intel_encoder *encoder)
 	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
 	struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base);
 	enum port port = encoder->port;
+	enum phy phy = intel_port_to_phy(dev_priv, port);
 	int n_entries;

 	if (INTEL_GEN(dev_priv) >= 11) {
-		if (intel_port_is_combophy(dev_priv, port))
-			icl_get_combo_buf_trans(dev_priv, port, encoder->type,
+		if (intel_phy_is_combo(dev_priv, phy))
+			icl_get_combo_buf_trans(dev_priv, encoder->type,
 						intel_dp->link_rate, &n_entries);
 		else
 			n_entries = ARRAY_SIZE(icl_mg_phy_ddi_translations);
@@ -2413,15 +2445,15 @@ static void cnl_ddi_vswing_sequence(struct intel_encoder *encoder,
 }

 static void icl_ddi_combo_vswing_program(struct drm_i915_private *dev_priv,
-					 u32 level, enum port port, int type,
+					 u32 level, enum phy phy, int type,
 					 int rate)
 {
 	const struct cnl_ddi_buf_trans *ddi_translations = NULL;
 	u32 n_entries, val;
 	int ln;

-	ddi_translations = icl_get_combo_buf_trans(dev_priv, port, type,
-						   rate, &n_entries);
+	ddi_translations = icl_get_combo_buf_trans(dev_priv, type, rate,
+						   &n_entries);
 	if (!ddi_translations)
 		return;

@@ -2431,41 +2463,41 @@ static void icl_ddi_combo_vswing_program(struct drm_i915_private *dev_priv,
 	}

 	/* Set PORT_TX_DW5 */
-	val = I915_READ(ICL_PORT_TX_DW5_LN0(port));
+	val = I915_READ(ICL_PORT_TX_DW5_LN0(phy));
 	val &= ~(SCALING_MODE_SEL_MASK | RTERM_SELECT_MASK |
 		 TAP2_DISABLE | TAP3_DISABLE);
 	val |= SCALING_MODE_SEL(0x2);
 	val |= RTERM_SELECT(0x6);
 	val |= TAP3_DISABLE;
-	I915_WRITE(ICL_PORT_TX_DW5_GRP(port), val);
+	I915_WRITE(ICL_PORT_TX_DW5_GRP(phy), val);

 	/* Program PORT_TX_DW2 */
-	val = I915_READ(ICL_PORT_TX_DW2_LN0(port));
+	val = I915_READ(ICL_PORT_TX_DW2_LN0(phy));
 	val &= ~(SWING_SEL_LOWER_MASK | SWING_SEL_UPPER_MASK |
 		 RCOMP_SCALAR_MASK);
 	val |= SWING_SEL_UPPER(ddi_translations[level].dw2_swing_sel);
 	val |= SWING_SEL_LOWER(ddi_translations[level].dw2_swing_sel);
 	/* Program Rcomp scalar for every table entry */
 	val |= RCOMP_SCALAR(0x98);
-	I915_WRITE(ICL_PORT_TX_DW2_GRP(port), val);
+	I915_WRITE(ICL_PORT_TX_DW2_GRP(phy), val);

 	/* Program PORT_TX_DW4 */
 	/* We cannot write to GRP. It would overwrite individual loadgen. */
 	for (ln = 0; ln <= 3; ln++) {
-		val = I915_READ(ICL_PORT_TX_DW4_LN(ln, port));
+		val = I915_READ(ICL_PORT_TX_DW4_LN(ln, phy));
 		val &= ~(POST_CURSOR_1_MASK | POST_CURSOR_2_MASK |
 			 CURSOR_COEFF_MASK);
 		val |= POST_CURSOR_1(ddi_translations[level].dw4_post_cursor_1);
 		val |= POST_CURSOR_2(ddi_translations[level].dw4_post_cursor_2);
 		val |= CURSOR_COEFF(ddi_translations[level].dw4_cursor_coeff);
-		I915_WRITE(ICL_PORT_TX_DW4_LN(ln, port), val);
+		I915_WRITE(ICL_PORT_TX_DW4_LN(ln, phy), val);
 	}

 	/* Program PORT_TX_DW7 */
-	val = I915_READ(ICL_PORT_TX_DW7_LN0(port));
+	val = I915_READ(ICL_PORT_TX_DW7_LN0(phy));
 	val &= ~N_SCALAR_MASK;
 	val |= N_SCALAR(ddi_translations[level].dw7_n_scalar);
-	I915_WRITE(ICL_PORT_TX_DW7_GRP(port), val);
+	I915_WRITE(ICL_PORT_TX_DW7_GRP(phy), val);
 }

 static void icl_combo_phy_ddi_vswing_sequence(struct intel_encoder *encoder,

@@ -2473,7 +2505,7 @@ static void icl_combo_phy_ddi_vswing_sequence(struct intel_encoder *encoder,
 					      enum intel_output_type type)
 {
 	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
-	enum port port = encoder->port;
+	enum phy phy = intel_port_to_phy(dev_priv, encoder->port);
 	int width = 0;
 	int rate = 0;
 	u32 val;
@@ -2494,12 +2526,12 @@ static void icl_combo_phy_ddi_vswing_sequence(struct intel_encoder *encoder,
 	 * set PORT_PCS_DW1 cmnkeeper_enable to 1b,
 	 * else clear to 0b.
 	 */
-	val = I915_READ(ICL_PORT_PCS_DW1_LN0(port));
+	val = I915_READ(ICL_PORT_PCS_DW1_LN0(phy));
 	if (type == INTEL_OUTPUT_HDMI)
 		val &= ~COMMON_KEEPER_EN;
 	else
 		val |= COMMON_KEEPER_EN;
-	I915_WRITE(ICL_PORT_PCS_DW1_GRP(port), val);
+	I915_WRITE(ICL_PORT_PCS_DW1_GRP(phy), val);

 	/* 2. Program loadgen select */
 	/*
@@ -2509,33 +2541,33 @@ static void icl_combo_phy_ddi_vswing_sequence(struct intel_encoder *encoder,
 	 * > 6 GHz (LN0=0, LN1=0, LN2=0, LN3=0)
 	 */
 	for (ln = 0; ln <= 3; ln++) {
-		val = I915_READ(ICL_PORT_TX_DW4_LN(ln, port));
+		val = I915_READ(ICL_PORT_TX_DW4_LN(ln, phy));
 		val &= ~LOADGEN_SELECT;

 		if ((rate <= 600000 && width == 4 && ln >= 1) ||
 		    (rate <= 600000 && width < 4 && (ln == 1 || ln == 2))) {
 			val |= LOADGEN_SELECT;
 		}
-		I915_WRITE(ICL_PORT_TX_DW4_LN(ln, port), val);
+		I915_WRITE(ICL_PORT_TX_DW4_LN(ln, phy), val);
 	}

 	/* 3. Set PORT_CL_DW5 SUS Clock Config to 11b */
-	val = I915_READ(ICL_PORT_CL_DW5(port));
+	val = I915_READ(ICL_PORT_CL_DW5(phy));
 	val |= SUS_CLOCK_CONFIG;
-	I915_WRITE(ICL_PORT_CL_DW5(port), val);
+	I915_WRITE(ICL_PORT_CL_DW5(phy), val);

 	/* 4. Clear training enable to change swing values */
-	val = I915_READ(ICL_PORT_TX_DW5_LN0(port));
+	val = I915_READ(ICL_PORT_TX_DW5_LN0(phy));
 	val &= ~TX_TRAINING_EN;
-	I915_WRITE(ICL_PORT_TX_DW5_GRP(port), val);
+	I915_WRITE(ICL_PORT_TX_DW5_GRP(phy), val);

 	/* 5. Program swing and de-emphasis */
-	icl_ddi_combo_vswing_program(dev_priv, level, port, type, rate);
+	icl_ddi_combo_vswing_program(dev_priv, level, phy, type, rate);

 	/* 6. Set training enable to trigger update */
-	val = I915_READ(ICL_PORT_TX_DW5_LN0(port));
+	val = I915_READ(ICL_PORT_TX_DW5_LN0(phy));
 	val |= TX_TRAINING_EN;
-	I915_WRITE(ICL_PORT_TX_DW5_GRP(port), val);
+	I915_WRITE(ICL_PORT_TX_DW5_GRP(phy), val);
 }

 static void icl_mg_phy_ddi_vswing_sequence(struct intel_encoder *encoder,

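The loadgen rule in step 2 of the sequence above is compact enough to model directly. A standalone sketch (assuming, as the thresholds in the code suggest, that rate is the link clock in kHz, so 600000 corresponds to the "6 GHz" bound in the comment):

#include <stdbool.h>
#include <stdio.h>

/* Mirrors the per-lane condition from the loop above: loadgen is set for
 * lanes 1..3 at 4-lane widths, or lanes 1..2 at narrower widths, but only
 * at link rates at or below the 6 GHz bound; above that, no lane gets it. */
static bool loadgen_select(int rate, int width, int ln)
{
	return (rate <= 600000 && width == 4 && ln >= 1) ||
	       (rate <= 600000 && width < 4 && (ln == 1 || ln == 2));
}

int main(void)
{
	int ln;

	/* 4-lane HBR2 (540000 kHz): lanes 1..3 get loadgen, lane 0 does not. */
	for (ln = 0; ln <= 3; ln++)
		printf("lane %d: %d\n", ln, loadgen_select(540000, 4, ln));
	return 0;
}
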
@@ -2663,9 +2695,9 @@ static void icl_ddi_vswing_sequence(struct intel_encoder *encoder,
 				    enum intel_output_type type)
 {
 	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
-	enum port port = encoder->port;
+	enum phy phy = intel_port_to_phy(dev_priv, encoder->port);

-	if (intel_port_is_combophy(dev_priv, port))
+	if (intel_phy_is_combo(dev_priv, phy))
 		icl_combo_phy_ddi_vswing_sequence(encoder, level, type);
 	else
 		icl_mg_phy_ddi_vswing_sequence(encoder, link_clock, level);
@@ -2728,12 +2760,13 @@ u32 ddi_signal_levels(struct intel_dp *intel_dp)

 static inline
 u32 icl_dpclka_cfgcr0_clk_off(struct drm_i915_private *dev_priv,
-			      enum port port)
+			      enum phy phy)
 {
-	if (intel_port_is_combophy(dev_priv, port)) {
-		return ICL_DPCLKA_CFGCR0_DDI_CLK_OFF(port);
-	} else if (intel_port_is_tc(dev_priv, port)) {
-		enum tc_port tc_port = intel_port_to_tc(dev_priv, port);
+	if (intel_phy_is_combo(dev_priv, phy)) {
+		return ICL_DPCLKA_CFGCR0_DDI_CLK_OFF(phy);
+	} else if (intel_phy_is_tc(dev_priv, phy)) {
+		enum tc_port tc_port = intel_port_to_tc(dev_priv,
+							(enum port)phy);

 		return ICL_DPCLKA_CFGCR0_TC_CLK_OFF(tc_port);
 	}

@@ -2746,23 +2779,33 @@ static void icl_map_plls_to_ports(struct intel_encoder *encoder,
 {
 	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
 	struct intel_shared_dpll *pll = crtc_state->shared_dpll;
-	enum port port = encoder->port;
+	enum phy phy = intel_port_to_phy(dev_priv, encoder->port);
 	u32 val;

 	mutex_lock(&dev_priv->dpll_lock);

-	val = I915_READ(DPCLKA_CFGCR0_ICL);
-	WARN_ON((val & icl_dpclka_cfgcr0_clk_off(dev_priv, port)) == 0);
+	val = I915_READ(ICL_DPCLKA_CFGCR0);
+	WARN_ON((val & icl_dpclka_cfgcr0_clk_off(dev_priv, phy)) == 0);

-	if (intel_port_is_combophy(dev_priv, port)) {
-		val &= ~DPCLKA_CFGCR0_DDI_CLK_SEL_MASK(port);
-		val |= DPCLKA_CFGCR0_DDI_CLK_SEL(pll->info->id, port);
-		I915_WRITE(DPCLKA_CFGCR0_ICL, val);
-		POSTING_READ(DPCLKA_CFGCR0_ICL);
+	if (intel_phy_is_combo(dev_priv, phy)) {
+		/*
+		 * Even though this register references DDIs, note that we
+		 * want to pass the PHY rather than the port (DDI). For
+		 * ICL, port=phy in all cases so it doesn't matter, but for
+		 * EHL the bspec notes the following:
+		 *
+		 *   "DDID clock tied to DDIA clock, so DPCLKA_CFGCR0 DDIA
+		 *   Clock Select chooses the PLL for both DDIA and DDID and
+		 *   drives port A in all cases."
+		 */
+		val &= ~ICL_DPCLKA_CFGCR0_DDI_CLK_SEL_MASK(phy);
+		val |= ICL_DPCLKA_CFGCR0_DDI_CLK_SEL(pll->info->id, phy);
+		I915_WRITE(ICL_DPCLKA_CFGCR0, val);
+		POSTING_READ(ICL_DPCLKA_CFGCR0);
 	}

-	val &= ~icl_dpclka_cfgcr0_clk_off(dev_priv, port);
-	I915_WRITE(DPCLKA_CFGCR0_ICL, val);
+	val &= ~icl_dpclka_cfgcr0_clk_off(dev_priv, phy);
+	I915_WRITE(ICL_DPCLKA_CFGCR0, val);

 	mutex_unlock(&dev_priv->dpll_lock);
 }

@@ -2770,14 +2813,14 @@ static void icl_map_plls_to_ports(struct intel_encoder *encoder,
 static void icl_unmap_plls_to_ports(struct intel_encoder *encoder)
 {
 	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
-	enum port port = encoder->port;
+	enum phy phy = intel_port_to_phy(dev_priv, encoder->port);
 	u32 val;

 	mutex_lock(&dev_priv->dpll_lock);

-	val = I915_READ(DPCLKA_CFGCR0_ICL);
-	val |= icl_dpclka_cfgcr0_clk_off(dev_priv, port);
-	I915_WRITE(DPCLKA_CFGCR0_ICL, val);
+	val = I915_READ(ICL_DPCLKA_CFGCR0);
+	val |= icl_dpclka_cfgcr0_clk_off(dev_priv, phy);
+	I915_WRITE(ICL_DPCLKA_CFGCR0, val);

 	mutex_unlock(&dev_priv->dpll_lock);
 }

@@ -2835,11 +2878,13 @@ void icl_sanitize_encoder_pll_mapping(struct intel_encoder *encoder)
 		ddi_clk_needed = false;
 	}

-	val = I915_READ(DPCLKA_CFGCR0_ICL);
+	val = I915_READ(ICL_DPCLKA_CFGCR0);
 	for_each_port_masked(port, port_mask) {
+		enum phy phy = intel_port_to_phy(dev_priv, port);
+
 		bool ddi_clk_ungated = !(val &
 					 icl_dpclka_cfgcr0_clk_off(dev_priv,
-								   port));
+								   phy));

 		if (ddi_clk_needed == ddi_clk_ungated)
 			continue;

@@ -2851,10 +2896,10 @@ void icl_sanitize_encoder_pll_mapping(struct intel_encoder *encoder)
 		if (WARN_ON(ddi_clk_needed))
 			continue;

-		DRM_NOTE("Port %c is disabled/in DSI mode with an ungated DDI clock, gate it\n",
-			 port_name(port));
-		val |= icl_dpclka_cfgcr0_clk_off(dev_priv, port);
-		I915_WRITE(DPCLKA_CFGCR0_ICL, val);
+		DRM_NOTE("PHY %c is disabled/in DSI mode with an ungated DDI clock, gate it\n",
+			 phy_name(port));
+		val |= icl_dpclka_cfgcr0_clk_off(dev_priv, phy);
+		I915_WRITE(ICL_DPCLKA_CFGCR0, val);
 	}
 }

@@ -2863,6 +2908,7 @@ static void intel_ddi_clk_select(struct intel_encoder *encoder,
 {
 	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
 	enum port port = encoder->port;
+	enum phy phy = intel_port_to_phy(dev_priv, port);
 	u32 val;
 	const struct intel_shared_dpll *pll = crtc_state->shared_dpll;

@@ -2872,7 +2918,7 @@ static void intel_ddi_clk_select(struct intel_encoder *encoder,
 	mutex_lock(&dev_priv->dpll_lock);

 	if (INTEL_GEN(dev_priv) >= 11) {
-		if (!intel_port_is_combophy(dev_priv, port))
+		if (!intel_phy_is_combo(dev_priv, phy))
 			I915_WRITE(DDI_CLK_SEL(port),
 				   icl_pll_to_ddi_clk_sel(encoder, crtc_state));
 	} else if (IS_CANNONLAKE(dev_priv)) {
@@ -2912,9 +2958,10 @@ static void intel_ddi_clk_disable(struct intel_encoder *encoder)
 {
 	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
 	enum port port = encoder->port;
+	enum phy phy = intel_port_to_phy(dev_priv, port);

 	if (INTEL_GEN(dev_priv) >= 11) {
-		if (!intel_port_is_combophy(dev_priv, port))
+		if (!intel_phy_is_combo(dev_priv, phy))
 			I915_WRITE(DDI_CLK_SEL(port), DDI_CLK_SEL_NONE);
 	} else if (IS_CANNONLAKE(dev_priv)) {
 		I915_WRITE(DPCLKA_CFGCR0, I915_READ(DPCLKA_CFGCR0) |
@@ -2995,25 +3042,22 @@ static void icl_program_mg_dp_mode(struct intel_digital_port *intel_dig_port)
 {
 	struct drm_i915_private *dev_priv = to_i915(intel_dig_port->base.base.dev);
 	enum port port = intel_dig_port->base.port;
-	enum tc_port tc_port = intel_port_to_tc(dev_priv, port);
-	u32 ln0, ln1, lane_info;
+	u32 ln0, ln1, lane_mask;

-	if (tc_port == PORT_TC_NONE || intel_dig_port->tc_type == TC_PORT_TBT)
+	if (intel_dig_port->tc_mode == TC_PORT_TBT_ALT)
 		return;

 	ln0 = I915_READ(MG_DP_MODE(0, port));
 	ln1 = I915_READ(MG_DP_MODE(1, port));

-	switch (intel_dig_port->tc_type) {
-	case TC_PORT_TYPEC:
+	switch (intel_dig_port->tc_mode) {
+	case TC_PORT_DP_ALT:
 		ln0 &= ~(MG_DP_MODE_CFG_DP_X1_MODE | MG_DP_MODE_CFG_DP_X2_MODE);
 		ln1 &= ~(MG_DP_MODE_CFG_DP_X1_MODE | MG_DP_MODE_CFG_DP_X2_MODE);

-		lane_info = (I915_READ(PORT_TX_DFLEXDPSP) &
-			     DP_LANE_ASSIGNMENT_MASK(tc_port)) >>
-			    DP_LANE_ASSIGNMENT_SHIFT(tc_port);
+		lane_mask = intel_tc_port_get_lane_mask(intel_dig_port);

-		switch (lane_info) {
+		switch (lane_mask) {
 		case 0x1:
 		case 0x4:
 			break;

@@ -3038,7 +3082,7 @@ static void icl_program_mg_dp_mode(struct intel_digital_port *intel_dig_port)
 				MG_DP_MODE_CFG_DP_X2_MODE;
 			break;
 		default:
-			MISSING_CASE(lane_info);
+			MISSING_CASE(lane_mask);
 		}
 		break;

@@ -3048,7 +3092,7 @@ static void icl_program_mg_dp_mode(struct intel_digital_port *intel_dig_port)
 		break;

 	default:
-		MISSING_CASE(intel_dig_port->tc_type);
+		MISSING_CASE(intel_dig_port->tc_mode);
 		return;
 	}

@@ -3110,6 +3154,7 @@ static void intel_ddi_pre_enable_dp(struct intel_encoder *encoder,
 	struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base);
 	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
 	enum port port = encoder->port;
+	enum phy phy = intel_port_to_phy(dev_priv, port);
 	struct intel_digital_port *dig_port = enc_to_dig_port(&encoder->base);
 	bool is_mst = intel_crtc_has_type(crtc_state, INTEL_OUTPUT_DP_MST);
 	int level = intel_ddi_dp_level(intel_dp);

@@ -3123,7 +3168,10 @@ static void intel_ddi_pre_enable_dp(struct intel_encoder *encoder,

 	intel_ddi_clk_select(encoder, crtc_state);

-	intel_display_power_get(dev_priv, dig_port->ddi_io_power_domain);
+	if (!intel_phy_is_tc(dev_priv, phy) ||
+	    dig_port->tc_mode != TC_PORT_TBT_ALT)
+		intel_display_power_get(dev_priv,
+					dig_port->ddi_io_power_domain);

 	icl_program_mg_dp_mode(dig_port);
 	icl_disable_phy_clock_gating(dig_port);

@@ -3138,11 +3186,11 @@ static void intel_ddi_pre_enable_dp(struct intel_encoder *encoder,
 	else
 		intel_prepare_dp_ddi_buffers(encoder, crtc_state);

-	if (intel_port_is_combophy(dev_priv, port)) {
+	if (intel_phy_is_combo(dev_priv, phy)) {
 		bool lane_reversal =
 			dig_port->saved_port_bits & DDI_BUF_PORT_REVERSAL;

-		intel_combo_phy_power_up_lanes(dev_priv, port, false,
+		intel_combo_phy_power_up_lanes(dev_priv, phy, false,
 					       crtc_state->lane_count,
 					       lane_reversal);
 	}

@@ -3290,6 +3338,7 @@ static void intel_ddi_post_disable_dp(struct intel_encoder *encoder,
 	struct intel_dp *intel_dp = &dig_port->dp;
 	bool is_mst = intel_crtc_has_type(old_crtc_state,
 					  INTEL_OUTPUT_DP_MST);
+	enum phy phy = intel_port_to_phy(dev_priv, encoder->port);

 	if (!is_mst) {
 		intel_ddi_disable_pipe_clock(old_crtc_state);

@@ -3305,8 +3354,10 @@ static void intel_ddi_post_disable_dp(struct intel_encoder *encoder,
 	intel_edp_panel_vdd_on(intel_dp);
 	intel_edp_panel_off(intel_dp);

-	intel_display_power_put_unchecked(dev_priv,
-					  dig_port->ddi_io_power_domain);
+	if (!intel_phy_is_tc(dev_priv, phy) ||
+	    dig_port->tc_mode != TC_PORT_TBT_ALT)
+		intel_display_power_put_unchecked(dev_priv,
+						  dig_port->ddi_io_power_domain);

 	intel_ddi_clk_disable(encoder);
 }

@@ -3591,33 +3642,28 @@ static void intel_ddi_update_pipe(struct intel_encoder *encoder,
 		intel_hdcp_disable(to_intel_connector(conn_state->connector));
 }

-static void intel_ddi_set_fia_lane_count(struct intel_encoder *encoder,
-					 const struct intel_crtc_state *pipe_config,
-					 enum port port)
+static void
+intel_ddi_update_prepare(struct intel_atomic_state *state,
+			 struct intel_encoder *encoder,
+			 struct intel_crtc *crtc)
 {
-	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
-	struct intel_digital_port *dig_port = enc_to_dig_port(&encoder->base);
-	enum tc_port tc_port = intel_port_to_tc(dev_priv, port);
-	u32 val = I915_READ(PORT_TX_DFLEXDPMLE1);
-	bool lane_reversal = dig_port->saved_port_bits & DDI_BUF_PORT_REVERSAL;
+	struct intel_crtc_state *crtc_state =
+		crtc ? intel_atomic_get_new_crtc_state(state, crtc) : NULL;
+	int required_lanes = crtc_state ? crtc_state->lane_count : 1;

-	val &= ~DFLEXDPMLE1_DPMLETC_MASK(tc_port);
-	switch (pipe_config->lane_count) {
-	case 1:
-		val |= (lane_reversal) ? DFLEXDPMLE1_DPMLETC_ML3(tc_port) :
-		DFLEXDPMLE1_DPMLETC_ML0(tc_port);
-		break;
-	case 2:
-		val |= (lane_reversal) ? DFLEXDPMLE1_DPMLETC_ML3_2(tc_port) :
-		DFLEXDPMLE1_DPMLETC_ML1_0(tc_port);
-		break;
-	case 4:
-		val |= DFLEXDPMLE1_DPMLETC_ML3_0(tc_port);
-		break;
-	default:
-		MISSING_CASE(pipe_config->lane_count);
-	}
-	I915_WRITE(PORT_TX_DFLEXDPMLE1, val);
+	WARN_ON(crtc && crtc->active);
+
+	intel_tc_port_get_link(enc_to_dig_port(&encoder->base), required_lanes);
+	if (crtc_state && crtc_state->base.active)
+		intel_update_active_dpll(state, crtc, encoder);
+}
+
+static void
+intel_ddi_update_complete(struct intel_atomic_state *state,
+			  struct intel_encoder *encoder,
+			  struct intel_crtc *crtc)
+{
+	intel_tc_port_put_link(enc_to_dig_port(&encoder->base));
 }

 static void

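intel_ddi_update_prepare()/intel_ddi_update_complete() above bracket a modeset with a get/put of the Type-C port link, in place of the old direct FIA lane-count programming. A toy sketch of that bracket pattern (illustrative names and refcount semantics, not the driver's API):

#include <stdio.h>

struct tc_port { int link_refcount; int locked_lanes; };

/* update_prepare: take a reference and pin the lane budget while held. */
static void tc_port_get_link(struct tc_port *tc, int required_lanes)
{
	if (tc->link_refcount++ == 0)
		tc->locked_lanes = required_lanes;
}

/* update_complete: drop the reference taken above. */
static void tc_port_put_link(struct tc_port *tc)
{
	--tc->link_refcount;
}

int main(void)
{
	struct tc_port tc = { 0, 0 };

	tc_port_get_link(&tc, 4);  /* update_prepare */
	/* ... commit the modeset while the link config is pinned ... */
	tc_port_put_link(&tc);     /* update_complete */
	printf("refcount back to %d\n", tc.link_refcount);
	return 0;
}
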
@@ -3627,26 +3673,25 @@ intel_ddi_pre_pll_enable(struct intel_encoder *encoder,
 {
 	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
 	struct intel_digital_port *dig_port = enc_to_dig_port(&encoder->base);
-	enum port port = encoder->port;
+	enum phy phy = intel_port_to_phy(dev_priv, encoder->port);
+	bool is_tc_port = intel_phy_is_tc(dev_priv, phy);

-	if (intel_crtc_has_dp_encoder(crtc_state) ||
-	    intel_port_is_tc(dev_priv, encoder->port))
+	if (is_tc_port)
+		intel_tc_port_get_link(dig_port, crtc_state->lane_count);
+
+	if (intel_crtc_has_dp_encoder(crtc_state) || is_tc_port)
 		intel_display_power_get(dev_priv,
 					intel_ddi_main_link_aux_domain(dig_port));

-	if (IS_GEN9_LP(dev_priv))
+	if (is_tc_port && dig_port->tc_mode != TC_PORT_TBT_ALT)
+		/*
+		 * Program the lane count for static/dynamic connections on
+		 * Type-C ports. Skip this step for TBT.
+		 */
+		intel_tc_port_set_fia_lane_count(dig_port, crtc_state->lane_count);
+	else if (IS_GEN9_LP(dev_priv))
 		bxt_ddi_phy_set_lane_optim_mask(encoder,
 						crtc_state->lane_lat_optim_mask);
-
-	/*
-	 * Program the lane count for static/dynamic connections on Type-C ports.
-	 * Skip this step for TBT.
-	 */
-	if (dig_port->tc_type == TC_PORT_UNKNOWN ||
-	    dig_port->tc_type == TC_PORT_TBT)
-		return;
-
-	intel_ddi_set_fia_lane_count(encoder, crtc_state, port);
 }

 static void

@@ -3656,11 +3701,15 @@ intel_ddi_post_pll_disable(struct intel_encoder *encoder,
 {
 	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
 	struct intel_digital_port *dig_port = enc_to_dig_port(&encoder->base);
+	enum phy phy = intel_port_to_phy(dev_priv, encoder->port);
+	bool is_tc_port = intel_phy_is_tc(dev_priv, phy);

-	if (intel_crtc_has_dp_encoder(crtc_state) ||
-	    intel_port_is_tc(dev_priv, encoder->port))
+	if (intel_crtc_has_dp_encoder(crtc_state) || is_tc_port)
 		intel_display_power_put_unchecked(dev_priv,
 						  intel_ddi_main_link_aux_domain(dig_port));

+	if (is_tc_port)
+		intel_tc_port_put_link(dig_port);
 }

 static void intel_ddi_prepare_link_retrain(struct intel_dp *intel_dp)

@@ -3737,7 +3786,6 @@ void intel_ddi_get_config(struct intel_encoder *encoder,
 	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
 	struct intel_crtc *intel_crtc = to_intel_crtc(pipe_config->base.crtc);
 	enum transcoder cpu_transcoder = pipe_config->cpu_transcoder;
-	struct intel_digital_port *intel_dig_port;
 	u32 temp, flags = 0;

 	/* XXX: DSI transcoder paranoia */
@@ -3776,7 +3824,6 @@ void intel_ddi_get_config(struct intel_encoder *encoder,
 	switch (temp & TRANS_DDI_MODE_SELECT_MASK) {
 	case TRANS_DDI_MODE_SELECT_HDMI:
 		pipe_config->has_hdmi_sink = true;
-		intel_dig_port = enc_to_dig_port(&encoder->base);

 		pipe_config->infoframes.enable |=
 			intel_hdmi_infoframes_enabled(encoder, pipe_config);
@@ -3914,49 +3961,18 @@ static int intel_ddi_compute_config(struct intel_encoder *encoder,
 	return 0;
 }

-static void intel_ddi_encoder_suspend(struct intel_encoder *encoder)
-{
-	struct intel_digital_port *dig_port = enc_to_dig_port(&encoder->base);
-	struct drm_i915_private *i915 = to_i915(encoder->base.dev);
-
-	intel_dp_encoder_suspend(encoder);
-
-	/*
-	 * TODO: disconnect also from USB DP alternate mode once we have a
-	 * way to handle the modeset restore in that mode during resume
-	 * even if the sink has disappeared while being suspended.
-	 */
-	if (dig_port->tc_legacy_port)
-		icl_tc_phy_disconnect(i915, dig_port);
-}
-
-static void intel_ddi_encoder_reset(struct drm_encoder *drm_encoder)
-{
-	struct intel_digital_port *dig_port = enc_to_dig_port(drm_encoder);
-	struct drm_i915_private *i915 = to_i915(drm_encoder->dev);
-
-	if (intel_port_is_tc(i915, dig_port->base.port))
-		intel_digital_port_connected(&dig_port->base);
-
-	intel_dp_encoder_reset(drm_encoder);
-}
-
 static void intel_ddi_encoder_destroy(struct drm_encoder *encoder)
 {
 	struct intel_digital_port *dig_port = enc_to_dig_port(encoder);
 	struct drm_i915_private *i915 = to_i915(encoder->dev);

 	intel_dp_encoder_flush_work(encoder);

-	if (intel_port_is_tc(i915, dig_port->base.port))
-		icl_tc_phy_disconnect(i915, dig_port);
-
 	drm_encoder_cleanup(encoder);
 	kfree(dig_port);
 }

 static const struct drm_encoder_funcs intel_ddi_funcs = {
-	.reset = intel_ddi_encoder_reset,
+	.reset = intel_dp_encoder_reset,
 	.destroy = intel_ddi_encoder_destroy,
 };

@@ -4081,14 +4097,17 @@ static int intel_hdmi_reset_link(struct intel_encoder *encoder,
 	return modeset_pipe(&crtc->base, ctx);
 }

-static bool intel_ddi_hotplug(struct intel_encoder *encoder,
-			      struct intel_connector *connector)
+static enum intel_hotplug_state
+intel_ddi_hotplug(struct intel_encoder *encoder,
+		  struct intel_connector *connector,
+		  bool irq_received)
 {
+	struct intel_digital_port *dig_port = enc_to_dig_port(&encoder->base);
 	struct drm_modeset_acquire_ctx ctx;
-	bool changed;
+	enum intel_hotplug_state state;
 	int ret;

-	changed = intel_encoder_hotplug(encoder, connector);
+	state = intel_encoder_hotplug(encoder, connector, irq_received);

 	drm_modeset_acquire_init(&ctx, 0);

@@ -4110,7 +4129,27 @@ static bool intel_ddi_hotplug(struct intel_encoder *encoder,
 	drm_modeset_acquire_fini(&ctx);
 	WARN(ret, "Acquiring modeset locks failed with %i\n", ret);

-	return changed;
+	/*
+	 * Unpowered type-c dongles can take some time to boot and be
+	 * responsive, so here giving some time to those dongles to power up
+	 * and then retrying the probe.
+	 *
+	 * On many platforms the HDMI live state signal is known to be
+	 * unreliable, so we can't use it to detect if a sink is connected or
+	 * not. Instead we detect if it's connected based on whether we can
+	 * read the EDID or not. That in turn has a problem during disconnect,
+	 * since the HPD interrupt may be raised before the DDC lines get
+	 * disconnected (due to how the required length of DDC vs. HPD
+	 * connector pins are specified) and so we'll still be able to get a
+	 * valid EDID. To solve this schedule another detection cycle if this
+	 * time around we didn't detect any change in the sink's connection
+	 * status.
+	 */
+	if (state == INTEL_HOTPLUG_UNCHANGED && irq_received &&
+	    !dig_port->dp.is_mst)
+		state = INTEL_HOTPLUG_RETRY;
+
+	return state;
 }

 static struct intel_connector *

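The retry policy added above fits in a few lines; a standalone model (the enum names mirror the diff, everything else is illustrative):

#include <stdbool.h>
#include <stdio.h>

enum hotplug_state { HOTPLUG_UNCHANGED, HOTPLUG_CHANGED, HOTPLUG_RETRY };

/* Mirrors the tail of intel_ddi_hotplug(): if an HPD interrupt arrived but
 * the probe saw no change (slow dongle, or EDID still readable during
 * disconnect), ask for another detection cycle instead of trusting the
 * first probe.  MST is excluded, as in the diff. */
static enum hotplug_state hotplug(enum hotplug_state probed,
				  bool irq_received, bool is_mst)
{
	if (probed == HOTPLUG_UNCHANGED && irq_received && !is_mst)
		return HOTPLUG_RETRY;
	return probed;
}

int main(void)
{
	printf("%d\n", hotplug(HOTPLUG_UNCHANGED, true, false)); /* retry */
	printf("%d\n", hotplug(HOTPLUG_CHANGED, true, false));   /* done */
	return 0;
}
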
@@ -4198,6 +4237,7 @@ void intel_ddi_init(struct drm_i915_private *dev_priv, enum port port)
 	struct drm_encoder *encoder;
 	bool init_hdmi, init_dp, init_lspcon = false;
 	enum pipe pipe;
+	enum phy phy = intel_port_to_phy(dev_priv, port);

 	init_hdmi = port_info->supports_dvi || port_info->supports_hdmi;
 	init_dp = port_info->supports_dp;
@@ -4242,7 +4282,7 @@ void intel_ddi_init(struct drm_i915_private *dev_priv, enum port port)
 	intel_encoder->update_pipe = intel_ddi_update_pipe;
 	intel_encoder->get_hw_state = intel_ddi_get_hw_state;
 	intel_encoder->get_config = intel_ddi_get_config;
-	intel_encoder->suspend = intel_ddi_encoder_suspend;
+	intel_encoder->suspend = intel_dp_encoder_suspend;
 	intel_encoder->get_power_domains = intel_ddi_get_power_domains;
 	intel_encoder->type = INTEL_OUTPUT_DDI;
 	intel_encoder->power_domain = intel_port_to_power_domain(port);
@@ -4261,9 +4301,15 @@ void intel_ddi_init(struct drm_i915_private *dev_priv, enum port port)
 	intel_dig_port->max_lanes = intel_ddi_max_lanes(intel_dig_port);
 	intel_dig_port->aux_ch = intel_bios_port_aux_ch(dev_priv, port);

-	intel_dig_port->tc_legacy_port = intel_port_is_tc(dev_priv, port) &&
-					 !port_info->supports_typec_usb &&
-					 !port_info->supports_tbt;
+	if (intel_phy_is_tc(dev_priv, phy)) {
+		bool is_legacy = !port_info->supports_typec_usb &&
+				 !port_info->supports_tbt;
+
+		intel_tc_port_init(intel_dig_port, is_legacy);
+
+		intel_encoder->update_prepare = intel_ddi_update_prepare;
+		intel_encoder->update_complete = intel_ddi_update_complete;
+	}

 	switch (port) {
 	case PORT_A:
@@ -4290,6 +4336,18 @@ void intel_ddi_init(struct drm_i915_private *dev_priv, enum port port)
 		intel_dig_port->ddi_io_power_domain =
 			POWER_DOMAIN_PORT_DDI_F_IO;
 		break;
+	case PORT_G:
+		intel_dig_port->ddi_io_power_domain =
+			POWER_DOMAIN_PORT_DDI_G_IO;
+		break;
+	case PORT_H:
+		intel_dig_port->ddi_io_power_domain =
+			POWER_DOMAIN_PORT_DDI_H_IO;
+		break;
+	case PORT_I:
+		intel_dig_port->ddi_io_power_domain =
+			POWER_DOMAIN_PORT_DDI_I_IO;
+		break;
 	default:
 		MISSING_CASE(port);
 	}
@@ -4324,9 +4382,6 @@ void intel_ddi_init(struct drm_i915_private *dev_priv, enum port port)

 	intel_infoframe_init(intel_dig_port);

-	if (intel_port_is_tc(dev_priv, port))
-		intel_digital_port_connected(intel_encoder);
-
 	return;

 err:

(File diff suppressed because it is too large.)

@@ -45,6 +45,8 @@ enum i915_gpio {
 	GPIOK,
 	GPIOL,
 	GPIOM,
+	GPION,
+	GPIOO,
 };

 /*

@@ -58,6 +60,7 @@ enum pipe {
 	PIPE_A = 0,
 	PIPE_B,
 	PIPE_C,
+	PIPE_D,
 	_PIPE_EDP,

 	I915_MAX_PIPES = _PIPE_EDP
@@ -75,6 +78,7 @@ enum transcoder {
 	TRANSCODER_A = PIPE_A,
 	TRANSCODER_B = PIPE_B,
 	TRANSCODER_C = PIPE_C,
+	TRANSCODER_D = PIPE_D,

 	/*
 	 * The following transcoders can map to any pipe, their enum value
@@ -98,6 +102,8 @@ static inline const char *transcoder_name(enum transcoder transcoder)
 		return "B";
 	case TRANSCODER_C:
 		return "C";
+	case TRANSCODER_D:
+		return "D";
 	case TRANSCODER_EDP:
 		return "EDP";
 	case TRANSCODER_DSI_A:
@@ -173,6 +179,12 @@ static inline const char *port_identifier(enum port port)
 		return "Port E";
 	case PORT_F:
 		return "Port F";
+	case PORT_G:
+		return "Port G";
+	case PORT_H:
+		return "Port H";
+	case PORT_I:
+		return "Port I";
 	default:
 		return "<invalid>";
 	}
@@ -185,14 +197,15 @@ enum tc_port {
 	PORT_TC2,
 	PORT_TC3,
 	PORT_TC4,
+	PORT_TC5,
+	PORT_TC6,

 	I915_MAX_TC_PORTS
 };

-enum tc_port_type {
-	TC_PORT_UNKNOWN = 0,
-	TC_PORT_TYPEC,
-	TC_PORT_TBT,
+enum tc_port_mode {
+	TC_PORT_TBT_ALT,
+	TC_PORT_DP_ALT,
 	TC_PORT_LEGACY,
 };

@@ -229,6 +242,30 @@ struct intel_link_m_n {
 	u32 link_n;
 };

+enum phy {
+	PHY_NONE = -1,
+
+	PHY_A = 0,
+	PHY_B,
+	PHY_C,
+	PHY_D,
+	PHY_E,
+	PHY_F,
+	PHY_G,
+	PHY_H,
+	PHY_I,
+
+	I915_MAX_PHYS
+};
+
+#define phy_name(a) ((a) + 'A')
+
+enum phy_fia {
+	FIA1,
+	FIA2,
+	FIA3,
+};
+
 #define for_each_pipe(__dev_priv, __p) \
 	for ((__p) = 0; (__p) < INTEL_INFO(__dev_priv)->num_pipes; (__p)++)

|
|||
for ((__port) = PORT_A; (__port) < I915_MAX_PORTS; (__port)++) \
|
||||
for_each_if((__ports_mask) & BIT(__port))
|
||||
|
||||
#define for_each_phy_masked(__phy, __phys_mask) \
|
||||
for ((__phy) = PHY_A; (__phy) < I915_MAX_PHYS; (__phy)++) \
|
||||
for_each_if((__phys_mask) & BIT(__phy))
|
||||
|
||||
#define for_each_crtc(dev, crtc) \
|
||||
list_for_each_entry(crtc, &(dev)->mode_config.crtc_list, head)
|
||||
|
||||
|
@@ -357,5 +398,6 @@ void lpt_disable_clkout_dp(struct drm_i915_private *dev_priv);
 u32 intel_plane_fb_max_stride(struct drm_i915_private *dev_priv,
 			      u32 pixel_format, u64 modifier);
 bool intel_plane_can_remap(const struct intel_plane_state *plane_state);
+enum phy intel_port_to_phy(struct drm_i915_private *i915, enum port port);

 #endif

@@ -17,13 +17,17 @@
 #include "intel_drv.h"
 #include "intel_hotplug.h"
 #include "intel_sideband.h"
+#include "intel_tc.h"

 bool intel_display_power_well_is_enabled(struct drm_i915_private *dev_priv,
 					 enum i915_power_well_id power_well_id);

 const char *
-intel_display_power_domain_str(enum intel_display_power_domain domain)
+intel_display_power_domain_str(struct drm_i915_private *i915,
+			       enum intel_display_power_domain domain)
 {
+	bool ddi_tc_ports = IS_GEN(i915, 12);
+
 	switch (domain) {
 	case POWER_DOMAIN_DISPLAY_CORE:
 		return "DISPLAY_CORE";
@@ -33,22 +37,28 @@ intel_display_power_domain_str(enum intel_display_power_domain domain)
 		return "PIPE_B";
 	case POWER_DOMAIN_PIPE_C:
 		return "PIPE_C";
+	case POWER_DOMAIN_PIPE_D:
+		return "PIPE_D";
 	case POWER_DOMAIN_PIPE_A_PANEL_FITTER:
 		return "PIPE_A_PANEL_FITTER";
 	case POWER_DOMAIN_PIPE_B_PANEL_FITTER:
 		return "PIPE_B_PANEL_FITTER";
 	case POWER_DOMAIN_PIPE_C_PANEL_FITTER:
 		return "PIPE_C_PANEL_FITTER";
+	case POWER_DOMAIN_PIPE_D_PANEL_FITTER:
+		return "PIPE_D_PANEL_FITTER";
 	case POWER_DOMAIN_TRANSCODER_A:
 		return "TRANSCODER_A";
 	case POWER_DOMAIN_TRANSCODER_B:
 		return "TRANSCODER_B";
 	case POWER_DOMAIN_TRANSCODER_C:
 		return "TRANSCODER_C";
+	case POWER_DOMAIN_TRANSCODER_D:
+		return "TRANSCODER_D";
 	case POWER_DOMAIN_TRANSCODER_EDP:
 		return "TRANSCODER_EDP";
-	case POWER_DOMAIN_TRANSCODER_EDP_VDSC:
-		return "TRANSCODER_EDP_VDSC";
+	case POWER_DOMAIN_TRANSCODER_VDSC_PW2:
+		return "TRANSCODER_VDSC_PW2";
 	case POWER_DOMAIN_TRANSCODER_DSI_A:
 		return "TRANSCODER_DSI_A";
 	case POWER_DOMAIN_TRANSCODER_DSI_C:
@@ -60,11 +70,23 @@ intel_display_power_domain_str(enum intel_display_power_domain domain)
 	case POWER_DOMAIN_PORT_DDI_C_LANES:
 		return "PORT_DDI_C_LANES";
 	case POWER_DOMAIN_PORT_DDI_D_LANES:
-		return "PORT_DDI_D_LANES";
+		BUILD_BUG_ON(POWER_DOMAIN_PORT_DDI_D_LANES !=
+			     POWER_DOMAIN_PORT_DDI_TC1_LANES);
+		return ddi_tc_ports ? "PORT_DDI_TC1_LANES" : "PORT_DDI_D_LANES";
 	case POWER_DOMAIN_PORT_DDI_E_LANES:
-		return "PORT_DDI_E_LANES";
+		BUILD_BUG_ON(POWER_DOMAIN_PORT_DDI_E_LANES !=
+			     POWER_DOMAIN_PORT_DDI_TC2_LANES);
+		return ddi_tc_ports ? "PORT_DDI_TC2_LANES" : "PORT_DDI_E_LANES";
 	case POWER_DOMAIN_PORT_DDI_F_LANES:
-		return "PORT_DDI_F_LANES";
+		BUILD_BUG_ON(POWER_DOMAIN_PORT_DDI_F_LANES !=
+			     POWER_DOMAIN_PORT_DDI_TC3_LANES);
+		return ddi_tc_ports ? "PORT_DDI_TC3_LANES" : "PORT_DDI_F_LANES";
+	case POWER_DOMAIN_PORT_DDI_TC4_LANES:
+		return "PORT_DDI_TC4_LANES";
+	case POWER_DOMAIN_PORT_DDI_TC5_LANES:
+		return "PORT_DDI_TC5_LANES";
+	case POWER_DOMAIN_PORT_DDI_TC6_LANES:
+		return "PORT_DDI_TC6_LANES";
 	case POWER_DOMAIN_PORT_DDI_A_IO:
 		return "PORT_DDI_A_IO";
 	case POWER_DOMAIN_PORT_DDI_B_IO:
@@ -72,11 +94,23 @@ intel_display_power_domain_str(enum intel_display_power_domain domain)
 	case POWER_DOMAIN_PORT_DDI_C_IO:
 		return "PORT_DDI_C_IO";
 	case POWER_DOMAIN_PORT_DDI_D_IO:
-		return "PORT_DDI_D_IO";
+		BUILD_BUG_ON(POWER_DOMAIN_PORT_DDI_D_IO !=
+			     POWER_DOMAIN_PORT_DDI_TC1_IO);
+		return ddi_tc_ports ? "PORT_DDI_TC1_IO" : "PORT_DDI_D_IO";
 	case POWER_DOMAIN_PORT_DDI_E_IO:
-		return "PORT_DDI_E_IO";
+		BUILD_BUG_ON(POWER_DOMAIN_PORT_DDI_E_IO !=
+			     POWER_DOMAIN_PORT_DDI_TC2_IO);
+		return ddi_tc_ports ? "PORT_DDI_TC2_IO" : "PORT_DDI_E_IO";
 	case POWER_DOMAIN_PORT_DDI_F_IO:
-		return "PORT_DDI_F_IO";
+		BUILD_BUG_ON(POWER_DOMAIN_PORT_DDI_F_IO !=
+			     POWER_DOMAIN_PORT_DDI_TC3_IO);
+		return ddi_tc_ports ? "PORT_DDI_TC3_IO" : "PORT_DDI_F_IO";
+	case POWER_DOMAIN_PORT_DDI_TC4_IO:
+		return "PORT_DDI_TC4_IO";
+	case POWER_DOMAIN_PORT_DDI_TC5_IO:
+		return "PORT_DDI_TC5_IO";
+	case POWER_DOMAIN_PORT_DDI_TC6_IO:
+		return "PORT_DDI_TC6_IO";
 	case POWER_DOMAIN_PORT_DSI:
 		return "PORT_DSI";
 	case POWER_DOMAIN_PORT_CRT:
@@ -94,11 +128,20 @@ intel_display_power_domain_str(enum intel_display_power_domain domain)
 	case POWER_DOMAIN_AUX_C:
 		return "AUX_C";
 	case POWER_DOMAIN_AUX_D:
-		return "AUX_D";
+		BUILD_BUG_ON(POWER_DOMAIN_AUX_D != POWER_DOMAIN_AUX_TC1);
+		return ddi_tc_ports ? "AUX_TC1" : "AUX_D";
 	case POWER_DOMAIN_AUX_E:
-		return "AUX_E";
+		BUILD_BUG_ON(POWER_DOMAIN_AUX_E != POWER_DOMAIN_AUX_TC2);
+		return ddi_tc_ports ? "AUX_TC2" : "AUX_E";
 	case POWER_DOMAIN_AUX_F:
-		return "AUX_F";
+		BUILD_BUG_ON(POWER_DOMAIN_AUX_F != POWER_DOMAIN_AUX_TC3);
+		return ddi_tc_ports ? "AUX_TC3" : "AUX_F";
+	case POWER_DOMAIN_AUX_TC4:
+		return "AUX_TC4";
+	case POWER_DOMAIN_AUX_TC5:
+		return "AUX_TC5";
+	case POWER_DOMAIN_AUX_TC6:
+		return "AUX_TC6";
 	case POWER_DOMAIN_AUX_IO_A:
 		return "AUX_IO_A";
 	case POWER_DOMAIN_AUX_TBT1:
@@ -109,6 +152,10 @@ intel_display_power_domain_str(enum intel_display_power_domain domain)
 		return "AUX_TBT3";
 	case POWER_DOMAIN_AUX_TBT4:
 		return "AUX_TBT4";
+	case POWER_DOMAIN_AUX_TBT5:
+		return "AUX_TBT5";
+	case POWER_DOMAIN_AUX_TBT6:
+		return "AUX_TBT6";
 	case POWER_DOMAIN_GMBUS:
 		return "GMBUS";
 	case POWER_DOMAIN_INIT:
@@ -117,6 +164,8 @@ intel_display_power_domain_str(enum intel_display_power_domain domain)
 		return "MODESET";
 	case POWER_DOMAIN_GT_IRQ:
 		return "GT_IRQ";
+	case POWER_DOMAIN_DPLL_DC_OFF:
+		return "DPLL_DC_OFF";
 	default:
 		MISSING_CASE(domain);
 		return "?";
@@ -269,11 +318,17 @@ static void hsw_wait_for_power_well_enable(struct drm_i915_private *dev_priv,
 	int pw_idx = power_well->desc->hsw.idx;

 	/* Timeout for PW1:10 us, AUX:not specified, other PWs:20 us. */
-	WARN_ON(intel_wait_for_register(&dev_priv->uncore,
-					regs->driver,
-					HSW_PWR_WELL_CTL_STATE(pw_idx),
-					HSW_PWR_WELL_CTL_STATE(pw_idx),
-					1));
+	if (intel_wait_for_register(&dev_priv->uncore,
+				    regs->driver,
+				    HSW_PWR_WELL_CTL_STATE(pw_idx),
+				    HSW_PWR_WELL_CTL_STATE(pw_idx),
+				    1)) {
+		DRM_DEBUG_KMS("%s power well enable timeout\n",
+			      power_well->desc->name);
+
+		/* An AUX timeout is expected if the TBT DP tunnel is down. */
+		WARN_ON(!power_well->desc->hsw.is_tc_tbt);
+	}
 }

 static u32 hsw_power_well_requesters(struct drm_i915_private *dev_priv,
@@ -388,7 +443,7 @@ static void hsw_power_well_disable(struct drm_i915_private *dev_priv,
 	hsw_wait_for_power_well_disable(dev_priv, power_well);
 }

-#define ICL_AUX_PW_TO_PORT(pw_idx)	((pw_idx) - ICL_PW_CTL_IDX_AUX_A)
+#define ICL_AUX_PW_TO_PHY(pw_idx)	((pw_idx) - ICL_PW_CTL_IDX_AUX_A)

 static void
 icl_combo_phy_aux_power_well_enable(struct drm_i915_private *dev_priv,
@@ -396,21 +451,29 @@ icl_combo_phy_aux_power_well_enable(struct drm_i915_private *dev_priv,
 {
 	const struct i915_power_well_regs *regs = power_well->desc->hsw.regs;
 	int pw_idx = power_well->desc->hsw.idx;
-	enum port port = ICL_AUX_PW_TO_PORT(pw_idx);
+	enum phy phy = ICL_AUX_PW_TO_PHY(pw_idx);
 	u32 val;
+	int wa_idx_max;

 	val = I915_READ(regs->driver);
 	I915_WRITE(regs->driver, val | HSW_PWR_WELL_CTL_REQ(pw_idx));

-	val = I915_READ(ICL_PORT_CL_DW12(port));
-	I915_WRITE(ICL_PORT_CL_DW12(port), val | ICL_LANE_ENABLE_AUX);
+	if (INTEL_GEN(dev_priv) < 12) {
+		val = I915_READ(ICL_PORT_CL_DW12(phy));
+		I915_WRITE(ICL_PORT_CL_DW12(phy), val | ICL_LANE_ENABLE_AUX);
+	}

 	hsw_wait_for_power_well_enable(dev_priv, power_well);

-	/* Display WA #1178: icl */
-	if (IS_ICELAKE(dev_priv) &&
-	    pw_idx >= ICL_PW_CTL_IDX_AUX_A && pw_idx <= ICL_PW_CTL_IDX_AUX_B &&
-	    !intel_bios_is_port_edp(dev_priv, port)) {
+	/* Display WA #1178: icl, tgl */
+	if (IS_TIGERLAKE(dev_priv))
+		wa_idx_max = ICL_PW_CTL_IDX_AUX_C;
+	else
+		wa_idx_max = ICL_PW_CTL_IDX_AUX_B;
+
+	if (!IS_ELKHARTLAKE(dev_priv) &&
+	    pw_idx >= ICL_PW_CTL_IDX_AUX_A && pw_idx <= wa_idx_max &&
+	    !intel_bios_is_port_edp(dev_priv, (enum port)phy)) {
 		val = I915_READ(ICL_AUX_ANAOVRD1(pw_idx));
 		val |= ICL_AUX_ANAOVRD1_ENABLE | ICL_AUX_ANAOVRD1_LDO_BYPASS;
 		I915_WRITE(ICL_AUX_ANAOVRD1(pw_idx), val);
@@ -423,11 +486,13 @@ icl_combo_phy_aux_power_well_disable(struct drm_i915_private *dev_priv,
 {
 	const struct i915_power_well_regs *regs = power_well->desc->hsw.regs;
 	int pw_idx = power_well->desc->hsw.idx;
-	enum port port = ICL_AUX_PW_TO_PORT(pw_idx);
+	enum phy phy = ICL_AUX_PW_TO_PHY(pw_idx);
 	u32 val;

-	val = I915_READ(ICL_PORT_CL_DW12(port));
-	I915_WRITE(ICL_PORT_CL_DW12(port), val & ~ICL_LANE_ENABLE_AUX);
+	if (INTEL_GEN(dev_priv) < 12) {
+		val = I915_READ(ICL_PORT_CL_DW12(phy));
+		I915_WRITE(ICL_PORT_CL_DW12(phy), val & ~ICL_LANE_ENABLE_AUX);
+	}

 	val = I915_READ(regs->driver);
 	I915_WRITE(regs->driver, val & ~HSW_PWR_WELL_CTL_REQ(pw_idx));
@ -441,26 +506,108 @@ icl_combo_phy_aux_power_well_disable(struct drm_i915_private *dev_priv,
|
|||
#define ICL_TBT_AUX_PW_TO_CH(pw_idx) \
|
||||
((pw_idx) - ICL_PW_CTL_IDX_AUX_TBT1 + AUX_CH_C)
|
||||
|
||||
static enum aux_ch icl_tc_phy_aux_ch(struct drm_i915_private *dev_priv,
|
||||
struct i915_power_well *power_well)
|
||||
{
|
||||
int pw_idx = power_well->desc->hsw.idx;
|
||||
|
||||
return power_well->desc->hsw.is_tc_tbt ? ICL_TBT_AUX_PW_TO_CH(pw_idx) :
|
||||
ICL_AUX_PW_TO_CH(pw_idx);
|
||||
}
|
||||
|
||||
#if IS_ENABLED(CONFIG_DRM_I915_DEBUG_RUNTIME_PM)
|
||||
|
||||
static u64 async_put_domains_mask(struct i915_power_domains *power_domains);
|
||||
|
||||
static int power_well_async_ref_count(struct drm_i915_private *dev_priv,
|
||||
struct i915_power_well *power_well)
|
||||
{
|
||||
int refs = hweight64(power_well->desc->domains &
|
||||
async_put_domains_mask(&dev_priv->power_domains));
|
||||
|
||||
WARN_ON(refs > power_well->count);
|
||||
|
||||
return refs;
|
||||
}
|
||||
|
||||
static void icl_tc_port_assert_ref_held(struct drm_i915_private *dev_priv,
|
||||
struct i915_power_well *power_well)
|
||||
{
|
||||
enum aux_ch aux_ch = icl_tc_phy_aux_ch(dev_priv, power_well);
|
||||
struct intel_digital_port *dig_port = NULL;
|
||||
struct intel_encoder *encoder;
|
||||
|
||||
/* Bypass the check if all references are released asynchronously */
|
||||
if (power_well_async_ref_count(dev_priv, power_well) ==
|
||||
power_well->count)
|
||||
return;
|
||||
|
||||
aux_ch = icl_tc_phy_aux_ch(dev_priv, power_well);
|
||||
|
||||
for_each_intel_encoder(&dev_priv->drm, encoder) {
|
||||
enum phy phy = intel_port_to_phy(dev_priv, encoder->port);
|
||||
|
||||
if (!intel_phy_is_tc(dev_priv, phy))
|
||||
continue;
|
||||
|
||||
/* We'll check the MST primary port */
|
||||
if (encoder->type == INTEL_OUTPUT_DP_MST)
|
||||
continue;
|
||||
|
||||
dig_port = enc_to_dig_port(&encoder->base);
|
||||
if (WARN_ON(!dig_port))
|
||||
continue;
|
||||
|
||||
if (dig_port->aux_ch != aux_ch) {
|
||||
dig_port = NULL;
|
||||
continue;
|
||||
}
|
||||
|
||||
break;
|
||||
}
|
||||
|
||||
if (WARN_ON(!dig_port))
|
||||
return;
|
||||
|
||||
WARN_ON(!intel_tc_port_ref_held(dig_port));
|
||||
}
|
||||
|
||||
#else
|
||||
|
||||
static void icl_tc_port_assert_ref_held(struct drm_i915_private *dev_priv,
|
||||
struct i915_power_well *power_well)
|
||||
{
|
||||
}
|
||||
|
||||
#endif
|
||||
|
||||
static void
|
||||
icl_tc_phy_aux_power_well_enable(struct drm_i915_private *dev_priv,
|
||||
struct i915_power_well *power_well)
|
||||
{
|
||||
int pw_idx = power_well->desc->hsw.idx;
|
||||
bool is_tbt = power_well->desc->hsw.is_tc_tbt;
|
||||
enum aux_ch aux_ch;
|
||||
enum aux_ch aux_ch = icl_tc_phy_aux_ch(dev_priv, power_well);
|
||||
u32 val;
|
||||
|
||||
aux_ch = is_tbt ? ICL_TBT_AUX_PW_TO_CH(pw_idx) :
|
||||
ICL_AUX_PW_TO_CH(pw_idx);
|
||||
icl_tc_port_assert_ref_held(dev_priv, power_well);
|
||||
|
||||
val = I915_READ(DP_AUX_CH_CTL(aux_ch));
|
||||
val &= ~DP_AUX_CH_CTL_TBT_IO;
|
||||
if (is_tbt)
|
||||
if (power_well->desc->hsw.is_tc_tbt)
|
||||
val |= DP_AUX_CH_CTL_TBT_IO;
|
||||
I915_WRITE(DP_AUX_CH_CTL(aux_ch), val);
|
||||
|
||||
hsw_power_well_enable(dev_priv, power_well);
|
||||
}
|
||||
|
||||
static void
|
||||
icl_tc_phy_aux_power_well_disable(struct drm_i915_private *dev_priv,
|
||||
struct i915_power_well *power_well)
|
||||
{
|
||||
icl_tc_port_assert_ref_held(dev_priv, power_well);
|
||||
|
||||
hsw_power_well_disable(dev_priv, power_well);
|
||||
}
|
||||
|
||||
/*
|
||||
* We should only use the power well if we explicitly asked the hardware to
|
||||
* enable it, so check if it's enabled and also check if we've requested it to
|
||||
|
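The TBT and legacy AUX channel lookups above are plain linear offsets from the first power-well index of each group. A minimal stand-alone sketch of the same mapping, with the `ICL_PW_CTL_IDX_*` base values stubbed to illustrative numbers (the real values live in i915_reg.h and may differ):

```c
#include <stdio.h>

/* Illustrative stand-ins; the real constants are defined in i915_reg.h. */
enum aux_ch { AUX_CH_A, AUX_CH_B, AUX_CH_C, AUX_CH_D, AUX_CH_E, AUX_CH_F };
#define ICL_PW_CTL_IDX_AUX_A	0	/* assumed base of the legacy AUX wells */
#define ICL_PW_CTL_IDX_AUX_TBT1	9	/* assumed base of the TBT AUX wells */

/* Mirrors the shape of ICL_TBT_AUX_PW_TO_CH()/ICL_AUX_PW_TO_CH(): each
 * group maps its power-well index to a channel by a single subtraction
 * and re-basing. */
static enum aux_ch aux_pw_to_ch(int pw_idx, int is_tc_tbt)
{
	if (is_tc_tbt)
		return pw_idx - ICL_PW_CTL_IDX_AUX_TBT1 + AUX_CH_C;
	return pw_idx - ICL_PW_CTL_IDX_AUX_A + AUX_CH_A;
}

int main(void)
{
	/* TBT AUX well #2 lands on AUX channel D (C + 1): prints 3. */
	printf("%d\n", aux_pw_to_ch(ICL_PW_CTL_IDX_AUX_TBT1 + 1, 1));
	return 0;
}
```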
@@ -1071,7 +1218,7 @@ static void vlv_display_power_well_deinit(struct drm_i915_private *dev_priv)
	spin_unlock_irq(&dev_priv->irq_lock);

	/* make sure we're done processing display irqs */
-	synchronize_irq(dev_priv->drm.irq);
+	intel_synchronize_irq(dev_priv);

	intel_power_sequencer_reset(dev_priv);

@@ -1575,12 +1722,15 @@ __async_put_domains_state_ok(struct i915_power_domains *power_domains)
static void print_power_domains(struct i915_power_domains *power_domains,
				const char *prefix, u64 mask)
{
+	struct drm_i915_private *i915 =
+		container_of(power_domains, struct drm_i915_private,
+			     power_domains);
	enum intel_display_power_domain domain;

	DRM_DEBUG_DRIVER("%s (%lu):\n", prefix, hweight64(mask));
	for_each_power_domain(domain, mask)
		DRM_DEBUG_DRIVER("%s use_count %d\n",
-				 intel_display_power_domain_str(domain),
+				 intel_display_power_domain_str(i915, domain),
				 power_domains->domain_use_count[domain]);
}

@@ -1750,7 +1900,7 @@ __intel_display_power_put_domain(struct drm_i915_private *dev_priv,
{
	struct i915_power_domains *power_domains;
	struct i915_power_well *power_well;
-	const char *name = intel_display_power_domain_str(domain);
+	const char *name = intel_display_power_domain_str(dev_priv, domain);

	power_domains = &dev_priv->power_domains;

@@ -2359,7 +2509,7 @@ void intel_display_power_put(struct drm_i915_private *dev_priv,
 */
#define ICL_PW_2_POWER_DOMAINS ( \
	ICL_PW_3_POWER_DOMAINS | \
-	BIT_ULL(POWER_DOMAIN_TRANSCODER_EDP_VDSC) | \
+	BIT_ULL(POWER_DOMAIN_TRANSCODER_VDSC_PW2) | \
	BIT_ULL(POWER_DOMAIN_INIT))
/*
 * - KVMR (HW control)

@@ -2368,6 +2518,7 @@ void intel_display_power_put(struct drm_i915_private *dev_priv,
	ICL_PW_2_POWER_DOMAINS | \
	BIT_ULL(POWER_DOMAIN_MODESET) | \
	BIT_ULL(POWER_DOMAIN_AUX_A) | \
+	BIT_ULL(POWER_DOMAIN_DPLL_DC_OFF) | \
	BIT_ULL(POWER_DOMAIN_INIT))

#define ICL_DDI_IO_A_POWER_DOMAINS ( \

@@ -2405,6 +2556,93 @@ void intel_display_power_put(struct drm_i915_private *dev_priv,
#define ICL_AUX_TBT4_IO_POWER_DOMAINS ( \
	BIT_ULL(POWER_DOMAIN_AUX_TBT4))

#define TGL_PW_5_POWER_DOMAINS ( \
	BIT_ULL(POWER_DOMAIN_PIPE_D) | \
	BIT_ULL(POWER_DOMAIN_PIPE_D_PANEL_FITTER) | \
	BIT_ULL(POWER_DOMAIN_INIT))

#define TGL_PW_4_POWER_DOMAINS ( \
	TGL_PW_5_POWER_DOMAINS | \
	BIT_ULL(POWER_DOMAIN_PIPE_C) | \
	BIT_ULL(POWER_DOMAIN_PIPE_C_PANEL_FITTER) | \
	BIT_ULL(POWER_DOMAIN_INIT))

#define TGL_PW_3_POWER_DOMAINS ( \
	TGL_PW_4_POWER_DOMAINS | \
	BIT_ULL(POWER_DOMAIN_PIPE_B) | \
	BIT_ULL(POWER_DOMAIN_TRANSCODER_B) | \
	BIT_ULL(POWER_DOMAIN_TRANSCODER_C) | \
	BIT_ULL(POWER_DOMAIN_TRANSCODER_D) | \
	BIT_ULL(POWER_DOMAIN_PIPE_B_PANEL_FITTER) | \
	BIT_ULL(POWER_DOMAIN_PORT_DDI_TC1_LANES) | \
	BIT_ULL(POWER_DOMAIN_PORT_DDI_TC1_IO) | \
	BIT_ULL(POWER_DOMAIN_PORT_DDI_TC2_LANES) | \
	BIT_ULL(POWER_DOMAIN_PORT_DDI_TC2_IO) | \
	BIT_ULL(POWER_DOMAIN_PORT_DDI_TC3_LANES) | \
	BIT_ULL(POWER_DOMAIN_PORT_DDI_TC3_IO) | \
	BIT_ULL(POWER_DOMAIN_PORT_DDI_TC4_LANES) | \
	BIT_ULL(POWER_DOMAIN_PORT_DDI_TC4_IO) | \
	BIT_ULL(POWER_DOMAIN_PORT_DDI_TC5_LANES) | \
	BIT_ULL(POWER_DOMAIN_PORT_DDI_TC5_IO) | \
	BIT_ULL(POWER_DOMAIN_PORT_DDI_TC6_LANES) | \
	BIT_ULL(POWER_DOMAIN_PORT_DDI_TC6_IO) | \
	BIT_ULL(POWER_DOMAIN_AUX_TC1) | \
	BIT_ULL(POWER_DOMAIN_AUX_TC2) | \
	BIT_ULL(POWER_DOMAIN_AUX_TC3) | \
	BIT_ULL(POWER_DOMAIN_AUX_TC4) | \
	BIT_ULL(POWER_DOMAIN_AUX_TC5) | \
	BIT_ULL(POWER_DOMAIN_AUX_TC6) | \
	BIT_ULL(POWER_DOMAIN_AUX_TBT1) | \
	BIT_ULL(POWER_DOMAIN_AUX_TBT2) | \
	BIT_ULL(POWER_DOMAIN_AUX_TBT3) | \
	BIT_ULL(POWER_DOMAIN_AUX_TBT4) | \
	BIT_ULL(POWER_DOMAIN_AUX_TBT5) | \
	BIT_ULL(POWER_DOMAIN_AUX_TBT6) | \
	BIT_ULL(POWER_DOMAIN_VGA) | \
	BIT_ULL(POWER_DOMAIN_AUDIO) | \
	BIT_ULL(POWER_DOMAIN_INIT))

#define TGL_PW_2_POWER_DOMAINS ( \
	TGL_PW_3_POWER_DOMAINS | \
	BIT_ULL(POWER_DOMAIN_TRANSCODER_VDSC_PW2) | \
	BIT_ULL(POWER_DOMAIN_INIT))

#define TGL_DISPLAY_DC_OFF_POWER_DOMAINS ( \
	TGL_PW_2_POWER_DOMAINS | \
	BIT_ULL(POWER_DOMAIN_MODESET) | \
	BIT_ULL(POWER_DOMAIN_AUX_A) | \
	BIT_ULL(POWER_DOMAIN_INIT))

#define TGL_DDI_IO_TC1_POWER_DOMAINS ( \
	BIT_ULL(POWER_DOMAIN_PORT_DDI_TC1_IO))
#define TGL_DDI_IO_TC2_POWER_DOMAINS ( \
	BIT_ULL(POWER_DOMAIN_PORT_DDI_TC2_IO))
#define TGL_DDI_IO_TC3_POWER_DOMAINS ( \
	BIT_ULL(POWER_DOMAIN_PORT_DDI_TC3_IO))
#define TGL_DDI_IO_TC4_POWER_DOMAINS ( \
	BIT_ULL(POWER_DOMAIN_PORT_DDI_TC4_IO))
#define TGL_DDI_IO_TC5_POWER_DOMAINS ( \
	BIT_ULL(POWER_DOMAIN_PORT_DDI_TC5_IO))
#define TGL_DDI_IO_TC6_POWER_DOMAINS ( \
	BIT_ULL(POWER_DOMAIN_PORT_DDI_TC6_IO))

#define TGL_AUX_TC1_IO_POWER_DOMAINS ( \
	BIT_ULL(POWER_DOMAIN_AUX_TC1))
#define TGL_AUX_TC2_IO_POWER_DOMAINS ( \
	BIT_ULL(POWER_DOMAIN_AUX_TC2))
#define TGL_AUX_TC3_IO_POWER_DOMAINS ( \
	BIT_ULL(POWER_DOMAIN_AUX_TC3))
#define TGL_AUX_TC4_IO_POWER_DOMAINS ( \
	BIT_ULL(POWER_DOMAIN_AUX_TC4))
#define TGL_AUX_TC5_IO_POWER_DOMAINS ( \
	BIT_ULL(POWER_DOMAIN_AUX_TC5))
#define TGL_AUX_TC6_IO_POWER_DOMAINS ( \
	BIT_ULL(POWER_DOMAIN_AUX_TC6))
#define TGL_AUX_TBT5_IO_POWER_DOMAINS ( \
	BIT_ULL(POWER_DOMAIN_AUX_TBT5))
#define TGL_AUX_TBT6_IO_POWER_DOMAINS ( \
	BIT_ULL(POWER_DOMAIN_AUX_TBT6))

static const struct i915_power_well_ops i9xx_always_on_power_well_ops = {
	.sync_hw = i9xx_power_well_sync_hw_noop,
	.enable = i9xx_always_on_power_well_noop,

@@ -3113,7 +3351,7 @@ static const struct i915_power_well_ops icl_combo_phy_aux_power_well_ops = {
static const struct i915_power_well_ops icl_tc_phy_aux_power_well_ops = {
	.sync_hw = hsw_power_well_sync_hw,
	.enable = icl_tc_phy_aux_power_well_enable,
-	.disable = hsw_power_well_disable,
+	.disable = icl_tc_phy_aux_power_well_disable,
	.is_enabled = hsw_power_well_enabled,
};
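print_power_domains() above walks a u64 domain mask with for_each_power_domain() and counts its bits with hweight64(). A small user-space sketch of that iteration pattern, with names approximated (the kernel macro iterates only over set bits via find-first-set helpers; a linear scan is shown here for clarity):

```c
#include <stdint.h>
#include <stdio.h>

/* Visit every set bit of a 64-bit domain mask, lowest first; this is
 * the same shape as the driver's for_each_power_domain() iterator. */
#define for_each_set_bit64(bit, mask) \
	for ((bit) = 0; (bit) < 64; (bit)++) \
		if ((mask) & ((uint64_t)1 << (bit)))

int main(void)
{
	uint64_t mask = (1ULL << 3) | (1ULL << 17) | (1ULL << 40);
	int bit;

	/* __builtin_popcountll() (GCC/Clang) plays the role of hweight64(). */
	printf("domains (%d):\n", __builtin_popcountll(mask));
	for_each_set_bit64(bit, mask)
		printf("  domain %d\n", bit);
	return 0;
}
```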
@@ -3362,6 +3600,335 @@ static const struct i915_power_well_desc icl_power_wells[] = {
	},
};

static const struct i915_power_well_desc tgl_power_wells[] = {
	{
		.name = "always-on",
		.always_on = true,
		.domains = POWER_DOMAIN_MASK,
		.ops = &i9xx_always_on_power_well_ops,
		.id = DISP_PW_ID_NONE,
	},
	{
		.name = "power well 1",
		/* Handled by the DMC firmware */
		.always_on = true,
		.domains = 0,
		.ops = &hsw_power_well_ops,
		.id = SKL_DISP_PW_1,
		{
			.hsw.regs = &hsw_power_well_regs,
			.hsw.idx = ICL_PW_CTL_IDX_PW_1,
			.hsw.has_fuses = true,
		},
	},
	{
		.name = "DC off",
		.domains = TGL_DISPLAY_DC_OFF_POWER_DOMAINS,
		.ops = &gen9_dc_off_power_well_ops,
		.id = DISP_PW_ID_NONE,
	},
	{
		.name = "power well 2",
		.domains = TGL_PW_2_POWER_DOMAINS,
		.ops = &hsw_power_well_ops,
		.id = SKL_DISP_PW_2,
		{
			.hsw.regs = &hsw_power_well_regs,
			.hsw.idx = ICL_PW_CTL_IDX_PW_2,
			.hsw.has_fuses = true,
		},
	},
	{
		.name = "power well 3",
		.domains = TGL_PW_3_POWER_DOMAINS,
		.ops = &hsw_power_well_ops,
		.id = DISP_PW_ID_NONE,
		{
			.hsw.regs = &hsw_power_well_regs,
			.hsw.idx = ICL_PW_CTL_IDX_PW_3,
			.hsw.irq_pipe_mask = BIT(PIPE_B),
			.hsw.has_vga = true,
			.hsw.has_fuses = true,
		},
	},
	{
		.name = "DDI A IO",
		.domains = ICL_DDI_IO_A_POWER_DOMAINS,
		.ops = &hsw_power_well_ops,
		.id = DISP_PW_ID_NONE,
		{
			.hsw.regs = &icl_ddi_power_well_regs,
			.hsw.idx = ICL_PW_CTL_IDX_DDI_A,
		}
	},
	{
		.name = "DDI B IO",
		.domains = ICL_DDI_IO_B_POWER_DOMAINS,
		.ops = &hsw_power_well_ops,
		.id = DISP_PW_ID_NONE,
		{
			.hsw.regs = &icl_ddi_power_well_regs,
			.hsw.idx = ICL_PW_CTL_IDX_DDI_B,
		}
	},
	{
		.name = "DDI C IO",
		.domains = ICL_DDI_IO_C_POWER_DOMAINS,
		.ops = &hsw_power_well_ops,
		.id = DISP_PW_ID_NONE,
		{
			.hsw.regs = &icl_ddi_power_well_regs,
			.hsw.idx = ICL_PW_CTL_IDX_DDI_C,
		}
	},
	{
		.name = "DDI TC1 IO",
		.domains = TGL_DDI_IO_TC1_POWER_DOMAINS,
		.ops = &hsw_power_well_ops,
		.id = DISP_PW_ID_NONE,
		{
			.hsw.regs = &icl_ddi_power_well_regs,
			.hsw.idx = TGL_PW_CTL_IDX_DDI_TC1,
		},
	},
	{
		.name = "DDI TC2 IO",
		.domains = TGL_DDI_IO_TC2_POWER_DOMAINS,
		.ops = &hsw_power_well_ops,
		.id = DISP_PW_ID_NONE,
		{
			.hsw.regs = &icl_ddi_power_well_regs,
			.hsw.idx = TGL_PW_CTL_IDX_DDI_TC2,
		},
	},
	{
		.name = "DDI TC3 IO",
		.domains = TGL_DDI_IO_TC3_POWER_DOMAINS,
		.ops = &hsw_power_well_ops,
		.id = DISP_PW_ID_NONE,
		{
			.hsw.regs = &icl_ddi_power_well_regs,
			.hsw.idx = TGL_PW_CTL_IDX_DDI_TC3,
		},
	},
	{
		.name = "DDI TC4 IO",
		.domains = TGL_DDI_IO_TC4_POWER_DOMAINS,
		.ops = &hsw_power_well_ops,
		.id = DISP_PW_ID_NONE,
		{
			.hsw.regs = &icl_ddi_power_well_regs,
			.hsw.idx = TGL_PW_CTL_IDX_DDI_TC4,
		},
	},
	{
		.name = "DDI TC5 IO",
		.domains = TGL_DDI_IO_TC5_POWER_DOMAINS,
		.ops = &hsw_power_well_ops,
		.id = DISP_PW_ID_NONE,
		{
			.hsw.regs = &icl_ddi_power_well_regs,
			.hsw.idx = TGL_PW_CTL_IDX_DDI_TC5,
		},
	},
	{
		.name = "DDI TC6 IO",
		.domains = TGL_DDI_IO_TC6_POWER_DOMAINS,
		.ops = &hsw_power_well_ops,
		.id = DISP_PW_ID_NONE,
		{
			.hsw.regs = &icl_ddi_power_well_regs,
			.hsw.idx = TGL_PW_CTL_IDX_DDI_TC6,
		},
	},
	{
		.name = "AUX A",
		.domains = ICL_AUX_A_IO_POWER_DOMAINS,
		.ops = &icl_combo_phy_aux_power_well_ops,
		.id = DISP_PW_ID_NONE,
		{
			.hsw.regs = &icl_aux_power_well_regs,
			.hsw.idx = ICL_PW_CTL_IDX_AUX_A,
		},
	},
	{
		.name = "AUX B",
		.domains = ICL_AUX_B_IO_POWER_DOMAINS,
		.ops = &icl_combo_phy_aux_power_well_ops,
		.id = DISP_PW_ID_NONE,
		{
			.hsw.regs = &icl_aux_power_well_regs,
			.hsw.idx = ICL_PW_CTL_IDX_AUX_B,
		},
	},
	{
		.name = "AUX C",
		.domains = ICL_AUX_C_IO_POWER_DOMAINS,
		.ops = &icl_combo_phy_aux_power_well_ops,
		.id = DISP_PW_ID_NONE,
		{
			.hsw.regs = &icl_aux_power_well_regs,
			.hsw.idx = ICL_PW_CTL_IDX_AUX_C,
		},
	},
	{
		.name = "AUX TC1",
		.domains = TGL_AUX_TC1_IO_POWER_DOMAINS,
		.ops = &icl_tc_phy_aux_power_well_ops,
		.id = DISP_PW_ID_NONE,
		{
			.hsw.regs = &icl_aux_power_well_regs,
			.hsw.idx = TGL_PW_CTL_IDX_AUX_TC1,
			.hsw.is_tc_tbt = false,
		},
	},
	{
		.name = "AUX TC2",
		.domains = TGL_AUX_TC2_IO_POWER_DOMAINS,
		.ops = &icl_tc_phy_aux_power_well_ops,
		.id = DISP_PW_ID_NONE,
		{
			.hsw.regs = &icl_aux_power_well_regs,
			.hsw.idx = TGL_PW_CTL_IDX_AUX_TC2,
			.hsw.is_tc_tbt = false,
		},
	},
	{
		.name = "AUX TC3",
		.domains = TGL_AUX_TC3_IO_POWER_DOMAINS,
		.ops = &icl_tc_phy_aux_power_well_ops,
		.id = DISP_PW_ID_NONE,
		{
			.hsw.regs = &icl_aux_power_well_regs,
			.hsw.idx = TGL_PW_CTL_IDX_AUX_TC3,
			.hsw.is_tc_tbt = false,
		},
	},
	{
		.name = "AUX TC4",
		.domains = TGL_AUX_TC4_IO_POWER_DOMAINS,
		.ops = &icl_tc_phy_aux_power_well_ops,
		.id = DISP_PW_ID_NONE,
		{
			.hsw.regs = &icl_aux_power_well_regs,
			.hsw.idx = TGL_PW_CTL_IDX_AUX_TC4,
			.hsw.is_tc_tbt = false,
		},
	},
	{
		.name = "AUX TC5",
		.domains = TGL_AUX_TC5_IO_POWER_DOMAINS,
		.ops = &icl_tc_phy_aux_power_well_ops,
		.id = DISP_PW_ID_NONE,
		{
			.hsw.regs = &icl_aux_power_well_regs,
			.hsw.idx = TGL_PW_CTL_IDX_AUX_TC5,
			.hsw.is_tc_tbt = false,
		},
	},
	{
		.name = "AUX TC6",
		.domains = TGL_AUX_TC6_IO_POWER_DOMAINS,
		.ops = &icl_tc_phy_aux_power_well_ops,
		.id = DISP_PW_ID_NONE,
		{
			.hsw.regs = &icl_aux_power_well_regs,
			.hsw.idx = TGL_PW_CTL_IDX_AUX_TC6,
			.hsw.is_tc_tbt = false,
		},
	},
	{
		.name = "AUX TBT1",
		.domains = ICL_AUX_TBT1_IO_POWER_DOMAINS,
		.ops = &hsw_power_well_ops,
		.id = DISP_PW_ID_NONE,
		{
			.hsw.regs = &icl_aux_power_well_regs,
			.hsw.idx = TGL_PW_CTL_IDX_AUX_TBT1,
			.hsw.is_tc_tbt = true,
		},
	},
	{
		.name = "AUX TBT2",
		.domains = ICL_AUX_TBT2_IO_POWER_DOMAINS,
		.ops = &hsw_power_well_ops,
		.id = DISP_PW_ID_NONE,
		{
			.hsw.regs = &icl_aux_power_well_regs,
			.hsw.idx = TGL_PW_CTL_IDX_AUX_TBT2,
			.hsw.is_tc_tbt = true,
		},
	},
	{
		.name = "AUX TBT3",
		.domains = ICL_AUX_TBT3_IO_POWER_DOMAINS,
		.ops = &hsw_power_well_ops,
		.id = DISP_PW_ID_NONE,
		{
			.hsw.regs = &icl_aux_power_well_regs,
			.hsw.idx = TGL_PW_CTL_IDX_AUX_TBT3,
			.hsw.is_tc_tbt = true,
		},
	},
	{
		.name = "AUX TBT4",
		.domains = ICL_AUX_TBT4_IO_POWER_DOMAINS,
		.ops = &hsw_power_well_ops,
		.id = DISP_PW_ID_NONE,
		{
			.hsw.regs = &icl_aux_power_well_regs,
			.hsw.idx = TGL_PW_CTL_IDX_AUX_TBT4,
			.hsw.is_tc_tbt = true,
		},
	},
	{
		.name = "AUX TBT5",
		.domains = TGL_AUX_TBT5_IO_POWER_DOMAINS,
		.ops = &hsw_power_well_ops,
		.id = DISP_PW_ID_NONE,
		{
			.hsw.regs = &icl_aux_power_well_regs,
			.hsw.idx = TGL_PW_CTL_IDX_AUX_TBT5,
			.hsw.is_tc_tbt = true,
		},
	},
	{
		.name = "AUX TBT6",
		.domains = TGL_AUX_TBT6_IO_POWER_DOMAINS,
		.ops = &hsw_power_well_ops,
		.id = DISP_PW_ID_NONE,
		{
			.hsw.regs = &icl_aux_power_well_regs,
			.hsw.idx = TGL_PW_CTL_IDX_AUX_TBT6,
			.hsw.is_tc_tbt = true,
		},
	},
	{
		.name = "power well 4",
		.domains = TGL_PW_4_POWER_DOMAINS,
		.ops = &hsw_power_well_ops,
		.id = DISP_PW_ID_NONE,
		{
			.hsw.regs = &hsw_power_well_regs,
			.hsw.idx = ICL_PW_CTL_IDX_PW_4,
			.hsw.has_fuses = true,
			.hsw.irq_pipe_mask = BIT(PIPE_C),
		}
	},
	{
		.name = "power well 5",
		.domains = TGL_PW_5_POWER_DOMAINS,
		.ops = &hsw_power_well_ops,
		.id = DISP_PW_ID_NONE,
		{
			.hsw.regs = &hsw_power_well_regs,
			.hsw.idx = TGL_PW_CTL_IDX_PW_5,
			.hsw.has_fuses = true,
			.hsw.irq_pipe_mask = BIT(PIPE_D),
		},
	},
};

static int
sanitize_disable_power_well_option(const struct drm_i915_private *dev_priv,
				   int disable_power_well)
@@ -3489,7 +4056,9 @@ int intel_power_domains_init(struct drm_i915_private *dev_priv)
	 * The enabling order will be from lower to higher indexed wells,
	 * the disabling order is reversed.
	 */
-	if (IS_GEN(dev_priv, 11)) {
+	if (IS_GEN(dev_priv, 12)) {
+		err = set_power_wells(power_domains, tgl_power_wells);
+	} else if (IS_GEN(dev_priv, 11)) {
		err = set_power_wells(power_domains, icl_power_wells);
	} else if (IS_CANNONLAKE(dev_priv)) {
		err = set_power_wells(power_domains, cnl_power_wells);

@@ -4337,7 +4906,7 @@ static void intel_power_domains_verify_state(struct drm_i915_private *dev_priv);
 *
 * It will return with power domains disabled (to be enabled later by
 * intel_power_domains_enable()) and must be paired with
- * intel_power_domains_fini_hw().
+ * intel_power_domains_driver_remove().
 */
void intel_power_domains_init_hw(struct drm_i915_private *i915, bool resume)
{

@@ -4389,7 +4958,7 @@ void intel_power_domains_init_hw(struct drm_i915_private *i915, bool resume)
}

/**
- * intel_power_domains_fini_hw - deinitialize hw power domain state
+ * intel_power_domains_driver_remove - deinitialize hw power domain state
 * @i915: i915 device instance
 *
 * De-initializes the display power domain HW state. It also ensures that the

@@ -4399,7 +4968,7 @@ void intel_power_domains_init_hw(struct drm_i915_private *i915, bool resume)
 * intel_power_domains_disable()) and must be paired with
 * intel_power_domains_init_hw().
 */
-void intel_power_domains_fini_hw(struct drm_i915_private *i915)
+void intel_power_domains_driver_remove(struct drm_i915_private *i915)
{
	intel_wakeref_t wakeref __maybe_unused =
		fetch_and_zero(&i915->power_domains.wakeref);

@@ -4553,7 +5122,8 @@ static void intel_power_domains_dump_info(struct drm_i915_private *i915)

		for_each_power_domain(domain, power_well->desc->domains)
			DRM_DEBUG_DRIVER(" %-23s %d\n",
-					 intel_display_power_domain_str(domain),
+					 intel_display_power_domain_str(i915,
+									domain),
					 power_domains->domain_use_count[domain]);
	}
}

@@ -18,28 +18,47 @@ enum intel_display_power_domain {
	POWER_DOMAIN_PIPE_A,
	POWER_DOMAIN_PIPE_B,
	POWER_DOMAIN_PIPE_C,
	POWER_DOMAIN_PIPE_D,
	POWER_DOMAIN_PIPE_A_PANEL_FITTER,
	POWER_DOMAIN_PIPE_B_PANEL_FITTER,
	POWER_DOMAIN_PIPE_C_PANEL_FITTER,
	POWER_DOMAIN_PIPE_D_PANEL_FITTER,
	POWER_DOMAIN_TRANSCODER_A,
	POWER_DOMAIN_TRANSCODER_B,
	POWER_DOMAIN_TRANSCODER_C,
	POWER_DOMAIN_TRANSCODER_D,
	POWER_DOMAIN_TRANSCODER_EDP,
-	POWER_DOMAIN_TRANSCODER_EDP_VDSC,
+	/* VDSC/joining for TRANSCODER_EDP (ICL) or TRANSCODER_A (TGL) */
+	POWER_DOMAIN_TRANSCODER_VDSC_PW2,
	POWER_DOMAIN_TRANSCODER_DSI_A,
	POWER_DOMAIN_TRANSCODER_DSI_C,
	POWER_DOMAIN_PORT_DDI_A_LANES,
	POWER_DOMAIN_PORT_DDI_B_LANES,
	POWER_DOMAIN_PORT_DDI_C_LANES,
	POWER_DOMAIN_PORT_DDI_D_LANES,
	POWER_DOMAIN_PORT_DDI_TC1_LANES = POWER_DOMAIN_PORT_DDI_D_LANES,
	POWER_DOMAIN_PORT_DDI_E_LANES,
	POWER_DOMAIN_PORT_DDI_TC2_LANES = POWER_DOMAIN_PORT_DDI_E_LANES,
	POWER_DOMAIN_PORT_DDI_F_LANES,
	POWER_DOMAIN_PORT_DDI_TC3_LANES = POWER_DOMAIN_PORT_DDI_F_LANES,
	POWER_DOMAIN_PORT_DDI_TC4_LANES,
	POWER_DOMAIN_PORT_DDI_TC5_LANES,
	POWER_DOMAIN_PORT_DDI_TC6_LANES,
	POWER_DOMAIN_PORT_DDI_A_IO,
	POWER_DOMAIN_PORT_DDI_B_IO,
	POWER_DOMAIN_PORT_DDI_C_IO,
	POWER_DOMAIN_PORT_DDI_D_IO,
	POWER_DOMAIN_PORT_DDI_TC1_IO = POWER_DOMAIN_PORT_DDI_D_IO,
	POWER_DOMAIN_PORT_DDI_E_IO,
	POWER_DOMAIN_PORT_DDI_TC2_IO = POWER_DOMAIN_PORT_DDI_E_IO,
	POWER_DOMAIN_PORT_DDI_F_IO,
	POWER_DOMAIN_PORT_DDI_TC3_IO = POWER_DOMAIN_PORT_DDI_F_IO,
	POWER_DOMAIN_PORT_DDI_G_IO,
	POWER_DOMAIN_PORT_DDI_TC4_IO = POWER_DOMAIN_PORT_DDI_G_IO,
	POWER_DOMAIN_PORT_DDI_H_IO,
	POWER_DOMAIN_PORT_DDI_TC5_IO = POWER_DOMAIN_PORT_DDI_H_IO,
	POWER_DOMAIN_PORT_DDI_I_IO,
	POWER_DOMAIN_PORT_DDI_TC6_IO = POWER_DOMAIN_PORT_DDI_I_IO,
	POWER_DOMAIN_PORT_DSI,
	POWER_DOMAIN_PORT_CRT,
	POWER_DOMAIN_PORT_OTHER,

@@ -49,16 +68,25 @@ enum intel_display_power_domain {
	POWER_DOMAIN_AUX_B,
	POWER_DOMAIN_AUX_C,
	POWER_DOMAIN_AUX_D,
	POWER_DOMAIN_AUX_TC1 = POWER_DOMAIN_AUX_D,
	POWER_DOMAIN_AUX_E,
	POWER_DOMAIN_AUX_TC2 = POWER_DOMAIN_AUX_E,
	POWER_DOMAIN_AUX_F,
	POWER_DOMAIN_AUX_TC3 = POWER_DOMAIN_AUX_F,
	POWER_DOMAIN_AUX_TC4,
	POWER_DOMAIN_AUX_TC5,
	POWER_DOMAIN_AUX_TC6,
	POWER_DOMAIN_AUX_IO_A,
	POWER_DOMAIN_AUX_TBT1,
	POWER_DOMAIN_AUX_TBT2,
	POWER_DOMAIN_AUX_TBT3,
	POWER_DOMAIN_AUX_TBT4,
	POWER_DOMAIN_AUX_TBT5,
	POWER_DOMAIN_AUX_TBT6,
	POWER_DOMAIN_GMBUS,
	POWER_DOMAIN_MODESET,
	POWER_DOMAIN_GT_IRQ,
	POWER_DOMAIN_DPLL_DC_OFF,
	POWER_DOMAIN_INIT,

	POWER_DOMAIN_NUM,

@@ -213,7 +241,7 @@ void gen9_enable_dc5(struct drm_i915_private *dev_priv);
int intel_power_domains_init(struct drm_i915_private *dev_priv);
void intel_power_domains_cleanup(struct drm_i915_private *dev_priv);
void intel_power_domains_init_hw(struct drm_i915_private *dev_priv, bool resume);
-void intel_power_domains_fini_hw(struct drm_i915_private *dev_priv);
+void intel_power_domains_driver_remove(struct drm_i915_private *dev_priv);
void icl_display_core_init(struct drm_i915_private *dev_priv, bool resume);
void icl_display_core_uninit(struct drm_i915_private *dev_priv);
void intel_power_domains_enable(struct drm_i915_private *dev_priv);

@@ -227,7 +255,8 @@ void bxt_display_core_init(struct drm_i915_private *dev_priv, bool resume);
void bxt_display_core_uninit(struct drm_i915_private *dev_priv);

const char *
-intel_display_power_domain_str(enum intel_display_power_domain domain);
+intel_display_power_domain_str(struct drm_i915_private *i915,
+			       enum intel_display_power_domain domain);

bool intel_display_power_is_enabled(struct drm_i915_private *dev_priv,
				    enum intel_display_power_domain domain);

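The enum additions above lean on plain C enumerator aliasing: the new TC names are assigned the values of the legacy DDI D/E/F names, so ICL code using the old spelling and TGL code using the TC spelling address the same bit in the domains mask. A minimal demonstration of the trick:

```c
#include <stdio.h>

/* Two names, one value: TC1 lanes reuse the DDI D slot, exactly like
 * POWER_DOMAIN_PORT_DDI_TC1_LANES = POWER_DOMAIN_PORT_DDI_D_LANES in
 * the header above (values here are illustrative). */
enum power_domain {
	PORT_DDI_D_LANES = 4,
	PORT_DDI_TC1_LANES = PORT_DDI_D_LANES,
	PORT_DDI_E_LANES,			/* continues at 5 */
	PORT_DDI_TC2_LANES = PORT_DDI_E_LANES,
	PORT_DDI_TC4_LANES = 7,			/* TGL-only slot, no alias */
};

int main(void)
{
	/* Both spellings select the same BIT_ULL() position: prints "1 5". */
	printf("%d %d\n", PORT_DDI_D_LANES == PORT_DDI_TC1_LANES,
	       PORT_DDI_TC2_LANES);
	return 0;
}
```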
@@ -62,6 +62,7 @@
#include "intel_panel.h"
#include "intel_psr.h"
#include "intel_sideband.h"
+#include "intel_tc.h"
#include "intel_vdsc.h"

#define DP_DPRX_ESI_LEN 14

@@ -211,47 +212,13 @@ static int intel_dp_max_common_rate(struct intel_dp *intel_dp)
	return intel_dp->common_rates[intel_dp->num_common_rates - 1];
}

-static int intel_dp_get_fia_supported_lane_count(struct intel_dp *intel_dp)
-{
-	struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
-	struct drm_i915_private *dev_priv = to_i915(dig_port->base.base.dev);
-	enum tc_port tc_port = intel_port_to_tc(dev_priv, dig_port->base.port);
-	intel_wakeref_t wakeref;
-	u32 lane_info;
-
-	if (tc_port == PORT_TC_NONE || dig_port->tc_type != TC_PORT_TYPEC)
-		return 4;
-
-	lane_info = 0;
-	with_intel_display_power(dev_priv, POWER_DOMAIN_DISPLAY_CORE, wakeref)
-		lane_info = (I915_READ(PORT_TX_DFLEXDPSP) &
-			     DP_LANE_ASSIGNMENT_MASK(tc_port)) >>
-				DP_LANE_ASSIGNMENT_SHIFT(tc_port);
-
-	switch (lane_info) {
-	default:
-		MISSING_CASE(lane_info);
-		/* fall through */
-	case 1:
-	case 2:
-	case 4:
-	case 8:
-		return 1;
-	case 3:
-	case 12:
-		return 2;
-	case 15:
-		return 4;
-	}
-}
-
/* Theoretical max between source and sink */
static int intel_dp_max_common_lane_count(struct intel_dp *intel_dp)
{
	struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
	int source_max = intel_dig_port->max_lanes;
	int sink_max = drm_dp_max_lane_count(intel_dp->dpcd);
-	int fia_max = intel_dp_get_fia_supported_lane_count(intel_dp);
+	int fia_max = intel_tc_port_fia_max_lane_count(intel_dig_port);

	return min3(source_max, sink_max, fia_max);
}
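The removed helper's switch maps the 4-bit DFLEXDPSP lane-assignment field to a usable lane count; for the patterns it accepts, the result is simply the number of set bits. A stand-alone sketch of that decoding (the register layout itself is not modeled here):

```c
#include <stdio.h>

/* Decode a lane-assignment nibble the way the removed
 * intel_dp_get_fia_supported_lane_count() did. Note that for every
 * valid pattern (1/2/4/8, 3/12, 15) the count equals the popcount. */
static int fia_lane_count(unsigned int lane_info)
{
	switch (lane_info) {
	case 1: case 2: case 4: case 8:
		return 1;
	case 3: case 12:
		return 2;
	case 15:
		return 4;
	default:
		/* The driver logged MISSING_CASE() and fell back to 1. */
		return 1;
	}
}

int main(void)
{
	/* Prints "2 4 1". */
	printf("%d %d %d\n", fia_lane_count(12), fia_lane_count(15),
	       fia_lane_count(8));
	return 0;
}
```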
@@ -330,9 +297,9 @@ static int icl_max_source_rate(struct intel_dp *intel_dp)
{
	struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
	struct drm_i915_private *dev_priv = to_i915(dig_port->base.base.dev);
-	enum port port = dig_port->base.port;
+	enum phy phy = intel_port_to_phy(dev_priv, dig_port->base.port);

-	if (intel_port_is_combophy(dev_priv, port) &&
+	if (intel_phy_is_combo(dev_priv, phy) &&
	    !IS_ELKHARTLAKE(dev_priv) &&
	    !intel_dp_is_edp(intel_dp))
		return 540000;

@@ -1209,7 +1176,7 @@ static u32 skl_get_aux_send_ctl(struct intel_dp *intel_dp,
	      DP_AUX_CH_CTL_FW_SYNC_PULSE_SKL(32) |
	      DP_AUX_CH_CTL_SYNC_PULSE_SKL(32);

-	if (intel_dig_port->tc_type == TC_PORT_TBT)
+	if (intel_dig_port->tc_mode == TC_PORT_TBT_ALT)
		ret |= DP_AUX_CH_CTL_TBT_IO;

	return ret;

@@ -1225,6 +1192,8 @@ intel_dp_aux_xfer(struct intel_dp *intel_dp,
	struct drm_i915_private *i915 =
			to_i915(intel_dig_port->base.base.dev);
	struct intel_uncore *uncore = &i915->uncore;
+	enum phy phy = intel_port_to_phy(i915, intel_dig_port->base.port);
+	bool is_tc_port = intel_phy_is_tc(i915, phy);
	i915_reg_t ch_ctl, ch_data[5];
	u32 aux_clock_divider;
	enum intel_display_power_domain aux_domain =

@@ -1240,6 +1209,9 @@ intel_dp_aux_xfer(struct intel_dp *intel_dp,
	for (i = 0; i < ARRAY_SIZE(ch_data); i++)
		ch_data[i] = intel_dp->aux_ch_data_reg(intel_dp, i);

+	if (is_tc_port)
+		intel_tc_port_lock(intel_dig_port);
+
	aux_wakeref = intel_display_power_get(i915, aux_domain);
	pps_wakeref = pps_lock(intel_dp);

@@ -1392,6 +1364,9 @@ out:
	pps_unlock(intel_dp, pps_wakeref);
	intel_display_power_put_async(i915, aux_domain, aux_wakeref);

+	if (is_tc_port)
+		intel_tc_port_unlock(intel_dig_port);
+
	return ret;
}

@@ -1879,8 +1854,10 @@ intel_dp_compute_link_config_wide(struct intel_dp *intel_dp,
	int mode_rate, link_clock, link_avail;

	for (bpp = limits->max_bpp; bpp >= limits->min_bpp; bpp -= 2 * 3) {
+		int output_bpp = intel_dp_output_bpp(pipe_config, bpp);
+
		mode_rate = intel_dp_link_required(adjusted_mode->crtc_clock,
-						   bpp);
+						   output_bpp);

		for (clock = limits->min_clock; clock <= limits->max_clock; clock++) {
			for (lane_count = limits->min_lane_count;

@@ -4244,8 +4221,14 @@ intel_dp_get_dpcd(struct intel_dp *intel_dp)
	if (!intel_dp_read_dpcd(intel_dp))
		return false;

-	/* Don't clobber cached eDP rates. */
+	/*
+	 * Don't clobber cached eDP rates. Also skip re-reading
+	 * the OUI/ID since we know it won't change.
+	 */
	if (!intel_dp_is_edp(intel_dp)) {
+		drm_dp_read_desc(&intel_dp->aux, &intel_dp->desc,
+				 drm_dp_is_branch(intel_dp->dpcd));
+
		intel_dp_set_sink_rates(intel_dp);
		intel_dp_set_common_rates(intel_dp);
	}

@@ -4254,7 +4237,8 @@ intel_dp_get_dpcd(struct intel_dp *intel_dp)
	 * Some eDP panels do not set a valid value for sink count, that is why
	 * it don't care about read it here and in intel_edp_init_dpcd().
	 */
-	if (!intel_dp_is_edp(intel_dp)) {
+	if (!intel_dp_is_edp(intel_dp) &&
+	    !drm_dp_has_quirk(&intel_dp->desc, DP_DPCD_QUIRK_NO_SINK_COUNT)) {
		u8 count;
		ssize_t r;

@@ -4879,14 +4863,16 @@ int intel_dp_retrain_link(struct intel_encoder *encoder,
 * retrain the link to get a picture. That's in case no
 * userspace component reacted to intermittent HPD dip.
 */
-static bool intel_dp_hotplug(struct intel_encoder *encoder,
-			     struct intel_connector *connector)
+static enum intel_hotplug_state
+intel_dp_hotplug(struct intel_encoder *encoder,
+		 struct intel_connector *connector,
+		 bool irq_received)
{
	struct drm_modeset_acquire_ctx ctx;
-	bool changed;
+	enum intel_hotplug_state state;
	int ret;

-	changed = intel_encoder_hotplug(encoder, connector);
+	state = intel_encoder_hotplug(encoder, connector, irq_received);

	drm_modeset_acquire_init(&ctx, 0);

@@ -4905,7 +4891,14 @@ static bool intel_dp_hotplug(struct intel_encoder *encoder,
	drm_modeset_acquire_fini(&ctx);
	WARN(ret, "Acquiring modeset locks failed with %i\n", ret);

-	return changed;
+	/*
+	 * Keeping it consistent with intel_ddi_hotplug() and
+	 * intel_hdmi_hotplug().
+	 */
+	if (state == INTEL_HOTPLUG_UNCHANGED && irq_received)
+		state = INTEL_HOTPLUG_RETRY;
+
+	return state;
}

static void intel_dp_check_service_irq(struct intel_dp *intel_dp)
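The DP hook above, the HDMI hook later in this series, and intel_ddi_hotplug() all share one convention: when a live HPD interrupt produced no observable connector change, the event is retried later instead of being dropped. A compilable sketch of just that decision, with the enum values mirroring the INTEL_HOTPLUG_* names in these hunks:

```c
#include <stdio.h>

/* Tri-state result the hotplug hooks now return instead of a bool. */
enum intel_hotplug_state {
	INTEL_HOTPLUG_UNCHANGED,
	INTEL_HOTPLUG_CHANGED,
	INTEL_HOTPLUG_RETRY,
};

/* An interrupt-driven probe that saw no change is upgraded to RETRY so
 * the work item schedules another detection cycle. */
static enum intel_hotplug_state
hotplug_result(enum intel_hotplug_state state, int irq_received)
{
	if (state == INTEL_HOTPLUG_UNCHANGED && irq_received)
		return INTEL_HOTPLUG_RETRY;
	return state;
}

int main(void)
{
	/* Prints "2" (INTEL_HOTPLUG_RETRY). */
	printf("%d\n", hotplug_result(INTEL_HOTPLUG_UNCHANGED, 1));
	return 0;
}
```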
@@ -5233,204 +5226,16 @@ static bool icl_combo_port_connected(struct drm_i915_private *dev_priv,
	return I915_READ(SDEISR) & SDE_DDI_HOTPLUG_ICP(port);
}

-static const char *tc_type_name(enum tc_port_type type)
-{
-	static const char * const names[] = {
-		[TC_PORT_UNKNOWN] = "unknown",
-		[TC_PORT_LEGACY] = "legacy",
-		[TC_PORT_TYPEC] = "typec",
-		[TC_PORT_TBT] = "tbt",
-	};
-
-	if (WARN_ON(type >= ARRAY_SIZE(names)))
-		type = TC_PORT_UNKNOWN;
-
-	return names[type];
-}
-
-static void icl_update_tc_port_type(struct drm_i915_private *dev_priv,
-				    struct intel_digital_port *intel_dig_port,
-				    bool is_legacy, bool is_typec, bool is_tbt)
-{
-	enum port port = intel_dig_port->base.port;
-	enum tc_port_type old_type = intel_dig_port->tc_type;
-
-	WARN_ON(is_legacy + is_typec + is_tbt != 1);
-
-	if (is_legacy)
-		intel_dig_port->tc_type = TC_PORT_LEGACY;
-	else if (is_typec)
-		intel_dig_port->tc_type = TC_PORT_TYPEC;
-	else if (is_tbt)
-		intel_dig_port->tc_type = TC_PORT_TBT;
-	else
-		return;
-
-	/* Types are not supposed to be changed at runtime. */
-	WARN_ON(old_type != TC_PORT_UNKNOWN &&
-		old_type != intel_dig_port->tc_type);
-
-	if (old_type != intel_dig_port->tc_type)
-		DRM_DEBUG_KMS("Port %c has TC type %s\n", port_name(port),
-			      tc_type_name(intel_dig_port->tc_type));
-}
-
-/*
- * This function implements the first part of the Connect Flow described by our
- * specification, Gen11 TypeC Programming chapter. The rest of the flow (reading
- * lanes, EDID, etc) is done as needed in the typical places.
- *
- * Unlike the other ports, type-C ports are not available to use as soon as we
- * get a hotplug. The type-C PHYs can be shared between multiple controllers:
- * display, USB, etc. As a result, handshaking through FIA is required around
- * connect and disconnect to cleanly transfer ownership with the controller and
- * set the type-C power state.
- *
- * We could opt to only do the connect flow when we actually try to use the AUX
- * channels or do a modeset, then immediately run the disconnect flow after
- * usage, but there are some implications on this for a dynamic environment:
- * things may go away or change behind our backs. So for now our driver is
- * always trying to acquire ownership of the controller as soon as it gets an
- * interrupt (or polls state and sees a port is connected) and only gives it
- * back when it sees a disconnect. Implementation of a more fine-grained model
- * will require a lot of coordination with user space and thorough testing for
- * the extra possible cases.
- */
-static bool icl_tc_phy_connect(struct drm_i915_private *dev_priv,
-			       struct intel_digital_port *dig_port)
-{
-	enum tc_port tc_port = intel_port_to_tc(dev_priv, dig_port->base.port);
-	u32 val;
-
-	if (dig_port->tc_type != TC_PORT_LEGACY &&
-	    dig_port->tc_type != TC_PORT_TYPEC)
-		return true;
-
-	val = I915_READ(PORT_TX_DFLEXDPPMS);
-	if (!(val & DP_PHY_MODE_STATUS_COMPLETED(tc_port))) {
-		DRM_DEBUG_KMS("DP PHY for TC port %d not ready\n", tc_port);
-		WARN_ON(dig_port->tc_legacy_port);
-		return false;
-	}
-
-	/*
-	 * This function may be called many times in a row without an HPD event
-	 * in between, so try to avoid the write when we can.
-	 */
-	val = I915_READ(PORT_TX_DFLEXDPCSSS);
-	if (!(val & DP_PHY_MODE_STATUS_NOT_SAFE(tc_port))) {
-		val |= DP_PHY_MODE_STATUS_NOT_SAFE(tc_port);
-		I915_WRITE(PORT_TX_DFLEXDPCSSS, val);
-	}
-
-	/*
-	 * Now we have to re-check the live state, in case the port recently
-	 * became disconnected. Not necessary for legacy mode.
-	 */
-	if (dig_port->tc_type == TC_PORT_TYPEC &&
-	    !(I915_READ(PORT_TX_DFLEXDPSP) & TC_LIVE_STATE_TC(tc_port))) {
-		DRM_DEBUG_KMS("TC PHY %d sudden disconnect.\n", tc_port);
-		icl_tc_phy_disconnect(dev_priv, dig_port);
-		return false;
-	}
-
-	return true;
-}
-
-/*
- * See the comment at the connect function. This implements the Disconnect
- * Flow.
- */
-void icl_tc_phy_disconnect(struct drm_i915_private *dev_priv,
-			   struct intel_digital_port *dig_port)
-{
-	enum tc_port tc_port = intel_port_to_tc(dev_priv, dig_port->base.port);
-
-	if (dig_port->tc_type == TC_PORT_UNKNOWN)
-		return;
-
-	/*
-	 * TBT disconnection flow is read the live status, what was done in
-	 * caller.
-	 */
-	if (dig_port->tc_type == TC_PORT_TYPEC ||
-	    dig_port->tc_type == TC_PORT_LEGACY) {
-		u32 val;
-
-		val = I915_READ(PORT_TX_DFLEXDPCSSS);
-		val &= ~DP_PHY_MODE_STATUS_NOT_SAFE(tc_port);
-		I915_WRITE(PORT_TX_DFLEXDPCSSS, val);
-	}
-
-	DRM_DEBUG_KMS("Port %c TC type %s disconnected\n",
-		      port_name(dig_port->base.port),
-		      tc_type_name(dig_port->tc_type));
-
-	dig_port->tc_type = TC_PORT_UNKNOWN;
-}
-
-/*
- * The type-C ports are different because even when they are connected, they may
- * not be available/usable by the graphics driver: see the comment on
- * icl_tc_phy_connect(). So in our driver instead of adding the additional
- * concept of "usable" and make everything check for "connected and usable" we
- * define a port as "connected" when it is not only connected, but also when it
- * is usable by the rest of the driver. That maintains the old assumption that
- * connected ports are usable, and avoids exposing to the users objects they
- * can't really use.
- */
-static bool icl_tc_port_connected(struct drm_i915_private *dev_priv,
-				  struct intel_digital_port *intel_dig_port)
-{
-	enum port port = intel_dig_port->base.port;
-	enum tc_port tc_port = intel_port_to_tc(dev_priv, port);
-	bool is_legacy, is_typec, is_tbt;
-	u32 dpsp;
-
-	/*
-	 * Complain if we got a legacy port HPD, but VBT didn't mark the port as
-	 * legacy. Treat the port as legacy from now on.
-	 */
-	if (!intel_dig_port->tc_legacy_port &&
-	    I915_READ(SDEISR) & SDE_TC_HOTPLUG_ICP(tc_port)) {
-		DRM_ERROR("VBT incorrectly claims port %c is not TypeC legacy\n",
-			  port_name(port));
-		intel_dig_port->tc_legacy_port = true;
-	}
-	is_legacy = intel_dig_port->tc_legacy_port;
-
-	/*
-	 * The spec says we shouldn't be using the ISR bits for detecting
-	 * between TC and TBT. We should use DFLEXDPSP.
-	 */
-	dpsp = I915_READ(PORT_TX_DFLEXDPSP);
-	is_typec = dpsp & TC_LIVE_STATE_TC(tc_port);
-	is_tbt = dpsp & TC_LIVE_STATE_TBT(tc_port);
-
-	if (!is_legacy && !is_typec && !is_tbt) {
-		icl_tc_phy_disconnect(dev_priv, intel_dig_port);
-
-		return false;
-	}
-
-	icl_update_tc_port_type(dev_priv, intel_dig_port, is_legacy, is_typec,
-				is_tbt);
-
-	if (!icl_tc_phy_connect(dev_priv, intel_dig_port))
-		return false;
-
-	return true;
-}
-
static bool icl_digital_port_connected(struct intel_encoder *encoder)
{
	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
	struct intel_digital_port *dig_port = enc_to_dig_port(&encoder->base);
+	enum phy phy = intel_port_to_phy(dev_priv, encoder->port);

-	if (intel_port_is_combophy(dev_priv, encoder->port))
+	if (intel_phy_is_combo(dev_priv, phy))
		return icl_combo_port_connected(dev_priv, dig_port);
-	else if (intel_port_is_tc(dev_priv, encoder->port))
-		return icl_tc_port_connected(dev_priv, dig_port);
+	else if (intel_phy_is_tc(dev_priv, phy))
+		return intel_tc_port_connected(dig_port);
	else
		MISSING_CASE(encoder->hpd_pin);
@@ -5588,9 +5393,6 @@ intel_dp_detect(struct drm_connector *connector,
	if (INTEL_GEN(dev_priv) >= 11)
		intel_dp_get_dsc_sink_cap(intel_dp);

-	drm_dp_read_desc(&intel_dp->aux, &intel_dp->desc,
-			 drm_dp_is_branch(intel_dp->dpcd));
-
	intel_dp_configure_mst(intel_dp);

	if (intel_dp->is_mst) {

@@ -6835,8 +6637,6 @@ static void intel_dp_set_drrs_state(struct drm_i915_private *dev_priv,
				    const struct intel_crtc_state *crtc_state,
				    int refresh_rate)
{
-	struct intel_encoder *encoder;
-	struct intel_digital_port *dig_port = NULL;
	struct intel_dp *intel_dp = dev_priv->drrs.dp;
	struct intel_crtc *intel_crtc = to_intel_crtc(crtc_state->base.crtc);
	enum drrs_refresh_rate_type index = DRRS_HIGH_RR;

@@ -6851,9 +6651,6 @@ static void intel_dp_set_drrs_state(struct drm_i915_private *dev_priv,
		return;
	}

-	dig_port = dp_to_dig_port(intel_dp);
-	encoder = &dig_port->base;
-
	if (!intel_crtc) {
		DRM_DEBUG_KMS("DRRS: intel_crtc not initialized\n");
		return;

@@ -7333,6 +7130,7 @@ intel_dp_init_connector(struct intel_digital_port *intel_dig_port,
	struct drm_device *dev = intel_encoder->base.dev;
	struct drm_i915_private *dev_priv = to_i915(dev);
	enum port port = intel_encoder->port;
+	enum phy phy = intel_port_to_phy(dev_priv, port);
	int type;

	/* Initialize the work for modeset in case of link train failure */

@@ -7359,7 +7157,7 @@ intel_dp_init_connector(struct intel_digital_port *intel_dig_port,
		 * Currently we don't support eDP on TypeC ports, although in
		 * theory it could work on TypeC legacy ports.
		 */
-		WARN_ON(intel_port_is_tc(dev_priv, port));
+		WARN_ON(intel_phy_is_tc(dev_priv, phy));
		type = DRM_MODE_CONNECTOR_eDP;
	} else {
		type = DRM_MODE_CONNECTOR_DisplayPort;

@@ -112,8 +112,6 @@ bool intel_dp_get_colorimetry_status(struct intel_dp *intel_dp);
int intel_dp_link_required(int pixel_clock, int bpp);
int intel_dp_max_data_rate(int max_link_clock, int max_lanes);
bool intel_digital_port_connected(struct intel_encoder *encoder);
-void icl_tc_phy_disconnect(struct drm_i915_private *dev_priv,
-			   struct intel_digital_port *dig_port);

static inline unsigned int intel_dp_unused_lane_mask(int lane_count)
{

@@ -264,8 +264,11 @@ intel_dp_aux_display_control_capable(struct intel_connector *connector)
int intel_dp_aux_init_backlight_funcs(struct intel_connector *intel_connector)
{
	struct intel_panel *panel = &intel_connector->panel;
+	struct drm_i915_private *dev_priv = to_i915(intel_connector->base.dev);

-	if (!i915_modparams.enable_dpcd_backlight)
+	if (i915_modparams.enable_dpcd_backlight == 0 ||
+	    (i915_modparams.enable_dpcd_backlight == -1 &&
+	    dev_priv->vbt.backlight.type != INTEL_BACKLIGHT_VESA_EDP_AUX_INTERFACE))
		return -ENODEV;

	if (!intel_dp_aux_display_control_capable(intel_connector))

@@ -6,9 +6,15 @@
#ifndef __INTEL_DP_MST_H__
#define __INTEL_DP_MST_H__

-struct intel_digital_port;
+#include "intel_drv.h"

int intel_dp_mst_encoder_init(struct intel_digital_port *intel_dig_port, int conn_id);
void intel_dp_mst_encoder_cleanup(struct intel_digital_port *intel_dig_port);
+static inline int
+intel_dp_mst_encoder_active_links(struct intel_digital_port *intel_dig_port)
+{
+	return intel_dig_port->dp.active_mst_links;
+}

#endif /* __INTEL_DP_MST_H__ */

[One file's diff was suppressed by the viewer because it is too large.]
@@ -28,6 +28,7 @@
#include <linux/types.h>

#include "intel_display.h"
+#include "intel_wakeref.h"

/*FIXME: Move this to a more appropriate place. */
#define abs_diff(a, b) ({ \

@@ -36,9 +37,9 @@
	(void) (&__a == &__b); \
	__a > __b ? (__a - __b) : (__b - __a); })

-struct drm_atomic_state;
struct drm_device;
struct drm_i915_private;
+struct intel_atomic_state;
struct intel_crtc;
struct intel_crtc_state;
struct intel_encoder;

@@ -110,35 +111,59 @@ enum intel_dpll_id {

	/**
-	 * @DPLL_ID_ICL_DPLL0: ICL combo PHY DPLL0
+	 * @DPLL_ID_ICL_DPLL0: ICL/TGL combo PHY DPLL0
	 */
	DPLL_ID_ICL_DPLL0 = 0,
	/**
-	 * @DPLL_ID_ICL_DPLL1: ICL combo PHY DPLL1
+	 * @DPLL_ID_ICL_DPLL1: ICL/TGL combo PHY DPLL1
	 */
	DPLL_ID_ICL_DPLL1 = 1,
	/**
-	 * @DPLL_ID_ICL_TBTPLL: ICL TBT PLL
+	 * @DPLL_ID_EHL_DPLL4: EHL combo PHY DPLL4
+	 */
+	DPLL_ID_EHL_DPLL4 = 2,
+	/**
+	 * @DPLL_ID_ICL_TBTPLL: ICL/TGL TBT PLL
	 */
	DPLL_ID_ICL_TBTPLL = 2,
	/**
-	 * @DPLL_ID_ICL_MGPLL1: ICL MG PLL 1 port 1 (C)
+	 * @DPLL_ID_ICL_MGPLL1: ICL MG PLL 1 port 1 (C),
+	 *	TGL TC PLL 1 port 1 (TC1)
	 */
	DPLL_ID_ICL_MGPLL1 = 3,
	/**
	 * @DPLL_ID_ICL_MGPLL2: ICL MG PLL 1 port 2 (D)
+	 *	TGL TC PLL 1 port 2 (TC2)
	 */
	DPLL_ID_ICL_MGPLL2 = 4,
	/**
	 * @DPLL_ID_ICL_MGPLL3: ICL MG PLL 1 port 3 (E)
+	 *	TGL TC PLL 1 port 3 (TC3)
	 */
	DPLL_ID_ICL_MGPLL3 = 5,
	/**
	 * @DPLL_ID_ICL_MGPLL4: ICL MG PLL 1 port 4 (F)
+	 *	TGL TC PLL 1 port 4 (TC4)
	 */
	DPLL_ID_ICL_MGPLL4 = 6,
+	/**
+	 * @DPLL_ID_TGL_TCPLL5: TGL TC PLL port 5 (TC5)
+	 */
+	DPLL_ID_TGL_MGPLL5 = 7,
+	/**
+	 * @DPLL_ID_TGL_TCPLL6: TGL TC PLL port 6 (TC6)
+	 */
+	DPLL_ID_TGL_MGPLL6 = 8,
};

-#define I915_NUM_PLLS 7
+#define I915_NUM_PLLS 9
+
+enum icl_port_dpll_id {
+	ICL_PORT_DPLL_DEFAULT,
+	ICL_PORT_DPLL_MG_PHY,
+
+	ICL_PORT_DPLL_COUNT,
+};

struct intel_dpll_hw_state {
	/* i9xx, pch plls */

@@ -195,7 +220,7 @@ struct intel_dpll_hw_state {
 * future state which would be applied by an atomic mode set (stored in
 * a struct &intel_atomic_state).
 *
- * See also intel_get_shared_dpll() and intel_release_shared_dpll().
+ * See also intel_reserve_shared_dplls() and intel_release_shared_dplls().
 */
struct intel_shared_dpll_state {
	/**

@@ -312,6 +337,7 @@ struct intel_shared_dpll {
	 * @info: platform specific info
	 */
	const struct dpll_info *info;
+	intel_wakeref_t wakeref;
};

#define SKL_DPLL0 0

@@ -331,15 +357,20 @@ void assert_shared_dpll(struct drm_i915_private *dev_priv,
			bool state);
#define assert_shared_dpll_enabled(d, p) assert_shared_dpll(d, p, true)
#define assert_shared_dpll_disabled(d, p) assert_shared_dpll(d, p, false)
-struct intel_shared_dpll *intel_get_shared_dpll(struct intel_crtc_state *state,
-						struct intel_encoder *encoder);
-void intel_release_shared_dpll(struct intel_shared_dpll *dpll,
-			       struct intel_crtc *crtc,
-			       struct drm_atomic_state *state);
+bool intel_reserve_shared_dplls(struct intel_atomic_state *state,
+				struct intel_crtc *crtc,
+				struct intel_encoder *encoder);
+void intel_release_shared_dplls(struct intel_atomic_state *state,
+				struct intel_crtc *crtc);
+void icl_set_active_port_dpll(struct intel_crtc_state *crtc_state,
+			      enum icl_port_dpll_id port_dpll_id);
+void intel_update_active_dpll(struct intel_atomic_state *state,
+			      struct intel_crtc *crtc,
+			      struct intel_encoder *encoder);
void intel_prepare_shared_dpll(const struct intel_crtc_state *crtc_state);
void intel_enable_shared_dpll(const struct intel_crtc_state *crtc_state);
void intel_disable_shared_dpll(const struct intel_crtc_state *crtc_state);
-void intel_shared_dpll_swap_state(struct drm_atomic_state *state);
+void intel_shared_dpll_swap_state(struct intel_atomic_state *state);
void intel_shared_dpll_init(struct drm_device *dev);

void intel_dpll_dump_hw_state(struct drm_i915_private *dev_priv,

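Since the MG/TC PLL ids sit consecutively after the TBT PLL, one per TC port, the driver can map a zero-based TC port to its PLL id with a single addition. A sketch of that shape; the helper name here is hypothetical and only the enum values shown in the kernel-doc above are taken from the diff:

```c
#include <stdio.h>

/* Values mirror the enum intel_dpll_id entries above. */
enum dpll_id_sketch {
	DPLL_ID_ICL_TBTPLL = 2,
	DPLL_ID_ICL_MGPLL1 = 3,	/* TC port 1; MGPLL2..TGL_MGPLL6 follow at 4..8 */
};

/* Hypothetical helper: consecutive ids make the lookup pure arithmetic. */
static int tc_port_to_pll_id(int tc_port)
{
	return DPLL_ID_ICL_MGPLL1 + tc_port;
}

int main(void)
{
	printf("%d\n", tc_port_to_pll_id(4));	/* TC5 -> 7 (TGL_MGPLL5) */
	return 0;
}
```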
@@ -49,8 +49,11 @@ struct intel_dsi {

	struct intel_connector *attached_connector;

-	/* bit mask of ports being driven */
-	u16 ports;
+	/* bit mask of ports (vlv dsi) or phys (icl dsi) being driven */
+	union {
+		u16 ports;	/* VLV DSI */
+		u16 phys;	/* ICL DSI */
+	};

	/* if true, use HS mode, otherwise LP */
	bool hs;

@@ -132,7 +135,10 @@ static inline struct intel_dsi_host *to_intel_dsi_host(struct mipi_dsi_host *h)
	return container_of(h, struct intel_dsi_host, base);
}

-#define for_each_dsi_port(__port, __ports_mask) for_each_port_masked(__port, __ports_mask)
+#define for_each_dsi_port(__port, __ports_mask) \
+	for_each_port_masked(__port, __ports_mask)
+#define for_each_dsi_phy(__phy, __phys_mask) \
+	for_each_phy_masked(__phy, __phys_mask)

static inline struct intel_dsi *enc_to_intel_dsi(struct drm_encoder *encoder)
{

@@ -94,11 +94,25 @@ static const struct gmbus_pin gmbus_pins_mcc[] = {
	[GMBUS_PIN_9_TC1_ICP] = { "dpc", GPIOJ },
};

+static const struct gmbus_pin gmbus_pins_tgp[] = {
+	[GMBUS_PIN_1_BXT] = { "dpa", GPIOB },
+	[GMBUS_PIN_2_BXT] = { "dpb", GPIOC },
+	[GMBUS_PIN_3_BXT] = { "dpc", GPIOD },
+	[GMBUS_PIN_9_TC1_ICP] = { "tc1", GPIOJ },
+	[GMBUS_PIN_10_TC2_ICP] = { "tc2", GPIOK },
+	[GMBUS_PIN_11_TC3_ICP] = { "tc3", GPIOL },
+	[GMBUS_PIN_12_TC4_ICP] = { "tc4", GPIOM },
+	[GMBUS_PIN_13_TC5_TGP] = { "tc5", GPION },
+	[GMBUS_PIN_14_TC6_TGP] = { "tc6", GPIOO },
+};
+
/* pin is expected to be valid */
static const struct gmbus_pin *get_gmbus_pin(struct drm_i915_private *dev_priv,
					     unsigned int pin)
{
-	if (HAS_PCH_MCC(dev_priv))
+	if (HAS_PCH_TGP(dev_priv))
+		return &gmbus_pins_tgp[pin];
+	else if (HAS_PCH_MCC(dev_priv))
		return &gmbus_pins_mcc[pin];
	else if (HAS_PCH_ICP(dev_priv))
		return &gmbus_pins_icp[pin];

@@ -119,7 +133,9 @@ bool intel_gmbus_is_valid_pin(struct drm_i915_private *dev_priv,
{
	unsigned int size;

-	if (HAS_PCH_MCC(dev_priv))
+	if (HAS_PCH_TGP(dev_priv))
+		size = ARRAY_SIZE(gmbus_pins_tgp);
+	else if (HAS_PCH_MCC(dev_priv))
		size = ARRAY_SIZE(gmbus_pins_mcc);
	else if (HAS_PCH_ICP(dev_priv))
		size = ARRAY_SIZE(gmbus_pins_icp);

@@ -523,12 +523,16 @@ int intel_hdcp_auth_downstream(struct intel_connector *connector)
	 * authentication.
	 */
	num_downstream = DRM_HDCP_NUM_DOWNSTREAM(bstatus[0]);
-	if (num_downstream == 0)
+	if (num_downstream == 0) {
+		DRM_DEBUG_KMS("Repeater with zero downstream devices\n");
		return -EINVAL;
+	}

	ksv_fifo = kcalloc(DRM_HDCP_KSV_LEN, num_downstream, GFP_KERNEL);
-	if (!ksv_fifo)
+	if (!ksv_fifo) {
+		DRM_DEBUG_KMS("Out of mem: ksv_fifo\n");
		return -ENOMEM;
+	}

	ret = shim->read_ksv_fifo(intel_dig_port, num_downstream, ksv_fifo);
	if (ret)

@@ -1206,8 +1210,10 @@ static int hdcp2_authentication_key_exchange(struct intel_connector *connector)
	if (ret < 0)
		return ret;

-	if (msgs.send_cert.rx_caps[0] != HDCP_2_2_RX_CAPS_VERSION_VAL)
+	if (msgs.send_cert.rx_caps[0] != HDCP_2_2_RX_CAPS_VERSION_VAL) {
+		DRM_DEBUG_KMS("cert.rx_caps dont claim HDCP2.2\n");
		return -EINVAL;
+	}

	hdcp->is_repeater = HDCP_2_2_RX_REPEATER(msgs.send_cert.rx_caps[2]);

@@ -2930,51 +2930,34 @@ static u8 cnp_port_to_ddc_pin(struct drm_i915_private *dev_priv,

static u8 icl_port_to_ddc_pin(struct drm_i915_private *dev_priv, enum port port)
{
-	u8 ddc_pin;
+	enum phy phy = intel_port_to_phy(dev_priv, port);

-	switch (port) {
-	case PORT_A:
-		ddc_pin = GMBUS_PIN_1_BXT;
-		break;
-	case PORT_B:
-		ddc_pin = GMBUS_PIN_2_BXT;
-		break;
-	case PORT_C:
-		ddc_pin = GMBUS_PIN_9_TC1_ICP;
-		break;
-	case PORT_D:
-		ddc_pin = GMBUS_PIN_10_TC2_ICP;
-		break;
-	case PORT_E:
-		ddc_pin = GMBUS_PIN_11_TC3_ICP;
-		break;
-	case PORT_F:
-		ddc_pin = GMBUS_PIN_12_TC4_ICP;
-		break;
-	default:
-		MISSING_CASE(port);
-		ddc_pin = GMBUS_PIN_2_BXT;
-		break;
-	}
-	return ddc_pin;
+	if (intel_phy_is_combo(dev_priv, phy))
+		return GMBUS_PIN_1_BXT + port;
+	else if (intel_phy_is_tc(dev_priv, phy))
+		return GMBUS_PIN_9_TC1_ICP + intel_port_to_tc(dev_priv, port);
+
+	WARN(1, "Unknown port:%c\n", port_name(port));
+	return GMBUS_PIN_2_BXT;
}

static u8 mcc_port_to_ddc_pin(struct drm_i915_private *dev_priv, enum port port)
{
+	enum phy phy = intel_port_to_phy(dev_priv, port);
	u8 ddc_pin;

-	switch (port) {
-	case PORT_A:
+	switch (phy) {
+	case PHY_A:
		ddc_pin = GMBUS_PIN_1_BXT;
		break;
-	case PORT_B:
+	case PHY_B:
		ddc_pin = GMBUS_PIN_2_BXT;
		break;
-	case PORT_C:
+	case PHY_C:
		ddc_pin = GMBUS_PIN_9_TC1_ICP;
		break;
	default:
-		MISSING_CASE(port);
+		MISSING_CASE(phy);
		ddc_pin = GMBUS_PIN_1_BXT;
		break;
	}

@@ -3019,7 +3002,7 @@ static u8 intel_hdmi_ddc_pin(struct drm_i915_private *dev_priv,

	if (HAS_PCH_MCC(dev_priv))
		ddc_pin = mcc_port_to_ddc_pin(dev_priv, port);
-	else if (HAS_PCH_ICP(dev_priv))
+	else if (HAS_PCH_TGP(dev_priv) || HAS_PCH_ICP(dev_priv))
		ddc_pin = icl_port_to_ddc_pin(dev_priv, port);
	else if (HAS_PCH_CNP(dev_priv))
		ddc_pin = cnp_port_to_ddc_pin(dev_priv, port);

@@ -3143,6 +3126,32 @@ void intel_hdmi_init_connector(struct intel_digital_port *intel_dig_port,
		DRM_DEBUG_KMS("CEC notifier get failed\n");
}

+static enum intel_hotplug_state
+intel_hdmi_hotplug(struct intel_encoder *encoder,
+		   struct intel_connector *connector, bool irq_received)
+{
+	enum intel_hotplug_state state;
+
+	state = intel_encoder_hotplug(encoder, connector, irq_received);
+
+	/*
+	 * On many platforms the HDMI live state signal is known to be
+	 * unreliable, so we can't use it to detect if a sink is connected or
+	 * not. Instead we detect if it's connected based on whether we can
+	 * read the EDID or not. That in turn has a problem during disconnect,
+	 * since the HPD interrupt may be raised before the DDC lines get
+	 * disconnected (due to how the required length of DDC vs. HPD
+	 * connector pins are specified) and so we'll still be able to get a
+	 * valid EDID. To solve this schedule another detection cycle if this
+	 * time around we didn't detect any change in the sink's connection
+	 * status.
+	 */
+	if (state == INTEL_HOTPLUG_UNCHANGED && irq_received)
+		state = INTEL_HOTPLUG_RETRY;
+
+	return state;
+}
+
void intel_hdmi_init(struct drm_i915_private *dev_priv,
		     i915_reg_t hdmi_reg, enum port port)
{

@@ -3166,7 +3175,7 @@ void intel_hdmi_init(struct drm_i915_private *dev_priv,
			 &intel_hdmi_enc_funcs, DRM_MODE_ENCODER_TMDS,
			 "HDMI %c", port_name(port));

-	intel_encoder->hotplug = intel_encoder_hotplug;
+	intel_encoder->hotplug = intel_hdmi_hotplug;
	intel_encoder->compute_config = intel_hdmi_compute_config;
	if (HAS_PCH_SPLIT(dev_priv)) {
		intel_encoder->disable = pch_disable_hdmi;

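The rewritten icl_port_to_ddc_pin() replaces case-by-case tables with two linear mappings: combo PHY ports count up from port A's pin, TC ports count up from TC1's pin. A stand-alone sketch of the arithmetic, with the GMBUS pin bases stubbed to illustrative values (the real ones come from the gmbus_pins_* tables):

```c
#include <stdio.h>

/* Illustrative stand-ins for the pin bases used above. */
#define GMBUS_PIN_1_BXT		1
#define GMBUS_PIN_9_TC1_ICP	9

/* Combo PHYs map linearly from port A, TC PHYs linearly from TC1; the
 * deleted switch statements encoded the same arithmetic case by case. */
static int ddc_pin(int is_combo, int index)
{
	return is_combo ? GMBUS_PIN_1_BXT + index
			: GMBUS_PIN_9_TC1_ICP + index;
}

int main(void)
{
	/* Port B (combo index 1) -> pin 2, TC3 (tc index 2) -> pin 11. */
	printf("%d %d\n", ddc_pin(1, 1), ddc_pin(0, 2));
	return 0;
}
```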
@@ -112,6 +112,7 @@ enum hpd_pin intel_hpd_pin_default(struct drm_i915_private *dev_priv,

#define HPD_STORM_DETECT_PERIOD 1000
#define HPD_STORM_REENABLE_DELAY (2 * 60 * 1000)
+#define HPD_RETRY_DELAY 1000

/**
 * intel_hpd_irq_storm_detect - gather stats and detect HPD IRQ storm on a pin

@@ -266,8 +267,10 @@ static void intel_hpd_irq_storm_reenable_work(struct work_struct *work)
	intel_runtime_pm_put(&dev_priv->runtime_pm, wakeref);
}

-bool intel_encoder_hotplug(struct intel_encoder *encoder,
-			   struct intel_connector *connector)
+enum intel_hotplug_state
+intel_encoder_hotplug(struct intel_encoder *encoder,
+		      struct intel_connector *connector,
+		      bool irq_received)
{
	struct drm_device *dev = connector->base.dev;
	enum drm_connector_status old_status;

@@ -279,7 +282,7 @@ bool intel_encoder_hotplug(struct intel_encoder *encoder,
	drm_helper_probe_detect(&connector->base, NULL, false);

	if (old_status == connector->base.status)
-		return false;
+		return INTEL_HOTPLUG_UNCHANGED;

	DRM_DEBUG_KMS("[CONNECTOR:%d:%s] status updated from %s to %s\n",
		      connector->base.base.id,

@@ -287,7 +290,7 @@ bool intel_encoder_hotplug(struct intel_encoder *encoder,
		      drm_get_connector_status_name(old_status),
		      drm_get_connector_status_name(connector->base.status));

-	return true;
+	return INTEL_HOTPLUG_CHANGED;
}

static bool intel_encoder_has_hpd_pulse(struct intel_encoder *encoder)

@@ -339,7 +342,7 @@ static void i915_digport_work_func(struct work_struct *work)
		spin_lock_irq(&dev_priv->irq_lock);
		dev_priv->hotplug.event_bits |= old_bits;
		spin_unlock_irq(&dev_priv->irq_lock);
-		schedule_work(&dev_priv->hotplug.hotplug_work);
+		queue_delayed_work(system_wq, &dev_priv->hotplug.hotplug_work, 0);
	}
}

@@ -349,14 +352,16 @@ static void i915_digport_work_func(struct work_struct *work)
static void i915_hotplug_work_func(struct work_struct *work)
{
	struct drm_i915_private *dev_priv =
-		container_of(work, struct drm_i915_private, hotplug.hotplug_work);
+		container_of(work, struct drm_i915_private,
+			     hotplug.hotplug_work.work);
	struct drm_device *dev = &dev_priv->drm;
	struct intel_connector *intel_connector;
	struct intel_encoder *intel_encoder;
	struct drm_connector *connector;
	struct drm_connector_list_iter conn_iter;
-	bool changed = false;
+	u32 changed = 0, retry = 0;
	u32 hpd_event_bits;
+	u32 hpd_retry_bits;

	mutex_lock(&dev->mode_config.mutex);
	DRM_DEBUG_KMS("running encoder hotplug functions\n");

@@ -365,6 +370,8 @@ static void i915_hotplug_work_func(struct work_struct *work)

	hpd_event_bits = dev_priv->hotplug.event_bits;
	dev_priv->hotplug.event_bits = 0;
+	hpd_retry_bits = dev_priv->hotplug.retry_bits;
+	dev_priv->hotplug.retry_bits = 0;

	/* Enable polling for connectors which had HPD IRQ storms */
	intel_hpd_irq_storm_switch_to_polling(dev_priv);

@@ -373,16 +380,29 @@ static void i915_hotplug_work_func(struct work_struct *work)

	drm_connector_list_iter_begin(dev, &conn_iter);
	drm_for_each_connector_iter(connector, &conn_iter) {
+		u32 hpd_bit;
+
		intel_connector = to_intel_connector(connector);
		if (!intel_connector->encoder)
			continue;
		intel_encoder = intel_connector->encoder;
-		if (hpd_event_bits & (1 << intel_encoder->hpd_pin)) {
+		hpd_bit = BIT(intel_encoder->hpd_pin);
+		if ((hpd_event_bits | hpd_retry_bits) & hpd_bit) {
			DRM_DEBUG_KMS("Connector %s (pin %i) received hotplug event.\n",
				      connector->name, intel_encoder->hpd_pin);

-			changed |= intel_encoder->hotplug(intel_encoder,
-							  intel_connector);
+			switch (intel_encoder->hotplug(intel_encoder,
+						       intel_connector,
+						       hpd_event_bits & hpd_bit)) {
+			case INTEL_HOTPLUG_UNCHANGED:
+				break;
+			case INTEL_HOTPLUG_CHANGED:
+				changed |= hpd_bit;
+				break;
+			case INTEL_HOTPLUG_RETRY:
+				retry |= hpd_bit;
+				break;
+			}
		}
	}
	drm_connector_list_iter_end(&conn_iter);

@@ -390,6 +410,17 @@ static void i915_hotplug_work_func(struct work_struct *work)

	if (changed)
		drm_kms_helper_hotplug_event(dev);
+
+	/* Remove shared HPD pins that have changed */
+	retry &= ~changed;
+	if (retry) {
+		spin_lock_irq(&dev_priv->irq_lock);
+		dev_priv->hotplug.retry_bits |= retry;
+		spin_unlock_irq(&dev_priv->irq_lock);
+
+		mod_delayed_work(system_wq, &dev_priv->hotplug.hotplug_work,
+				 msecs_to_jiffies(HPD_RETRY_DELAY));
+	}
}
@ -516,7 +547,7 @@ void intel_hpd_irq_handler(struct drm_i915_private *dev_priv,
|
|||
if (queue_dig)
|
||||
queue_work(dev_priv->hotplug.dp_wq, &dev_priv->hotplug.dig_port_work);
|
||||
if (queue_hp)
|
||||
schedule_work(&dev_priv->hotplug.hotplug_work);
|
||||
queue_delayed_work(system_wq, &dev_priv->hotplug.hotplug_work, 0);
|
||||
}
|
||||
|
||||
/**
|
||||
|
@ -636,7 +667,8 @@ void intel_hpd_poll_init(struct drm_i915_private *dev_priv)
|
|||
|
||||
void intel_hpd_init_work(struct drm_i915_private *dev_priv)
|
||||
{
|
||||
INIT_WORK(&dev_priv->hotplug.hotplug_work, i915_hotplug_work_func);
|
||||
INIT_DELAYED_WORK(&dev_priv->hotplug.hotplug_work,
|
||||
i915_hotplug_work_func);
|
||||
INIT_WORK(&dev_priv->hotplug.dig_port_work, i915_digport_work_func);
|
||||
INIT_WORK(&dev_priv->hotplug.poll_init_work, i915_hpd_poll_init_work);
|
||||
INIT_DELAYED_WORK(&dev_priv->hotplug.reenable_work,
|
||||
|
@ -650,11 +682,12 @@ void intel_hpd_cancel_work(struct drm_i915_private *dev_priv)
|
|||
dev_priv->hotplug.long_port_mask = 0;
|
||||
dev_priv->hotplug.short_port_mask = 0;
|
||||
dev_priv->hotplug.event_bits = 0;
|
||||
dev_priv->hotplug.retry_bits = 0;
|
||||
|
||||
spin_unlock_irq(&dev_priv->irq_lock);
|
||||
|
||||
cancel_work_sync(&dev_priv->hotplug.dig_port_work);
|
||||
cancel_work_sync(&dev_priv->hotplug.hotplug_work);
|
||||
cancel_delayed_work_sync(&dev_priv->hotplug.hotplug_work);
|
||||
cancel_work_sync(&dev_priv->hotplug.poll_init_work);
|
||||
cancel_delayed_work_sync(&dev_priv->hotplug.reenable_work);
|
||||
}
|
||||
|
|
|
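For reference, the retry contract these hunks introduce boils down to three
states. A minimal sketch of the enum (the values are the ones consumed by the
switch above; the definition site, presumably intel_hotplug.h, is not part of
this excerpt):

	/* Result of an encoder's ->hotplug() hook, as consumed by the
	 * switch in i915_hotplug_work_func() above. */
	enum intel_hotplug_state {
		INTEL_HOTPLUG_UNCHANGED,	/* nothing to do */
		INTEL_HOTPLUG_CHANGED,		/* send a KMS hotplug event */
		INTEL_HOTPLUG_RETRY,		/* re-run after HPD_RETRY_DELAY */
	};

A hook returning INTEL_HOTPLUG_RETRY gets its pin recorded in retry_bits and
the worker re-queues itself HPD_RETRY_DELAY (1s) later; the "retry &= ~changed"
step drops pins that a sibling encoder on the same pin already resolved. The
irq_received argument (hpd_event_bits & hpd_bit) lets a hook tell a fresh IRQ
apart from one of these retry passes.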
@@ -15,8 +15,9 @@ struct intel_connector;
 struct intel_encoder;
 
 void intel_hpd_poll_init(struct drm_i915_private *dev_priv);
-bool intel_encoder_hotplug(struct intel_encoder *encoder,
-			   struct intel_connector *connector);
+enum intel_hotplug_state intel_encoder_hotplug(struct intel_encoder *encoder,
+					       struct intel_connector *connector,
+					       bool irq_received);
 void intel_hpd_irq_handler(struct drm_i915_private *dev_priv,
 			   u32 pin_mask, u32 long_mask);
 void intel_hpd_init(struct drm_i915_private *dev_priv);
@@ -175,6 +175,7 @@ struct overlay_registers {
 
 struct intel_overlay {
 	struct drm_i915_private *i915;
+	struct intel_context *context;
 	struct intel_crtc *crtc;
 	struct i915_vma *vma;
 	struct i915_vma *old_vma;

@@ -239,9 +240,7 @@ static int intel_overlay_do_wait_request(struct intel_overlay *overlay,
 
 static struct i915_request *alloc_request(struct intel_overlay *overlay)
 {
-	struct intel_engine_cs *engine = overlay->i915->engine[RCS0];
-
-	return i915_request_create(engine->kernel_context);
+	return i915_request_create(overlay->context);
 }
 
 /* overlay needs to be disable in OCMD reg */

@@ -1359,11 +1358,16 @@ void intel_overlay_setup(struct drm_i915_private *dev_priv)
 	if (!HAS_OVERLAY(dev_priv))
 		return;
 
+	if (!HAS_ENGINE(dev_priv, RCS0))
+		return;
+
 	overlay = kzalloc(sizeof(*overlay), GFP_KERNEL);
 	if (!overlay)
 		return;
 
 	overlay->i915 = dev_priv;
+	overlay->context = dev_priv->engine[RCS0]->kernel_context;
+	GEM_BUG_ON(!overlay->context);
 
 	overlay->color_key = 0x0101fe;
 	overlay->color_key_enabled = true;
@@ -667,5 +667,5 @@ void intel_crtc_disable_pipe_crc(struct intel_crtc *intel_crtc)
 
 	I915_WRITE(PIPE_CRC_CTL(crtc->index), 0);
 	POSTING_READ(PIPE_CRC_CTL(crtc->index));
-	synchronize_irq(dev_priv->drm.irq);
+	intel_synchronize_irq(dev_priv);
 }
@@ -274,130 +274,145 @@ static bool intel_sdvo_read_byte(struct intel_sdvo *intel_sdvo, u8 addr, u8 *ch)
 	return false;
 }
 
-#define SDVO_CMD_NAME_ENTRY(cmd) {cmd, #cmd}
+#define SDVO_CMD_NAME_ENTRY(cmd_) { .cmd = SDVO_CMD_ ## cmd_, .name = #cmd_ }
 
 /** Mapping of command numbers to names, for debug output */
-static const struct _sdvo_cmd_name {
+static const struct {
 	u8 cmd;
 	const char *name;
 } __attribute__ ((packed)) sdvo_cmd_names[] = {
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_RESET),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_DEVICE_CAPS),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_FIRMWARE_REV),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_TRAINED_INPUTS),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_ACTIVE_OUTPUTS),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_SET_ACTIVE_OUTPUTS),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_IN_OUT_MAP),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_SET_IN_OUT_MAP),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_ATTACHED_DISPLAYS),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_HOT_PLUG_SUPPORT),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_SET_ACTIVE_HOT_PLUG),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_ACTIVE_HOT_PLUG),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_INTERRUPT_EVENT_SOURCE),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_SET_TARGET_INPUT),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_SET_TARGET_OUTPUT),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_INPUT_TIMINGS_PART1),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_INPUT_TIMINGS_PART2),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_SET_INPUT_TIMINGS_PART1),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_SET_INPUT_TIMINGS_PART2),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_SET_INPUT_TIMINGS_PART1),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_SET_OUTPUT_TIMINGS_PART1),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_SET_OUTPUT_TIMINGS_PART2),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_OUTPUT_TIMINGS_PART1),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_OUTPUT_TIMINGS_PART2),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_CREATE_PREFERRED_INPUT_TIMING),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_PREFERRED_INPUT_TIMING_PART1),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_PREFERRED_INPUT_TIMING_PART2),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_INPUT_PIXEL_CLOCK_RANGE),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_OUTPUT_PIXEL_CLOCK_RANGE),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_SUPPORTED_CLOCK_RATE_MULTS),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_CLOCK_RATE_MULT),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_SET_CLOCK_RATE_MULT),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_SUPPORTED_TV_FORMATS),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_TV_FORMAT),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_SET_TV_FORMAT),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_SUPPORTED_POWER_STATES),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_POWER_STATE),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_SET_ENCODER_POWER_STATE),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_SET_DISPLAY_POWER_STATE),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_SET_CONTROL_BUS_SWITCH),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_SDTV_RESOLUTION_SUPPORT),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_SCALED_HDTV_RESOLUTION_SUPPORT),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_SUPPORTED_ENHANCEMENTS),
+	SDVO_CMD_NAME_ENTRY(RESET),
+	SDVO_CMD_NAME_ENTRY(GET_DEVICE_CAPS),
+	SDVO_CMD_NAME_ENTRY(GET_FIRMWARE_REV),
+	SDVO_CMD_NAME_ENTRY(GET_TRAINED_INPUTS),
+	SDVO_CMD_NAME_ENTRY(GET_ACTIVE_OUTPUTS),
+	SDVO_CMD_NAME_ENTRY(SET_ACTIVE_OUTPUTS),
+	SDVO_CMD_NAME_ENTRY(GET_IN_OUT_MAP),
+	SDVO_CMD_NAME_ENTRY(SET_IN_OUT_MAP),
+	SDVO_CMD_NAME_ENTRY(GET_ATTACHED_DISPLAYS),
+	SDVO_CMD_NAME_ENTRY(GET_HOT_PLUG_SUPPORT),
+	SDVO_CMD_NAME_ENTRY(SET_ACTIVE_HOT_PLUG),
+	SDVO_CMD_NAME_ENTRY(GET_ACTIVE_HOT_PLUG),
+	SDVO_CMD_NAME_ENTRY(GET_INTERRUPT_EVENT_SOURCE),
+	SDVO_CMD_NAME_ENTRY(SET_TARGET_INPUT),
+	SDVO_CMD_NAME_ENTRY(SET_TARGET_OUTPUT),
+	SDVO_CMD_NAME_ENTRY(GET_INPUT_TIMINGS_PART1),
+	SDVO_CMD_NAME_ENTRY(GET_INPUT_TIMINGS_PART2),
+	SDVO_CMD_NAME_ENTRY(SET_INPUT_TIMINGS_PART1),
+	SDVO_CMD_NAME_ENTRY(SET_INPUT_TIMINGS_PART2),
+	SDVO_CMD_NAME_ENTRY(SET_OUTPUT_TIMINGS_PART1),
+	SDVO_CMD_NAME_ENTRY(SET_OUTPUT_TIMINGS_PART2),
+	SDVO_CMD_NAME_ENTRY(GET_OUTPUT_TIMINGS_PART1),
+	SDVO_CMD_NAME_ENTRY(GET_OUTPUT_TIMINGS_PART2),
+	SDVO_CMD_NAME_ENTRY(CREATE_PREFERRED_INPUT_TIMING),
+	SDVO_CMD_NAME_ENTRY(GET_PREFERRED_INPUT_TIMING_PART1),
+	SDVO_CMD_NAME_ENTRY(GET_PREFERRED_INPUT_TIMING_PART2),
+	SDVO_CMD_NAME_ENTRY(GET_INPUT_PIXEL_CLOCK_RANGE),
+	SDVO_CMD_NAME_ENTRY(GET_OUTPUT_PIXEL_CLOCK_RANGE),
+	SDVO_CMD_NAME_ENTRY(GET_SUPPORTED_CLOCK_RATE_MULTS),
+	SDVO_CMD_NAME_ENTRY(GET_CLOCK_RATE_MULT),
+	SDVO_CMD_NAME_ENTRY(SET_CLOCK_RATE_MULT),
+	SDVO_CMD_NAME_ENTRY(GET_SUPPORTED_TV_FORMATS),
+	SDVO_CMD_NAME_ENTRY(GET_TV_FORMAT),
+	SDVO_CMD_NAME_ENTRY(SET_TV_FORMAT),
+	SDVO_CMD_NAME_ENTRY(GET_SUPPORTED_POWER_STATES),
+	SDVO_CMD_NAME_ENTRY(GET_POWER_STATE),
+	SDVO_CMD_NAME_ENTRY(SET_ENCODER_POWER_STATE),
+	SDVO_CMD_NAME_ENTRY(SET_DISPLAY_POWER_STATE),
+	SDVO_CMD_NAME_ENTRY(SET_CONTROL_BUS_SWITCH),
+	SDVO_CMD_NAME_ENTRY(GET_SDTV_RESOLUTION_SUPPORT),
+	SDVO_CMD_NAME_ENTRY(GET_SCALED_HDTV_RESOLUTION_SUPPORT),
+	SDVO_CMD_NAME_ENTRY(GET_SUPPORTED_ENHANCEMENTS),
 
 	/* Add the op code for SDVO enhancements */
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_MAX_HPOS),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_HPOS),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_SET_HPOS),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_MAX_VPOS),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_VPOS),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_SET_VPOS),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_MAX_SATURATION),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_SATURATION),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_SET_SATURATION),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_MAX_HUE),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_HUE),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_SET_HUE),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_MAX_CONTRAST),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_CONTRAST),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_SET_CONTRAST),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_MAX_BRIGHTNESS),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_BRIGHTNESS),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_SET_BRIGHTNESS),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_MAX_OVERSCAN_H),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_OVERSCAN_H),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_SET_OVERSCAN_H),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_MAX_OVERSCAN_V),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_OVERSCAN_V),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_SET_OVERSCAN_V),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_MAX_FLICKER_FILTER),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_FLICKER_FILTER),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_SET_FLICKER_FILTER),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_MAX_FLICKER_FILTER_ADAPTIVE),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_FLICKER_FILTER_ADAPTIVE),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_SET_FLICKER_FILTER_ADAPTIVE),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_MAX_FLICKER_FILTER_2D),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_FLICKER_FILTER_2D),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_SET_FLICKER_FILTER_2D),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_MAX_SHARPNESS),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_SHARPNESS),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_SET_SHARPNESS),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_DOT_CRAWL),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_SET_DOT_CRAWL),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_MAX_TV_CHROMA_FILTER),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_TV_CHROMA_FILTER),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_SET_TV_CHROMA_FILTER),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_MAX_TV_LUMA_FILTER),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_TV_LUMA_FILTER),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_SET_TV_LUMA_FILTER),
+	SDVO_CMD_NAME_ENTRY(GET_MAX_HPOS),
+	SDVO_CMD_NAME_ENTRY(GET_HPOS),
+	SDVO_CMD_NAME_ENTRY(SET_HPOS),
+	SDVO_CMD_NAME_ENTRY(GET_MAX_VPOS),
+	SDVO_CMD_NAME_ENTRY(GET_VPOS),
+	SDVO_CMD_NAME_ENTRY(SET_VPOS),
+	SDVO_CMD_NAME_ENTRY(GET_MAX_SATURATION),
+	SDVO_CMD_NAME_ENTRY(GET_SATURATION),
+	SDVO_CMD_NAME_ENTRY(SET_SATURATION),
+	SDVO_CMD_NAME_ENTRY(GET_MAX_HUE),
+	SDVO_CMD_NAME_ENTRY(GET_HUE),
+	SDVO_CMD_NAME_ENTRY(SET_HUE),
+	SDVO_CMD_NAME_ENTRY(GET_MAX_CONTRAST),
+	SDVO_CMD_NAME_ENTRY(GET_CONTRAST),
+	SDVO_CMD_NAME_ENTRY(SET_CONTRAST),
+	SDVO_CMD_NAME_ENTRY(GET_MAX_BRIGHTNESS),
+	SDVO_CMD_NAME_ENTRY(GET_BRIGHTNESS),
+	SDVO_CMD_NAME_ENTRY(SET_BRIGHTNESS),
+	SDVO_CMD_NAME_ENTRY(GET_MAX_OVERSCAN_H),
+	SDVO_CMD_NAME_ENTRY(GET_OVERSCAN_H),
+	SDVO_CMD_NAME_ENTRY(SET_OVERSCAN_H),
+	SDVO_CMD_NAME_ENTRY(GET_MAX_OVERSCAN_V),
+	SDVO_CMD_NAME_ENTRY(GET_OVERSCAN_V),
+	SDVO_CMD_NAME_ENTRY(SET_OVERSCAN_V),
+	SDVO_CMD_NAME_ENTRY(GET_MAX_FLICKER_FILTER),
+	SDVO_CMD_NAME_ENTRY(GET_FLICKER_FILTER),
+	SDVO_CMD_NAME_ENTRY(SET_FLICKER_FILTER),
+	SDVO_CMD_NAME_ENTRY(GET_MAX_FLICKER_FILTER_ADAPTIVE),
+	SDVO_CMD_NAME_ENTRY(GET_FLICKER_FILTER_ADAPTIVE),
+	SDVO_CMD_NAME_ENTRY(SET_FLICKER_FILTER_ADAPTIVE),
+	SDVO_CMD_NAME_ENTRY(GET_MAX_FLICKER_FILTER_2D),
+	SDVO_CMD_NAME_ENTRY(GET_FLICKER_FILTER_2D),
+	SDVO_CMD_NAME_ENTRY(SET_FLICKER_FILTER_2D),
+	SDVO_CMD_NAME_ENTRY(GET_MAX_SHARPNESS),
+	SDVO_CMD_NAME_ENTRY(GET_SHARPNESS),
+	SDVO_CMD_NAME_ENTRY(SET_SHARPNESS),
+	SDVO_CMD_NAME_ENTRY(GET_DOT_CRAWL),
+	SDVO_CMD_NAME_ENTRY(SET_DOT_CRAWL),
+	SDVO_CMD_NAME_ENTRY(GET_MAX_TV_CHROMA_FILTER),
+	SDVO_CMD_NAME_ENTRY(GET_TV_CHROMA_FILTER),
+	SDVO_CMD_NAME_ENTRY(SET_TV_CHROMA_FILTER),
+	SDVO_CMD_NAME_ENTRY(GET_MAX_TV_LUMA_FILTER),
+	SDVO_CMD_NAME_ENTRY(GET_TV_LUMA_FILTER),
+	SDVO_CMD_NAME_ENTRY(SET_TV_LUMA_FILTER),
 
 	/* HDMI op code */
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_SUPP_ENCODE),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_ENCODE),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_SET_ENCODE),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_SET_PIXEL_REPLI),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_PIXEL_REPLI),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_COLORIMETRY_CAP),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_SET_COLORIMETRY),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_COLORIMETRY),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_AUDIO_ENCRYPT_PREFER),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_SET_AUDIO_STAT),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_AUDIO_STAT),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_HBUF_INDEX),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_SET_HBUF_INDEX),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_HBUF_INFO),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_HBUF_AV_SPLIT),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_SET_HBUF_AV_SPLIT),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_HBUF_TXRATE),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_SET_HBUF_TXRATE),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_SET_HBUF_DATA),
-	SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_HBUF_DATA),
+	SDVO_CMD_NAME_ENTRY(GET_SUPP_ENCODE),
+	SDVO_CMD_NAME_ENTRY(GET_ENCODE),
+	SDVO_CMD_NAME_ENTRY(SET_ENCODE),
+	SDVO_CMD_NAME_ENTRY(SET_PIXEL_REPLI),
+	SDVO_CMD_NAME_ENTRY(GET_PIXEL_REPLI),
+	SDVO_CMD_NAME_ENTRY(GET_COLORIMETRY_CAP),
+	SDVO_CMD_NAME_ENTRY(SET_COLORIMETRY),
+	SDVO_CMD_NAME_ENTRY(GET_COLORIMETRY),
+	SDVO_CMD_NAME_ENTRY(GET_AUDIO_ENCRYPT_PREFER),
+	SDVO_CMD_NAME_ENTRY(SET_AUDIO_STAT),
+	SDVO_CMD_NAME_ENTRY(GET_AUDIO_STAT),
+	SDVO_CMD_NAME_ENTRY(GET_HBUF_INDEX),
+	SDVO_CMD_NAME_ENTRY(SET_HBUF_INDEX),
+	SDVO_CMD_NAME_ENTRY(GET_HBUF_INFO),
+	SDVO_CMD_NAME_ENTRY(GET_HBUF_AV_SPLIT),
+	SDVO_CMD_NAME_ENTRY(SET_HBUF_AV_SPLIT),
+	SDVO_CMD_NAME_ENTRY(GET_HBUF_TXRATE),
+	SDVO_CMD_NAME_ENTRY(SET_HBUF_TXRATE),
+	SDVO_CMD_NAME_ENTRY(SET_HBUF_DATA),
+	SDVO_CMD_NAME_ENTRY(GET_HBUF_DATA),
 };
 
+#undef SDVO_CMD_NAME_ENTRY
+
+static const char *sdvo_cmd_name(u8 cmd)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(sdvo_cmd_names); i++) {
+		if (cmd == sdvo_cmd_names[i].cmd)
+			return sdvo_cmd_names[i].name;
+	}
+
+	return NULL;
+}
+
 #define SDVO_NAME(svdo) ((svdo)->port == PORT_B ? "SDVOB" : "SDVOC")
 
 static void intel_sdvo_debug_write(struct intel_sdvo *intel_sdvo, u8 cmd,
 				   const void *args, int args_len)
 {
+	const char *cmd_name;
 	int i, pos = 0;
 #define BUF_LEN 256
 	char buffer[BUF_LEN];
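The reworked macro keeps the table entries short while still pairing each
opcode with its name; expansion is plain token pasting plus stringification,
e.g.:

	/* SDVO_CMD_NAME_ENTRY(GET_TRAINED_INPUTS) expands to: */
	{ .cmd = SDVO_CMD_GET_TRAINED_INPUTS, .name = "GET_TRAINED_INPUTS" },

so the stored strings drop the redundant "SDVO_CMD_" prefix, and the new
sdvo_cmd_name() helper gives both debug paths a single lookup to share.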
@@ -412,15 +427,12 @@ static void intel_sdvo_debug_write(struct intel_sdvo *intel_sdvo, u8 cmd,
 	for (; i < 8; i++) {
 		BUF_PRINT("   ");
 	}
-	for (i = 0; i < ARRAY_SIZE(sdvo_cmd_names); i++) {
-		if (cmd == sdvo_cmd_names[i].cmd) {
-			BUF_PRINT("(%s)", sdvo_cmd_names[i].name);
-			break;
-		}
-	}
-	if (i == ARRAY_SIZE(sdvo_cmd_names)) {
+
+	cmd_name = sdvo_cmd_name(cmd);
+	if (cmd_name)
+		BUF_PRINT("(%s)", cmd_name);
+	else
 		BUF_PRINT("(%02X)", cmd);
-	}
+
 	BUG_ON(pos >= BUF_LEN - 1);
 #undef BUF_PRINT
 #undef BUF_LEN
@@ -429,15 +441,23 @@ static void intel_sdvo_debug_write(struct intel_sdvo *intel_sdvo, u8 cmd,
 }
 
 static const char * const cmd_status_names[] = {
-	"Power on",
-	"Success",
-	"Not supported",
-	"Invalid arg",
-	"Pending",
-	"Target not specified",
-	"Scaling not supported"
+	[SDVO_CMD_STATUS_POWER_ON] = "Power on",
+	[SDVO_CMD_STATUS_SUCCESS] = "Success",
+	[SDVO_CMD_STATUS_NOTSUPP] = "Not supported",
+	[SDVO_CMD_STATUS_INVALID_ARG] = "Invalid arg",
+	[SDVO_CMD_STATUS_PENDING] = "Pending",
+	[SDVO_CMD_STATUS_TARGET_NOT_SPECIFIED] = "Target not specified",
+	[SDVO_CMD_STATUS_SCALING_NOT_SUPP] = "Scaling not supported",
 };
 
+static const char *sdvo_cmd_status(u8 status)
+{
+	if (status < ARRAY_SIZE(cmd_status_names))
+		return cmd_status_names[status];
+	else
+		return NULL;
+}
+
 static bool __intel_sdvo_write_cmd(struct intel_sdvo *intel_sdvo, u8 cmd,
 				   const void *args, int args_len,
 				   bool unlocked)
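Indexing the status strings with designated initializers ties each string to
its SDVO_CMD_STATUS_* value rather than its declaration position, so a
reordered or gappy enum can no longer shift every label by one. Example
lookups through the new helper, per the table above:

	sdvo_cmd_status(SDVO_CMD_STATUS_PENDING);		/* -> "Pending" */
	sdvo_cmd_status(SDVO_CMD_STATUS_SCALING_NOT_SUPP + 1);	/* -> NULL */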
@@ -516,6 +536,7 @@ static bool intel_sdvo_write_cmd(struct intel_sdvo *intel_sdvo, u8 cmd,
 static bool intel_sdvo_read_response(struct intel_sdvo *intel_sdvo,
 				     void *response, int response_len)
 {
+	const char *cmd_status;
 	u8 retry = 15; /* 5 quick checks, followed by 10 long checks */
 	u8 status;
 	int i, pos = 0;

@@ -562,8 +583,9 @@ static bool intel_sdvo_read_response(struct intel_sdvo *intel_sdvo,
 #define BUF_PRINT(args...) \
 	pos += snprintf(buffer + pos, max_t(int, BUF_LEN - pos, 0), args)
 
-	if (status <= SDVO_CMD_STATUS_SCALING_NOT_SUPP)
-		BUF_PRINT("(%s)", cmd_status_names[status]);
+	cmd_status = sdvo_cmd_status(status);
+	if (cmd_status)
+		BUF_PRINT("(%s)", cmd_status);
 	else
 		BUF_PRINT("(??? %d)", status);
 
@@ -929,6 +951,20 @@ static bool intel_sdvo_set_audio_state(struct intel_sdvo *intel_sdvo,
 				  &audio_state, 1);
 }
 
+static bool intel_sdvo_get_hbuf_size(struct intel_sdvo *intel_sdvo,
+				     u8 *hbuf_size)
+{
+	if (!intel_sdvo_get_value(intel_sdvo, SDVO_CMD_GET_HBUF_INFO,
+				  hbuf_size, 1))
+		return false;
+
+	/* Buffer size is 0 based, hooray! However zero means zero. */
+	if (*hbuf_size)
+		(*hbuf_size)++;
+
+	return true;
+}
+
 #if 0
 static void intel_sdvo_dump_hdmi_buf(struct intel_sdvo *intel_sdvo)
 {
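The helper centralizes a quirky encoding: the HBUF size field is zero-based
(raw N decodes to N + 1 bytes) except that a raw 0 really does mean no buffer.
A standalone restatement of the rule (sdvo_hbuf_bytes is a hypothetical name,
for illustration only):

	static u8 sdvo_hbuf_bytes(u8 raw)
	{
		/* raw 0x00 -> 0 bytes, 0x01 -> 2 bytes, 0x1f -> 32 bytes */
		return raw ? raw + 1 : 0;
	}

Folding this into one place also removes the old call sites' unconditional
hbuf_size++, which silently turned a reported 0 into 1.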
@@ -972,14 +1008,10 @@ static bool intel_sdvo_write_infoframe(struct intel_sdvo *intel_sdvo,
 				  set_buf_index, 2))
 		return false;
 
-	if (!intel_sdvo_get_value(intel_sdvo, SDVO_CMD_GET_HBUF_INFO,
-				  &hbuf_size, 1))
+	if (!intel_sdvo_get_hbuf_size(intel_sdvo, &hbuf_size))
 		return false;
 
-	/* Buffer size is 0 based, hooray! */
-	hbuf_size++;
-
-	DRM_DEBUG_KMS("writing sdvo hbuf: %i, hbuf_size %i, hbuf_size: %i\n",
+	DRM_DEBUG_KMS("writing sdvo hbuf: %i, length %u, hbuf_size: %i\n",
 		      if_index, length, hbuf_size);
 
 	if (hbuf_size < length)

@@ -1030,14 +1062,10 @@ static ssize_t intel_sdvo_read_infoframe(struct intel_sdvo *intel_sdvo,
 	if (tx_rate == SDVO_HBUF_TX_DISABLED)
 		return 0;
 
-	if (!intel_sdvo_get_value(intel_sdvo, SDVO_CMD_GET_HBUF_INFO,
-				  &hbuf_size, 1))
-		return -ENXIO;
+	if (!intel_sdvo_get_hbuf_size(intel_sdvo, &hbuf_size))
+		return false;
 
-	/* Buffer size is 0 based, hooray! */
-	hbuf_size++;
-
-	DRM_DEBUG_KMS("reading sdvo hbuf: %i, hbuf_size %i, hbuf_size: %i\n",
+	DRM_DEBUG_KMS("reading sdvo hbuf: %i, length %u, hbuf_size: %i\n",
 		      if_index, length, hbuf_size);
 
 	hbuf_size = min_t(unsigned int, length, hbuf_size);
@@ -1893,12 +1921,14 @@ static void intel_sdvo_enable_hotplug(struct intel_encoder *encoder)
 			     &intel_sdvo->hotplug_active, 2);
 }
 
-static bool intel_sdvo_hotplug(struct intel_encoder *encoder,
-			       struct intel_connector *connector)
+static enum intel_hotplug_state
+intel_sdvo_hotplug(struct intel_encoder *encoder,
+		   struct intel_connector *connector,
+		   bool irq_received)
 {
 	intel_sdvo_enable_hotplug(encoder);
 
-	return intel_encoder_hotplug(encoder, connector);
+	return intel_encoder_hotplug(encoder, connector, irq_received);
 }
 
 static bool
@@ -441,9 +441,21 @@ icl_program_input_csc(struct intel_plane *plane,
 	 */
 	[DRM_COLOR_YCBCR_BT709] = {
 		0x7C98, 0x7800, 0x0,
-		0x9EF8, 0x7800, 0xABF8,
+		0x9EF8, 0x7800, 0xAC00,
 		0x0, 0x7800, 0x7ED8,
 	},
+	/*
+	 * BT.2020 full range YCbCr -> full range RGB
+	 * The matrix required is :
+	 * [1.000, 0.000, 1.474,
+	 *  1.000, -0.1645, -0.5713,
+	 *  1.000, 1.8814, 0.0000]
+	 */
+	[DRM_COLOR_YCBCR_BT2020] = {
+		0x7BC8, 0x7800, 0x0,
+		0x8928, 0x7800, 0xAA88,
+		0x0, 0x7800, 0x7F10,
+	},
 };
 
 /* Matrix for Limited Range to Full Range Conversion */
@@ -451,26 +463,38 @@ icl_program_input_csc(struct intel_plane *plane,
 	/*
 	 * BT.601 Limted range YCbCr -> full range RGB
 	 * The matrix required is :
-	 * [1.164384, 0.000, 1.596370,
-	 *  1.138393, -0.382500, -0.794598,
-	 *  1.138393, 1.971696, 0.0000]
+	 * [1.164384, 0.000, 1.596027,
+	 *  1.164384, -0.39175, -0.812813,
+	 *  1.164384, 2.017232, 0.0000]
 	 */
 	[DRM_COLOR_YCBCR_BT601] = {
 		0x7CC8, 0x7950, 0x0,
-		0x8CB8, 0x7918, 0x9C40,
-		0x0, 0x7918, 0x7FC8,
+		0x8D00, 0x7950, 0x9C88,
+		0x0, 0x7950, 0x6810,
 	},
 	/*
 	 * BT.709 Limited range YCbCr -> full range RGB
 	 * The matrix required is :
-	 * [1.164, 0.000, 1.833671,
-	 *  1.138393, -0.213249, -0.532909,
-	 *  1.138393, 2.112402, 0.0000]
+	 * [1.164384, 0.000, 1.792741,
+	 *  1.164384, -0.213249, -0.532909,
+	 *  1.164384, 2.112402, 0.0000]
 	 */
 	[DRM_COLOR_YCBCR_BT709] = {
-		0x7EA8, 0x7950, 0x0,
-		0x8888, 0x7918, 0xADA8,
-		0x0, 0x7918, 0x6870,
+		0x7E58, 0x7950, 0x0,
+		0x8888, 0x7950, 0xADA8,
+		0x0, 0x7950, 0x6870,
 	},
+	/*
+	 * BT.2020 Limited range YCbCr -> full range RGB
+	 * The matrix required is :
+	 * [1.164, 0.000, 1.678,
+	 *  1.164, -0.1873, -0.6504,
+	 *  1.164, 2.1417, 0.0000]
+	 */
+	[DRM_COLOR_YCBCR_BT2020] = {
+		0x7D70, 0x7950, 0x0,
+		0x8A68, 0x7950, 0xAC00,
+		0x0, 0x7950, 0x6890,
+	},
 };
 const u16 *csc;
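The corrected limited-range coefficients drop straight out of the BT.709
constants. With K_r = 0.2126, K_b = 0.0722, K_g = 1 - K_r - K_b = 0.7152 and
8bpc limited-range quantization (219 luma steps, 224 chroma steps out of 255):

	\[
	\frac{255}{219} = 1.164384, \qquad
	\frac{255}{224}\cdot 2(1-K_r) = 1.792741, \qquad
	\frac{255}{224}\cdot 2(1-K_b) = 2.112402,
	\]
	\[
	\frac{255}{224}\cdot 2(1-K_r)\,\frac{K_r}{K_g} = 0.532909, \qquad
	\frac{255}{224}\cdot 2(1-K_b)\,\frac{K_b}{K_g} = 0.213249,
	\]

which is exactly the new comment block. The old comment had the two
normalizers crossed: 1.833671 is (255/219) * 1.5748 and 1.138393 is simply
255/224, i.e. the luma and chroma scale factors swapped into each other's
slots.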
@@ -492,8 +516,11 @@ icl_program_input_csc(struct intel_plane *plane,
 
 	I915_WRITE_FW(PLANE_INPUT_CSC_PREOFF(pipe, plane_id, 0),
 		      PREOFF_YUV_TO_RGB_HI);
-	I915_WRITE_FW(PLANE_INPUT_CSC_PREOFF(pipe, plane_id, 1),
-		      PREOFF_YUV_TO_RGB_ME);
+	if (plane_state->base.color_range == DRM_COLOR_YCBCR_FULL_RANGE)
+		I915_WRITE_FW(PLANE_INPUT_CSC_PREOFF(pipe, plane_id, 1), 0);
+	else
+		I915_WRITE_FW(PLANE_INPUT_CSC_PREOFF(pipe, plane_id, 1),
+			      PREOFF_YUV_TO_RGB_ME);
 	I915_WRITE_FW(PLANE_INPUT_CSC_PREOFF(pipe, plane_id, 2),
 		      PREOFF_YUV_TO_RGB_LO);
 	I915_WRITE_FW(PLANE_INPUT_CSC_POSTOFF(pipe, plane_id, 0), 0x0);
@@ -683,6 +710,16 @@ skl_plane_get_hw_state(struct intel_plane *plane,
 	return ret;
 }
 
+static void i9xx_plane_linear_gamma(u16 gamma[8])
+{
+	/* The points are not evenly spaced. */
+	static const u8 in[8] = { 0, 1, 2, 4, 8, 16, 24, 32 };
+	int i;
+
+	for (i = 0; i < 8; i++)
+		gamma[i] = (in[i] << 8) / 32;
+}
+
 static void
 chv_update_csc(const struct intel_plane_state *plane_state)
 {
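Spelled out, the eight points this produces are:

	/* in[]    = {  0,  1,  2,  4,  8,  16,  24,  32 }
	 * gamma[] = {  0,  8, 16, 32, 64, 128, 192, 256 }
	 * i.e. a linear ramp (gamma[i] = in[i] * 8) sampled more densely
	 * near black. The 0.0 and 1.0 endpoints are implicit in hardware,
	 * so the vlv/g4x update functions below only program entries 1..6. */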
@@ -858,6 +895,31 @@ static u32 vlv_sprite_ctl(const struct intel_crtc_state *crtc_state,
 	return sprctl;
 }
 
+static void vlv_update_gamma(const struct intel_plane_state *plane_state)
+{
+	struct intel_plane *plane = to_intel_plane(plane_state->base.plane);
+	struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
+	const struct drm_framebuffer *fb = plane_state->base.fb;
+	enum pipe pipe = plane->pipe;
+	enum plane_id plane_id = plane->id;
+	u16 gamma[8];
+	int i;
+
+	/* Seems RGB data bypasses the gamma always */
+	if (!fb->format->is_yuv)
+		return;
+
+	i9xx_plane_linear_gamma(gamma);
+
+	/* FIXME these register are single buffered :( */
+	/* The two end points are implicit (0.0 and 1.0) */
+	for (i = 1; i < 8 - 1; i++)
+		I915_WRITE_FW(SPGAMC(pipe, plane_id, i - 1),
+			      gamma[i] << 16 |
+			      gamma[i] << 8 |
+			      gamma[i]);
+}
+
 static void
 vlv_update_plane(struct intel_plane *plane,
 		 const struct intel_crtc_state *crtc_state,

@@ -916,6 +978,7 @@ vlv_update_plane(struct intel_plane *plane,
 		      intel_plane_ggtt_offset(plane_state) + sprsurf_offset);
 
 	vlv_update_clrc(plane_state);
+	vlv_update_gamma(plane_state);
 
 	spin_unlock_irqrestore(&dev_priv->uncore.lock, irqflags);
 }
@@ -1013,6 +1076,8 @@ static u32 ivb_sprite_ctl(const struct intel_crtc_state *crtc_state,
 		return 0;
 	}
 
+	sprctl |= SPRITE_INT_GAMMA_DISABLE;
+
 	if (plane_state->base.color_encoding == DRM_COLOR_YCBCR_BT709)
 		sprctl |= SPRITE_YUV_TO_RGB_CSC_FORMAT_BT709;
 

@@ -1033,6 +1098,45 @@ static u32 ivb_sprite_ctl(const struct intel_crtc_state *crtc_state,
 	return sprctl;
 }
 
+static void ivb_sprite_linear_gamma(u16 gamma[18])
+{
+	int i;
+
+	for (i = 0; i < 17; i++)
+		gamma[i] = (i << 10) / 16;
+
+	gamma[i] = 3 << 10;
+	i++;
+}
+
+static void ivb_update_gamma(const struct intel_plane_state *plane_state)
+{
+	struct intel_plane *plane = to_intel_plane(plane_state->base.plane);
+	struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
+	enum pipe pipe = plane->pipe;
+	u16 gamma[18];
+	int i;
+
+	ivb_sprite_linear_gamma(gamma);
+
+	/* FIXME these register are single buffered :( */
+	for (i = 0; i < 16; i++)
+		I915_WRITE_FW(SPRGAMC(pipe, i),
+			      gamma[i] << 20 |
+			      gamma[i] << 10 |
+			      gamma[i]);
+
+	I915_WRITE_FW(SPRGAMC16(pipe, 0), gamma[i]);
+	I915_WRITE_FW(SPRGAMC16(pipe, 1), gamma[i]);
+	I915_WRITE_FW(SPRGAMC16(pipe, 2), gamma[i]);
+	i++;
+
+	I915_WRITE_FW(SPRGAMC17(pipe, 0), gamma[i]);
+	I915_WRITE_FW(SPRGAMC17(pipe, 1), gamma[i]);
+	I915_WRITE_FW(SPRGAMC17(pipe, 2), gamma[i]);
+	i++;
+}
+
 static void
 ivb_update_plane(struct intel_plane *plane,
 		 const struct intel_crtc_state *crtc_state,
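The IVB table has 18 entries on purpose: the first 17 form a linear ramp
(gamma[i] = (i << 10) / 16 = i * 64, so gamma[16] = 1024 = 1.0), and the extra
18th entry is pinned at 3 << 10 = 3072 = 3.0 for the extended-range registers;
SPRGAMC covers entries 0..15 per channel, while SPRGAMC16/SPRGAMC17 take the
1.0 and 3.0 points. (The fixed-point interpretation, 1.0 = 1024, is inferred
from the scaling rather than stated in the patch.)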
@@ -1099,6 +1203,8 @@ ivb_update_plane(struct intel_plane *plane,
 	I915_WRITE_FW(SPRSURF(pipe),
 		      intel_plane_ggtt_offset(plane_state) + sprsurf_offset);
 
+	ivb_update_gamma(plane_state);
+
 	spin_unlock_irqrestore(&dev_priv->uncore.lock, irqflags);
 }
|
@ -1224,6 +1330,66 @@ static u32 g4x_sprite_ctl(const struct intel_crtc_state *crtc_state,
|
|||
return dvscntr;
|
||||
}
|
||||
|
||||
static void g4x_update_gamma(const struct intel_plane_state *plane_state)
|
||||
{
|
||||
struct intel_plane *plane = to_intel_plane(plane_state->base.plane);
|
||||
struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
|
||||
const struct drm_framebuffer *fb = plane_state->base.fb;
|
||||
enum pipe pipe = plane->pipe;
|
||||
u16 gamma[8];
|
||||
int i;
|
||||
|
||||
/* Seems RGB data bypasses the gamma always */
|
||||
if (!fb->format->is_yuv)
|
||||
return;
|
||||
|
||||
i9xx_plane_linear_gamma(gamma);
|
||||
|
||||
/* FIXME these register are single buffered :( */
|
||||
/* The two end points are implicit (0.0 and 1.0) */
|
||||
for (i = 1; i < 8 - 1; i++)
|
||||
I915_WRITE_FW(DVSGAMC_G4X(pipe, i - 1),
|
||||
gamma[i] << 16 |
|
||||
gamma[i] << 8 |
|
||||
gamma[i]);
|
||||
}
|
||||
|
||||
static void ilk_sprite_linear_gamma(u16 gamma[17])
|
||||
{
|
||||
int i;
|
||||
|
||||
for (i = 0; i < 17; i++)
|
||||
gamma[i] = (i << 10) / 16;
|
||||
}
|
||||
|
||||
static void ilk_update_gamma(const struct intel_plane_state *plane_state)
|
||||
{
|
||||
struct intel_plane *plane = to_intel_plane(plane_state->base.plane);
|
||||
struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
|
||||
const struct drm_framebuffer *fb = plane_state->base.fb;
|
||||
enum pipe pipe = plane->pipe;
|
||||
u16 gamma[17];
|
||||
int i;
|
||||
|
||||
/* Seems RGB data bypasses the gamma always */
|
||||
if (!fb->format->is_yuv)
|
||||
return;
|
||||
|
||||
ilk_sprite_linear_gamma(gamma);
|
||||
|
||||
/* FIXME these register are single buffered :( */
|
||||
for (i = 0; i < 16; i++)
|
||||
I915_WRITE_FW(DVSGAMC_ILK(pipe, i),
|
||||
gamma[i] << 20 |
|
||||
gamma[i] << 10 |
|
||||
gamma[i]);
|
||||
|
||||
I915_WRITE_FW(DVSGAMCMAX_ILK(pipe, 0), gamma[i]);
|
||||
I915_WRITE_FW(DVSGAMCMAX_ILK(pipe, 1), gamma[i]);
|
||||
I915_WRITE_FW(DVSGAMCMAX_ILK(pipe, 2), gamma[i]);
|
||||
i++;
|
||||
}
|
||||
|
||||
static void
|
||||
g4x_update_plane(struct intel_plane *plane,
|
||||
const struct intel_crtc_state *crtc_state,
|
||||
|
@ -1283,6 +1449,11 @@ g4x_update_plane(struct intel_plane *plane,
|
|||
I915_WRITE_FW(DVSSURF(pipe),
|
||||
intel_plane_ggtt_offset(plane_state) + dvssurf_offset);
|
||||
|
||||
if (IS_G4X(dev_priv))
|
||||
g4x_update_gamma(plane_state);
|
||||
else
|
||||
ilk_update_gamma(plane_state);
|
||||
|
||||
spin_unlock_irqrestore(&dev_priv->uncore.lock, irqflags);
|
||||
}
|
||||
|
||||
|
@@ -1347,7 +1518,7 @@ g4x_sprite_check_scaling(struct intel_crtc_state *crtc_state,
 	const struct drm_framebuffer *fb = plane_state->base.fb;
 	const struct drm_rect *src = &plane_state->base.src;
 	const struct drm_rect *dst = &plane_state->base.dst;
-	int src_x, src_y, src_w, src_h, crtc_w, crtc_h;
+	int src_x, src_w, src_h, crtc_w, crtc_h;
 	const struct drm_display_mode *adjusted_mode =
 		&crtc_state->base.adjusted_mode;
 	unsigned int cpp = fb->format->cpp[0];

@@ -1358,7 +1529,6 @@ g4x_sprite_check_scaling(struct intel_crtc_state *crtc_state,
 	crtc_h = drm_rect_height(dst);
 
 	src_x = src->x1 >> 16;
-	src_y = src->y1 >> 16;
 	src_w = drm_rect_width(src) >> 16;
 	src_h = drm_rect_height(src) >> 16;
 
@@ -1852,52 +2022,6 @@ static const u32 skl_plane_formats[] = {
 	DRM_FORMAT_VYUY,
 };
 
-static const u32 icl_plane_formats[] = {
-	DRM_FORMAT_C8,
-	DRM_FORMAT_RGB565,
-	DRM_FORMAT_XRGB8888,
-	DRM_FORMAT_XBGR8888,
-	DRM_FORMAT_ARGB8888,
-	DRM_FORMAT_ABGR8888,
-	DRM_FORMAT_XRGB2101010,
-	DRM_FORMAT_XBGR2101010,
-	DRM_FORMAT_YUYV,
-	DRM_FORMAT_YVYU,
-	DRM_FORMAT_UYVY,
-	DRM_FORMAT_VYUY,
-	DRM_FORMAT_Y210,
-	DRM_FORMAT_Y212,
-	DRM_FORMAT_Y216,
-	DRM_FORMAT_XVYU2101010,
-	DRM_FORMAT_XVYU12_16161616,
-	DRM_FORMAT_XVYU16161616,
-};
-
-static const u32 icl_hdr_plane_formats[] = {
-	DRM_FORMAT_C8,
-	DRM_FORMAT_RGB565,
-	DRM_FORMAT_XRGB8888,
-	DRM_FORMAT_XBGR8888,
-	DRM_FORMAT_ARGB8888,
-	DRM_FORMAT_ABGR8888,
-	DRM_FORMAT_XRGB2101010,
-	DRM_FORMAT_XBGR2101010,
-	DRM_FORMAT_XRGB16161616F,
-	DRM_FORMAT_XBGR16161616F,
-	DRM_FORMAT_ARGB16161616F,
-	DRM_FORMAT_ABGR16161616F,
-	DRM_FORMAT_YUYV,
-	DRM_FORMAT_YVYU,
-	DRM_FORMAT_UYVY,
-	DRM_FORMAT_VYUY,
-	DRM_FORMAT_Y210,
-	DRM_FORMAT_Y212,
-	DRM_FORMAT_Y216,
-	DRM_FORMAT_XVYU2101010,
-	DRM_FORMAT_XVYU12_16161616,
-	DRM_FORMAT_XVYU16161616,
-};
-
 static const u32 skl_planar_formats[] = {
 	DRM_FORMAT_C8,
 	DRM_FORMAT_RGB565,

@@ -1933,7 +2057,28 @@ static const u32 glk_planar_formats[] = {
 	DRM_FORMAT_P016,
 };
 
-static const u32 icl_planar_formats[] = {
+static const u32 icl_sdr_y_plane_formats[] = {
+	DRM_FORMAT_C8,
+	DRM_FORMAT_RGB565,
+	DRM_FORMAT_XRGB8888,
+	DRM_FORMAT_XBGR8888,
+	DRM_FORMAT_ARGB8888,
+	DRM_FORMAT_ABGR8888,
+	DRM_FORMAT_XRGB2101010,
+	DRM_FORMAT_XBGR2101010,
+	DRM_FORMAT_YUYV,
+	DRM_FORMAT_YVYU,
+	DRM_FORMAT_UYVY,
+	DRM_FORMAT_VYUY,
+	DRM_FORMAT_Y210,
+	DRM_FORMAT_Y212,
+	DRM_FORMAT_Y216,
+	DRM_FORMAT_XVYU2101010,
+	DRM_FORMAT_XVYU12_16161616,
+	DRM_FORMAT_XVYU16161616,
+};
+
+static const u32 icl_sdr_uv_plane_formats[] = {
 	DRM_FORMAT_C8,
 	DRM_FORMAT_RGB565,
 	DRM_FORMAT_XRGB8888,

@@ -1958,7 +2103,7 @@ static const u32 icl_planar_formats[] = {
 	DRM_FORMAT_XVYU16161616,
 };
 
-static const u32 icl_hdr_planar_formats[] = {
+static const u32 icl_hdr_plane_formats[] = {
 	DRM_FORMAT_C8,
 	DRM_FORMAT_RGB565,
 	DRM_FORMAT_XRGB8888,

@@ -2201,9 +2346,6 @@ static bool skl_plane_has_fbc(struct drm_i915_private *dev_priv,
 static bool skl_plane_has_planar(struct drm_i915_private *dev_priv,
 				 enum pipe pipe, enum plane_id plane_id)
 {
-	if (INTEL_GEN(dev_priv) >= 11)
-		return plane_id <= PLANE_SPRITE3;
-
 	/* Display WA #0870: skl, bxt */
 	if (IS_SKYLAKE(dev_priv) || IS_BROXTON(dev_priv))
 		return false;

@@ -2217,6 +2359,48 @@ static bool skl_plane_has_planar(struct drm_i915_private *dev_priv,
 	return true;
 }
 
+static const u32 *skl_get_plane_formats(struct drm_i915_private *dev_priv,
+					enum pipe pipe, enum plane_id plane_id,
+					int *num_formats)
+{
+	if (skl_plane_has_planar(dev_priv, pipe, plane_id)) {
+		*num_formats = ARRAY_SIZE(skl_planar_formats);
+		return skl_planar_formats;
+	} else {
+		*num_formats = ARRAY_SIZE(skl_plane_formats);
+		return skl_plane_formats;
+	}
+}
+
+static const u32 *glk_get_plane_formats(struct drm_i915_private *dev_priv,
+					enum pipe pipe, enum plane_id plane_id,
+					int *num_formats)
+{
+	if (skl_plane_has_planar(dev_priv, pipe, plane_id)) {
+		*num_formats = ARRAY_SIZE(glk_planar_formats);
+		return glk_planar_formats;
+	} else {
+		*num_formats = ARRAY_SIZE(skl_plane_formats);
+		return skl_plane_formats;
+	}
+}
+
+static const u32 *icl_get_plane_formats(struct drm_i915_private *dev_priv,
+					enum pipe pipe, enum plane_id plane_id,
+					int *num_formats)
+{
+	if (icl_is_hdr_plane(dev_priv, plane_id)) {
+		*num_formats = ARRAY_SIZE(icl_hdr_plane_formats);
+		return icl_hdr_plane_formats;
+	} else if (icl_is_nv12_y_plane(plane_id)) {
+		*num_formats = ARRAY_SIZE(icl_sdr_y_plane_formats);
+		return icl_sdr_y_plane_formats;
+	} else {
+		*num_formats = ARRAY_SIZE(icl_sdr_uv_plane_formats);
+		return icl_sdr_uv_plane_formats;
+	}
+}
+
 static bool skl_plane_has_ccs(struct drm_i915_private *dev_priv,
 			      enum pipe pipe, enum plane_id plane_id)
 {

@@ -2270,30 +2454,15 @@ skl_universal_plane_create(struct drm_i915_private *dev_priv,
 	if (icl_is_nv12_y_plane(plane_id))
 		plane->update_slave = icl_update_slave;
 
-	if (skl_plane_has_planar(dev_priv, pipe, plane_id)) {
-		if (icl_is_hdr_plane(dev_priv, plane_id)) {
-			formats = icl_hdr_planar_formats;
-			num_formats = ARRAY_SIZE(icl_hdr_planar_formats);
-		} else if (INTEL_GEN(dev_priv) >= 11) {
-			formats = icl_planar_formats;
-			num_formats = ARRAY_SIZE(icl_planar_formats);
-		} else if (INTEL_GEN(dev_priv) == 10 || IS_GEMINILAKE(dev_priv)) {
-			formats = glk_planar_formats;
-			num_formats = ARRAY_SIZE(glk_planar_formats);
-		} else {
-			formats = skl_planar_formats;
-			num_formats = ARRAY_SIZE(skl_planar_formats);
-		}
-	} else if (icl_is_hdr_plane(dev_priv, plane_id)) {
-		formats = icl_hdr_plane_formats;
-		num_formats = ARRAY_SIZE(icl_hdr_plane_formats);
-	} else if (INTEL_GEN(dev_priv) >= 11) {
-		formats = icl_plane_formats;
-		num_formats = ARRAY_SIZE(icl_plane_formats);
-	} else {
-		formats = skl_plane_formats;
-		num_formats = ARRAY_SIZE(skl_plane_formats);
-	}
+	if (INTEL_GEN(dev_priv) >= 11)
+		formats = icl_get_plane_formats(dev_priv, pipe,
+						plane_id, &num_formats);
+	else if (INTEL_GEN(dev_priv) >= 10 || IS_GEMINILAKE(dev_priv))
+		formats = glk_get_plane_formats(dev_priv, pipe,
+						plane_id, &num_formats);
+	else
+		formats = skl_get_plane_formats(dev_priv, pipe,
+						plane_id, &num_formats);
 
 	plane->has_ccs = skl_plane_has_ccs(dev_priv, pipe, plane_id);
 	if (plane->has_ccs)
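With three flat tables per platform family, a quick bring-up sanity check is
easy to write; a hypothetical helper (not part of the series, for illustration
only):

	static bool format_in_table(const u32 *table, int n, u32 format)
	{
		int i;

		for (i = 0; i < n; i++)
			if (table[i] == format)
				return true;
		return false;
	}

e.g. checking that every entry of icl_sdr_y_plane_formats[] also appears in
icl_hdr_plane_formats[], which the tables above satisfy by construction (the
HDR table is the SDR Y table plus the four FP16 formats).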
@ -0,0 +1,537 @@
|
|||
// SPDX-License-Identifier: MIT
|
||||
/*
|
||||
* Copyright © 2019 Intel Corporation
|
||||
*/
|
||||
|
||||
#include "i915_drv.h"
|
||||
#include "intel_display.h"
|
||||
#include "intel_dp_mst.h"
|
||||
#include "intel_tc.h"
|
||||
|
||||
static const char *tc_port_mode_name(enum tc_port_mode mode)
|
||||
{
|
||||
static const char * const names[] = {
|
||||
[TC_PORT_TBT_ALT] = "tbt-alt",
|
||||
[TC_PORT_DP_ALT] = "dp-alt",
|
||||
[TC_PORT_LEGACY] = "legacy",
|
||||
};
|
||||
|
||||
if (WARN_ON(mode >= ARRAY_SIZE(names)))
|
||||
mode = TC_PORT_TBT_ALT;
|
||||
|
||||
return names[mode];
|
||||
}
|
||||
|
||||
static bool has_modular_fia(struct drm_i915_private *i915)
|
||||
{
|
||||
if (!INTEL_INFO(i915)->display.has_modular_fia)
|
||||
return false;
|
||||
|
||||
return intel_uncore_read(&i915->uncore,
|
||||
PORT_TX_DFLEXDPSP(FIA1)) & MODULAR_FIA_MASK;
|
||||
}
|
||||
|
||||
static enum phy_fia tc_port_to_fia(struct drm_i915_private *i915,
|
||||
enum tc_port tc_port)
|
||||
{
|
||||
if (!has_modular_fia(i915))
|
||||
return FIA1;
|
||||
|
||||
/*
|
||||
* Each Modular FIA instance houses 2 TC ports. In SOC that has more
|
||||
* than two TC ports, there are multiple instances of Modular FIA.
|
||||
*/
|
||||
return tc_port / 2;
|
||||
}
|
||||
|
||||
u32 intel_tc_port_get_lane_mask(struct intel_digital_port *dig_port)
|
||||
{
|
||||
struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev);
|
||||
enum tc_port tc_port = intel_port_to_tc(i915, dig_port->base.port);
|
||||
struct intel_uncore *uncore = &i915->uncore;
|
||||
u32 lane_mask;
|
||||
|
||||
lane_mask = intel_uncore_read(uncore,
|
||||
PORT_TX_DFLEXDPSP(dig_port->tc_phy_fia));
|
||||
|
||||
WARN_ON(lane_mask == 0xffffffff);
|
||||
|
||||
return (lane_mask & DP_LANE_ASSIGNMENT_MASK(tc_port)) >>
|
||||
DP_LANE_ASSIGNMENT_SHIFT(tc_port);
|
||||
}
|
||||
|
||||
int intel_tc_port_fia_max_lane_count(struct intel_digital_port *dig_port)
|
||||
{
|
||||
struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev);
|
||||
intel_wakeref_t wakeref;
|
||||
u32 lane_mask;
|
||||
|
||||
if (dig_port->tc_mode != TC_PORT_DP_ALT)
|
||||
return 4;
|
||||
|
||||
lane_mask = 0;
|
||||
with_intel_display_power(i915, POWER_DOMAIN_DISPLAY_CORE, wakeref)
|
||||
lane_mask = intel_tc_port_get_lane_mask(dig_port);
|
||||
|
||||
switch (lane_mask) {
|
||||
default:
|
||||
MISSING_CASE(lane_mask);
|
||||
/* fall-through */
|
||||
case 0x1:
|
||||
case 0x2:
|
||||
case 0x4:
|
||||
case 0x8:
|
||||
return 1;
|
||||
case 0x3:
|
||||
case 0xc:
|
||||
return 2;
|
||||
case 0xf:
|
||||
return 4;
|
||||
}
|
||||
}
|
||||
|
||||
void intel_tc_port_set_fia_lane_count(struct intel_digital_port *dig_port,
|
||||
int required_lanes)
|
||||
{
|
||||
struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev);
|
||||
enum tc_port tc_port = intel_port_to_tc(i915, dig_port->base.port);
|
||||
bool lane_reversal = dig_port->saved_port_bits & DDI_BUF_PORT_REVERSAL;
|
||||
struct intel_uncore *uncore = &i915->uncore;
|
||||
u32 val;
|
||||
|
||||
WARN_ON(lane_reversal && dig_port->tc_mode != TC_PORT_LEGACY);
|
||||
|
||||
val = intel_uncore_read(uncore,
|
||||
PORT_TX_DFLEXDPMLE1(dig_port->tc_phy_fia));
|
||||
val &= ~DFLEXDPMLE1_DPMLETC_MASK(tc_port);
|
||||
|
||||
switch (required_lanes) {
|
||||
case 1:
|
||||
val |= lane_reversal ? DFLEXDPMLE1_DPMLETC_ML3(tc_port) :
|
||||
DFLEXDPMLE1_DPMLETC_ML0(tc_port);
|
||||
break;
|
||||
case 2:
|
||||
val |= lane_reversal ? DFLEXDPMLE1_DPMLETC_ML3_2(tc_port) :
|
||||
DFLEXDPMLE1_DPMLETC_ML1_0(tc_port);
|
||||
break;
|
||||
case 4:
|
||||
val |= DFLEXDPMLE1_DPMLETC_ML3_0(tc_port);
|
||||
break;
|
||||
default:
|
||||
MISSING_CASE(required_lanes);
|
||||
}
|
||||
|
||||
intel_uncore_write(uncore,
|
||||
PORT_TX_DFLEXDPMLE1(dig_port->tc_phy_fia), val);
|
||||
}
|
||||
|
||||
static void tc_port_fixup_legacy_flag(struct intel_digital_port *dig_port,
|
||||
u32 live_status_mask)
|
||||
{
|
||||
u32 valid_hpd_mask;
|
||||
|
||||
if (dig_port->tc_legacy_port)
|
||||
valid_hpd_mask = BIT(TC_PORT_LEGACY);
|
||||
else
|
||||
valid_hpd_mask = BIT(TC_PORT_DP_ALT) |
|
||||
BIT(TC_PORT_TBT_ALT);
|
||||
|
||||
if (!(live_status_mask & ~valid_hpd_mask))
|
||||
return;
|
||||
|
||||
/* If live status mismatches the VBT flag, trust the live status. */
|
||||
DRM_ERROR("Port %s: live status %08x mismatches the legacy port flag, fix flag\n",
|
||||
dig_port->tc_port_name, live_status_mask);
|
||||
|
||||
dig_port->tc_legacy_port = !dig_port->tc_legacy_port;
|
||||
}
|
||||
|
||||
static u32 tc_port_live_status_mask(struct intel_digital_port *dig_port)
|
||||
{
|
||||
struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev);
|
||||
enum tc_port tc_port = intel_port_to_tc(i915, dig_port->base.port);
|
||||
struct intel_uncore *uncore = &i915->uncore;
|
||||
u32 mask = 0;
|
||||
u32 val;
|
||||
|
||||
val = intel_uncore_read(uncore,
|
||||
PORT_TX_DFLEXDPSP(dig_port->tc_phy_fia));
|
||||
|
||||
if (val == 0xffffffff) {
|
||||
DRM_DEBUG_KMS("Port %s: PHY in TCCOLD, nothing connected\n",
|
||||
dig_port->tc_port_name);
|
||||
return mask;
|
||||
}
|
||||
|
||||
if (val & TC_LIVE_STATE_TBT(tc_port))
|
||||
mask |= BIT(TC_PORT_TBT_ALT);
|
||||
if (val & TC_LIVE_STATE_TC(tc_port))
|
||||
mask |= BIT(TC_PORT_DP_ALT);
|
||||
|
||||
if (intel_uncore_read(uncore, SDEISR) & SDE_TC_HOTPLUG_ICP(tc_port))
|
||||
mask |= BIT(TC_PORT_LEGACY);
|
||||
|
||||
/* The sink can be connected only in a single mode. */
|
||||
if (!WARN_ON(hweight32(mask) > 1))
|
||||
tc_port_fixup_legacy_flag(dig_port, mask);
|
||||
|
||||
return mask;
|
||||
}
|
||||
|
||||
static bool icl_tc_phy_status_complete(struct intel_digital_port *dig_port)
|
||||
{
|
||||
struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev);
|
||||
enum tc_port tc_port = intel_port_to_tc(i915, dig_port->base.port);
|
||||
struct intel_uncore *uncore = &i915->uncore;
|
||||
u32 val;
|
||||
|
||||
val = intel_uncore_read(uncore,
|
||||
PORT_TX_DFLEXDPPMS(dig_port->tc_phy_fia));
|
||||
if (val == 0xffffffff) {
|
||||
DRM_DEBUG_KMS("Port %s: PHY in TCCOLD, assuming not complete\n",
|
||||
dig_port->tc_port_name);
|
||||
return false;
|
||||
}
|
||||
|
||||
return val & DP_PHY_MODE_STATUS_COMPLETED(tc_port);
|
||||
}
|
||||
|
||||
static bool icl_tc_phy_set_safe_mode(struct intel_digital_port *dig_port,
|
||||
bool enable)
|
||||
{
|
||||
struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev);
|
||||
enum tc_port tc_port = intel_port_to_tc(i915, dig_port->base.port);
|
||||
struct intel_uncore *uncore = &i915->uncore;
|
||||
u32 val;
|
||||
|
||||
val = intel_uncore_read(uncore,
|
||||
PORT_TX_DFLEXDPCSSS(dig_port->tc_phy_fia));
|
||||
if (val == 0xffffffff) {
|
||||
DRM_DEBUG_KMS("Port %s: PHY in TCCOLD, can't set safe-mode to %s\n",
|
||||
dig_port->tc_port_name,
|
||||
enableddisabled(enable));
|
||||
|
||||
return false;
|
||||
}
|
||||
|
||||
val &= ~DP_PHY_MODE_STATUS_NOT_SAFE(tc_port);
|
||||
if (!enable)
|
||||
val |= DP_PHY_MODE_STATUS_NOT_SAFE(tc_port);
|
||||
|
||||
intel_uncore_write(uncore,
|
||||
PORT_TX_DFLEXDPCSSS(dig_port->tc_phy_fia), val);
|
||||
|
||||
if (enable && wait_for(!icl_tc_phy_status_complete(dig_port), 10))
|
||||
DRM_DEBUG_KMS("Port %s: PHY complete clear timed out\n",
|
||||
dig_port->tc_port_name);
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
static bool icl_tc_phy_is_in_safe_mode(struct intel_digital_port *dig_port)
|
||||
{
|
||||
struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev);
|
||||
enum tc_port tc_port = intel_port_to_tc(i915, dig_port->base.port);
|
||||
struct intel_uncore *uncore = &i915->uncore;
|
||||
u32 val;
|
||||
|
||||
val = intel_uncore_read(uncore,
|
||||
PORT_TX_DFLEXDPCSSS(dig_port->tc_phy_fia));
|
||||
if (val == 0xffffffff) {
|
||||
DRM_DEBUG_KMS("Port %s: PHY in TCCOLD, assume safe mode\n",
|
||||
dig_port->tc_port_name);
|
||||
return true;
|
||||
}
|
||||
|
||||
return !(val & DP_PHY_MODE_STATUS_NOT_SAFE(tc_port));
|
||||
}
|
||||
|
||||
/*
|
||||
* This function implements the first part of the Connect Flow described by our
|
||||
* specification, Gen11 TypeC Programming chapter. The rest of the flow (reading
|
||||
* lanes, EDID, etc) is done as needed in the typical places.
|
||||
*
|
||||
* Unlike the other ports, type-C ports are not available to use as soon as we
|
||||
* get a hotplug. The type-C PHYs can be shared between multiple controllers:
|
||||
* display, USB, etc. As a result, handshaking through FIA is required around
|
||||
* connect and disconnect to cleanly transfer ownership with the controller and
|
||||
* set the type-C power state.
|
||||
*/
|
||||
static void icl_tc_phy_connect(struct intel_digital_port *dig_port,
|
||||
int required_lanes)
|
||||
{
|
||||
int max_lanes;
|
||||
|
||||
if (!icl_tc_phy_status_complete(dig_port)) {
|
||||
DRM_DEBUG_KMS("Port %s: PHY not ready\n",
|
||||
dig_port->tc_port_name);
|
||||
goto out_set_tbt_alt_mode;
|
||||
}
|
||||
|
||||
if (!icl_tc_phy_set_safe_mode(dig_port, false) &&
|
||||
!WARN_ON(dig_port->tc_legacy_port))
|
||||
goto out_set_tbt_alt_mode;
|
||||
|
||||
max_lanes = intel_tc_port_fia_max_lane_count(dig_port);
|
||||
if (dig_port->tc_legacy_port) {
|
||||
WARN_ON(max_lanes != 4);
|
||||
dig_port->tc_mode = TC_PORT_LEGACY;
|
||||
|
||||
return;
|
||||
}
|
||||
|
||||
/*
|
||||
* Now we have to re-check the live state, in case the port recently
|
||||
* became disconnected. Not necessary for legacy mode.
|
||||
*/
|
||||
if (!(tc_port_live_status_mask(dig_port) & BIT(TC_PORT_DP_ALT))) {
|
||||
DRM_DEBUG_KMS("Port %s: PHY sudden disconnect\n",
|
||||
dig_port->tc_port_name);
|
||||
goto out_set_safe_mode;
|
||||
}
|
||||
|
||||
if (max_lanes < required_lanes) {
|
||||
DRM_DEBUG_KMS("Port %s: PHY max lanes %d < required lanes %d\n",
|
||||
dig_port->tc_port_name,
|
||||
max_lanes, required_lanes);
|
||||
goto out_set_safe_mode;
|
||||
}
|
||||
|
||||
dig_port->tc_mode = TC_PORT_DP_ALT;
|
||||
|
||||
return;
|
||||
|
||||
out_set_safe_mode:
|
||||
icl_tc_phy_set_safe_mode(dig_port, true);
|
||||
out_set_tbt_alt_mode:
|
||||
dig_port->tc_mode = TC_PORT_TBT_ALT;
|
||||
}
|
||||
|
||||
/*
|
||||
* See the comment at the connect function. This implements the Disconnect
|
||||
* Flow.
|
||||
*/
|
||||
static void icl_tc_phy_disconnect(struct intel_digital_port *dig_port)
|
||||
{
|
||||
switch (dig_port->tc_mode) {
|
||||
case TC_PORT_LEGACY:
|
||||
/* Nothing to do, we never disconnect from legacy mode */
|
||||
break;
|
||||
case TC_PORT_DP_ALT:
|
||||
icl_tc_phy_set_safe_mode(dig_port, true);
|
||||
dig_port->tc_mode = TC_PORT_TBT_ALT;
|
||||
break;
|
||||
case TC_PORT_TBT_ALT:
|
||||
/* Nothing to do, we stay in TBT-alt mode */
|
||||
break;
|
||||
default:
|
||||
MISSING_CASE(dig_port->tc_mode);
|
||||
}
|
||||
}
|
||||
|
||||
static bool icl_tc_phy_is_connected(struct intel_digital_port *dig_port)
|
||||
{
|
||||
if (!icl_tc_phy_status_complete(dig_port)) {
|
||||
DRM_DEBUG_KMS("Port %s: PHY status not complete\n",
|
||||
dig_port->tc_port_name);
|
||||
return dig_port->tc_mode == TC_PORT_TBT_ALT;
|
||||
}
|
||||
|
||||
if (icl_tc_phy_is_in_safe_mode(dig_port)) {
|
||||
DRM_DEBUG_KMS("Port %s: PHY still in safe mode\n",
|
||||
dig_port->tc_port_name);
|
||||
|
||||
return false;
|
||||
}
|
||||
|
||||
return dig_port->tc_mode == TC_PORT_DP_ALT ||
|
||||
dig_port->tc_mode == TC_PORT_LEGACY;
|
||||
}
|
||||
|
||||
static enum tc_port_mode
|
||||
intel_tc_port_get_current_mode(struct intel_digital_port *dig_port)
|
||||
{
|
||||
u32 live_status_mask = tc_port_live_status_mask(dig_port);
|
||||
bool in_safe_mode = icl_tc_phy_is_in_safe_mode(dig_port);
|
||||
enum tc_port_mode mode;
|
||||
|
||||
if (in_safe_mode || WARN_ON(!icl_tc_phy_status_complete(dig_port)))
|
||||
return TC_PORT_TBT_ALT;
|
||||
|
||||
mode = dig_port->tc_legacy_port ? TC_PORT_LEGACY : TC_PORT_DP_ALT;
|
||||
if (live_status_mask) {
|
||||
enum tc_port_mode live_mode = fls(live_status_mask) - 1;
|
||||
|
||||
if (!WARN_ON(live_mode == TC_PORT_TBT_ALT))
|
||||
mode = live_mode;
|
||||
}
|
||||
|
||||
return mode;
|
||||
}
|
||||
|
||||
static enum tc_port_mode
|
||||
intel_tc_port_get_target_mode(struct intel_digital_port *dig_port)
|
||||
{
|
||||
u32 live_status_mask = tc_port_live_status_mask(dig_port);
|
||||
|
||||
if (live_status_mask)
|
||||
return fls(live_status_mask) - 1;
|
||||
|
||||
return icl_tc_phy_status_complete(dig_port) &&
|
||||
dig_port->tc_legacy_port ? TC_PORT_LEGACY :
|
||||
TC_PORT_TBT_ALT;
|
||||
}
|
||||
|
||||
static void intel_tc_port_reset_mode(struct intel_digital_port *dig_port,
|
||||
int required_lanes)
|
||||
{
|
||||
struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev);
|
||||
enum tc_port_mode old_tc_mode = dig_port->tc_mode;
|
||||
|
||||
intel_display_power_flush_work(i915);
|
||||
WARN_ON(intel_display_power_is_enabled(i915,
|
||||
intel_aux_power_domain(dig_port)));
|
||||
|
||||
icl_tc_phy_disconnect(dig_port);
|
||||
icl_tc_phy_connect(dig_port, required_lanes);
|
||||
|
||||
DRM_DEBUG_KMS("Port %s: TC port mode reset (%s -> %s)\n",
|
||||
dig_port->tc_port_name,
|
||||
tc_port_mode_name(old_tc_mode),
|
||||
tc_port_mode_name(dig_port->tc_mode));
|
||||
}
|
||||
|
||||
static void
|
||||
intel_tc_port_link_init_refcount(struct intel_digital_port *dig_port,
|
||||
int refcount)
|
||||
{
|
||||
WARN_ON(dig_port->tc_link_refcount);
|
||||
dig_port->tc_link_refcount = refcount;
|
||||
}
|
||||
|
||||
void intel_tc_port_sanitize(struct intel_digital_port *dig_port)
|
||||
{
|
||||
struct intel_encoder *encoder = &dig_port->base;
|
||||
int active_links = 0;
|
||||
|
||||
        mutex_lock(&dig_port->tc_lock);

        dig_port->tc_mode = intel_tc_port_get_current_mode(dig_port);
        if (dig_port->dp.is_mst)
                active_links = intel_dp_mst_encoder_active_links(dig_port);
        else if (encoder->base.crtc)
                active_links = to_intel_crtc(encoder->base.crtc)->active;

        if (active_links) {
                if (!icl_tc_phy_is_connected(dig_port))
                        DRM_DEBUG_KMS("Port %s: PHY disconnected with %d active link(s)\n",
                                      dig_port->tc_port_name, active_links);
                intel_tc_port_link_init_refcount(dig_port, active_links);

                goto out;
        }

        if (dig_port->tc_legacy_port)
                icl_tc_phy_connect(dig_port, 1);

out:
        DRM_DEBUG_KMS("Port %s: sanitize mode (%s)\n",
                      dig_port->tc_port_name,
                      tc_port_mode_name(dig_port->tc_mode));

        mutex_unlock(&dig_port->tc_lock);
}

static bool intel_tc_port_needs_reset(struct intel_digital_port *dig_port)
{
        return intel_tc_port_get_target_mode(dig_port) != dig_port->tc_mode;
}

/*
 * The type-C ports are different because even when they are connected, they may
 * not be available/usable by the graphics driver: see the comment on
 * icl_tc_phy_connect(). So in our driver instead of adding the additional
 * concept of "usable" and make everything check for "connected and usable" we
 * define a port as "connected" when it is not only connected, but also when it
 * is usable by the rest of the driver. That maintains the old assumption that
 * connected ports are usable, and avoids exposing to the users objects they
 * can't really use.
 */
bool intel_tc_port_connected(struct intel_digital_port *dig_port)
{
        bool is_connected;

        intel_tc_port_lock(dig_port);
        is_connected = tc_port_live_status_mask(dig_port) &
                       BIT(dig_port->tc_mode);
        intel_tc_port_unlock(dig_port);

        return is_connected;
}

static void __intel_tc_port_lock(struct intel_digital_port *dig_port,
                                 int required_lanes)
{
        struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev);
        intel_wakeref_t wakeref;

        wakeref = intel_display_power_get(i915, POWER_DOMAIN_DISPLAY_CORE);

        mutex_lock(&dig_port->tc_lock);

        if (!dig_port->tc_link_refcount &&
            intel_tc_port_needs_reset(dig_port))
                intel_tc_port_reset_mode(dig_port, required_lanes);

        WARN_ON(dig_port->tc_lock_wakeref);
        dig_port->tc_lock_wakeref = wakeref;
}

void intel_tc_port_lock(struct intel_digital_port *dig_port)
{
        __intel_tc_port_lock(dig_port, 1);
}

void intel_tc_port_unlock(struct intel_digital_port *dig_port)
{
        struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev);
        intel_wakeref_t wakeref = fetch_and_zero(&dig_port->tc_lock_wakeref);

        mutex_unlock(&dig_port->tc_lock);

        intel_display_power_put_async(i915, POWER_DOMAIN_DISPLAY_CORE,
                                      wakeref);
}

void intel_tc_port_get_link(struct intel_digital_port *dig_port,
                            int required_lanes)
{
        __intel_tc_port_lock(dig_port, required_lanes);
        dig_port->tc_link_refcount++;
        intel_tc_port_unlock(dig_port);
}

void intel_tc_port_put_link(struct intel_digital_port *dig_port)
{
        mutex_lock(&dig_port->tc_lock);
        dig_port->tc_link_refcount--;
        mutex_unlock(&dig_port->tc_lock);
}

void intel_tc_port_init(struct intel_digital_port *dig_port, bool is_legacy)
{
        struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev);
        enum port port = dig_port->base.port;
        enum tc_port tc_port = intel_port_to_tc(i915, port);

        if (WARN_ON(tc_port == PORT_TC_NONE))
                return;

        snprintf(dig_port->tc_port_name, sizeof(dig_port->tc_port_name),
                 "%c/TC#%d", port_name(port), tc_port + 1);

        mutex_init(&dig_port->tc_lock);
        dig_port->tc_legacy_port = is_legacy;
        dig_port->tc_link_refcount = 0;
        dig_port->tc_phy_fia = tc_port_to_fia(i915, tc_port);
}

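The lock helpers above encode a hand-off worth calling out: __intel_tc_port_lock() takes a display power reference before the mutex and parks the wakeref in dig_port->tc_lock_wakeref, and intel_tc_port_unlock() claims it back with fetch_and_zero() and releases it only after dropping the mutex. A minimal userspace model of that ownership transfer; every name here is illustrative, not driver API:

#include <assert.h>
#include <pthread.h>

/* toy stand-ins for intel_wakeref_t and the display power get/put */
typedef int wakeref_t;
static wakeref_t power_get(void) { return 1; }
static void power_put(wakeref_t w) { (void)w; }

struct port {
        pthread_mutex_t lock;
        wakeref_t lock_wakeref; /* owned for as long as the lock is held */
};

static void port_lock(struct port *p)
{
        wakeref_t wakeref = power_get(); /* power up before locking */

        pthread_mutex_lock(&p->lock);
        assert(!p->lock_wakeref);  /* mirrors the WARN_ON() above */
        p->lock_wakeref = wakeref; /* stash it for the unlock side */
}

static void port_unlock(struct port *p)
{
        /* models fetch_and_zero(): take the value, clear the slot */
        wakeref_t wakeref = p->lock_wakeref;

        p->lock_wakeref = 0;
        pthread_mutex_unlock(&p->lock);
        power_put(wakeref); /* drop power only after releasing the lock */
}
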
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2019 Intel Corporation
+ */
+
+#ifndef __INTEL_TC_H__
+#define __INTEL_TC_H__
+
+#include <linux/mutex.h>
+#include <linux/types.h>
+
+#include "intel_drv.h"
+
+bool intel_tc_port_connected(struct intel_digital_port *dig_port);
+u32 intel_tc_port_get_lane_mask(struct intel_digital_port *dig_port);
+int intel_tc_port_fia_max_lane_count(struct intel_digital_port *dig_port);
+void intel_tc_port_set_fia_lane_count(struct intel_digital_port *dig_port,
+                                      int required_lanes);
+
+void intel_tc_port_sanitize(struct intel_digital_port *dig_port);
+void intel_tc_port_lock(struct intel_digital_port *dig_port);
+void intel_tc_port_unlock(struct intel_digital_port *dig_port);
+void intel_tc_port_get_link(struct intel_digital_port *dig_port,
+                            int required_lanes);
+void intel_tc_port_put_link(struct intel_digital_port *dig_port);
+
+static inline int intel_tc_port_ref_held(struct intel_digital_port *dig_port)
+{
+        return mutex_is_locked(&dig_port->tc_lock) ||
+               dig_port->tc_link_refcount;
+}
+
+void intel_tc_port_init(struct intel_digital_port *dig_port, bool is_legacy);
+
+#endif /* __INTEL_TC_H__ */

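The refcount behind intel_tc_port_get_link()/intel_tc_port_put_link() declared above keeps __intel_tc_port_lock() from resetting the PHY mode while a link is in use. A hedged sketch of a caller pinning the link across an operation; only the intel_tc_port_* calls come from this header, while with_active_link() and do_work are hypothetical:

/* sketch only: holds the TC link (and thus the current PHY mode)
 * for the duration of do_work() */
static int with_active_link(struct intel_digital_port *dig_port,
                            int required_lanes,
                            int (*do_work)(struct intel_digital_port *))
{
        int ret;

        intel_tc_port_get_link(dig_port, required_lanes);
        ret = do_work(dig_port);
        intel_tc_port_put_link(dig_port);

        return ret;
}
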
@@ -310,10 +310,13 @@ enum vbt_gmbus_ddi {
         DDC_BUS_DDI_F,
         ICL_DDC_BUS_DDI_A = 0x1,
         ICL_DDC_BUS_DDI_B,
+        TGL_DDC_BUS_DDI_C,
         ICL_DDC_BUS_PORT_1 = 0x4,
         ICL_DDC_BUS_PORT_2,
         ICL_DDC_BUS_PORT_3,
         ICL_DDC_BUS_PORT_4,
+        TGL_DDC_BUS_PORT_5,
+        TGL_DDC_BUS_PORT_6,
         MCC_DDC_BUS_DDI_A = 0x1,
         MCC_DDC_BUS_DDI_B,
         MCC_DDC_BUS_DDI_C = 0x4,

@@ -459,17 +459,23 @@ int intel_dp_compute_dsc_params(struct intel_dp *intel_dp,
 enum intel_display_power_domain
 intel_dsc_power_domain(const struct intel_crtc_state *crtc_state)
 {
+        struct drm_i915_private *i915 = to_i915(crtc_state->base.crtc->dev);
         enum transcoder cpu_transcoder = crtc_state->cpu_transcoder;

         /*
-         * On ICL VDSC/joining for eDP transcoder uses a separate power well PW2
-         * This requires POWER_DOMAIN_TRANSCODER_EDP_VDSC power domain.
+         * On ICL VDSC/joining for eDP transcoder uses a separate power well,
+         * PW2. This requires POWER_DOMAIN_TRANSCODER_VDSC_PW2 power domain.
          * For any other transcoder, VDSC/joining uses the power well associated
          * with the pipe/transcoder in use. Hence another reference on the
          * transcoder power domain will suffice.
+         *
+         * On TGL we have the same mapping, but for transcoder A (the special
+         * TRANSCODER_EDP is gone).
          */
-        if (cpu_transcoder == TRANSCODER_EDP)
-                return POWER_DOMAIN_TRANSCODER_EDP_VDSC;
+        if (INTEL_GEN(i915) >= 12 && cpu_transcoder == TRANSCODER_A)
+                return POWER_DOMAIN_TRANSCODER_VDSC_PW2;
+        else if (cpu_transcoder == TRANSCODER_EDP)
+                return POWER_DOMAIN_TRANSCODER_VDSC_PW2;
         else
                 return POWER_DOMAIN_TRANSCODER(cpu_transcoder);
 }

@@ -1644,7 +1644,7 @@ vlv_dsi_get_panel_orientation(struct intel_connector *connector)
         return intel_dsi_get_panel_orientation(connector);
 }

-static void intel_dsi_add_properties(struct intel_connector *connector)
+static void vlv_dsi_add_properties(struct intel_connector *connector)
 {
         struct drm_i915_private *dev_priv = to_i915(connector->base.dev);

@@ -1983,7 +1983,7 @@ void vlv_dsi_init(struct drm_i915_private *dev_priv)
         intel_panel_init(&intel_connector->panel, fixed_mode, NULL);
         intel_panel_setup_backlight(connector, INVALID_PIPE);

-        intel_dsi_add_properties(intel_connector);
+        vlv_dsi_add_properties(intel_connector);

         return;

@@ -1 +1,5 @@
-include $(src)/Makefile.header-test # Extra header tests
+# For building individual subdir files on the command line
+subdir-ccflags-y += -I$(srctree)/$(src)/..
+
+# Extra header tests
+header-test-pattern-$(CONFIG_DRM_I915_WERROR) := *.h

@@ -1,16 +0,0 @@
-# SPDX-License-Identifier: MIT
-# Copyright © 2019 Intel Corporation
-
-# Test the headers are compilable as standalone units
-header_test := $(notdir $(wildcard $(src)/*.h))
-
-quiet_cmd_header_test = HDRTEST $@
-      cmd_header_test = echo "\#include \"$(<F)\"" > $@
-
-header_test_%.c: %.h
-        $(call cmd,header_test)
-
-extra-$(CONFIG_DRM_I915_WERROR) += \
-        $(foreach h,$(header_test),$(patsubst %.h,header_test_%.o,$(h)))
-
-clean-files += $(foreach h,$(header_test),$(patsubst %.h,header_test_%.c,$(h)))

@@ -72,7 +72,6 @@ static struct i915_sleeve *create_sleeve(struct i915_address_space *vm,
         vma->ops = &proxy_vma_ops;

         sleeve->vma = vma;
-        sleeve->obj = i915_gem_object_get(obj);
         sleeve->pages = pages;
         sleeve->page_sizes = *page_sizes;

@@ -85,7 +84,6 @@ err_free:

 static void destroy_sleeve(struct i915_sleeve *sleeve)
 {
-        i915_gem_object_put(sleeve->obj);
         kfree(sleeve);
 }

@@ -155,7 +153,7 @@ static void clear_pages_worker(struct work_struct *work)
 {
         struct clear_pages_work *w = container_of(work, typeof(*w), work);
         struct drm_i915_private *i915 = w->ce->gem_context->i915;
-        struct drm_i915_gem_object *obj = w->sleeve->obj;
+        struct drm_i915_gem_object *obj = w->sleeve->vma->obj;
         struct i915_vma *vma = w->sleeve->vma;
         struct i915_request *rq;
         int err = w->dma.error;
@@ -164,11 +162,12 @@ static void clear_pages_worker(struct work_struct *work)
                 goto out_signal;

         if (obj->cache_dirty) {
-                obj->write_domain = 0;
                 if (i915_gem_object_has_struct_page(obj))
                         drm_clflush_sg(w->sleeve->pages);
                 obj->cache_dirty = false;
         }
+        obj->read_domains = I915_GEM_GPU_DOMAINS;
+        obj->write_domain = 0;

         /* XXX: we need to kill this */
         mutex_lock(&i915->drm.struct_mutex);
@@ -193,10 +192,12 @@ static void clear_pages_worker(struct work_struct *work)
                 goto out_request;
         }

-        /* XXX: more feverish nightmares await */
-        i915_vma_lock(vma);
-        err = i915_vma_move_to_active(vma, rq, EXEC_OBJECT_WRITE);
-        i915_vma_unlock(vma);
+        /*
+         * w->dma is already exported via (vma|obj)->resv we need only
+         * keep track of the GPU activity within this vma/request, and
+         * propagate the signal from the request to w->dma.
+         */
+        err = i915_active_ref(&vma->active, rq->fence.context, rq);
         if (err)
                 goto out_request;

@@ -249,13 +250,11 @@ int i915_gem_schedule_fill_pages_blt(struct drm_i915_gem_object *obj,
                                      u32 value)
 {
         struct drm_i915_private *i915 = to_i915(obj->base.dev);
-        struct i915_gem_context *ctx = ce->gem_context;
-        struct i915_address_space *vm = ctx->vm ?: &i915->ggtt.vm;
         struct clear_pages_work *work;
         struct i915_sleeve *sleeve;
         int err;

-        sleeve = create_sleeve(vm, obj, pages, page_sizes);
+        sleeve = create_sleeve(ce->vm, obj, pages, page_sizes);
         if (IS_ERR(sleeve))
                 return PTR_ERR(sleeve);

@@ -316,7 +316,7 @@ static void i915_gem_context_free(struct i915_gem_context *ctx)
         mutex_destroy(&ctx->engines_mutex);

         if (ctx->timeline)
-                i915_timeline_put(ctx->timeline);
+                intel_timeline_put(ctx->timeline);

         kfree(ctx->name);
         put_pid(ctx->pid);
@@ -459,8 +459,7 @@ __create_context(struct drm_i915_private *i915)
         i915_gem_context_set_recoverable(ctx);

         ctx->ring_size = 4 * PAGE_SIZE;
-        ctx->desc_template =
-                default_desc_template(i915, &i915->mm.aliasing_ppgtt->vm);
+        ctx->desc_template = default_desc_template(i915, NULL);

         for (i = 0; i < ARRAY_SIZE(ctx->hang_timestamp); i++)
                 ctx->hang_timestamp[i] = jiffies - CONTEXT_FAST_HANG_JIFFIES;
@@ -476,10 +475,18 @@ static struct i915_address_space *
 __set_ppgtt(struct i915_gem_context *ctx, struct i915_address_space *vm)
 {
         struct i915_address_space *old = ctx->vm;
+        struct i915_gem_engines_iter it;
+        struct intel_context *ce;

         ctx->vm = i915_vm_get(vm);
         ctx->desc_template = default_desc_template(ctx->i915, vm);

+        for_each_gem_engine(ce, i915_gem_context_lock_engines(ctx), it) {
+                i915_vm_put(ce->vm);
+                ce->vm = i915_vm_get(vm);
+        }
+        i915_gem_context_unlock_engines(ctx);
+
         return old;
 }
@@ -528,9 +535,9 @@ i915_gem_create_context(struct drm_i915_private *dev_priv, unsigned int flags)
         }

         if (flags & I915_CONTEXT_CREATE_FLAGS_SINGLE_TIMELINE) {
-                struct i915_timeline *timeline;
+                struct intel_timeline *timeline;

-                timeline = i915_timeline_create(dev_priv, NULL);
+                timeline = intel_timeline_create(&dev_priv->gt, NULL);
                 if (IS_ERR(timeline)) {
                         context_close(ctx);
                         return ERR_CAST(timeline);
@@ -644,20 +651,13 @@ static void init_contexts(struct drm_i915_private *i915)
         init_llist_head(&i915->contexts.free_list);
 }

-static bool needs_preempt_context(struct drm_i915_private *i915)
-{
-        return HAS_EXECLISTS(i915);
-}
-
 int i915_gem_contexts_init(struct drm_i915_private *dev_priv)
 {
         struct i915_gem_context *ctx;

         /* Reassure ourselves we are only called once */
         GEM_BUG_ON(dev_priv->kernel_context);
-        GEM_BUG_ON(dev_priv->preempt_context);

         intel_engine_init_ctx_wa(dev_priv->engine[RCS0]);
         init_contexts(dev_priv);

         /* lowest priority; idle task */
@@ -677,15 +677,6 @@ int i915_gem_contexts_init(struct drm_i915_private *dev_priv)
         GEM_BUG_ON(!atomic_read(&ctx->hw_id_pin_count));
         dev_priv->kernel_context = ctx;

-        /* highest priority; preempting task */
-        if (needs_preempt_context(dev_priv)) {
-                ctx = i915_gem_context_create_kernel(dev_priv, INT_MAX);
-                if (!IS_ERR(ctx))
-                        dev_priv->preempt_context = ctx;
-                else
-                        DRM_ERROR("Failed to create preempt context; disabling preemption\n");
-        }
-
         DRM_DEBUG_DRIVER("%s context support initialized\n",
                          DRIVER_CAPS(dev_priv)->has_logical_contexts ?
                          "logical" : "fake");
@@ -696,8 +687,6 @@ void i915_gem_contexts_fini(struct drm_i915_private *i915)
 {
         lockdep_assert_held(&i915->drm.struct_mutex);

-        if (i915->preempt_context)
-                destroy_kernel_context(&i915->preempt_context);
         destroy_kernel_context(&i915->kernel_context);

         /* Must free all deferred contexts (via flush_workqueue) first */
@@ -923,8 +912,12 @@ static int context_barrier_task(struct i915_gem_context *ctx,
         if (!cb)
                 return -ENOMEM;

-        i915_active_init(i915, &cb->base, cb_retire);
-        i915_active_acquire(&cb->base);
+        i915_active_init(i915, &cb->base, NULL, cb_retire);
+        err = i915_active_acquire(&cb->base);
+        if (err) {
+                kfree(cb);
+                return err;
+        }

         for_each_gem_engine(ce, i915_gem_context_lock_engines(ctx), it) {
                 struct i915_request *rq;
@@ -1019,7 +1012,7 @@ static void set_ppgtt_barrier(void *data)

 static int emit_ppgtt_update(struct i915_request *rq, void *data)
 {
-        struct i915_address_space *vm = rq->gem_context->vm;
+        struct i915_address_space *vm = rq->hw_context->vm;
         struct intel_engine_cs *engine = rq->engine;
         u32 base = engine->mmio_base;
         u32 *cs;
@@ -1128,9 +1121,8 @@ static int set_ppgtt(struct drm_i915_file_private *file_priv,
                                    set_ppgtt_barrier,
                                    old);
         if (err) {
-                ctx->vm = old;
-                ctx->desc_template = default_desc_template(ctx->i915, old);
-                i915_vm_put(vm);
+                i915_vm_put(__set_ppgtt(ctx, old));
+                i915_vm_put(old);
         }

 unlock:
@@ -1187,26 +1179,11 @@ gen8_modify_rpcs(struct intel_context *ce, struct intel_sseu sseu)
         if (IS_ERR(rq))
                 return PTR_ERR(rq);

-        /* Queue this switch after all other activity by this context. */
-        ret = i915_active_request_set(&ce->ring->timeline->last_request, rq);
-        if (ret)
-                goto out_add;
+        /* Serialise with the remote context */
+        ret = intel_context_prepare_remote_request(ce, rq);
+        if (ret == 0)
+                ret = gen8_emit_rpcs_config(rq, ce, sseu);

-        /*
-         * Guarantee context image and the timeline remains pinned until the
-         * modifying request is retired by setting the ce activity tracker.
-         *
-         * But we only need to take one pin on the account of it. Or in other
-         * words transfer the pinned ce object to tracked active request.
-         */
-        GEM_BUG_ON(i915_active_is_idle(&ce->active));
-        ret = i915_active_ref(&ce->active, rq->fence.context, rq);
-        if (ret)
-                goto out_add;
-
-        ret = gen8_emit_rpcs_config(rq, ce, sseu);
-
 out_add:
         i915_request_add(rq);
         return ret;
 }
@@ -2015,8 +1992,8 @@ static int clone_timeline(struct i915_gem_context *dst,
                 GEM_BUG_ON(src->timeline == dst->timeline);

                 if (dst->timeline)
-                        i915_timeline_put(dst->timeline);
-                dst->timeline = i915_timeline_get(src->timeline);
+                        intel_timeline_put(dst->timeline);
+                dst->timeline = intel_timeline_get(src->timeline);
         }

         return 0;
@@ -2141,7 +2118,7 @@ int i915_gem_context_create_ioctl(struct drm_device *dev, void *data,
         if (args->flags & I915_CONTEXT_CREATE_FLAGS_UNKNOWN)
                 return -EINVAL;

-        ret = i915_terminally_wedged(i915);
+        ret = intel_gt_terminally_wedged(&i915->gt);
         if (ret)
                 return ret;
@@ -2287,8 +2264,8 @@ int i915_gem_context_getparam_ioctl(struct drm_device *dev, void *data,
                 args->size = 0;
                 if (ctx->vm)
                         args->value = ctx->vm->total;
-                else if (to_i915(dev)->mm.aliasing_ppgtt)
-                        args->value = to_i915(dev)->mm.aliasing_ppgtt->vm.total;
+                else if (to_i915(dev)->ggtt.alias)
+                        args->value = to_i915(dev)->ggtt.alias->vm.total;
                 else
                         args->value = to_i915(dev)->ggtt.vm.total;
                 break;

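The reworked __set_ppgtt() in the hunk above now rebinds every per-engine context to the new address space, dropping the old reference and taking a new one per engine, and hands the context's previous vm back to the caller with its reference still held. A compact, runnable model of that swap-and-rebind pattern with toy refcounting (none of these types are driver code):

#include <stdlib.h>

/* toy refcounted "vm", modelling i915_vm_get()/i915_vm_put() */
struct vm { int refcount; };

static struct vm *vm_get(struct vm *vm) { vm->refcount++; return vm; }
static void vm_put(struct vm *vm) { if (--vm->refcount == 0) free(vm); }

struct engine_ctx { struct vm *vm; };

/* Swap every engine's vm reference and return the context's old vm,
 * reference intact, for the caller to dispose of or restore. */
static struct vm *set_vm(struct engine_ctx *engines, int count,
                         struct vm **ctx_vm, struct vm *new_vm)
{
        struct vm *old = *ctx_vm;
        int i;

        *ctx_vm = vm_get(new_vm);
        for (i = 0; i < count; i++) {
                vm_put(engines[i].vm);
                engines[i].vm = vm_get(new_vm);
        }
        return old;
}

This shape also explains the error path in set_ppgtt() above: undoing the change is just a second swap back, i915_vm_put(__set_ppgtt(ctx, old)) followed by i915_vm_put(old).
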
@@ -197,12 +197,6 @@ i915_gem_context_unlock_engines(struct i915_gem_context *ctx)
         mutex_unlock(&ctx->engines_mutex);
 }

-static inline struct intel_context *
-i915_gem_context_lookup_engine(struct i915_gem_context *ctx, unsigned int idx)
-{
-        return i915_gem_context_engines(ctx)->engines[idx];
-}
-
 static inline struct intel_context *
 i915_gem_context_get_engine(struct i915_gem_context *ctx, unsigned int idx)
 {

@@ -26,7 +26,7 @@ struct pid;
 struct drm_i915_private;
 struct drm_i915_file_private;
 struct i915_address_space;
-struct i915_timeline;
+struct intel_timeline;
 struct intel_ring;

 struct i915_gem_engines {
@@ -77,7 +77,7 @@ struct i915_gem_context {
         struct i915_gem_engines __rcu *engines;
         struct mutex engines_mutex; /* guards writes to engines */

-        struct i915_timeline *timeline;
+        struct intel_timeline *timeline;

         /**
          * @vm: unique address space (GTT)

@@ -16,6 +16,7 @@

 #include "gem/i915_gem_ioctls.h"
 #include "gt/intel_context.h"
+#include "gt/intel_gt.h"
 #include "gt/intel_gt_pm.h"

 #include "i915_gem_ioctls.h"
@@ -222,7 +223,6 @@ struct i915_execbuffer {
         struct intel_engine_cs *engine; /** engine to queue the request to */
         struct intel_context *context; /* logical state for the request */
         struct i915_gem_context *gem_context; /** caller's context */
-        struct i915_address_space *vm; /** GTT and vma for the request */

         struct i915_request *request; /** our request to build */
         struct i915_vma *batch; /** identity of the batch obj/vma */
@@ -696,7 +696,7 @@ static int eb_reserve(struct i915_execbuffer *eb)

         case 1:
                 /* Too fragmented, unbind everything and retry */
-                err = i915_gem_evict_vm(eb->vm);
+                err = i915_gem_evict_vm(eb->context->vm);
                 if (err)
                         return err;
                 break;
@@ -724,12 +724,8 @@ static int eb_select_context(struct i915_execbuffer *eb)
                 return -ENOENT;

         eb->gem_context = ctx;
-        if (ctx->vm) {
-                eb->vm = ctx->vm;
+        if (ctx->vm)
                 eb->invalid_flags |= EXEC_OBJECT_NEEDS_GTT;
-        } else {
-                eb->vm = &eb->i915->ggtt.vm;
-        }

         eb->context_flags = 0;
         if (test_bit(UCONTEXT_NO_ZEROMAP, &ctx->user_flags))
@@ -831,7 +827,7 @@ static int eb_lookup_vmas(struct i915_execbuffer *eb)
                         goto err_vma;
                 }

-                vma = i915_vma_instance(obj, eb->vm, NULL);
+                vma = i915_vma_instance(obj, eb->context->vm, NULL);
                 if (IS_ERR(vma)) {
                         err = PTR_ERR(vma);
                         goto err_obj;
@@ -994,7 +990,7 @@ static void reloc_gpu_flush(struct reloc_cache *cache)
         __i915_gem_object_flush_map(cache->rq->batch->obj, 0, cache->rq_size);
         i915_gem_object_unpin_map(cache->rq->batch->obj);

-        i915_gem_chipset_flush(cache->rq->i915);
+        intel_gt_chipset_flush(cache->rq->engine->gt);

         i915_request_add(cache->rq);
         cache->rq = NULL;
@@ -1954,7 +1950,7 @@ static int eb_move_to_gpu(struct i915_execbuffer *eb)
         eb->exec = NULL;

         /* Unconditionally flush any chipset caches (for streaming writes). */
-        i915_gem_chipset_flush(eb->i915);
+        intel_gt_chipset_flush(eb->engine->gt);
         return 0;

 err_skip:
@@ -2129,7 +2125,7 @@ static int eb_pin_context(struct i915_execbuffer *eb, struct intel_context *ce)
          * ABI: Before userspace accesses the GPU (e.g. execbuffer), report
          * EIO if the GPU is already wedged.
          */
-        err = i915_terminally_wedged(eb->i915);
+        err = intel_gt_terminally_wedged(ce->engine->gt);
         if (err)
                 return err;

@@ -2436,7 +2432,7 @@ i915_gem_do_execbuffer(struct drm_device *dev,
          * wakeref that we hold until the GPU has been idle for at least
          * 100ms.
          */
-        intel_gt_pm_get(eb.i915);
+        intel_gt_pm_get(&eb.i915->gt);

         err = i915_mutex_lock_interruptible(dev);
         if (err)
@@ -2606,7 +2602,7 @@ err_engine:
 err_unlock:
         mutex_unlock(&dev->struct_mutex);
 err_rpm:
-        intel_gt_pm_put(eb.i915);
+        intel_gt_pm_put(&eb.i915->gt);
         i915_gem_context_put(eb.gem_context);
 err_destroy:
         eb_destroy(&eb);

@@ -7,6 +7,8 @@
 #include <linux/mman.h>
 #include <linux/sizes.h>

+#include "gt/intel_gt.h"
+
 #include "i915_drv.h"
 #include "i915_gem_gtt.h"
 #include "i915_gem_ioctls.h"
@@ -246,7 +248,7 @@ vm_fault_t i915_gem_fault(struct vm_fault *vmf)

         wakeref = intel_runtime_pm_get(rpm);

-        srcu = i915_reset_trylock(i915);
+        srcu = intel_gt_reset_trylock(ggtt->vm.gt);
         if (srcu < 0) {
                 ret = srcu;
                 goto err_rpm;
@@ -326,7 +328,7 @@ err_unpin:
 err_unlock:
         mutex_unlock(&dev->struct_mutex);
 err_reset:
-        i915_reset_unlock(i915, srcu);
+        intel_gt_reset_unlock(ggtt->vm.gt, srcu);
 err_rpm:
         intel_runtime_pm_put(rpm, wakeref);
         i915_gem_object_unpin_pages(obj);
@@ -339,7 +341,7 @@ err:
                  * fail). But any other -EIO isn't ours (e.g. swap in failure)
                  * and so needs to be reported.
                  */
-                if (!i915_terminally_wedged(i915))
+                if (!intel_gt_is_wedged(ggtt->vm.gt))
                         return VM_FAULT_SIGBUS;
                 /* else, fall through */
         case -EAGAIN:

@@ -23,7 +23,7 @@
  */

 #include "display/intel_frontbuffer.h"
-
+#include "gt/intel_gt.h"
 #include "i915_drv.h"
 #include "i915_gem_clflush.h"
 #include "i915_gem_context.h"
@@ -146,6 +146,19 @@ void i915_gem_close_object(struct drm_gem_object *gem, struct drm_file *file)
         }
 }

+static void __i915_gem_free_object_rcu(struct rcu_head *head)
+{
+        struct drm_i915_gem_object *obj =
+                container_of(head, typeof(*obj), rcu);
+        struct drm_i915_private *i915 = to_i915(obj->base.dev);
+
+        reservation_object_fini(&obj->base._resv);
+        i915_gem_object_free(obj);
+
+        GEM_BUG_ON(!atomic_read(&i915->mm.free_count));
+        atomic_dec(&i915->mm.free_count);
+}
+
 static void __i915_gem_free_objects(struct drm_i915_private *i915,
                                     struct llist_node *freed)
 {
@@ -160,7 +173,6 @@ static void __i915_gem_free_objects(struct drm_i915_private *i915,

                 mutex_lock(&i915->drm.struct_mutex);

-                GEM_BUG_ON(i915_gem_object_is_active(obj));
                 list_for_each_entry_safe(vma, vn, &obj->vma.list, obj_link) {
                         GEM_BUG_ON(i915_vma_is_active(vma));
                         vma->flags &= ~I915_VMA_PIN_MASK;
@@ -169,22 +181,6 @@ static void __i915_gem_free_objects(struct drm_i915_private *i915,
                 GEM_BUG_ON(!list_empty(&obj->vma.list));
                 GEM_BUG_ON(!RB_EMPTY_ROOT(&obj->vma.tree));

-                /*
-                 * This serializes freeing with the shrinker. Since the free
-                 * is delayed, first by RCU then by the workqueue, we want the
-                 * shrinker to be able to free pages of unreferenced objects,
-                 * or else we may oom whilst there are plenty of deferred
-                 * freed objects.
-                 */
-                if (i915_gem_object_has_pages(obj) &&
-                    i915_gem_object_is_shrinkable(obj)) {
-                        unsigned long flags;
-
-                        spin_lock_irqsave(&i915->mm.obj_lock, flags);
-                        list_del_init(&obj->mm.link);
-                        spin_unlock_irqrestore(&i915->mm.obj_lock, flags);
-                }
-
                 mutex_unlock(&i915->drm.struct_mutex);

                 GEM_BUG_ON(atomic_read(&obj->bind_count));
@@ -192,25 +188,21 @@ static void __i915_gem_free_objects(struct drm_i915_private *i915,
                 GEM_BUG_ON(atomic_read(&obj->frontbuffer_bits));
                 GEM_BUG_ON(!list_empty(&obj->lut_list));

-                if (obj->ops->release)
-                        obj->ops->release(obj);
-
                 atomic_set(&obj->mm.pages_pin_count, 0);
                 __i915_gem_object_put_pages(obj, I915_MM_NORMAL);
                 GEM_BUG_ON(i915_gem_object_has_pages(obj));
+                bitmap_free(obj->bit_17);

                 if (obj->base.import_attach)
                         drm_prime_gem_destroy(&obj->base, NULL);

                 drm_gem_object_release(&obj->base);
-                drm_gem_free_mmap_offset(&obj->base);

-                bitmap_free(obj->bit_17);
-                i915_gem_object_free(obj);
+                if (obj->ops->release)
+                        obj->ops->release(obj);

-                GEM_BUG_ON(!atomic_read(&i915->mm.free_count));
-                atomic_dec(&i915->mm.free_count);
-
-                cond_resched();
+                /* But keep the pointer alive for RCU-protected lookups */
+                call_rcu(&obj->rcu, __i915_gem_free_object_rcu);
         }
         intel_runtime_pm_put(&i915->runtime_pm, wakeref);
 }
@@ -261,18 +253,34 @@ static void __i915_gem_free_work(struct work_struct *work)
         spin_unlock(&i915->mm.free_lock);
 }

-static void __i915_gem_free_object_rcu(struct rcu_head *head)
+void i915_gem_free_object(struct drm_gem_object *gem_obj)
 {
-        struct drm_i915_gem_object *obj =
-                container_of(head, typeof(*obj), rcu);
+        struct drm_i915_gem_object *obj = to_intel_bo(gem_obj);
         struct drm_i915_private *i915 = to_i915(obj->base.dev);

         /*
-         * We reuse obj->rcu for the freed list, so we had better not treat
-         * it like a rcu_head from this point forwards. And we expect all
-         * objects to be freed via this path.
+         * Before we free the object, make sure any pure RCU-only
+         * read-side critical sections are complete, e.g.
+         * i915_gem_busy_ioctl(). For the corresponding synchronized
+         * lookup see i915_gem_object_lookup_rcu().
          */
-        destroy_rcu_head(&obj->rcu);
+        atomic_inc(&i915->mm.free_count);
+
+        /*
+         * This serializes freeing with the shrinker. Since the free
+         * is delayed, first by RCU then by the workqueue, we want the
+         * shrinker to be able to free pages of unreferenced objects,
+         * or else we may oom whilst there are plenty of deferred
+         * freed objects.
+         */
+        if (i915_gem_object_has_pages(obj) &&
+            i915_gem_object_is_shrinkable(obj)) {
+                unsigned long flags;
+
+                spin_lock_irqsave(&i915->mm.obj_lock, flags);
+                list_del_init(&obj->mm.link);
+                spin_unlock_irqrestore(&i915->mm.obj_lock, flags);
+        }

         /*
          * Since we require blocking on struct_mutex to unbind the freed
@@ -288,20 +296,6 @@ static void __i915_gem_free_object_rcu(struct rcu_head *head)
         queue_work(i915->wq, &i915->mm.free_work);
 }

-void i915_gem_free_object(struct drm_gem_object *gem_obj)
-{
-        struct drm_i915_gem_object *obj = to_intel_bo(gem_obj);
-
-        /*
-         * Before we free the object, make sure any pure RCU-only
-         * read-side critical sections are complete, e.g.
-         * i915_gem_busy_ioctl(). For the corresponding synchronized
-         * lookup see i915_gem_object_lookup_rcu().
-         */
-        atomic_inc(&to_i915(obj->base.dev)->mm.free_count);
-        call_rcu(&obj->rcu, __i915_gem_free_object_rcu);
-}
-
 static inline enum fb_op_origin
 fb_write_origin(struct drm_i915_gem_object *obj, unsigned int domain)
 {
@@ -319,7 +313,6 @@ void
 i915_gem_object_flush_write_domain(struct drm_i915_gem_object *obj,
                                    unsigned int flush_domains)
 {
-        struct drm_i915_private *dev_priv = to_i915(obj->base.dev);
         struct i915_vma *vma;

         assert_object_held(obj);
@@ -329,7 +322,8 @@ i915_gem_object_flush_write_domain(struct drm_i915_gem_object *obj,

         switch (obj->write_domain) {
         case I915_GEM_DOMAIN_GTT:
-                i915_gem_flush_ggtt_writes(dev_priv);
+                for_each_ggtt_vma(vma, obj)
+                        intel_gt_flush_ggtt_writes(vma->vm->gt);

                 intel_fb_obj_flush(obj,
                                    fb_write_origin(obj, I915_GEM_DOMAIN_GTT));
@@ -340,6 +334,7 @@ i915_gem_object_flush_write_domain(struct drm_i915_gem_object *obj,

                         i915_vma_unset_ggtt_write(vma);
                 }
+
                 break;

         case I915_GEM_DOMAIN_WC:

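The free path above now defers the final release to __i915_gem_free_object_rcu(), which recovers the object from its embedded rcu_head via container_of(). A self-contained model of that embed-and-recover step; the RCU deferral itself is replaced by a direct call, since call_rcu() only exists in the kernel:

#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>

#define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

struct rcu_head { void (*func)(struct rcu_head *head); };

struct object {
        int id;
        struct rcu_head rcu; /* embedded, like obj->rcu above */
};

/* the callback sees only the rcu_head; recover the containing object */
static void free_object_rcu(struct rcu_head *head)
{
        struct object *obj = container_of(head, struct object, rcu);

        printf("freeing object %d\n", obj->id);
        free(obj);
}

int main(void)
{
        struct object *obj = malloc(sizeof(*obj));

        obj->id = 42;
        obj->rcu.func = free_object_rcu;
        /* in the kernel this would be call_rcu(&obj->rcu, ...); here we
         * invoke the callback directly to show the pointer recovery */
        obj->rcu.func(&obj->rcu);
        return 0;
}
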
@@ -81,7 +81,7 @@ i915_gem_object_lookup(struct drm_file *file, u32 handle)
 }

 __deprecated
-extern struct drm_gem_object *
+struct drm_gem_object *
 drm_gem_object_lookup(struct drm_file *file, u32 handle);

 __attribute__((nonnull))
@@ -158,12 +158,6 @@ i915_gem_object_needs_async_cancel(const struct drm_i915_gem_object *obj)
         return obj->ops->flags & I915_GEM_OBJECT_ASYNC_CANCEL;
 }

-static inline bool
-i915_gem_object_is_active(const struct drm_i915_gem_object *obj)
-{
-        return READ_ONCE(obj->active_count);
-}
-
 static inline bool
 i915_gem_object_is_framebuffer(const struct drm_i915_gem_object *obj)
 {

@@ -47,15 +47,11 @@ int i915_gem_object_fill_blt(struct drm_i915_gem_object *obj,
                              struct intel_context *ce,
                              u32 value)
 {
-        struct drm_i915_private *i915 = to_i915(obj->base.dev);
-        struct i915_gem_context *ctx = ce->gem_context;
-        struct i915_address_space *vm = ctx->vm ?: &i915->ggtt.vm;
         struct i915_request *rq;
         struct i915_vma *vma;
         int err;

-        /* XXX: ce->vm please */
-        vma = i915_vma_instance(obj, vm, NULL);
+        vma = i915_vma_instance(obj, ce->vm, NULL);
         if (IS_ERR(vma))
                 return PTR_ERR(vma);

@@ -154,7 +154,6 @@ struct drm_i915_gem_object {

         /** Count of VMA actually bound by this object */
         atomic_t bind_count;
-        unsigned int active_count;
         /** Count of how many global VMA are currently pinned for use by HW */
         unsigned int pin_global;

@@ -13,6 +13,7 @@
 #include <drm/drm_legacy.h> /* for drm_pci.h! */
 #include <drm/drm_pci.h>

+#include "gt/intel_gt.h"
 #include "i915_drv.h"
 #include "i915_gem_object.h"
 #include "i915_scatterlist.h"
@@ -60,7 +61,7 @@ static int i915_gem_object_get_pages_phys(struct drm_i915_gem_object *obj)
                 vaddr += PAGE_SIZE;
         }

-        i915_gem_chipset_flush(to_i915(obj->base.dev));
+        intel_gt_chipset_flush(&to_i915(obj->base.dev)->gt);

         st = kmalloc(sizeof(*st), GFP_KERNEL);
         if (!st) {
@@ -132,16 +133,9 @@ i915_gem_object_put_pages_phys(struct drm_i915_gem_object *obj,
                 drm_pci_free(obj->base.dev, obj->phys_handle);
 }

-static void
-i915_gem_object_release_phys(struct drm_i915_gem_object *obj)
-{
-        i915_gem_object_unpin_pages(obj);
-}
-
 static const struct drm_i915_gem_object_ops i915_gem_phys_ops = {
         .get_pages = i915_gem_object_get_pages_phys,
         .put_pages = i915_gem_object_put_pages_phys,
-        .release = i915_gem_object_release_phys,
 };

 int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align)
@@ -158,7 +152,7 @@ int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align)
         if (obj->ops != &i915_gem_shmem_ops)
                 return -EINVAL;

-        err = i915_gem_object_unbind(obj);
+        err = i915_gem_object_unbind(obj, I915_GEM_OBJECT_UNBIND_ACTIVE);
         if (err)
                 return err;

@@ -5,6 +5,7 @@
  */

 #include "gem/i915_gem_pm.h"
+#include "gt/intel_gt.h"
 #include "gt/intel_gt_pm.h"

 #include "i915_drv.h"
@@ -38,7 +39,7 @@ static void i915_gem_park(struct drm_i915_private *i915)
                 i915_gem_batch_pool_fini(&engine->batch_pool);
         }

-        i915_timelines_park(i915);
+        intel_timelines_park(i915);
         i915_vma_parked(i915);

         i915_globals_park();
@@ -54,7 +55,8 @@ static void idle_work_handler(struct work_struct *work)
         mutex_lock(&i915->drm.struct_mutex);

         intel_wakeref_lock(&i915->gt.wakeref);
-        park = !intel_wakeref_active(&i915->gt.wakeref) && !work_pending(work);
+        park = (!intel_wakeref_is_active(&i915->gt.wakeref) &&
+                !work_pending(work));
         intel_wakeref_unlock(&i915->gt.wakeref);
         if (park)
                 i915_gem_park(i915);
@@ -105,18 +107,18 @@ static int pm_notifier(struct notifier_block *nb,
         return NOTIFY_OK;
 }

-static bool switch_to_kernel_context_sync(struct drm_i915_private *i915)
+static bool switch_to_kernel_context_sync(struct intel_gt *gt)
 {
-        bool result = !i915_terminally_wedged(i915);
+        bool result = !intel_gt_is_wedged(gt);

         do {
-                if (i915_gem_wait_for_idle(i915,
+                if (i915_gem_wait_for_idle(gt->i915,
                                            I915_WAIT_LOCKED |
                                            I915_WAIT_FOR_IDLE_BOOST,
                                            I915_GEM_IDLE_TIMEOUT) == -ETIME) {
                         /* XXX hide warning from gem_eio */
                         if (i915_modparams.reset) {
-                                dev_err(i915->drm.dev,
+                                dev_err(gt->i915->drm.dev,
                                         "Failed to idle engines, declaring wedged!\n");
                                 GEM_TRACE_DUMP();
                         }
@@ -125,18 +127,18 @@ static bool switch_to_kernel_context_sync(struct drm_i915_private *i915)
                          * Forcibly cancel outstanding work and leave
                          * the gpu quiet.
                          */
-                        i915_gem_set_wedged(i915);
+                        intel_gt_set_wedged(gt);
                         result = false;
                 }
-        } while (i915_retire_requests(i915) && result);
+        } while (i915_retire_requests(gt->i915) && result);

-        GEM_BUG_ON(i915->gt.awake);
+        GEM_BUG_ON(gt->awake);
         return result;
 }

 bool i915_gem_load_power_context(struct drm_i915_private *i915)
 {
-        return switch_to_kernel_context_sync(i915);
+        return switch_to_kernel_context_sync(&i915->gt);
 }

 void i915_gem_suspend(struct drm_i915_private *i915)
@@ -157,7 +159,7 @@ void i915_gem_suspend(struct drm_i915_private *i915)
          * state. Fortunately, the kernel_context is disposable and we do
          * not rely on its state.
          */
-        switch_to_kernel_context_sync(i915);
+        switch_to_kernel_context_sync(&i915->gt);

         mutex_unlock(&i915->drm.struct_mutex);

@@ -168,11 +170,11 @@ void i915_gem_suspend(struct drm_i915_private *i915)
         GEM_BUG_ON(i915->gt.awake);
         flush_work(&i915->gem.idle_work);

-        cancel_delayed_work_sync(&i915->gpu_error.hangcheck_work);
+        cancel_delayed_work_sync(&i915->gt.hangcheck.work);

         i915_gem_drain_freed_objects(i915);

-        intel_uc_suspend(i915);
+        intel_uc_suspend(&i915->gt.uc);
 }

 static struct drm_i915_gem_object *first_mm_object(struct list_head *list)
@@ -237,7 +239,6 @@ void i915_gem_suspend_late(struct drm_i915_private *i915)
         }
         spin_unlock_irqrestore(&i915->mm.obj_lock, flags);

-        intel_uc_sanitize(i915);
         i915_gem_sanitize(i915);
 }

@@ -261,10 +262,10 @@ void i915_gem_resume(struct drm_i915_private *i915)
          * guarantee that the context image is complete. So let's just reset
          * it and start again.
          */
-        if (intel_gt_resume(i915))
+        if (intel_gt_resume(&i915->gt))
                 goto err_wedged;

-        intel_uc_resume(i915);
+        intel_uc_resume(&i915->gt.uc);

         /* Always reload a context for powersaving. */
         if (!i915_gem_load_power_context(i915))
@@ -276,10 +277,10 @@ out_unlock:
         return;

 err_wedged:
-        if (!i915_reset_failed(i915)) {
+        if (!intel_gt_is_wedged(&i915->gt)) {
                 dev_err(i915->drm.dev,
                         "Failed to re-initialize GPU, declaring it wedged!\n");
-                i915_gem_set_wedged(i915);
+                intel_gt_set_wedged(&i915->gt);
         }
         goto out_unlock;
 }

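The do/while in switch_to_kernel_context_sync() above is easy to misread: retiring requests is both the loop condition and a side effect, so the loop keeps idling for as long as work keeps retiring and nothing has gone wrong. A runnable skeleton of just that control flow, with stand-in state and predicates:

#include <stdbool.h>
#include <stdio.h>

/* stand-in state and operations; the real ones idle the GPU */
static int pending = 3;
static bool wedged;

static bool wait_for_idle_timed_out(void) { return false; }
static bool retire_requests(void) { return pending-- > 0; }

int main(void)
{
        bool result = !wedged;

        do {
                if (wait_for_idle_timed_out()) {
                        /* cannot idle: forcibly cancel outstanding work */
                        wedged = true;
                        result = false;
                }
        } while (retire_requests() && result); /* retire as loop condition */

        printf("parked cleanly: %s\n", result ? "yes" : "no");
        return 0;
}
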
@@ -414,6 +414,11 @@ shmem_pwrite(struct drm_i915_gem_object *obj,
         return 0;
 }

+static void shmem_release(struct drm_i915_gem_object *obj)
+{
+        fput(obj->base.filp);
+}
+
 const struct drm_i915_gem_object_ops i915_gem_shmem_ops = {
         .flags = I915_GEM_OBJECT_HAS_STRUCT_PAGE |
                  I915_GEM_OBJECT_IS_SHRINKABLE,
@@ -424,6 +429,8 @@ const struct drm_i915_gem_object_ops i915_gem_shmem_ops = {
         .writeback = shmem_writeback,

         .pwrite = shmem_pwrite,
+
+        .release = shmem_release,
 };

 static int create_shmem(struct drm_i915_private *i915,

@@ -88,10 +88,18 @@ static bool can_release_pages(struct drm_i915_gem_object *obj)
         return swap_available() || obj->mm.madv == I915_MADV_DONTNEED;
 }

-static bool unsafe_drop_pages(struct drm_i915_gem_object *obj)
+static bool unsafe_drop_pages(struct drm_i915_gem_object *obj,
+                              unsigned long shrink)
 {
-        if (i915_gem_object_unbind(obj) == 0)
+        unsigned long flags;
+
+        flags = 0;
+        if (shrink & I915_SHRINK_ACTIVE)
+                flags = I915_GEM_OBJECT_UNBIND_ACTIVE;
+
+        if (i915_gem_object_unbind(obj, flags) == 0)
                 __i915_gem_object_put_pages(obj, I915_MM_SHRINKER);

         return !i915_gem_object_has_pages(obj);
 }
@@ -169,7 +177,6 @@ i915_gem_shrink(struct drm_i915_private *i915,
          */

         trace_i915_gem_shrink(i915, target, shrink);
-        i915_retire_requests(i915);

         /*
          * Unbinding of objects will require HW access; Let us not wake the
@@ -230,8 +237,7 @@ i915_gem_shrink(struct drm_i915_private *i915,
                                 continue;

                         if (!(shrink & I915_SHRINK_ACTIVE) &&
-                            (i915_gem_object_is_active(obj) ||
-                             i915_gem_object_is_framebuffer(obj)))
+                            i915_gem_object_is_framebuffer(obj))
                                 continue;

                         if (!(shrink & I915_SHRINK_BOUND) &&
@@ -246,7 +252,7 @@ i915_gem_shrink(struct drm_i915_private *i915,

                         spin_unlock_irqrestore(&i915->mm.obj_lock, flags);

-                        if (unsafe_drop_pages(obj)) {
+                        if (unsafe_drop_pages(obj, shrink)) {
                                 /* May arrive from get_pages on another bo */
                                 mutex_lock_nested(&obj->mm.lock,
                                                   I915_MM_SHRINKER);
@@ -269,8 +275,6 @@ i915_gem_shrink(struct drm_i915_private *i915,
         if (shrink & I915_SHRINK_BOUND)
                 intel_runtime_pm_put(&i915->runtime_pm, wakeref);

-        i915_retire_requests(i915);
-
         shrinker_unlock(i915, unlock);

         if (nr_scanned)
@@ -427,12 +431,6 @@ i915_gem_shrinker_vmap(struct notifier_block *nb, unsigned long event, void *ptr
         if (!shrinker_lock(i915, 0, &unlock))
                 return NOTIFY_DONE;

-        /* Force everything onto the inactive lists */
-        if (i915_gem_wait_for_idle(i915,
-                                   I915_WAIT_LOCKED,
-                                   MAX_SCHEDULE_TIMEOUT))
-                goto out;
-
         with_intel_runtime_pm(&i915->runtime_pm, wakeref)
                 freed_pages += i915_gem_shrink(i915, -1UL, NULL,
                                                I915_SHRINK_BOUND |
@@ -455,7 +453,6 @@ i915_gem_shrinker_vmap(struct notifier_block *nb, unsigned long event, void *ptr
         }
         mutex_unlock(&i915->ggtt.vm.mutex);

-out:
         shrinker_unlock(i915, unlock);

         *(unsigned long *)ptr += freed_pages;

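The unsafe_drop_pages() change above forwards the shrinker's ACTIVE request to i915_gem_object_unbind() as an unbind flag: only an active shrink may evict still-active objects. The mapping is small enough to show in isolation, with illustrative constants in place of the driver's definitions:

/* illustrative flag values, not the driver's actual definitions */
#define SHRINK_ACTIVE (1UL << 0)
#define UNBIND_ACTIVE (1UL << 0)

static unsigned long shrink_to_unbind_flags(unsigned long shrink)
{
        /* only an ACTIVE shrink may unbind still-active objects */
        return (shrink & SHRINK_ACTIVE) ? UNBIND_ACTIVE : 0;
}
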
@@ -529,8 +529,6 @@ i915_gem_object_release_stolen(struct drm_i915_gem_object *obj)

         GEM_BUG_ON(!stolen);

-        __i915_gem_object_unpin_pages(obj);
-
         i915_gem_stolen_remove_node(dev_priv, stolen);
         kfree(stolen);
 }

@@ -41,7 +41,7 @@ i915_gem_throttle_ioctl(struct drm_device *dev, void *data,
         long ret;

         /* ABI: return -EIO if already wedged */
-        ret = i915_terminally_wedged(to_i915(dev));
+        ret = intel_gt_terminally_wedged(&to_i915(dev)->gt);
         if (ret)
                 return ret;

@@ -150,7 +150,8 @@ userptr_mn_invalidate_range_start(struct mmu_notifier *_mn,
                 }
         }

-        ret = i915_gem_object_unbind(obj);
+        ret = i915_gem_object_unbind(obj,
+                                     I915_GEM_OBJECT_UNBIND_ACTIVE);
         if (ret == 0)
                 ret = __i915_gem_object_put_pages(obj, I915_MM_SHRINKER);
         i915_gem_object_put(obj);
@@ -662,6 +663,14 @@ i915_gem_userptr_put_pages(struct drm_i915_gem_object *obj,
         __i915_gem_object_release_shmem(obj, pages, true);
         i915_gem_gtt_finish_pages(obj, pages);

+        /*
+         * We always mark objects as dirty when they are used by the GPU,
+         * just in case. However, if we set the vma as being read-only we know
+         * that the object will never have been written to.
+         */
+        if (i915_gem_object_is_readonly(obj))
+                obj->mm.dirty = false;
+
         for_each_sgt_page(page, sgt_iter, pages) {
                 if (obj->mm.dirty)
                         /*

@@ -10,6 +10,8 @@

 #include "gem/i915_gem_pm.h"

+#include "gt/intel_gt.h"
+
 #include "igt_gem_utils.h"
 #include "mock_context.h"
@@ -926,7 +928,7 @@ gpu_write_dw(struct i915_vma *vma, u64 offset, u32 val)
         }

         *cmd = MI_BATCH_BUFFER_END;
-        i915_gem_chipset_flush(i915);
+        intel_gt_chipset_flush(vma->vm->gt);

         i915_gem_object_unpin_map(obj);

@@ -1037,8 +1039,7 @@ static int __igt_write_huge(struct i915_gem_context *ctx,
                             u64 size, u64 offset,
                             u32 dword, u32 val)
 {
-        struct drm_i915_private *i915 = to_i915(obj->base.dev);
-        struct i915_address_space *vm = ctx->vm ?: &i915->ggtt.vm;
+        struct i915_address_space *vm = ctx->vm ?: &engine->gt->ggtt->vm;
         unsigned int flags = PIN_USER | PIN_OFFSET_FIXED;
         struct i915_vma *vma;
         int err;
@@ -1421,6 +1422,9 @@ static int igt_ppgtt_pin_update(void *arg)
         struct drm_i915_gem_object *obj;
         struct i915_vma *vma;
         unsigned int flags = PIN_USER | PIN_OFFSET_FIXED;
+        struct intel_engine_cs *engine;
+        enum intel_engine_id id;
+        unsigned int n;
         int first, last;
         int err;

@@ -1518,11 +1522,20 @@ static int igt_ppgtt_pin_update(void *arg)
          * land in the now stale 2M page.
          */

-        err = gpu_write(vma, ctx, dev_priv->engine[RCS0], 0, 0xdeadbeaf);
-        if (err)
-                goto out_unpin;
+        n = 0;
+        for_each_engine(engine, dev_priv, id) {
+                if (!intel_engine_can_store_dword(engine))
+                        continue;

-        err = cpu_check(obj, 0, 0xdeadbeaf);
+                err = gpu_write(vma, ctx, engine, n++, 0xdeadbeaf);
+                if (err)
+                        goto out_unpin;
+        }
+        while (n--) {
+                err = cpu_check(obj, n, 0xdeadbeaf);
+                if (err)
+                        goto out_unpin;
+        }

 out_unpin:
         i915_vma_unpin(vma);
@@ -1598,8 +1611,11 @@ static int igt_shrink_thp(void *arg)
         struct drm_i915_private *i915 = ctx->i915;
         struct i915_address_space *vm = ctx->vm ?: &i915->ggtt.vm;
         struct drm_i915_gem_object *obj;
+        struct intel_engine_cs *engine;
+        enum intel_engine_id id;
         struct i915_vma *vma;
         unsigned int flags = PIN_USER;
+        unsigned int n;
         int err;

         /*
@@ -1635,9 +1651,15 @@ static int igt_shrink_thp(void *arg)
         if (err)
                 goto out_unpin;

-        err = gpu_write(vma, ctx, i915->engine[RCS0], 0, 0xdeadbeaf);
-        if (err)
-                goto out_unpin;
+        n = 0;
+        for_each_engine(engine, i915, id) {
+                if (!intel_engine_can_store_dword(engine))
+                        continue;
+
+                err = gpu_write(vma, ctx, engine, n++, 0xdeadbeaf);
+                if (err)
+                        goto out_unpin;
+        }

         i915_vma_unpin(vma);

@@ -1662,7 +1684,12 @@ static int igt_shrink_thp(void *arg)
         if (err)
                 goto out_close;

-        err = cpu_check(obj, 0, 0xdeadbeaf);
+        while (n--) {
+                err = cpu_check(obj, n, 0xdeadbeaf);
+                if (err)
+                        goto out_unpin;
+        }

 out_unpin:
         i915_vma_unpin(vma);
@@ -1726,7 +1753,7 @@ out_unlock:
         return err;
 }

-int i915_gem_huge_page_live_selftests(struct drm_i915_private *dev_priv)
+int i915_gem_huge_page_live_selftests(struct drm_i915_private *i915)
 {
         static const struct i915_subtest tests[] = {
                 SUBTEST(igt_shrink_thp),
@@ -1741,22 +1768,22 @@ int i915_gem_huge_page_live_selftests(struct drm_i915_private *dev_priv)
         intel_wakeref_t wakeref;
         int err;

-        if (!HAS_PPGTT(dev_priv)) {
+        if (!HAS_PPGTT(i915)) {
                 pr_info("PPGTT not supported, skipping live-selftests\n");
                 return 0;
         }

-        if (i915_terminally_wedged(dev_priv))
+        if (intel_gt_is_wedged(&i915->gt))
                 return 0;

-        file = mock_file(dev_priv);
+        file = mock_file(i915);
         if (IS_ERR(file))
                 return PTR_ERR(file);

-        mutex_lock(&dev_priv->drm.struct_mutex);
-        wakeref = intel_runtime_pm_get(&dev_priv->runtime_pm);
+        mutex_lock(&i915->drm.struct_mutex);
+        wakeref = intel_runtime_pm_get(&i915->runtime_pm);

-        ctx = live_context(dev_priv, file);
+        ctx = live_context(i915, file);
         if (IS_ERR(ctx)) {
                 err = PTR_ERR(ctx);
                 goto out_unlock;
@@ -1768,10 +1795,10 @@ int i915_gem_huge_page_live_selftests(struct drm_i915_private *i915)
         err = i915_subtests(tests, ctx);

 out_unlock:
-        intel_runtime_pm_put(&dev_priv->runtime_pm, wakeref);
-        mutex_unlock(&dev_priv->drm.struct_mutex);
+        intel_runtime_pm_put(&i915->runtime_pm, wakeref);
+        mutex_unlock(&i915->drm.struct_mutex);

-        mock_file_free(dev_priv, file);
+        mock_file_free(i915, file);

         return err;
 }

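Both selftests above switch from a single RCS0 write to one dword per engine that can store a dword, followed by a reverse-order readback of each. The control flow on its own, modeled with fake engines and stubbed write/check helpers:

#include <stdbool.h>
#include <stdio.h>

#define NUM_ENGINES 4

static bool can_store_dword(int engine) { return engine != 2; /* fake */ }
static int gpu_write(int engine, int dword)
{
        printf("write dword %d on engine %d\n", dword, engine);
        return 0;
}
static int cpu_check(int dword)
{
        printf("check dword %d\n", dword);
        return 0;
}

int main(void)
{
        int engine, n = 0, err;

        /* one write per capable engine, each to its own dword */
        for (engine = 0; engine < NUM_ENGINES; engine++) {
                if (!can_store_dword(engine))
                        continue;
                err = gpu_write(engine, n++);
                if (err)
                        return err;
        }
        /* then verify every dword that was written */
        while (n--) {
                err = cpu_check(n);
                if (err)
                        return err;
        }
        return 0;
}
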
@@ -5,14 +5,16 @@

 #include "i915_selftest.h"

+#include "gt/intel_gt.h"
+
 #include "selftests/igt_flush_test.h"
 #include "selftests/mock_drm.h"
 #include "mock_context.h"

 static int igt_client_fill(void *arg)
 {
-        struct intel_context *ce = arg;
-        struct drm_i915_private *i915 = ce->gem_context->i915;
+        struct drm_i915_private *i915 = arg;
+        struct intel_context *ce = i915->engine[BCS0]->kernel_context;
         struct drm_i915_gem_object *obj;
         struct rnd_state prng;
         IGT_TIMEOUT(end);
@@ -63,17 +65,6 @@ static int igt_client_fill(void *arg)
         if (err)
                 goto err_unpin;

-        /*
-         * XXX: For now do the wait without the object resv lock to
-         * ensure we don't deadlock.
-         */
-        err = i915_gem_object_wait(obj,
-                                   I915_WAIT_INTERRUPTIBLE |
-                                   I915_WAIT_ALL,
-                                   MAX_SCHEDULE_TIMEOUT);
-        if (err)
-                goto err_unpin;
-
         i915_gem_object_lock(obj);
         err = i915_gem_object_set_to_cpu_domain(obj, false);
         i915_gem_object_unlock(obj);
@@ -100,11 +91,6 @@ err_unpin:
 err_put:
         i915_gem_object_put(obj);
 err_flush:
-        mutex_lock(&i915->drm.struct_mutex);
-        if (igt_flush_test(i915, I915_WAIT_LOCKED))
-                err = -EIO;
-        mutex_unlock(&i915->drm.struct_mutex);
-
         if (err == -ENOMEM)
                 err = 0;

@@ -117,11 +103,11 @@ int i915_gem_client_blt_live_selftests(struct drm_i915_private *i915)
                 SUBTEST(igt_client_fill),
         };

-        if (i915_terminally_wedged(i915))
+        if (intel_gt_is_wedged(&i915->gt))
                 return 0;

         if (!HAS_ENGINE(i915, BCS0))
                 return 0;

-        return i915_subtests(tests, i915->engine[BCS0]->kernel_context);
+        return i915_live_subtests(tests, i915);
 }

@@ -6,6 +6,8 @@

 #include <linux/prime_numbers.h>

+#include "gt/intel_gt.h"
+
 #include "i915_selftest.h"
 #include "selftests/i915_random.h"

@@ -242,12 +244,15 @@ static bool always_valid(struct drm_i915_private *i915)

 static bool needs_fence_registers(struct drm_i915_private *i915)
 {
-        return !i915_terminally_wedged(i915);
+        return !intel_gt_is_wedged(&i915->gt);
 }

 static bool needs_mi_store_dword(struct drm_i915_private *i915)
 {
-        if (i915_terminally_wedged(i915))
+        if (intel_gt_is_wedged(&i915->gt))
                 return false;

+        if (!HAS_ENGINE(i915, RCS0))
+                return false;
+
         return intel_engine_can_store_dword(i915->engine[RCS0]);

@@ -7,6 +7,7 @@
 #include <linux/prime_numbers.h>

 #include "gem/i915_gem_pm.h"
+#include "gt/intel_gt.h"
 #include "gt/intel_reset.h"
 #include "i915_selftest.h"

@@ -31,7 +32,6 @@ static int live_nop_switch(void *arg)
         struct intel_engine_cs *engine;
         struct i915_gem_context **ctx;
         enum intel_engine_id id;
-        intel_wakeref_t wakeref;
         struct igt_live_test t;
         struct drm_file *file;
         unsigned long n;
@@ -53,7 +53,6 @@ static int live_nop_switch(void *arg)
                 return PTR_ERR(file);

         mutex_lock(&i915->drm.struct_mutex);
-        wakeref = intel_runtime_pm_get(&i915->runtime_pm);

         ctx = kcalloc(nctx, sizeof(*ctx), GFP_KERNEL);
         if (!ctx) {
@@ -85,7 +84,7 @@ static int live_nop_switch(void *arg)
                 }
                 if (i915_request_wait(rq, 0, HZ / 5) < 0) {
                         pr_err("Failed to populated %d contexts\n", nctx);
-                        i915_gem_set_wedged(i915);
+                        intel_gt_set_wedged(&i915->gt);
                         err = -EIO;
                         goto out_unlock;
                 }
@@ -129,7 +128,7 @@ static int live_nop_switch(void *arg)
                         if (i915_request_wait(rq, 0, HZ / 5) < 0) {
                                 pr_err("Switching between %ld contexts timed out\n",
                                        prime);
-                                i915_gem_set_wedged(i915);
+                                intel_gt_set_wedged(&i915->gt);
                                 break;
                         }

@@ -152,7 +151,6 @@ static int live_nop_switch(void *arg)
         }

 out_unlock:
-        intel_runtime_pm_put(&i915->runtime_pm, wakeref);
         mutex_unlock(&i915->drm.struct_mutex);
         mock_file_free(i915, file);
         return err;
@@ -237,8 +235,7 @@ static int gpu_fill(struct drm_i915_gem_object *obj,
                     struct intel_engine_cs *engine,
                     unsigned int dw)
 {
-        struct drm_i915_private *i915 = to_i915(obj->base.dev);
-        struct i915_address_space *vm = ctx->vm ?: &i915->ggtt.vm;
+        struct i915_address_space *vm = ctx->vm ?: &engine->gt->ggtt->vm;
         struct i915_request *rq;
         struct i915_vma *vma;
         struct i915_vma *batch;
@@ -431,6 +428,9 @@ create_test_object(struct i915_gem_context *ctx,
         u64 size;
         int err;

+        /* Keep in GEM's good graces */
+        i915_retire_requests(ctx->i915);
+
         size = min(vm->total / 2, 1024ull * DW_PER_PAGE * PAGE_SIZE);
         size = round_down(size, DW_PER_PAGE * PAGE_SIZE);

@@ -507,7 +507,6 @@ static int igt_ctx_exec(void *arg)
                 dw = 0;
                 while (!time_after(jiffies, end_time)) {
                         struct i915_gem_context *ctx;
-                        intel_wakeref_t wakeref;

                         ctx = live_context(i915, file);
                         if (IS_ERR(ctx)) {
@@ -523,8 +522,7 @@ static int igt_ctx_exec(void *arg)
                                 }
                         }

-                        with_intel_runtime_pm(&i915->runtime_pm, wakeref)
-                                err = gpu_fill(obj, ctx, engine, dw);
+                        err = gpu_fill(obj, ctx, engine, dw);
                         if (err) {
                                 pr_err("Failed to fill dword %lu [%lu/%lu] with gpu (%s) in ctx %u [full-ppgtt? %s], err=%d\n",
                                        ndwords, dw, max_dwords(obj),
@@ -565,6 +563,8 @@ out_unlock:
                 mock_file_free(i915, file);
                 if (err)
                         return err;
+
+                i915_gem_drain_freed_objects(i915);
         }

         return 0;
@@ -623,7 +623,6 @@ static int igt_shared_ctx_exec(void *arg)
                 ncontexts = 0;
                 while (!time_after(jiffies, end_time)) {
                         struct i915_gem_context *ctx;
-                        intel_wakeref_t wakeref;

                         ctx = kernel_context(i915);
                         if (IS_ERR(ctx)) {
@@ -642,9 +641,7 @@ static int igt_shared_ctx_exec(void *arg)
                                 }
                         }

-                        err = 0;
-                        with_intel_runtime_pm(&i915->runtime_pm, wakeref)
-                                err = gpu_fill(obj, ctx, engine, dw);
+                        err = gpu_fill(obj, ctx, engine, dw);
                         if (err) {
                                 pr_err("Failed to fill dword %lu [%lu/%lu] with gpu (%s) in ctx %u [full-ppgtt? %s], err=%d\n",
                                        ndwords, dw, max_dwords(obj),
@@ -678,6 +675,10 @@ static int igt_shared_ctx_exec(void *arg)

                         dw += rem;
                 }
+
+                mutex_unlock(&i915->drm.struct_mutex);
+                i915_gem_drain_freed_objects(i915);
+                mutex_lock(&i915->drm.struct_mutex);
         }
 out_test:
         if (igt_live_test_end(&t))
@@ -746,7 +747,7 @@ emit_rpcs_query(struct drm_i915_gem_object *obj,

         GEM_BUG_ON(!intel_engine_can_store_dword(ce->engine));

-        vma = i915_vma_instance(obj, ce->gem_context->vm, NULL);
+        vma = i915_vma_instance(obj, ce->vm, NULL);
         if (IS_ERR(vma))
                 return PTR_ERR(vma);

@@ -956,7 +957,7 @@ __sseu_finish(struct drm_i915_private *i915,
         int ret = 0;

         if (flags & TEST_RESET) {
-                ret = i915_reset_engine(ce->engine, "sseu");
+                ret = intel_engine_reset(ce->engine, "sseu");
                 if (ret)
                         goto out;
         }
@@ -1025,35 +1026,33 @@ __igt_ctx_sseu(struct drm_i915_private *i915,
                unsigned int flags)
 {
         struct intel_engine_cs *engine = i915->engine[RCS0];
-        struct intel_sseu default_sseu = engine->sseu;
         struct drm_i915_gem_object *obj;
         struct i915_gem_context *ctx;
         struct intel_context *ce;
         struct intel_sseu pg_sseu;
-        intel_wakeref_t wakeref;
         struct drm_file *file;
         int ret;

-        if (INTEL_GEN(i915) < 9)
+        if (INTEL_GEN(i915) < 9 || !engine)
                 return 0;

         if (!RUNTIME_INFO(i915)->sseu.has_slice_pg)
                 return 0;

-        if (hweight32(default_sseu.slice_mask) < 2)
+        if (hweight32(engine->sseu.slice_mask) < 2)
                 return 0;

         /*
          * Gen11 VME friendly power-gated configuration with half enabled
          * sub-slices.
          */
-        pg_sseu = default_sseu;
+        pg_sseu = engine->sseu;
         pg_sseu.slice_mask = 1;
         pg_sseu.subslice_mask =
-                ~(~0 << (hweight32(default_sseu.subslice_mask) / 2));
+                ~(~0 << (hweight32(engine->sseu.subslice_mask) / 2));

         pr_info("SSEU subtest '%s', flags=%x, def_slices=%u, pg_slices=%u\n",
-                name, flags, hweight32(default_sseu.slice_mask),
+                name, flags, hweight32(engine->sseu.slice_mask),
                 hweight32(pg_sseu.slice_mask));

         file = mock_file(i915);
@@ -1061,7 +1060,7 @@ __igt_ctx_sseu(struct drm_i915_private *i915,
                 return PTR_ERR(file);

         if (flags & TEST_RESET)
-                igt_global_reset_lock(i915);
+                igt_global_reset_lock(&i915->gt);

         mutex_lock(&i915->drm.struct_mutex);

@@ -1078,12 +1077,10 @@ __igt_ctx_sseu(struct drm_i915_private *i915,
                 goto out_unlock;
         }

-        wakeref = intel_runtime_pm_get(&i915->runtime_pm);
-
         ce = i915_gem_context_get_engine(ctx, RCS0);
         if (IS_ERR(ce)) {
                 ret = PTR_ERR(ce);
-                goto out_rpm;
+                goto out_put;
         }

         ret = intel_context_pin(ce);
@@ -1091,7 +1088,7 @@ __igt_ctx_sseu(struct drm_i915_private *i915,
                 goto out_context;

         /* First set the default mask. */
-        ret = __sseu_test(i915, name, flags, ce, obj, default_sseu);
+        ret = __sseu_test(i915, name, flags, ce, obj, engine->sseu);
         if (ret)
                 goto out_fail;

@@ -1101,7 +1098,7 @@ __igt_ctx_sseu(struct drm_i915_private *i915,
                 goto out_fail;

         /* Back to defaults. */
-        ret = __sseu_test(i915, name, flags, ce, obj, default_sseu);
+        ret = __sseu_test(i915, name, flags, ce, obj, engine->sseu);
         if (ret)
                 goto out_fail;

@@ -1117,15 +1114,14 @@ out_fail:
         intel_context_unpin(ce);
 out_context:
         intel_context_put(ce);
-out_rpm:
-        intel_runtime_pm_put(&i915->runtime_pm, wakeref);
+out_put:
         i915_gem_object_put(obj);

 out_unlock:
         mutex_unlock(&i915->drm.struct_mutex);

         if (flags & TEST_RESET)
-                igt_global_reset_unlock(i915);
+                igt_global_reset_unlock(&i915->gt);

         mock_file_free(i915, file);

@@ -1194,7 +1190,7 @@ static int igt_ctx_readonly(void *arg)
                 goto out_unlock;
         }

-        vm = ctx->vm ?: &i915->mm.aliasing_ppgtt->vm;
+        vm = ctx->vm ?: &i915->ggtt.alias->vm;
         if (!vm || !vm->has_read_only) {
                 err = 0;
                 goto out_unlock;
@@ -1207,8 +1203,6 @@ static int igt_ctx_readonly(void *arg)
                 unsigned int id;

                 for_each_engine(engine, i915, id) {
-                        intel_wakeref_t wakeref;
-
                         if (!intel_engine_can_store_dword(engine))
                                 continue;

@@ -1223,9 +1217,7 @@ static int igt_ctx_readonly(void *arg)
                                 i915_gem_object_set_readonly(obj);
                         }

-                        err = 0;
-                        with_intel_runtime_pm(&i915->runtime_pm, wakeref)
-                                err = gpu_fill(obj, ctx, engine, dw);
+                        err = gpu_fill(obj, ctx, engine, dw);
                         if (err) {
                                 pr_err("Failed to fill dword %lu [%lu/%lu] with gpu (%s) in ctx %u [full-ppgtt? %s], err=%d\n",
                                        ndwords, dw, max_dwords(obj),
@@ -1488,7 +1480,6 @@ static int igt_vm_isolation(void *arg)
         struct drm_i915_private *i915 = arg;
         struct i915_gem_context *ctx_a, *ctx_b;
         struct intel_engine_cs *engine;
-        intel_wakeref_t wakeref;
         struct igt_live_test t;
         struct drm_file *file;
         I915_RND_STATE(prng);
@@ -1535,8 +1526,6 @@ static int igt_vm_isolation(void *arg)
         GEM_BUG_ON(ctx_b->vm->total != vm_total);
         vm_total -= I915_GTT_PAGE_SIZE;

-        wakeref = intel_runtime_pm_get(&i915->runtime_pm);
-
         count = 0;
         for_each_engine(engine, i915, id) {
                 IGT_TIMEOUT(end_time);
@@ -1551,7 +1540,7 @@ static int igt_vm_isolation(void *arg)

                         div64_u64_rem(i915_prandom_u64_state(&prng),
                                       vm_total, &offset);
-                        offset &= -sizeof(u32);
+                        offset = round_down(offset, alignof_dword);
                         offset += I915_GTT_PAGE_SIZE;

                         err = write_to_scratch(ctx_a, engine,
@@ -1560,7 +1549,7 @@ static int igt_vm_isolation(void *arg)
                         err = read_from_scratch(ctx_b, engine,
                                                 offset, &value);
                         if (err)
-                                goto out_rpm;
+                                goto out_unlock;

                         if (value) {
                                 pr_err("%s: Read %08x from scratch (offset 0x%08x_%08x), after %lu reads!\n",
@@ -1569,7 +1558,7 @@ static int igt_vm_isolation(void *arg)
                                        lower_32_bits(offset),
                                        this);
                                 err = -EINVAL;
-                                goto out_rpm;
+                                goto out_unlock;
                         }

                         this++;
@@ -1579,8 +1568,6 @@ static int igt_vm_isolation(void *arg)
         pr_info("Checked %lu scratch offsets across %d engines\n",
                 count, RUNTIME_INFO(i915)->num_engines);

-out_rpm:
-        intel_runtime_pm_put(&i915->runtime_pm, wakeref);
 out_unlock:
         if (igt_live_test_end(&t))
                 err = -EIO;
@@ -1736,7 +1723,7 @@ int i915_gem_context_mock_selftests(void)
         return err;
 }

-int i915_gem_context_live_selftests(struct drm_i915_private *dev_priv)
+int i915_gem_context_live_selftests(struct drm_i915_private *i915)
 {
         static const struct i915_subtest tests[] = {
                 SUBTEST(live_nop_switch),
@@ -1747,8 +1734,8 @@ int i915_gem_context_live_selftests(struct drm_i915_private *dev_priv)
                 SUBTEST(igt_vm_isolation),
         };

-        if (i915_terminally_wedged(dev_priv))
+        if (intel_gt_is_wedged(&i915->gt))
                 return 0;

-        return i915_subtests(tests, dev_priv);
+        return i915_live_subtests(tests, i915);
 }

@@ -6,6 +6,7 @@

 #include <linux/prime_numbers.h>

+#include "gt/intel_gt.h"
 #include "gt/intel_gt_pm.h"
 #include "huge_gem_object.h"
 #include "i915_selftest.h"
@@ -143,7 +144,7 @@ static int check_partial_mapping(struct drm_i915_gem_object *obj,
 		if (offset >= obj->base.size)
 			continue;

-		i915_gem_flush_ggtt_writes(to_i915(obj->base.dev));
+		intel_gt_flush_ggtt_writes(&to_i915(obj->base.dev)->gt);

 		p = i915_gem_object_get_page(obj, offset >> PAGE_SHIFT);
 		cpu = kmap(p) + offset_in_page(offset);
@@ -327,7 +328,8 @@ out:
 static int make_obj_busy(struct drm_i915_gem_object *obj)
 {
 	struct drm_i915_private *i915 = to_i915(obj->base.dev);
-	struct i915_request *rq;
+	struct intel_engine_cs *engine;
+	enum intel_engine_id id;
 	struct i915_vma *vma;
 	int err;

@@ -339,18 +341,22 @@ static int make_obj_busy(struct drm_i915_gem_object *obj)
 	if (err)
 		return err;

-	rq = i915_request_create(i915->engine[RCS0]->kernel_context);
-	if (IS_ERR(rq)) {
-		i915_vma_unpin(vma);
-		return PTR_ERR(rq);
+	for_each_engine(engine, i915, id) {
+		struct i915_request *rq;
+
+		rq = i915_request_create(engine->kernel_context);
+		if (IS_ERR(rq)) {
+			i915_vma_unpin(vma);
+			return PTR_ERR(rq);
+		}
+
+		i915_vma_lock(vma);
+		err = i915_vma_move_to_active(vma, rq, EXEC_OBJECT_WRITE);
+		i915_vma_unlock(vma);
+
+		i915_request_add(rq);
 	}

-	i915_vma_lock(vma);
-	err = i915_vma_move_to_active(vma, rq, EXEC_OBJECT_WRITE);
-	i915_vma_unlock(vma);
-
-	i915_request_add(rq);
-
 	i915_vma_unpin(vma);
 	i915_gem_object_put(obj); /* leave it only alive via its active ref */

@@ -378,7 +384,7 @@ static void disable_retire_worker(struct drm_i915_private *i915)
 {
 	i915_gem_shrinker_unregister(i915);

-	intel_gt_pm_get(i915);
+	intel_gt_pm_get(&i915->gt);

 	cancel_delayed_work_sync(&i915->gem.retire_work);
 	flush_work(&i915->gem.idle_work);
@@ -386,7 +392,7 @@ static void disable_retire_worker(struct drm_i915_private *i915)

 static void restore_retire_worker(struct drm_i915_private *i915)
 {
-	intel_gt_pm_put(i915);
+	intel_gt_pm_put(&i915->gt);

 	mutex_lock(&i915->drm.struct_mutex);
 	igt_flush_test(i915, I915_WAIT_LOCKED);
@@ -395,6 +401,18 @@ static void restore_retire_worker(struct drm_i915_private *i915)
 	i915_gem_shrinker_register(i915);
 }

+static void mmap_offset_lock(struct drm_i915_private *i915)
+	__acquires(&i915->drm.vma_offset_manager->vm_lock)
+{
+	write_lock(&i915->drm.vma_offset_manager->vm_lock);
+}
+
+static void mmap_offset_unlock(struct drm_i915_private *i915)
+	__releases(&i915->drm.vma_offset_manager->vm_lock)
+{
+	write_unlock(&i915->drm.vma_offset_manager->vm_lock);
+}
+
 static int igt_mmap_offset_exhaustion(void *arg)
 {
 	struct drm_i915_private *i915 = arg;
@@ -413,7 +431,9 @@ static int igt_mmap_offset_exhaustion(void *arg)
 	drm_mm_for_each_hole(hole, mm, hole_start, hole_end) {
 		resv.start = hole_start;
 		resv.size = hole_end - hole_start - 1; /* PAGE_SIZE units */
+		mmap_offset_lock(i915);
 		err = drm_mm_reserve_node(mm, &resv);
+		mmap_offset_unlock(i915);
 		if (err) {
 			pr_err("Failed to trim VMA manager, err=%d\n", err);
 			goto out_park;
@@ -458,7 +478,7 @@

 	/* Now fill with busy dead objects that we expect to reap */
 	for (loop = 0; loop < 3; loop++) {
-		if (i915_terminally_wedged(i915))
+		if (intel_gt_is_wedged(&i915->gt))
 			break;

 		obj = i915_gem_object_create_internal(i915, PAGE_SIZE);
@@ -474,19 +494,12 @@
 			pr_err("[loop %d] Failed to busy the object\n", loop);
 			goto err_obj;
 		}
-
-		/* NB we rely on the _active_ reference to access obj now */
-		GEM_BUG_ON(!i915_gem_object_is_active(obj));
-		err = create_mmap_offset(obj);
-		if (err) {
-			pr_err("[loop %d] create_mmap_offset failed with err=%d\n",
-			       loop, err);
-			goto out;
-		}
 	}

 out:
+	mmap_offset_lock(i915);
 	drm_mm_remove_node(&resv);
+	mmap_offset_unlock(i915);
 out_park:
 	restore_retire_worker(i915);
 	return err;

@@ -3,6 +3,8 @@
  * Copyright © 2019 Intel Corporation
  */

+#include "gt/intel_gt.h"
+
 #include "i915_selftest.h"

 #include "selftests/igt_flush_test.h"
@@ -11,8 +13,8 @@

 static int igt_fill_blt(void *arg)
 {
-	struct intel_context *ce = arg;
-	struct drm_i915_private *i915 = ce->gem_context->i915;
+	struct drm_i915_private *i915 = arg;
+	struct intel_context *ce = i915->engine[BCS0]->kernel_context;
 	struct drm_i915_gem_object *obj;
 	struct rnd_state prng;
 	IGT_TIMEOUT(end);
@@ -83,11 +85,6 @@ err_unpin:
 err_put:
 	i915_gem_object_put(obj);
-err_flush:
-	mutex_lock(&i915->drm.struct_mutex);
-	if (igt_flush_test(i915, I915_WAIT_LOCKED))
-		err = -EIO;
-	mutex_unlock(&i915->drm.struct_mutex);

 	if (err == -ENOMEM)
 		err = 0;

@@ -100,11 +97,11 @@ int i915_gem_object_blt_live_selftests(struct drm_i915_private *i915)
 		SUBTEST(igt_fill_blt),
 	};

-	if (i915_terminally_wedged(i915))
+	if (intel_gt_is_wedged(&i915->gt))
 		return 0;

 	if (!HAS_ENGINE(i915, BCS0))
 		return 0;

-	return i915_subtests(tests, i915->engine[BCS0]->kernel_context);
+	return i915_live_subtests(tests, i915);
 }

@@ -1,5 +1,5 @@
 # For building individual subdir files on the command line
 subdir-ccflags-y += -I$(srctree)/$(src)/..

 # Extra header tests
-include $(src)/Makefile.header-test
+header-test-pattern-$(CONFIG_DRM_I915_WERROR) := *.h

@@ -1,16 +0,0 @@
-# SPDX-License-Identifier: MIT
-# Copyright © 2019 Intel Corporation
-
-# Test the headers are compilable as standalone units
-header_test := $(notdir $(wildcard $(src)/*.h))
-
-quiet_cmd_header_test = HDRTEST $@
-      cmd_header_test = echo "\#include \"$(<F)\"" > $@
-
-header_test_%.c: %.h
-	$(call cmd,header_test)
-
-extra-$(CONFIG_DRM_I915_WERROR) += \
-	$(foreach h,$(header_test),$(patsubst %.h,header_test_%.o,$(h)))
-
-clean-files += $(foreach h,$(header_test),$(patsubst %.h,header_test_%.c,$(h)))

@@ -59,6 +59,10 @@ int __intel_context_do_pin(struct intel_context *ce)
 		if (err)
 			goto err;

+		GEM_TRACE("%s context:%llx pin ring:{head:%04x, tail:%04x}\n",
+			  ce->engine->name, ce->ring->timeline->fence_context,
+			  ce->ring->head, ce->ring->tail);
+
 		i915_gem_context_get(ce->gem_context); /* for ctx->ppgtt */

 		smp_mb__before_atomic(); /* flush pin before it is visible */
@@ -85,6 +89,9 @@ void intel_context_unpin(struct intel_context *ce)
 	mutex_lock_nested(&ce->pin_mutex, SINGLE_DEPTH_NESTING);

 	if (likely(atomic_dec_and_test(&ce->pin_count))) {
+		GEM_TRACE("%s context:%llx retire\n",
+			  ce->engine->name, ce->ring->timeline->fence_context);
+
 		ce->ops->unpin(ce);

 		i915_gem_context_put(ce->gem_context);
@@ -95,11 +102,15 @@ void intel_context_unpin(struct intel_context *ce)
 	intel_context_put(ce);
 }

-static int __context_pin_state(struct i915_vma *vma, unsigned long flags)
+static int __context_pin_state(struct i915_vma *vma)
 {
+	u64 flags;
 	int err;

-	err = i915_vma_pin(vma, 0, 0, flags | PIN_GLOBAL);
+	flags = i915_ggtt_pin_bias(vma) | PIN_OFFSET_BIAS;
+	flags |= PIN_HIGH | PIN_GLOBAL;
+
+	err = i915_vma_pin(vma, 0, 0, flags);
 	if (err)
 		return err;

@@ -119,10 +130,13 @@ static void __context_unpin_state(struct i915_vma *vma)
 	__i915_vma_unpin(vma);
 }

-static void intel_context_retire(struct i915_active *active)
+static void __intel_context_retire(struct i915_active *active)
 {
 	struct intel_context *ce = container_of(active, typeof(*ce), active);

+	GEM_TRACE("%s context:%llx retire\n",
+		  ce->engine->name, ce->ring->timeline->fence_context);
+
 	if (ce->state)
 		__context_unpin_state(ce->state);

@@ -130,35 +144,11 @@ static void intel_context_retire(struct i915_active *active)
 	intel_context_put(ce);
 }

-void
-intel_context_init(struct intel_context *ce,
-		   struct i915_gem_context *ctx,
-		   struct intel_engine_cs *engine)
-{
-	GEM_BUG_ON(!engine->cops);
-
-	kref_init(&ce->ref);
-
-	ce->gem_context = ctx;
-	ce->engine = engine;
-	ce->ops = engine->cops;
-	ce->sseu = engine->sseu;
-
-	INIT_LIST_HEAD(&ce->signal_link);
-	INIT_LIST_HEAD(&ce->signals);
-
-	mutex_init(&ce->pin_mutex);
-
-	i915_active_init(ctx->i915, &ce->active, intel_context_retire);
-}
-
-int intel_context_active_acquire(struct intel_context *ce, unsigned long flags)
+static int __intel_context_active(struct i915_active *active)
 {
+	struct intel_context *ce = container_of(active, typeof(*ce), active);
 	int err;

-	if (!i915_active_acquire(&ce->active))
-		return 0;
-
 	intel_context_get(ce);

 	err = intel_ring_pin(ce->ring);
@@ -168,7 +158,7 @@ int intel_context_active_acquire(struct intel_context *ce, unsigned long flags)
 	if (!ce->state)
 		return 0;

-	err = __context_pin_state(ce->state, flags);
+	err = __context_pin_state(ce->state);
 	if (err)
 		goto err_ring;

@@ -188,15 +178,40 @@ err_ring:
 	intel_ring_unpin(ce->ring);
 err_put:
 	intel_context_put(ce);
-	i915_active_cancel(&ce->active);
 	return err;
 }

-void intel_context_active_release(struct intel_context *ce)
+void
+intel_context_init(struct intel_context *ce,
+		   struct i915_gem_context *ctx,
+		   struct intel_engine_cs *engine)
 {
-	/* Nodes preallocated in intel_context_active() */
-	i915_active_acquire_barrier(&ce->active);
-	i915_active_release(&ce->active);
+	GEM_BUG_ON(!engine->cops);
+
+	kref_init(&ce->ref);
+
+	ce->gem_context = ctx;
+	ce->vm = i915_vm_get(ctx->vm ?: &engine->gt->ggtt->vm);
+
+	ce->engine = engine;
+	ce->ops = engine->cops;
+	ce->sseu = engine->sseu;
+
+	INIT_LIST_HEAD(&ce->signal_link);
+	INIT_LIST_HEAD(&ce->signals);
+
+	mutex_init(&ce->pin_mutex);
+
+	i915_active_init(ctx->i915, &ce->active,
+			 __intel_context_active, __intel_context_retire);
+}
+
+void intel_context_fini(struct intel_context *ce)
+{
+	i915_vm_put(ce->vm);
+
+	mutex_destroy(&ce->pin_mutex);
+	i915_active_fini(&ce->active);
 }

 static void i915_global_context_shrink(void)
@@ -234,6 +249,44 @@ void intel_context_exit_engine(struct intel_context *ce)
 	intel_engine_pm_put(ce->engine);
 }

+int intel_context_prepare_remote_request(struct intel_context *ce,
+					 struct i915_request *rq)
+{
+	struct intel_timeline *tl = ce->ring->timeline;
+	int err;
+
+	/* Only suitable for use in remotely modifying this context */
+	GEM_BUG_ON(rq->hw_context == ce);
+
+	if (rq->timeline != tl) { /* beware timeline sharing */
+		err = mutex_lock_interruptible_nested(&tl->mutex,
+						      SINGLE_DEPTH_NESTING);
+		if (err)
+			return err;
+
+		/* Queue this switch after current activity by this context. */
+		err = i915_active_request_set(&tl->last_request, rq);
+		if (err)
+			goto unlock;
+	}
+	lockdep_assert_held(&tl->mutex);
+
+	/*
+	 * Guarantee context image and the timeline remains pinned until the
+	 * modifying request is retired by setting the ce activity tracker.
+	 *
+	 * But we only need to take one pin on the account of it. Or in other
+	 * words transfer the pinned ce object to tracked active request.
+	 */
+	GEM_BUG_ON(i915_active_is_idle(&ce->active));
+	err = i915_active_ref(&ce->active, rq->fence.context, rq);
+
+unlock:
+	if (rq->timeline != tl)
+		mutex_unlock(&tl->mutex);
+	return err;
+}
+
 struct i915_request *intel_context_create_request(struct intel_context *ce)
 {
 	struct i915_request *rq;

@@ -9,12 +9,14 @@

 #include <linux/lockdep.h>

+#include "i915_active.h"
 #include "intel_context_types.h"
 #include "intel_engine_types.h"

 void intel_context_init(struct intel_context *ce,
 			struct i915_gem_context *ctx,
 			struct intel_engine_cs *engine);
+void intel_context_fini(struct intel_context *ce);

 struct intel_context *
 intel_context_create(struct i915_gem_context *ctx,
@@ -102,8 +104,17 @@ static inline void intel_context_exit(struct intel_context *ce)
 	ce->ops->exit(ce);
 }

-int intel_context_active_acquire(struct intel_context *ce, unsigned long flags);
-void intel_context_active_release(struct intel_context *ce);
+static inline int intel_context_active_acquire(struct intel_context *ce)
+{
+	return i915_active_acquire(&ce->active);
+}
+
+static inline void intel_context_active_release(struct intel_context *ce)
+{
+	/* Nodes preallocated in intel_context_active() */
+	i915_active_acquire_barrier(&ce->active);
+	i915_active_release(&ce->active);
+}

 static inline struct intel_context *intel_context_get(struct intel_context *ce)
 {
@@ -129,6 +140,9 @@ static inline void intel_context_timeline_unlock(struct intel_context *ce)
 	mutex_unlock(&ce->ring->timeline->mutex);
 }

+int intel_context_prepare_remote_request(struct intel_context *ce,
+					 struct i915_request *rq);
+
 struct i915_request *intel_context_create_request(struct intel_context *ce);

 #endif /* __INTEL_CONTEXT_H__ */

@@ -13,6 +13,7 @@
 #include <linux/types.h>

 #include "i915_active_types.h"
+#include "i915_utils.h"
 #include "intel_engine_types.h"
 #include "intel_sseu.h"

@@ -35,9 +36,15 @@ struct intel_context_ops {
 struct intel_context {
 	struct kref ref;

-	struct i915_gem_context *gem_context;
 	struct intel_engine_cs *engine;
+	struct intel_engine_cs *inflight;
+#define intel_context_inflight(ce) ptr_mask_bits((ce)->inflight, 2)
+#define intel_context_inflight_count(ce) ptr_unmask_bits((ce)->inflight, 2)
+#define intel_context_inflight_inc(ce) ptr_count_inc(&(ce)->inflight)
+#define intel_context_inflight_dec(ce) ptr_count_dec(&(ce)->inflight)
+
+	struct i915_address_space *vm;
+	struct i915_gem_context *gem_context;

 	struct list_head signal_link;
 	struct list_head signals;

@@ -14,7 +14,7 @@
 #include "i915_reg.h"
 #include "i915_request.h"
 #include "i915_selftest.h"
-#include "i915_timeline.h"
+#include "gt/intel_timeline.h"
 #include "intel_engine_types.h"
 #include "intel_gpu_commands.h"
 #include "intel_workarounds.h"
@@ -51,7 +51,7 @@ struct drm_printer;
 #define ENGINE_READ16(...) __ENGINE_READ_OP(read16, __VA_ARGS__)
 #define ENGINE_READ(...) __ENGINE_READ_OP(read, __VA_ARGS__)
 #define ENGINE_READ_FW(...) __ENGINE_READ_OP(read_fw, __VA_ARGS__)
-#define ENGINE_POSTING_READ(...) __ENGINE_READ_OP(posting_read, __VA_ARGS__)
+#define ENGINE_POSTING_READ(...) __ENGINE_READ_OP(posting_read_fw, __VA_ARGS__)
 #define ENGINE_POSTING_READ16(...) __ENGINE_READ_OP(posting_read16, __VA_ARGS__)

 #define ENGINE_READ64(engine__, lower_reg__, upper_reg__) \
@@ -125,71 +125,26 @@ hangcheck_action_to_str(const enum intel_engine_hangcheck_action a)

 void intel_engines_set_scheduler_caps(struct drm_i915_private *i915);

-static inline void
-execlists_set_active(struct intel_engine_execlists *execlists,
-		     unsigned int bit)
-{
-	__set_bit(bit, (unsigned long *)&execlists->active);
-}
-
-static inline bool
-execlists_set_active_once(struct intel_engine_execlists *execlists,
-			  unsigned int bit)
-{
-	return !__test_and_set_bit(bit, (unsigned long *)&execlists->active);
-}
-
-static inline void
-execlists_clear_active(struct intel_engine_execlists *execlists,
-		       unsigned int bit)
-{
-	__clear_bit(bit, (unsigned long *)&execlists->active);
-}
-
-static inline void
-execlists_clear_all_active(struct intel_engine_execlists *execlists)
-{
-	execlists->active = 0;
-}
-
-static inline bool
-execlists_is_active(const struct intel_engine_execlists *execlists,
-		    unsigned int bit)
-{
-	return test_bit(bit, (unsigned long *)&execlists->active);
-}
-
-void execlists_user_begin(struct intel_engine_execlists *execlists,
-			  const struct execlist_port *port);
-void execlists_user_end(struct intel_engine_execlists *execlists);
-
-void
-execlists_cancel_port_requests(struct intel_engine_execlists * const execlists);
-
-struct i915_request *
-execlists_unwind_incomplete_requests(struct intel_engine_execlists *execlists);
-
 static inline unsigned int
 execlists_num_ports(const struct intel_engine_execlists * const execlists)
 {
 	return execlists->port_mask + 1;
 }

-static inline struct execlist_port *
-execlists_port_complete(struct intel_engine_execlists * const execlists,
-			struct execlist_port * const port)
+static inline struct i915_request *
+execlists_active(const struct intel_engine_execlists *execlists)
 {
-	const unsigned int m = execlists->port_mask;
-
-	GEM_BUG_ON(port_index(port, execlists) != 0);
-	GEM_BUG_ON(!execlists_is_active(execlists, EXECLISTS_ACTIVE_USER));
-
-	memmove(port, port + 1, m * sizeof(struct execlist_port));
-	memset(port + m, 0, sizeof(struct execlist_port));
-
-	return port;
+	GEM_BUG_ON(execlists->active - execlists->inflight >
+		   execlists_num_ports(execlists));
+	return READ_ONCE(*execlists->active);
 }

+void
+execlists_cancel_port_requests(struct intel_engine_execlists * const execlists);
+
+struct i915_request *
+execlists_unwind_incomplete_requests(struct intel_engine_execlists *execlists);
+
 static inline u32
 intel_read_status_page(const struct intel_engine_cs *engine, int reg)
 {
@@ -245,7 +200,7 @@ intel_write_status_page(struct intel_engine_cs *engine, int reg, u32 value)

 struct intel_ring *
 intel_engine_create_ring(struct intel_engine_cs *engine,
-			 struct i915_timeline *timeline,
+			 struct intel_timeline *timeline,
 			 int size);
 int intel_ring_pin(struct intel_ring *ring);
 void intel_ring_reset(struct intel_ring *ring, u32 tail);
@@ -456,8 +411,8 @@ gen8_emit_ggtt_write(u32 *cs, u32 value, u32 gtt_offset, u32 flags)
 	return cs;
 }

-static inline void intel_engine_reset(struct intel_engine_cs *engine,
-				      bool stalled)
+static inline void __intel_engine_reset(struct intel_engine_cs *engine,
+					bool stalled)
 {
 	if (engine->reset.reset)
 		engine->reset.reset(engine, stalled);
@@ -465,9 +420,9 @@ static inline void intel_engine_reset(struct intel_engine_cs *engine,
 }

 bool intel_engine_is_idle(struct intel_engine_cs *engine);
-bool intel_engines_are_idle(struct drm_i915_private *dev_priv);
+bool intel_engines_are_idle(struct intel_gt *gt);

-void intel_engines_reset_default_submission(struct drm_i915_private *i915);
+void intel_engines_reset_default_submission(struct intel_gt *gt);
 unsigned int intel_engines_has_context_isolation(struct drm_i915_private *i915);

 bool intel_engine_can_store_dword(struct intel_engine_cs *engine);

@@ -28,6 +28,8 @@

 #include "i915_drv.h"

+#include "gt/intel_gt.h"
+
 #include "intel_engine.h"
 #include "intel_engine_pm.h"
 #include "intel_context.h"
@@ -314,6 +316,7 @@ intel_engine_setup(struct drm_i915_private *dev_priv,
 	engine->id = id;
 	engine->mask = BIT(id);
 	engine->i915 = dev_priv;
+	engine->gt = &dev_priv->gt;
 	engine->uncore = &dev_priv->uncore;
 	__sprint_engine_name(engine->name, info);
 	engine->hw_id = engine->guc_id = info->hw_id;
@@ -423,7 +426,7 @@ int intel_engines_init_mmio(struct drm_i915_private *i915)
 	WARN_ON(engine_mask &
 		GENMASK(BITS_PER_TYPE(mask) - 1, I915_NUM_ENGINES));

-	if (i915_inject_load_failure())
+	if (i915_inject_probe_failure())
 		return -ENODEV;

 	for (i = 0; i < ARRAY_SIZE(intel_engines); i++) {
@@ -445,15 +448,9 @@ int intel_engines_init_mmio(struct drm_i915_private *i915)
 	if (WARN_ON(mask != engine_mask))
 		device_info->engine_mask = mask;

-	/* We always presume we have at least RCS available for later probing */
-	if (WARN_ON(!HAS_ENGINE(i915, RCS0))) {
-		err = -ENODEV;
-		goto cleanup;
-	}
-
 	RUNTIME_INFO(i915)->num_engines = hweight32(mask);

-	i915_check_and_clear_faults(i915);
+	intel_gt_check_and_clear_faults(&i915->gt);

 	intel_setup_engine_capabilities(i915);

@@ -508,6 +505,10 @@ void intel_engine_init_execlists(struct intel_engine_cs *engine)
 	GEM_BUG_ON(!is_power_of_2(execlists_num_ports(execlists)));
 	GEM_BUG_ON(execlists_num_ports(execlists) > EXECLIST_MAX_PORTS);

+	memset(execlists->pending, 0, sizeof(execlists->pending));
+	execlists->active =
+		memset(execlists->inflight, 0, sizeof(execlists->inflight));
+
 	execlists->queue_priority_hint = INT_MIN;
 	execlists->queue = RB_ROOT_CACHED;
 }
@@ -577,7 +578,7 @@ static int init_status_page(struct intel_engine_cs *engine)

 	i915_gem_object_set_cache_coherency(obj, I915_CACHE_LLC);

-	vma = i915_vma_instance(obj, &engine->i915->ggtt.vm, NULL);
+	vma = i915_vma_instance(obj, &engine->gt->ggtt->vm, NULL);
 	if (IS_ERR(vma)) {
 		ret = PTR_ERR(vma);
 		goto err;
@@ -629,6 +630,10 @@ static int intel_engine_setup_common(struct intel_engine_cs *engine)
 	engine->sseu =
 		intel_sseu_from_device_info(&RUNTIME_INFO(engine->i915)->sseu);

+	intel_engine_init_workarounds(engine);
+	intel_engine_init_whitelist(engine);
+	intel_engine_init_ctx_wa(engine);
+
 	return 0;
 }

@@ -681,9 +686,10 @@ void intel_engines_set_scheduler_caps(struct drm_i915_private *i915)
 		u8 engine;
 		u8 sched;
 	} map[] = {
-#define MAP(x, y) { ilog2(I915_ENGINE_HAS_##x), ilog2(I915_SCHEDULER_CAP_##y) }
-		MAP(PREEMPTION, PREEMPTION),
-		MAP(SEMAPHORES, SEMAPHORES),
+#define MAP(x, y) { ilog2(I915_ENGINE_##x), ilog2(I915_SCHEDULER_CAP_##y) }
+		MAP(HAS_PREEMPTION, PREEMPTION),
+		MAP(HAS_SEMAPHORES, SEMAPHORES),
+		MAP(SUPPORTS_STATS, ENGINE_BUSY_STATS),
 #undef MAP
 	};
 	struct intel_engine_cs *engine;
@@ -717,7 +723,7 @@ void intel_engines_set_scheduler_caps(struct drm_i915_private *i915)

 struct measure_breadcrumb {
 	struct i915_request rq;
-	struct i915_timeline timeline;
+	struct intel_timeline timeline;
 	struct intel_ring ring;
 	u32 cs[1024];
 };
@@ -727,15 +733,15 @@ static int measure_breadcrumb_dw(struct intel_engine_cs *engine)
 	struct measure_breadcrumb *frame;
 	int dw = -ENOMEM;

-	GEM_BUG_ON(!engine->i915->gt.scratch);
+	GEM_BUG_ON(!engine->gt->scratch);

 	frame = kzalloc(sizeof(*frame), GFP_KERNEL);
 	if (!frame)
 		return -ENOMEM;

-	if (i915_timeline_init(engine->i915,
-			       &frame->timeline,
-			       engine->status_page.vma))
+	if (intel_timeline_init(&frame->timeline,
+				engine->gt,
+				engine->status_page.vma))
 		goto out_frame;

 	INIT_LIST_HEAD(&frame->ring.request_list);
@@ -750,17 +756,17 @@ static int measure_breadcrumb_dw(struct intel_engine_cs *engine)
 	frame->rq.ring = &frame->ring;
 	frame->rq.timeline = &frame->timeline;

-	dw = i915_timeline_pin(&frame->timeline);
+	dw = intel_timeline_pin(&frame->timeline);
 	if (dw < 0)
 		goto out_timeline;

 	dw = engine->emit_fini_breadcrumb(&frame->rq, frame->cs) - frame->cs;
 	GEM_BUG_ON(dw & 1); /* RING_TAIL must be qword aligned */

-	i915_timeline_unpin(&frame->timeline);
+	intel_timeline_unpin(&frame->timeline);

 out_timeline:
-	i915_timeline_fini(&frame->timeline);
+	intel_timeline_fini(&frame->timeline);
 out_frame:
 	kfree(frame);
 	return dw;
@@ -823,6 +829,8 @@ int intel_engine_init_common(struct intel_engine_cs *engine)
 	struct drm_i915_private *i915 = engine->i915;
 	int ret;

+	engine->set_default_submission(engine);
+
 	/* We may need to do things with the shrinker which
 	 * require us to immediately switch back to the default
 	 * context. This can cause a problem as pinning the
@@ -835,28 +843,15 @@ int intel_engine_init_common(struct intel_engine_cs *engine)
 	if (ret)
 		return ret;

-	/*
-	 * Similarly the preempt context must always be available so that
-	 * we can interrupt the engine at any time. However, as preemption
-	 * is optional, we allow it to fail.
-	 */
-	if (i915->preempt_context)
-		pin_context(i915->preempt_context, engine,
-			    &engine->preempt_context);
-
 	ret = measure_breadcrumb_dw(engine);
 	if (ret < 0)
 		goto err_unpin;

 	engine->emit_fini_breadcrumb_dw = ret;

-	engine->set_default_submission(engine);
-
 	return 0;

 err_unpin:
-	if (engine->preempt_context)
-		intel_context_unpin(engine->preempt_context);
+	intel_context_unpin(engine->kernel_context);
 	return ret;
 }
@@ -881,8 +876,6 @@ void intel_engine_cleanup_common(struct intel_engine_cs *engine)
 	if (engine->default_state)
 		i915_gem_object_put(engine->default_state);

-	if (engine->preempt_context)
-		intel_context_unpin(engine->preempt_context);
 	intel_context_unpin(engine->kernel_context);
 	GEM_BUG_ON(!llist_empty(&engine->barrier_tasks));

@@ -966,57 +959,23 @@ const char *i915_cache_level_str(struct drm_i915_private *i915, int type)
 	}
 }

-u32 intel_calculate_mcr_s_ss_select(struct drm_i915_private *dev_priv)
-{
-	const struct sseu_dev_info *sseu = &RUNTIME_INFO(dev_priv)->sseu;
-	unsigned int slice = fls(sseu->slice_mask) - 1;
-	unsigned int subslice;
-	u32 mcr_s_ss_select;
-
-	GEM_BUG_ON(slice >= ARRAY_SIZE(sseu->subslice_mask));
-	subslice = fls(sseu->subslice_mask[slice]);
-	GEM_BUG_ON(!subslice);
-	subslice--;
-
-	if (IS_GEN(dev_priv, 10))
-		mcr_s_ss_select = GEN8_MCR_SLICE(slice) |
-				  GEN8_MCR_SUBSLICE(subslice);
-	else if (INTEL_GEN(dev_priv) >= 11)
-		mcr_s_ss_select = GEN11_MCR_SLICE(slice) |
-				  GEN11_MCR_SUBSLICE(subslice);
-	else
-		mcr_s_ss_select = 0;
-
-	return mcr_s_ss_select;
-}
-
 static u32
 read_subslice_reg(struct intel_engine_cs *engine, int slice, int subslice,
 		  i915_reg_t reg)
 {
 	struct drm_i915_private *i915 = engine->i915;
 	struct intel_uncore *uncore = engine->uncore;
-	u32 mcr_slice_subslice_mask;
-	u32 mcr_slice_subslice_select;
-	u32 default_mcr_s_ss_select;
-	u32 mcr;
-	u32 ret;
+	u32 mcr_mask, mcr_ss, mcr, old_mcr, val;
 	enum forcewake_domains fw_domains;

 	if (INTEL_GEN(i915) >= 11) {
-		mcr_slice_subslice_mask = GEN11_MCR_SLICE_MASK |
-					  GEN11_MCR_SUBSLICE_MASK;
-		mcr_slice_subslice_select = GEN11_MCR_SLICE(slice) |
-					    GEN11_MCR_SUBSLICE(subslice);
+		mcr_mask = GEN11_MCR_SLICE_MASK | GEN11_MCR_SUBSLICE_MASK;
+		mcr_ss = GEN11_MCR_SLICE(slice) | GEN11_MCR_SUBSLICE(subslice);
 	} else {
-		mcr_slice_subslice_mask = GEN8_MCR_SLICE_MASK |
-					  GEN8_MCR_SUBSLICE_MASK;
-		mcr_slice_subslice_select = GEN8_MCR_SLICE(slice) |
-					    GEN8_MCR_SUBSLICE(subslice);
+		mcr_mask = GEN8_MCR_SLICE_MASK | GEN8_MCR_SUBSLICE_MASK;
+		mcr_ss = GEN8_MCR_SLICE(slice) | GEN8_MCR_SUBSLICE(subslice);
 	}

-	default_mcr_s_ss_select = intel_calculate_mcr_s_ss_select(i915);
-
 	fw_domains = intel_uncore_forcewake_for_reg(uncore, reg,
 						    FW_REG_READ);
 	fw_domains |= intel_uncore_forcewake_for_reg(uncore,
@@ -1026,26 +985,23 @@ read_subslice_reg(struct intel_engine_cs *engine, int slice, int subslice,
 	spin_lock_irq(&uncore->lock);
 	intel_uncore_forcewake_get__locked(uncore, fw_domains);

-	mcr = intel_uncore_read_fw(uncore, GEN8_MCR_SELECTOR);
+	old_mcr = mcr = intel_uncore_read_fw(uncore, GEN8_MCR_SELECTOR);

-	WARN_ON_ONCE((mcr & mcr_slice_subslice_mask) !=
-		     default_mcr_s_ss_select);
-
-	mcr &= ~mcr_slice_subslice_mask;
-	mcr |= mcr_slice_subslice_select;
+	mcr &= ~mcr_mask;
+	mcr |= mcr_ss;
 	intel_uncore_write_fw(uncore, GEN8_MCR_SELECTOR, mcr);

-	ret = intel_uncore_read_fw(uncore, reg);
+	val = intel_uncore_read_fw(uncore, reg);

-	mcr &= ~mcr_slice_subslice_mask;
-	mcr |= default_mcr_s_ss_select;
+	mcr &= ~mcr_mask;
+	mcr |= old_mcr & mcr_mask;

 	intel_uncore_write_fw(uncore, GEN8_MCR_SELECTOR, mcr);

 	intel_uncore_forcewake_put__locked(uncore, fw_domains);
 	spin_unlock_irq(&uncore->lock);

-	return ret;
+	return val;
 }

 /* NB: please notice the memset */
@@ -1150,17 +1106,17 @@ static bool ring_is_idle(struct intel_engine_cs *engine)
 bool intel_engine_is_idle(struct intel_engine_cs *engine)
 {
 	/* More white lies, if wedged, hw state is inconsistent */
-	if (i915_reset_failed(engine->i915))
+	if (intel_gt_is_wedged(engine->gt))
 		return true;

-	if (!intel_wakeref_active(&engine->wakeref))
+	if (!intel_engine_pm_is_awake(engine))
 		return true;

 	/* Waiting to drain ELSP? */
-	if (READ_ONCE(engine->execlists.active)) {
+	if (execlists_active(&engine->execlists)) {
 		struct tasklet_struct *t = &engine->execlists.tasklet;

-		synchronize_hardirq(engine->i915->drm.irq);
+		synchronize_hardirq(engine->i915->drm.pdev->irq);

 		local_bh_disable();
 		if (tasklet_trylock(t)) {
@@ -1174,7 +1130,7 @@ bool intel_engine_is_idle(struct intel_engine_cs *engine)
 		/* Otherwise flush the tasklet if it was on another cpu */
 		tasklet_unlock_wait(t);

-		if (READ_ONCE(engine->execlists.active))
+		if (execlists_active(&engine->execlists))
 			return false;
 	}

@@ -1186,7 +1142,7 @@ bool intel_engine_is_idle(struct intel_engine_cs *engine)
 	return ring_is_idle(engine);
 }

-bool intel_engines_are_idle(struct drm_i915_private *i915)
+bool intel_engines_are_idle(struct intel_gt *gt)
 {
 	struct intel_engine_cs *engine;
 	enum intel_engine_id id;
@@ -1195,14 +1151,14 @@ bool intel_engines_are_idle(struct drm_i915_private *i915)
 	 * If the driver is wedged, HW state may be very inconsistent and
 	 * report that it is still busy, even though we have stopped using it.
 	 */
-	if (i915_reset_failed(i915))
+	if (intel_gt_is_wedged(gt))
 		return true;

 	/* Already parked (and passed an idleness test); must still be idle */
-	if (!READ_ONCE(i915->gt.awake))
+	if (!READ_ONCE(gt->awake))
 		return true;

-	for_each_engine(engine, i915, id) {
+	for_each_engine(engine, gt->i915, id) {
 		if (!intel_engine_is_idle(engine))
 			return false;
 	}
@@ -1210,12 +1166,12 @@ bool intel_engines_are_idle(struct drm_i915_private *i915)
 	return true;
 }

-void intel_engines_reset_default_submission(struct drm_i915_private *i915)
+void intel_engines_reset_default_submission(struct intel_gt *gt)
 {
 	struct intel_engine_cs *engine;
 	enum intel_engine_id id;

-	for_each_engine(engine, i915, id)
+	for_each_engine(engine, gt->i915, id)
 		engine->set_default_submission(engine);
 }

@@ -1372,6 +1328,7 @@ static void intel_engine_print_registers(struct intel_engine_cs *engine,
 	}

 	if (HAS_EXECLISTS(dev_priv)) {
+		struct i915_request * const *port, *rq;
 		const u32 *hws =
 			&engine->status_page.addr[I915_HWS_CSB_BUF0_INDEX];
 		const u8 num_entries = execlists->csb_size;
@@ -1404,27 +1361,33 @@ static void intel_engine_print_registers(struct intel_engine_cs *engine,
 		}

 		spin_lock_irqsave(&engine->active.lock, flags);
-		for (idx = 0; idx < execlists_num_ports(execlists); idx++) {
-			struct i915_request *rq;
-			unsigned int count;
+		for (port = execlists->active; (rq = *port); port++) {
+			char hdr[80];
+			int len;
+
+			len = snprintf(hdr, sizeof(hdr),
+				       "\t\tActive[%d: ",
+				       (int)(port - execlists->active));
+			if (!i915_request_signaled(rq))
+				len += snprintf(hdr + len, sizeof(hdr) - len,
+						"ring:{start:%08x, hwsp:%08x, seqno:%08x}, ",
+						i915_ggtt_offset(rq->ring->vma),
+						rq->timeline->hwsp_offset,
+						hwsp_seqno(rq));
+			snprintf(hdr + len, sizeof(hdr) - len, "rq: ");
+			print_request(m, rq, hdr);
+		}
+		for (port = execlists->pending; (rq = *port); port++) {
+			char hdr[80];

-			rq = port_unpack(&execlists->port[idx], &count);
-			if (!rq) {
-				drm_printf(m, "\t\tELSP[%d] idle\n", idx);
-			} else if (!i915_request_signaled(rq)) {
-				snprintf(hdr, sizeof(hdr),
-					 "\t\tELSP[%d] count=%d, ring:{start:%08x, hwsp:%08x, seqno:%08x}, rq: ",
-					 idx, count,
-					 i915_ggtt_offset(rq->ring->vma),
-					 rq->timeline->hwsp_offset,
-					 hwsp_seqno(rq));
-				print_request(m, rq, hdr);
-			} else {
-				print_request(m, rq, "\t\tELSP[%d] rq: ");
-			}
+			snprintf(hdr, sizeof(hdr),
+				 "\t\tPending[%d] ring:{start:%08x, hwsp:%08x, seqno:%08x}, rq: ",
+				 (int)(port - execlists->pending),
+				 i915_ggtt_offset(rq->ring->vma),
+				 rq->timeline->hwsp_offset,
+				 hwsp_seqno(rq));
+			print_request(m, rq, hdr);
 		}
-		drm_printf(m, "\t\tHW active? 0x%x\n", execlists->active);
 		spin_unlock_irqrestore(&engine->active.lock, flags);
 	} else if (INTEL_GEN(dev_priv) > 6) {
 		drm_printf(m, "\tPP_DIR_BASE: 0x%08x\n",
@@ -1486,7 +1449,7 @@ void intel_engine_dump(struct intel_engine_cs *engine,
 		va_end(ap);
 	}

-	if (i915_reset_failed(engine->i915))
+	if (intel_gt_is_wedged(engine->gt))
 		drm_printf(m, "*** WEDGED ***\n");

 	drm_printf(m, "\tAwake? %d\n", atomic_read(&engine->wakeref.count));
@@ -1587,15 +1550,19 @@ int intel_enable_engine_stats(struct intel_engine_cs *engine)
 	}

 	if (engine->stats.enabled++ == 0) {
-		const struct execlist_port *port = execlists->port;
-		unsigned int num_ports = execlists_num_ports(execlists);
+		struct i915_request * const *port;
+		struct i915_request *rq;

 		engine->stats.enabled_at = ktime_get();

 		/* XXX submission method oblivious? */
-		while (num_ports-- && port_isset(port)) {
+		for (port = execlists->active; (rq = *port); port++)
 			engine->stats.active++;
-			port++;
-		}
+
+		for (port = execlists->pending; (rq = *port); port++) {
+			/* Exclude any contexts already counted in active */
+			if (intel_context_inflight_count(rq->hw_context) == 1)
+				engine->stats.active++;
+		}

 		if (engine->stats.active)

@@ -8,6 +8,7 @@

 #include "intel_engine.h"
 #include "intel_engine_pm.h"
+#include "intel_gt.h"
 #include "intel_gt_pm.h"

 static int __engine_unpark(struct intel_wakeref *wf)
@@ -18,7 +19,7 @@ static int __engine_unpark(struct intel_wakeref *wf)

 	GEM_TRACE("%s\n", engine->name);

-	intel_gt_pm_get(engine->i915);
+	intel_gt_pm_get(engine->gt);

 	/* Pin the default state for fast resets from atomic context. */
 	map = NULL;
@@ -66,7 +67,7 @@ static bool switch_to_kernel_context(struct intel_engine_cs *engine)
 		return true;

 	/* GPU is pointing to the void, as good as in the kernel context. */
-	if (i915_reset_failed(engine->i915))
+	if (intel_gt_is_wedged(engine->gt))
 		return true;

 	/*
@@ -129,7 +130,7 @@ static int __engine_park(struct intel_wakeref *wf)

 	engine->execlists.no_priolist = false;

-	intel_gt_pm_put(engine->i915);
+	intel_gt_pm_put(engine->gt);
 	return 0;
 }

@@ -15,6 +15,12 @@ struct drm_i915_private;
 void intel_engine_pm_get(struct intel_engine_cs *engine);
 void intel_engine_pm_put(struct intel_engine_cs *engine);

+static inline bool
+intel_engine_pm_is_awake(const struct intel_engine_cs *engine)
+{
+	return intel_wakeref_is_active(&engine->wakeref);
+}
+
 static inline bool
 intel_engine_pm_get_if_awake(struct intel_engine_cs *engine)
 {

@@ -12,6 +12,7 @@
 #include <linux/kref.h>
 #include <linux/list.h>
 #include <linux/llist.h>
+#include <linux/timer.h>
 #include <linux/types.h>

 #include "i915_gem.h"
@@ -19,7 +20,7 @@
 #include "i915_pmu.h"
 #include "i915_priolist_types.h"
 #include "i915_selftest.h"
-#include "i915_timeline_types.h"
+#include "gt/intel_timeline_types.h"
 #include "intel_sseu.h"
 #include "intel_wakeref.h"
 #include "intel_workarounds_types.h"
@@ -35,6 +36,7 @@ struct drm_i915_reg_table;
 struct i915_gem_context;
 struct i915_request;
 struct i915_sched_attr;
+struct intel_gt;
 struct intel_uncore;

 typedef u8 intel_engine_mask_t;
@@ -66,7 +68,7 @@ struct intel_ring {
 	struct i915_vma *vma;
 	void *vaddr;

-	struct i915_timeline *timeline;
+	struct intel_timeline *timeline;
 	struct list_head request_list;
 	struct list_head active_link;

@@ -149,6 +151,11 @@ struct intel_engine_execlists {
 	 */
 	struct tasklet_struct tasklet;

+	/**
+	 * @timer: kick the current context if its timeslice expires
+	 */
+	struct timer_list timer;
+
 	/**
 	 * @default_priolist: priority list for I915_PRIORITY_NORMAL
 	 */
@@ -172,51 +179,28 @@ struct intel_engine_execlists {
 	 */
 	u32 __iomem *ctrl_reg;

-	/**
-	 * @port: execlist port states
-	 *
-	 * For each hardware ELSP (ExecList Submission Port) we keep
-	 * track of the last request and the number of times we submitted
-	 * that port to hw. We then count the number of times the hw reports
-	 * a context completion or preemption. As only one context can
-	 * be active on hw, we limit resubmission of context to port[0]. This
-	 * is called Lite Restore, of the context.
-	 */
-	struct execlist_port {
-		/**
-		 * @request_count: combined request and submission count
-		 */
-		struct i915_request *request_count;
-#define EXECLIST_COUNT_BITS 2
-#define port_request(p) ptr_mask_bits((p)->request_count, EXECLIST_COUNT_BITS)
-#define port_count(p) ptr_unmask_bits((p)->request_count, EXECLIST_COUNT_BITS)
-#define port_pack(rq, count) ptr_pack_bits(rq, count, EXECLIST_COUNT_BITS)
-#define port_unpack(p, count) ptr_unpack_bits((p)->request_count, count, EXECLIST_COUNT_BITS)
-#define port_set(p, packed) ((p)->request_count = (packed))
-#define port_isset(p) ((p)->request_count)
-#define port_index(p, execlists) ((p) - (execlists)->port)
-
-		/**
-		 * @context_id: context ID for port
-		 */
-		GEM_DEBUG_DECL(u32 context_id);
-
-#define EXECLIST_MAX_PORTS 2
-	} port[EXECLIST_MAX_PORTS];
-
 	/**
-	 * @active: is the HW active? We consider the HW as active after
-	 * submitting any context for execution and until we have seen the
-	 * last context completion event. After that, we do not expect any
-	 * more events until we submit, and so can park the HW.
-	 *
-	 * As we have a small number of different sources from which we feed
-	 * the HW, we track the state of each inside a single bitfield.
+	 * @active: the currently known context executing on HW
 	 */
-	unsigned int active;
-#define EXECLISTS_ACTIVE_USER 0
-#define EXECLISTS_ACTIVE_PREEMPT 1
-#define EXECLISTS_ACTIVE_HWACK 2
+	struct i915_request * const *active;
+	/**
+	 * @inflight: the set of contexts submitted and acknowleged by HW
+	 *
+	 * The set of inflight contexts is managed by reading CS events
+	 * from the HW. On a context-switch event (not preemption), we
+	 * know the HW has transitioned from port0 to port1, and we
+	 * advance our inflight/active tracking accordingly.
+	 */
+	struct i915_request *inflight[EXECLIST_MAX_PORTS + 1 /* sentinel */];
+	/**
+	 * @pending: the next set of contexts submitted to ELSP
+	 *
+	 * We store the array of contexts that we submit to HW (via ELSP) and
+	 * promote them to the inflight array once HW has signaled the
+	 * preemption or idle-to-active event.
	 */
+	struct i915_request *pending[EXECLIST_MAX_PORTS + 1];

 	/**
 	 * @port_mask: number of execlist ports - 1
@@ -257,11 +241,6 @@ struct intel_engine_execlists {
 	 */
 	u32 *csb_status;

-	/**
-	 * @preempt_complete_status: expected CSB upon completing preemption
-	 */
-	u32 preempt_complete_status;
-
 	/**
 	 * @csb_size: context status buffer FIFO size
 	 */
@@ -279,6 +258,7 @@ struct intel_engine_execlists {

 struct intel_engine_cs {
 	struct drm_i915_private *i915;
+	struct intel_gt *gt;
 	struct intel_uncore *uncore;
 	char name[INTEL_ENGINE_CS_MAX_NAME];

@@ -308,7 +288,6 @@ struct intel_engine_cs {
 	struct llist_head barrier_tasks;

 	struct intel_context *kernel_context; /* pinned */
-	struct intel_context *preempt_context; /* pinned; optional */

 	intel_engine_mask_t saturated; /* submitting semaphores too late? */

@@ -404,7 +383,6 @@ struct intel_engine_cs {
 	const struct intel_context_ops *cops;

 	int (*request_alloc)(struct i915_request *rq);
-	int (*init_context)(struct i915_request *rq);

 	int (*emit_flush)(struct i915_request *request, u32 mode);
 #define EMIT_INVALIDATE BIT(0)

@@ -7,6 +7,13 @@
 #ifndef _INTEL_GPU_COMMANDS_H_
 #define _INTEL_GPU_COMMANDS_H_

+/*
+ * Target address alignments required for GPU access e.g.
+ * MI_STORE_DWORD_IMM.
+ */
+#define alignof_dword 4
+#define alignof_qword 8
+
 /*
  * Instruction field definitions used by the command parser
  */

@@ -0,0 +1,250 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2019 Intel Corporation
+ */
+
+#include "i915_drv.h"
+
+#include "intel_gt.h"
+#include "intel_gt_pm.h"
+#include "intel_uncore.h"
+
+void intel_gt_init_early(struct intel_gt *gt, struct drm_i915_private *i915)
+{
+	gt->i915 = i915;
+	gt->uncore = &i915->uncore;
+
+	INIT_LIST_HEAD(&gt->active_rings);
+	INIT_LIST_HEAD(&gt->closed_vma);
+
+	spin_lock_init(&gt->closed_lock);
+
+	intel_gt_init_hangcheck(gt);
+	intel_gt_init_reset(gt);
+	intel_gt_pm_init_early(gt);
+}
+
+void intel_gt_init_hw(struct drm_i915_private *i915)
+{
+	i915->gt.ggtt = &i915->ggtt;
+}
+
+static void rmw_set(struct intel_uncore *uncore, i915_reg_t reg, u32 set)
+{
+	intel_uncore_rmw(uncore, reg, 0, set);
+}
+
+static void rmw_clear(struct intel_uncore *uncore, i915_reg_t reg, u32 clr)
+{
+	intel_uncore_rmw(uncore, reg, clr, 0);
+}
+
+static void clear_register(struct intel_uncore *uncore, i915_reg_t reg)
+{
+	intel_uncore_rmw(uncore, reg, 0, 0);
+}
+
+static void gen8_clear_engine_error_register(struct intel_engine_cs *engine)
+{
+	GEN6_RING_FAULT_REG_RMW(engine, RING_FAULT_VALID, 0);
+	GEN6_RING_FAULT_REG_POSTING_READ(engine);
+}
+
+void
+intel_gt_clear_error_registers(struct intel_gt *gt,
+			       intel_engine_mask_t engine_mask)
+{
+	struct drm_i915_private *i915 = gt->i915;
+	struct intel_uncore *uncore = gt->uncore;
+	u32 eir;
+
+	if (!IS_GEN(i915, 2))
+		clear_register(uncore, PGTBL_ER);
+
+	if (INTEL_GEN(i915) < 4)
+		clear_register(uncore, IPEIR(RENDER_RING_BASE));
+	else
+		clear_register(uncore, IPEIR_I965);
+
+	clear_register(uncore, EIR);
+	eir = intel_uncore_read(uncore, EIR);
+	if (eir) {
+		/*
+		 * some errors might have become stuck,
+		 * mask them.
+		 */
+		DRM_DEBUG_DRIVER("EIR stuck: 0x%08x, masking\n", eir);
+		rmw_set(uncore, EMR, eir);
+		intel_uncore_write(uncore, GEN2_IIR,
+				   I915_MASTER_ERROR_INTERRUPT);
+	}
+
+	if (INTEL_GEN(i915) >= 8) {
+		rmw_clear(uncore, GEN8_RING_FAULT_REG, RING_FAULT_VALID);
+		intel_uncore_posting_read(uncore, GEN8_RING_FAULT_REG);
+	} else if (INTEL_GEN(i915) >= 6) {
+		struct intel_engine_cs *engine;
+		enum intel_engine_id id;
+
+		for_each_engine_masked(engine, i915, engine_mask, id)
+			gen8_clear_engine_error_register(engine);
+	}
+}
+
+static void gen6_check_faults(struct intel_gt *gt)
+{
+	struct intel_engine_cs *engine;
+	enum intel_engine_id id;
+	u32 fault;
+
+	for_each_engine(engine, gt->i915, id) {
+		fault = GEN6_RING_FAULT_REG_READ(engine);
+		if (fault & RING_FAULT_VALID) {
+			DRM_DEBUG_DRIVER("Unexpected fault\n"
+					 "\tAddr: 0x%08lx\n"
+					 "\tAddress space: %s\n"
+					 "\tSource ID: %d\n"
+					 "\tType: %d\n",
+					 fault & PAGE_MASK,
+					 fault & RING_FAULT_GTTSEL_MASK ?
+					 "GGTT" : "PPGTT",
+					 RING_FAULT_SRCID(fault),
+					 RING_FAULT_FAULT_TYPE(fault));
+		}
+	}
+}
+
+static void gen8_check_faults(struct intel_gt *gt)
+{
+	struct intel_uncore *uncore = gt->uncore;
+	u32 fault = intel_uncore_read(uncore, GEN8_RING_FAULT_REG);
+
+	if (fault & RING_FAULT_VALID) {
+		u32 fault_data0, fault_data1;
+		u64 fault_addr;
+
+		fault_data0 = intel_uncore_read(uncore, GEN8_FAULT_TLB_DATA0);
+		fault_data1 = intel_uncore_read(uncore, GEN8_FAULT_TLB_DATA1);
+		fault_addr = ((u64)(fault_data1 & FAULT_VA_HIGH_BITS) << 44) |
+			     ((u64)fault_data0 << 12);
+
+		DRM_DEBUG_DRIVER("Unexpected fault\n"
+				 "\tAddr: 0x%08x_%08x\n"
+				 "\tAddress space: %s\n"
+				 "\tEngine ID: %d\n"
+				 "\tSource ID: %d\n"
+				 "\tType: %d\n",
+				 upper_32_bits(fault_addr),
+				 lower_32_bits(fault_addr),
+				 fault_data1 & FAULT_GTT_SEL ? "GGTT" : "PPGTT",
+				 GEN8_RING_FAULT_ENGINE_ID(fault),
+				 RING_FAULT_SRCID(fault),
+				 RING_FAULT_FAULT_TYPE(fault));
+	}
+}
+
+void intel_gt_check_and_clear_faults(struct intel_gt *gt)
+{
+	struct drm_i915_private *i915 = gt->i915;
+
+	/* From GEN8 onwards we only have one 'All Engine Fault Register' */
+	if (INTEL_GEN(i915) >= 8)
+		gen8_check_faults(gt);
+	else if (INTEL_GEN(i915) >= 6)
+		gen6_check_faults(gt);
+	else
+		return;
+
+	intel_gt_clear_error_registers(gt, ALL_ENGINES);
+}
+
+void intel_gt_flush_ggtt_writes(struct intel_gt *gt)
+{
+	struct drm_i915_private *i915 = gt->i915;
+	intel_wakeref_t wakeref;
+
+	/*
+	 * No actual flushing is required for the GTT write domain for reads
+	 * from the GTT domain. Writes to it "immediately" go to main memory
+	 * as far as we know, so there's no chipset flush. It also doesn't
+	 * land in the GPU render cache.
+	 *
+	 * However, we do have to enforce the order so that all writes through
+	 * the GTT land before any writes to the device, such as updates to
+	 * the GATT itself.
+	 *
+	 * We also have to wait a bit for the writes to land from the GTT.
+	 * An uncached read (i.e. mmio) seems to be ideal for the round-trip
+	 * timing. This issue has only been observed when switching quickly
+	 * between GTT writes and CPU reads from inside the kernel on recent hw,
+	 * and it appears to only affect discrete GTT blocks (i.e. on LLC
+	 * system agents we cannot reproduce this behaviour, until Cannonlake
+	 * that was!).
+	 */
+
+	wmb();
+
+	if (INTEL_INFO(i915)->has_coherent_ggtt)
+		return;
+
+	intel_gt_chipset_flush(gt);
+
+	with_intel_runtime_pm(&i915->runtime_pm, wakeref) {
+		struct intel_uncore *uncore = gt->uncore;
+
+		spin_lock_irq(&uncore->lock);
+		intel_uncore_posting_read_fw(uncore,
+					     RING_HEAD(RENDER_RING_BASE));
+		spin_unlock_irq(&uncore->lock);
+	}
+}
+
+void intel_gt_chipset_flush(struct intel_gt *gt)
+{
+	wmb();
+	if (INTEL_GEN(gt->i915) < 6)
+		intel_gtt_chipset_flush();
+}
+
+int intel_gt_init_scratch(struct intel_gt *gt, unsigned int size)
+{
+	struct drm_i915_private *i915 = gt->i915;
+	struct drm_i915_gem_object *obj;
+	struct i915_vma *vma;
+	int ret;
+
+	obj = i915_gem_object_create_stolen(i915, size);
+	if (!obj)
+		obj = i915_gem_object_create_internal(i915, size);
+	if (IS_ERR(obj)) {
+		DRM_ERROR("Failed to allocate scratch page\n");
+		return PTR_ERR(obj);
+	}
+
+	vma = i915_vma_instance(obj, &gt->ggtt->vm, NULL);
+	if (IS_ERR(vma)) {
+		ret = PTR_ERR(vma);
+		goto err_unref;
+	}
+
+	ret = i915_vma_pin(vma, 0, 0, PIN_GLOBAL | PIN_HIGH);
+	if (ret)
+		goto err_unref;
+
+	gt->scratch = vma;
+	return 0;
+
+err_unref:
+	i915_gem_object_put(obj);
+	return ret;
+}
+
+void intel_gt_fini_scratch(struct intel_gt *gt)
+{
+	i915_vma_unpin_and_release(&gt->scratch, 0);
+}
+
+void intel_gt_cleanup_early(struct intel_gt *gt)
+{
+	intel_gt_fini_reset(gt);
+}

@@ -0,0 +1,60 @@
/* SPDX-License-Identifier: MIT */
/*
 * Copyright © 2019 Intel Corporation
 */

#ifndef __INTEL_GT__
#define __INTEL_GT__

#include "intel_engine_types.h"
#include "intel_gt_types.h"
#include "intel_reset.h"

struct drm_i915_private;

static inline struct intel_gt *uc_to_gt(struct intel_uc *uc)
{
	return container_of(uc, struct intel_gt, uc);
}

static inline struct intel_gt *guc_to_gt(struct intel_guc *guc)
{
	return container_of(guc, struct intel_gt, uc.guc);
}

static inline struct intel_gt *huc_to_gt(struct intel_huc *huc)
{
	return container_of(huc, struct intel_gt, uc.huc);
}

void intel_gt_init_early(struct intel_gt *gt, struct drm_i915_private *i915);
void intel_gt_init_hw(struct drm_i915_private *i915);

void intel_gt_cleanup_early(struct intel_gt *gt);

void intel_gt_check_and_clear_faults(struct intel_gt *gt);
void intel_gt_clear_error_registers(struct intel_gt *gt,
				    intel_engine_mask_t engine_mask);

void intel_gt_flush_ggtt_writes(struct intel_gt *gt);
void intel_gt_chipset_flush(struct intel_gt *gt);

void intel_gt_init_hangcheck(struct intel_gt *gt);

int intel_gt_init_scratch(struct intel_gt *gt, unsigned int size);
void intel_gt_fini_scratch(struct intel_gt *gt);

static inline u32 intel_gt_scratch_offset(const struct intel_gt *gt,
					  enum intel_gt_scratch_field field)
{
	return i915_ggtt_offset(gt->scratch) + field;
}

static inline bool intel_gt_is_wedged(struct intel_gt *gt)
{
	return __intel_reset_failed(&gt->reset);
}

void intel_gt_queue_hangcheck(struct intel_gt *gt);

#endif /* __INTEL_GT_H__ */
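Aside: the uc_to_gt()/guc_to_gt()/huc_to_gt() helpers above work because intel_uc (and its guc/huc members) are embedded by value inside struct intel_gt, so container_of() can recover the enclosing structure by pointer arithmetic alone. A standalone sketch of the same upcast pattern, with hypothetical structure names:

#include <stddef.h>

struct engine { int id; };
struct device { int flags; struct engine engine; };

/* container_of in miniature: subtract the member's offset from the
 * member pointer to get back to the enclosing structure.
 */
static struct device *engine_to_device(struct engine *e)
{
	return (struct device *)((char *)e - offsetof(struct device, engine));
}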
@@ -5,7 +5,9 @@
 */

#include "i915_drv.h"
#include "i915_params.h"
#include "intel_engine_pm.h"
#include "intel_gt.h"
#include "intel_gt_pm.h"
#include "intel_pm.h"
#include "intel_wakeref.h"

@@ -17,8 +19,8 @@ static void pm_notify(struct drm_i915_private *i915, int state)

static int intel_gt_unpark(struct intel_wakeref *wf)
{
	struct drm_i915_private *i915 =
		container_of(wf, typeof(*i915), gt.wakeref);
	struct intel_gt *gt = container_of(wf, typeof(*gt), wakeref);
	struct drm_i915_private *i915 = gt->i915;

	GEM_TRACE("\n");

@@ -33,8 +35,8 @@ static int intel_gt_unpark(struct intel_wakeref *wf)
	 * Work around it by grabbing a GT IRQ power domain whilst there is any
	 * GT activity, preventing any DC state transitions.
	 */
	i915->gt.awake = intel_display_power_get(i915, POWER_DOMAIN_GT_IRQ);
	GEM_BUG_ON(!i915->gt.awake);
	gt->awake = intel_display_power_get(i915, POWER_DOMAIN_GT_IRQ);
	GEM_BUG_ON(!gt->awake);

	intel_enable_gt_powersave(i915);

@@ -44,16 +46,18 @@ static int intel_gt_unpark(struct intel_wakeref *wf)

	i915_pmu_gt_unparked(i915);

	i915_queue_hangcheck(i915);
	intel_gt_queue_hangcheck(gt);

	pm_notify(i915, INTEL_GT_UNPARK);

	return 0;
}

void intel_gt_pm_get(struct drm_i915_private *i915)
void intel_gt_pm_get(struct intel_gt *gt)
{
	intel_wakeref_get(&i915->runtime_pm, &i915->gt.wakeref, intel_gt_unpark);
	struct intel_runtime_pm *rpm = &gt->i915->runtime_pm;

	intel_wakeref_get(rpm, &gt->wakeref, intel_gt_unpark);
}

static int intel_gt_park(struct intel_wakeref *wf)

@@ -76,28 +80,30 @@ static int intel_gt_park(struct intel_wakeref *wf)
	return 0;
}

void intel_gt_pm_put(struct drm_i915_private *i915)
void intel_gt_pm_put(struct intel_gt *gt)
{
	intel_wakeref_put(&i915->runtime_pm, &i915->gt.wakeref, intel_gt_park);
	struct intel_runtime_pm *rpm = &gt->i915->runtime_pm;

	intel_wakeref_put(rpm, &gt->wakeref, intel_gt_park);
}

void intel_gt_pm_init(struct drm_i915_private *i915)
void intel_gt_pm_init_early(struct intel_gt *gt)
{
	intel_wakeref_init(&i915->gt.wakeref);
	BLOCKING_INIT_NOTIFIER_HEAD(&i915->gt.pm_notifications);
	intel_wakeref_init(&gt->wakeref);
	BLOCKING_INIT_NOTIFIER_HEAD(&gt->pm_notifications);
}

static bool reset_engines(struct drm_i915_private *i915)
static bool reset_engines(struct intel_gt *gt)
{
	if (INTEL_INFO(i915)->gpu_reset_clobbers_display)
	if (INTEL_INFO(gt->i915)->gpu_reset_clobbers_display)
		return false;

	return intel_gpu_reset(i915, ALL_ENGINES) == 0;
	return __intel_gt_reset(gt, ALL_ENGINES) == 0;
}

/**
 * intel_gt_sanitize: called after the GPU has lost power
 * @i915: the i915 device
 * @gt: the i915 GT container
 * @force: ignore a failed reset and sanitize engine state anyway
 *
 * Anytime we reset the GPU, either with an explicit GPU reset or through a

@@ -105,21 +111,23 @@ static bool reset_engines(struct drm_i915_private *i915)
 * to match. Note that calling intel_gt_sanitize() if the GPU has not
 * been reset results in much confusion!
 */
void intel_gt_sanitize(struct drm_i915_private *i915, bool force)
void intel_gt_sanitize(struct intel_gt *gt, bool force)
{
	struct intel_engine_cs *engine;
	enum intel_engine_id id;

	GEM_TRACE("\n");

	if (!reset_engines(i915) && !force)
	intel_uc_sanitize(&gt->uc);

	if (!reset_engines(gt) && !force)
		return;

	for_each_engine(engine, i915, id)
		intel_engine_reset(engine, false);
	for_each_engine(engine, gt->i915, id)
		__intel_engine_reset(engine, false);
}

int intel_gt_resume(struct drm_i915_private *i915)
int intel_gt_resume(struct intel_gt *gt)
{
	struct intel_engine_cs *engine;
	enum intel_engine_id id;

@@ -131,8 +139,8 @@ int intel_gt_resume(struct drm_i915_private *i915)
	 * Only the kernel contexts should remain pinned over suspend,
	 * allowing us to fixup the user contexts on their first pin.
	 */
	intel_gt_pm_get(i915);
	for_each_engine(engine, i915, id) {
	intel_gt_pm_get(gt);
	for_each_engine(engine, gt->i915, id) {
		struct intel_context *ce;

		intel_engine_pm_get(engine);

@@ -141,22 +149,18 @@ int intel_gt_resume(struct drm_i915_private *i915)
		if (ce)
			ce->ops->reset(ce);

		ce = engine->preempt_context;
		if (ce)
			ce->ops->reset(ce);

		engine->serial++; /* kernel context lost */
		err = engine->resume(engine);

		intel_engine_pm_put(engine);
		if (err) {
			dev_err(i915->drm.dev,
			dev_err(gt->i915->drm.dev,
				"Failed to restart %s (%d)\n",
				engine->name, err);
			break;
		}
	}
	intel_gt_pm_put(i915);
	intel_gt_pm_put(gt);

	return err;
}
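Aside: the get/put conversion above is the intel_wakeref refcount idiom. The first get runs the unpark callback (powering up and re-arming hangcheck); the last put runs the park callback. A hedged sketch of how a caller is expected to bracket GT activity, assuming the post-patch signatures (illustrative only):

static void do_gt_work(struct intel_gt *gt)
{
	intel_gt_pm_get(gt);	/* first get wakes the GT and unparks */

	/* ... submit requests, poke registers ... */

	intel_gt_pm_put(gt);	/* last put schedules parking */
}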

@@ -9,19 +9,19 @@

#include <linux/types.h>

struct drm_i915_private;
struct intel_gt;

enum {
	INTEL_GT_UNPARK,
	INTEL_GT_PARK,
};

void intel_gt_pm_get(struct drm_i915_private *i915);
void intel_gt_pm_put(struct drm_i915_private *i915);
void intel_gt_pm_get(struct intel_gt *gt);
void intel_gt_pm_put(struct intel_gt *gt);

void intel_gt_pm_init(struct drm_i915_private *i915);
void intel_gt_pm_init_early(struct intel_gt *gt);

void intel_gt_sanitize(struct drm_i915_private *i915, bool force);
int intel_gt_resume(struct drm_i915_private *i915);
void intel_gt_sanitize(struct intel_gt *gt, bool force);
int intel_gt_resume(struct intel_gt *gt);

#endif /* INTEL_GT_PM_H */
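Aside: the INTEL_GT_UNPARK/INTEL_GT_PARK events are delivered through the blocking notifier head initialised in intel_gt_pm_init_early(). A hedged sketch of how a listener would subscribe; the callback and its body are hypothetical, not part of this patch:

#include <linux/notifier.h>

static int pm_event(struct notifier_block *nb, unsigned long action, void *data)
{
	switch (action) {
	case INTEL_GT_UNPARK:
		/* GT woke up: e.g. start sampling */
		break;
	case INTEL_GT_PARK:
		/* GT idled: stop sampling */
		break;
	}
	return NOTIFY_OK;
}

static struct notifier_block pm_nb = { .notifier_call = pm_event };

/* blocking_notifier_chain_register(&gt->pm_notifications, &pm_nb); */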
@@ -0,0 +1,96 @@
/* SPDX-License-Identifier: MIT */
/*
 * Copyright © 2019 Intel Corporation
 */

#ifndef __INTEL_GT_TYPES__
#define __INTEL_GT_TYPES__

#include <linux/ktime.h>
#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/notifier.h>
#include <linux/spinlock.h>
#include <linux/types.h>

#include "uc/intel_uc.h"

#include "i915_vma.h"
#include "intel_reset_types.h"
#include "intel_wakeref.h"

struct drm_i915_private;
struct i915_ggtt;
struct intel_uncore;

struct intel_hangcheck {
	/* For hangcheck timer */
#define DRM_I915_HANGCHECK_PERIOD 1500 /* in ms */
#define DRM_I915_HANGCHECK_JIFFIES msecs_to_jiffies(DRM_I915_HANGCHECK_PERIOD)

	struct delayed_work work;
};

struct intel_gt {
	struct drm_i915_private *i915;
	struct intel_uncore *uncore;
	struct i915_ggtt *ggtt;

	struct intel_uc uc;

	struct intel_gt_timelines {
		struct mutex mutex; /* protects list */
		struct list_head active_list;

		/* Pack multiple timelines' seqnos into the same page */
		spinlock_t hwsp_lock;
		struct list_head hwsp_free_list;
	} timelines;

	struct list_head active_rings;

	struct intel_wakeref wakeref;

	struct list_head closed_vma;
	spinlock_t closed_lock; /* guards the list of closed_vma */

	struct intel_hangcheck hangcheck;
	struct intel_reset reset;

	/**
	 * Is the GPU currently considered idle, or busy executing
	 * userspace requests? Whilst idle, we allow runtime power
	 * management to power down the hardware and display clocks.
	 * In order to reduce the effect on performance, there
	 * is a slight delay before we do so.
	 */
	intel_wakeref_t awake;

	struct blocking_notifier_head pm_notifications;

	ktime_t last_init_time;

	struct i915_vma *scratch;

	u32 pm_imr;
	u32 pm_ier;

	u32 pm_guc_events;
};

enum intel_gt_scratch_field {
	/* 8 bytes */
	INTEL_GT_SCRATCH_FIELD_DEFAULT = 0,

	/* 8 bytes */
	INTEL_GT_SCRATCH_FIELD_CLEAR_SLM_WA = 128,

	/* 8 bytes */
	INTEL_GT_SCRATCH_FIELD_RENDER_FLUSH = 128,

	/* 8 bytes */
	INTEL_GT_SCRATCH_FIELD_COHERENTL3_WA = 256,

};

#endif /* __INTEL_GT_TYPES_H__ */
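Aside: the scratch fields above are byte offsets into a single pinned scratch page; consumers add the field to the page's GGTT offset via intel_gt_scratch_offset() in intel_gt.h. A small illustrative computation (the base value is made up for the example):

/* Illustrative only: a scratch address is the page's GGTT offset
 * plus the field's byte offset; each field names an 8-byte slot.
 */
static u32 scratch_addr_example(u32 ggtt_base /* i915_ggtt_offset(gt->scratch) */)
{
	return ggtt_base + INTEL_GT_SCRATCH_FIELD_RENDER_FLUSH; /* base + 128 */
}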
@@ -22,8 +22,10 @@
 *
 */

#include "intel_reset.h"
#include "i915_drv.h"
#include "intel_engine.h"
#include "intel_gt.h"
#include "intel_reset.h"

struct hangcheck {
	u64 acthd;

@@ -57,9 +59,6 @@ static bool subunits_stuck(struct intel_engine_cs *engine)
	int slice;
	int subslice;

	if (engine->id != RCS0)
		return true;

	intel_engine_get_instdone(engine, &instdone);

	/* There might be unstable subunit states even when

@@ -103,7 +102,6 @@ head_stuck(struct intel_engine_cs *engine, u64 acthd)
static enum intel_engine_hangcheck_action
engine_stuck(struct intel_engine_cs *engine, u64 acthd)
{
	struct drm_i915_private *dev_priv = engine->i915;
	enum intel_engine_hangcheck_action ha;
	u32 tmp;

@@ -111,7 +109,7 @@ engine_stuck(struct intel_engine_cs *engine, u64 acthd)
	if (ha != ENGINE_DEAD)
		return ha;

	if (IS_GEN(dev_priv, 2))
	if (IS_GEN(engine->i915, 2))
		return ENGINE_DEAD;

	/* Is the chip hanging on a WAIT_FOR_EVENT?

@@ -121,8 +119,8 @@ engine_stuck(struct intel_engine_cs *engine, u64 acthd)
	 */
	tmp = ENGINE_READ(engine, RING_CTL);
	if (tmp & RING_WAIT) {
		i915_handle_error(dev_priv, engine->mask, 0,
				  "stuck wait on %s", engine->name);
		intel_gt_handle_error(engine->gt, engine->mask, 0,
				      "stuck wait on %s", engine->name);
		ENGINE_WRITE(engine, RING_CTL, tmp);
		return ENGINE_WAIT_KICK;
	}

@@ -222,7 +220,7 @@ static void hangcheck_accumulate_sample(struct intel_engine_cs *engine,
				 I915_ENGINE_WEDGED_TIMEOUT);
}

static void hangcheck_declare_hang(struct drm_i915_private *i915,
static void hangcheck_declare_hang(struct intel_gt *gt,
				   intel_engine_mask_t hung,
				   intel_engine_mask_t stuck)
{

@@ -238,12 +236,12 @@ static void hangcheck_declare_hang(struct drm_i915_private *i915,
		hung &= ~stuck;
	len = scnprintf(msg, sizeof(msg),
			"%s on ", stuck == hung ? "no progress" : "hang");
	for_each_engine_masked(engine, i915, hung, tmp)
	for_each_engine_masked(engine, gt->i915, hung, tmp)
		len += scnprintf(msg + len, sizeof(msg) - len,
				 "%s, ", engine->name);
	msg[len-2] = '\0';

	return i915_handle_error(i915, hung, I915_ERROR_CAPTURE, "%s", msg);
	return intel_gt_handle_error(gt, hung, I915_ERROR_CAPTURE, "%s", msg);
}

/*

@@ -254,11 +252,10 @@ static void hangcheck_declare_hang(struct drm_i915_private *i915,
 * we kick the ring. If we see no progress on three subsequent calls
 * we assume chip is wedged and try to fix it by resetting the chip.
 */
static void i915_hangcheck_elapsed(struct work_struct *work)
static void hangcheck_elapsed(struct work_struct *work)
{
	struct drm_i915_private *dev_priv =
		container_of(work, typeof(*dev_priv),
			     gpu_error.hangcheck_work.work);
	struct intel_gt *gt =
		container_of(work, typeof(*gt), hangcheck.work.work);
	intel_engine_mask_t hung = 0, stuck = 0, wedged = 0;
	struct intel_engine_cs *engine;
	enum intel_engine_id id;

@@ -267,13 +264,13 @@ static void i915_hangcheck_elapsed(struct work_struct *work)
	if (!i915_modparams.enable_hangcheck)
		return;

	if (!READ_ONCE(dev_priv->gt.awake))
	if (!READ_ONCE(gt->awake))
		return;

	if (i915_terminally_wedged(dev_priv))
	if (intel_gt_is_wedged(gt))
		return;

	wakeref = intel_runtime_pm_get_if_in_use(&dev_priv->runtime_pm);
	wakeref = intel_runtime_pm_get_if_in_use(&gt->i915->runtime_pm);
	if (!wakeref)
		return;

@@ -281,9 +278,9 @@ static void i915_hangcheck_elapsed(struct work_struct *work)
	 * periodically arm the mmio checker to see if we are triggering
	 * any invalid access.
	 */
	intel_uncore_arm_unclaimed_mmio_detection(&dev_priv->uncore);
	intel_uncore_arm_unclaimed_mmio_detection(gt->uncore);

	for_each_engine(engine, dev_priv, id) {
	for_each_engine(engine, gt->i915, id) {
		struct hangcheck hc;

		intel_engine_signal_breadcrumbs(engine);

@@ -305,7 +302,7 @@ static void i915_hangcheck_elapsed(struct work_struct *work)
	if (GEM_SHOW_DEBUG() && (hung | stuck)) {
		struct drm_printer p = drm_debug_printer("hangcheck");

		for_each_engine(engine, dev_priv, id) {
		for_each_engine(engine, gt->i915, id) {
			if (intel_engine_is_idle(engine))
				continue;

@@ -314,20 +311,37 @@ static void i915_hangcheck_elapsed(struct work_struct *work)
	}

	if (wedged) {
		dev_err(dev_priv->drm.dev,
		dev_err(gt->i915->drm.dev,
			"GPU recovery timed out,"
			" cancelling all in-flight rendering.\n");
		GEM_TRACE_DUMP();
		i915_gem_set_wedged(dev_priv);
		intel_gt_set_wedged(gt);
	}

	if (hung)
		hangcheck_declare_hang(dev_priv, hung, stuck);
		hangcheck_declare_hang(gt, hung, stuck);

	intel_runtime_pm_put(&dev_priv->runtime_pm, wakeref);
	intel_runtime_pm_put(&gt->i915->runtime_pm, wakeref);

	/* Reset timer in case GPU hangs without another request being added */
	i915_queue_hangcheck(dev_priv);
	intel_gt_queue_hangcheck(gt);
}

void intel_gt_queue_hangcheck(struct intel_gt *gt)
{
	unsigned long delay;

	if (unlikely(!i915_modparams.enable_hangcheck))
		return;

	/*
	 * Don't continually defer the hangcheck so that it is always run at
	 * least once after work has been scheduled on any ring. Otherwise,
	 * we will ignore a hung ring if a second ring is kept busy.
	 */

	delay = round_jiffies_up_relative(DRM_I915_HANGCHECK_JIFFIES);
	queue_delayed_work(system_long_wq, &gt->hangcheck.work, delay);
}
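Aside: queue_delayed_work() is a no-op when the work is already pending, which is what enforces the "don't continually defer" rule in the comment above; repeated calls cannot push the deadline out. A self-contained sketch of the same self-rearming watchdog pattern, with hypothetical names:

#include <linux/workqueue.h>

static void watchdog_fn(struct work_struct *work);
static DECLARE_DELAYED_WORK(watchdog, watchdog_fn);

static void watchdog_arm(void)
{
	/* No-op if already pending, so callers cannot defer the check. */
	queue_delayed_work(system_long_wq, &watchdog, HZ);
}

static void watchdog_fn(struct work_struct *work)
{
	/* ... check for forward progress ... */
	watchdog_arm();	/* re-arm for the next period */
}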

void intel_engine_init_hangcheck(struct intel_engine_cs *engine)

@@ -336,10 +350,9 @@ void intel_engine_init_hangcheck(struct intel_engine_cs *engine)
	engine->hangcheck.action_timestamp = jiffies;
}

void intel_hangcheck_init(struct drm_i915_private *i915)
void intel_gt_init_hangcheck(struct intel_gt *gt)
{
	INIT_DELAYED_WORK(&i915->gpu_error.hangcheck_work,
			  i915_hangcheck_elapsed);
	INIT_DELAYED_WORK(&gt->hangcheck.work, hangcheck_elapsed);
}

#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)

[File diff suppressed because it is too large]
@@ -23,6 +23,7 @@
#include "i915_drv.h"

#include "intel_engine.h"
#include "intel_gt.h"
#include "intel_mocs.h"
#include "intel_lrc.h"

@@ -247,7 +248,7 @@ static const struct drm_i915_mocs_entry icelake_mocs_table[] = {

/**
 * get_mocs_settings()
 * @dev_priv: i915 device.
 * @gt: gt device
 * @table: Output table that will be made to point at appropriate
 *	   MOCS values for the device.
 *

@@ -257,33 +258,34 @@ static const struct drm_i915_mocs_entry icelake_mocs_table[] = {
 *
 * Return: true if there are applicable MOCS settings for the device.
 */
static bool get_mocs_settings(struct drm_i915_private *dev_priv,
static bool get_mocs_settings(struct intel_gt *gt,
			      struct drm_i915_mocs_table *table)
{
	struct drm_i915_private *i915 = gt->i915;
	bool result = false;

	if (INTEL_GEN(dev_priv) >= 11) {
	if (INTEL_GEN(i915) >= 11) {
		table->size = ARRAY_SIZE(icelake_mocs_table);
		table->table = icelake_mocs_table;
		table->n_entries = GEN11_NUM_MOCS_ENTRIES;
		result = true;
	} else if (IS_GEN9_BC(dev_priv) || IS_CANNONLAKE(dev_priv)) {
	} else if (IS_GEN9_BC(i915) || IS_CANNONLAKE(i915)) {
		table->size = ARRAY_SIZE(skylake_mocs_table);
		table->n_entries = GEN9_NUM_MOCS_ENTRIES;
		table->table = skylake_mocs_table;
		result = true;
	} else if (IS_GEN9_LP(dev_priv)) {
	} else if (IS_GEN9_LP(i915)) {
		table->size = ARRAY_SIZE(broxton_mocs_table);
		table->n_entries = GEN9_NUM_MOCS_ENTRIES;
		table->table = broxton_mocs_table;
		result = true;
	} else {
		WARN_ONCE(INTEL_GEN(dev_priv) >= 9,
		WARN_ONCE(INTEL_GEN(i915) >= 9,
			  "Platform that should have a MOCS table does not.\n");
	}

	/* WaDisableSkipCaching:skl,bxt,kbl,glk */
	if (IS_GEN(dev_priv, 9)) {
	if (IS_GEN(i915, 9)) {
		int i;

		for (i = 0; i < table->size; i++)

@@ -338,12 +340,16 @@ static u32 get_entry_control(const struct drm_i915_mocs_table *table,
 */
void intel_mocs_init_engine(struct intel_engine_cs *engine)
{
	struct drm_i915_private *dev_priv = engine->i915;
	struct intel_gt *gt = engine->gt;
	struct intel_uncore *uncore = gt->uncore;
	struct drm_i915_mocs_table table;
	unsigned int index;
	u32 unused_value;

	if (!get_mocs_settings(dev_priv, &table))
	/* Called under a blanket forcewake */
	assert_forcewakes_active(uncore, FORCEWAKE_ALL);

	if (!get_mocs_settings(gt, &table))
		return;

	/* Set unused values to PTE */

@@ -352,12 +358,16 @@ void intel_mocs_init_engine(struct intel_engine_cs *engine)
	for (index = 0; index < table.size; index++) {
		u32 value = get_entry_control(&table, index);

		I915_WRITE(mocs_register(engine->id, index), value);
		intel_uncore_write_fw(uncore,
				      mocs_register(engine->id, index),
				      value);
	}

	/* All remaining entries are also unused */
	for (; index < table.n_entries; index++)
		I915_WRITE(mocs_register(engine->id, index), unused_value);
		intel_uncore_write_fw(uncore,
				      mocs_register(engine->id, index),
				      unused_value);
}

/**

@@ -490,7 +500,7 @@ static int emit_mocs_l3cc_table(struct i915_request *rq,

/**
 * intel_mocs_init_l3cc_table() - program the mocs control table
 * @dev_priv: i915 device private
 * @gt: the intel_gt container
 *
 * This function simply programs the mocs registers for the given table
 * starting at the given address. This register set is programmed in pairs.

@@ -502,13 +512,14 @@ static int emit_mocs_l3cc_table(struct i915_request *rq,
 *
 * Return: Nothing.
 */
void intel_mocs_init_l3cc_table(struct drm_i915_private *dev_priv)
void intel_mocs_init_l3cc_table(struct intel_gt *gt)
{
	struct intel_uncore *uncore = gt->uncore;
	struct drm_i915_mocs_table table;
	unsigned int i;
	u16 unused_value;

	if (!get_mocs_settings(dev_priv, &table))
	if (!get_mocs_settings(gt, &table))
		return;

	/* Set unused values to PTE */

@@ -518,23 +529,27 @@ void intel_mocs_init_l3cc_table(struct drm_i915_private *dev_priv)
		u16 low = get_entry_l3cc(&table, 2 * i);
		u16 high = get_entry_l3cc(&table, 2 * i + 1);

		I915_WRITE(GEN9_LNCFCMOCS(i),
			   l3cc_combine(&table, low, high));
		intel_uncore_write(uncore,
				   GEN9_LNCFCMOCS(i),
				   l3cc_combine(&table, low, high));
	}

	/* Odd table size - 1 left over */
	if (table.size & 0x01) {
		u16 low = get_entry_l3cc(&table, 2 * i);

		I915_WRITE(GEN9_LNCFCMOCS(i),
			   l3cc_combine(&table, low, unused_value));
		intel_uncore_write(uncore,
				   GEN9_LNCFCMOCS(i),
				   l3cc_combine(&table, low, unused_value));
		i++;
	}

	/* All remaining entries are also unused */
	for (; i < table.n_entries / 2; i++)
		I915_WRITE(GEN9_LNCFCMOCS(i),
			   l3cc_combine(&table, unused_value, unused_value));
		intel_uncore_write(uncore,
				   GEN9_LNCFCMOCS(i),
				   l3cc_combine(&table, unused_value,
						unused_value));
}
|
||||
|
@ -553,12 +568,15 @@ void intel_mocs_init_l3cc_table(struct drm_i915_private *dev_priv)
|
|||
*
|
||||
* Return: 0 on success, otherwise the error status.
|
||||
*/
|
||||
int intel_rcs_context_init_mocs(struct i915_request *rq)
|
||||
int intel_mocs_emit(struct i915_request *rq)
|
||||
{
|
||||
struct drm_i915_mocs_table t;
|
||||
int ret;
|
||||
|
||||
if (get_mocs_settings(rq->i915, &t)) {
|
||||
if (rq->engine->class != RENDER_CLASS)
|
||||
return 0;
|
||||
|
||||
if (get_mocs_settings(rq->engine->gt, &t)) {
|
||||
/* Program the RCS control registers */
|
||||
ret = emit_mocs_control_table(rq, &t);
|
||||
if (ret)
|
||||
|
|
|
@ -52,9 +52,11 @@
|
|||
struct drm_i915_private;
|
||||
struct i915_request;
|
||||
struct intel_engine_cs;
|
||||
struct intel_gt;
|
||||
|
||||
int intel_rcs_context_init_mocs(struct i915_request *rq);
|
||||
void intel_mocs_init_l3cc_table(struct drm_i915_private *dev_priv);
|
||||
void intel_mocs_init_l3cc_table(struct intel_gt *gt);
|
||||
void intel_mocs_init_engine(struct intel_engine_cs *engine);
|
||||
|
||||
int intel_mocs_emit(struct i915_request *rq);
|
||||
|
||||
#endif
|
||||
|
|

@@ -26,10 +26,9 @@
 */

#include "i915_drv.h"
#include "i915_gem_render_state.h"
#include "intel_renderstate.h"

struct intel_render_state {
struct intel_renderstate {
	const struct intel_renderstate_rodata *rodata;
	struct drm_i915_gem_object *obj;
	struct i915_vma *vma;

@@ -42,7 +41,7 @@ struct intel_render_state {
static const struct intel_renderstate_rodata *
render_state_get_rodata(const struct intel_engine_cs *engine)
{
	if (engine->id != RCS0)
	if (engine->class != RENDER_CLASS)
		return NULL;

	switch (INTEL_GEN(engine->i915)) {

@@ -75,7 +74,7 @@ render_state_get_rodata(const struct intel_engine_cs *engine)
		(batch)[(i)++] = (val); \
} while(0)

static int render_state_setup(struct intel_render_state *so,
static int render_state_setup(struct intel_renderstate *so,
			      struct drm_i915_private *i915)
{
	const struct intel_renderstate_rodata *rodata = so->rodata;

@@ -177,10 +176,10 @@ err:

#undef OUT_BATCH

int i915_gem_render_state_emit(struct i915_request *rq)
int intel_renderstate_emit(struct i915_request *rq)
{
	struct intel_engine_cs *engine = rq->engine;
	struct intel_render_state so = {}; /* keep the compiler happy */
	struct intel_renderstate so = {}; /* keep the compiler happy */
	int err;

	so.rodata = render_state_get_rodata(engine);

@@ -194,7 +193,7 @@ int i915_gem_render_state_emit(struct i915_request *rq)
	if (IS_ERR(so.obj))
		return PTR_ERR(so.obj);

	so.vma = i915_vma_instance(so.obj, &engine->i915->ggtt.vm, NULL);
	so.vma = i915_vma_instance(so.obj, &engine->gt->ggtt->vm, NULL);
	if (IS_ERR(so.vma)) {
		err = PTR_ERR(so.vma);
		goto err_obj;

@@ -21,11 +21,13 @@
 * DEALINGS IN THE SOFTWARE.
 */

#ifndef _INTEL_RENDERSTATE_H
#define _INTEL_RENDERSTATE_H
#ifndef _INTEL_RENDERSTATE_H_
#define _INTEL_RENDERSTATE_H_

#include <linux/types.h>

struct i915_request;

struct intel_renderstate_rodata {
	const u32 *reloc;
	const u32 *batch;

@@ -44,4 +46,6 @@ extern const struct intel_renderstate_rodata gen7_null_state;
extern const struct intel_renderstate_rodata gen8_null_state;
extern const struct intel_renderstate_rodata gen9_null_state;

#endif /* INTEL_RENDERSTATE_H */
int intel_renderstate_emit(struct i915_request *rq);

#endif /* _INTEL_RENDERSTATE_H_ */

[File diff suppressed because it is too large]
@@ -11,58 +11,67 @@
#include <linux/types.h>
#include <linux/srcu.h>

#include "gt/intel_engine_types.h"
#include "intel_engine_types.h"
#include "intel_reset_types.h"

struct drm_i915_private;
struct i915_request;
struct intel_engine_cs;
struct intel_gt;
struct intel_guc;

void intel_gt_init_reset(struct intel_gt *gt);
void intel_gt_fini_reset(struct intel_gt *gt);

__printf(4, 5)
void i915_handle_error(struct drm_i915_private *i915,
		       intel_engine_mask_t engine_mask,
		       unsigned long flags,
		       const char *fmt, ...);
void intel_gt_handle_error(struct intel_gt *gt,
			   intel_engine_mask_t engine_mask,
			   unsigned long flags,
			   const char *fmt, ...);
#define I915_ERROR_CAPTURE BIT(0)

void i915_check_and_clear_faults(struct drm_i915_private *i915);
void intel_gt_reset(struct intel_gt *gt,
		    intel_engine_mask_t stalled_mask,
		    const char *reason);
int intel_engine_reset(struct intel_engine_cs *engine,
		       const char *reason);

void i915_reset(struct drm_i915_private *i915,
		intel_engine_mask_t stalled_mask,
		const char *reason);
int i915_reset_engine(struct intel_engine_cs *engine,
		      const char *reason);
void __i915_request_reset(struct i915_request *rq, bool guilty);

void i915_reset_request(struct i915_request *rq, bool guilty);
int __must_check intel_gt_reset_trylock(struct intel_gt *gt);
void intel_gt_reset_unlock(struct intel_gt *gt, int tag);

int __must_check i915_reset_trylock(struct drm_i915_private *i915);
void i915_reset_unlock(struct drm_i915_private *i915, int tag);
void intel_gt_set_wedged(struct intel_gt *gt);
bool intel_gt_unset_wedged(struct intel_gt *gt);
int intel_gt_terminally_wedged(struct intel_gt *gt);

int i915_terminally_wedged(struct drm_i915_private *i915);
int __intel_gt_reset(struct intel_gt *gt, intel_engine_mask_t engine_mask);

int intel_reset_guc(struct intel_gt *gt);

struct intel_wedge_me {
	struct delayed_work work;
	struct intel_gt *gt;
	const char *name;
};

void __intel_init_wedge(struct intel_wedge_me *w,
			struct intel_gt *gt,
			long timeout,
			const char *name);
void __intel_fini_wedge(struct intel_wedge_me *w);

#define intel_wedge_on_timeout(W, GT, TIMEOUT) \
	for (__intel_init_wedge((W), (GT), (TIMEOUT), __func__); \
	     (W)->gt; \
	     __intel_fini_wedge((W)))
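Aside: intel_wedge_on_timeout() reads as a for-loop macro: it arms a delayed work that wedges the GT if the loop body has not completed within the timeout, and disarms it on exit. A hedged usage sketch follows; the body shown is illustrative, not taken from this patch:

/* Illustrative only: wedge the GT if the wait does not finish
 * within 5 seconds.
 */
static void wait_or_wedge(struct intel_gt *gt)
{
	struct intel_wedge_me w;

	intel_wedge_on_timeout(&w, gt, 5 * HZ)
		wait_for_idle(gt);	/* hypothetical blocking wait */
}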

static inline bool __intel_reset_failed(const struct intel_reset *reset)
{
	return unlikely(test_bit(I915_WEDGED, &reset->flags));
}

bool intel_has_gpu_reset(struct drm_i915_private *i915);
bool intel_has_reset_engine(struct drm_i915_private *i915);

int intel_gpu_reset(struct drm_i915_private *i915,
		    intel_engine_mask_t engine_mask);

int intel_reset_guc(struct drm_i915_private *i915);

struct i915_wedge_me {
	struct delayed_work work;
	struct drm_i915_private *i915;
	const char *name;
};

void __i915_init_wedge(struct i915_wedge_me *w,
		       struct drm_i915_private *i915,
		       long timeout,
		       const char *name);
void __i915_fini_wedge(struct i915_wedge_me *w);

#define i915_wedge_on_timeout(W, DEV, TIMEOUT) \
	for (__i915_init_wedge((W), (DEV), (TIMEOUT), __func__); \
	     (W)->i915; \
	     __i915_fini_wedge((W)))

#endif /* I915_RESET_H */

@@ -0,0 +1,50 @@
/* SPDX-License-Identifier: MIT */
/*
 * Copyright © 2019 Intel Corporation
 */

#ifndef __INTEL_RESET_TYPES_H_
#define __INTEL_RESET_TYPES_H_

#include <linux/mutex.h>
#include <linux/wait.h>
#include <linux/srcu.h>

struct intel_reset {
	/**
	 * flags: Control various stages of the GPU reset
	 *
	 * #I915_RESET_BACKOFF - When we start a global reset, we need to
	 * serialise with any other users attempting to do the same, and
	 * any global resources that may be clobbered by the reset (such as
	 * FENCE registers).
	 *
	 * #I915_RESET_ENGINE[num_engines] - Since the driver doesn't need to
	 * acquire the struct_mutex to reset an engine, we need an explicit
	 * flag to prevent two concurrent reset attempts in the same engine.
	 * As the number of engines continues to grow, allocate the flags from
	 * the most significant bits.
	 *
	 * #I915_WEDGED - If reset fails and we can no longer use the GPU,
	 * we set the #I915_WEDGED bit. Prior to command submission, e.g.
	 * i915_request_alloc(), this bit is checked and the sequence
	 * aborted (with -EIO reported to userspace) if set.
	 */
	unsigned long flags;
#define I915_RESET_BACKOFF	0
#define I915_RESET_MODESET	1
#define I915_RESET_ENGINE	2
#define I915_WEDGED		(BITS_PER_LONG - 1)

	struct mutex mutex; /* serialises wedging/unwedging */

	/**
	 * Waitqueue to signal when the reset has completed. Used by clients
	 * that wait for dev_priv->mm.wedged to settle.
	 */
	wait_queue_head_t queue;

	struct srcu_struct backoff_srcu;
};

#endif /* _INTEL_RESET_TYPES_H_ */
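Aside: the flag layout above leaves low bits free for per-engine reset serialisation and reserves the top bit for the terminal wedged state. A sketch of how such bits are typically tested and claimed, using the kernel's atomic bitops; the helper itself is hypothetical:

/* Illustrative only: claim an engine's reset bit before resetting,
 * following the I915_RESET_ENGINE + engine id convention described
 * in the kernel-doc above.
 */
static bool try_engine_reset(unsigned long *flags, unsigned int engine_id)
{
	if (test_bit(I915_WEDGED, flags))
		return false;	/* device already declared dead */

	/* One reset attempt at a time per engine. */
	return !test_and_set_bit(I915_RESET_ENGINE + engine_id, flags);
}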
@@ -34,9 +34,9 @@
#include "gem/i915_gem_context.h"

#include "i915_drv.h"
#include "i915_gem_render_state.h"
#include "i915_trace.h"
#include "intel_context.h"
#include "intel_gt.h"
#include "intel_reset.h"
#include "intel_workarounds.h"

@@ -75,7 +75,8 @@ gen2_render_ring_flush(struct i915_request *rq, u32 mode)
	*cs++ = cmd;
	while (num_store_dw--) {
		*cs++ = MI_STORE_DWORD_IMM | MI_MEM_VIRTUAL;
		*cs++ = i915_scratch_offset(rq->i915);
		*cs++ = intel_gt_scratch_offset(rq->engine->gt,
						INTEL_GT_SCRATCH_FIELD_DEFAULT);
		*cs++ = 0;
	}
	*cs++ = MI_FLUSH | MI_NO_WRITE_FLUSH;

@@ -148,7 +149,9 @@ gen4_render_ring_flush(struct i915_request *rq, u32 mode)
	 */
	if (mode & EMIT_INVALIDATE) {
		*cs++ = GFX_OP_PIPE_CONTROL(4) | PIPE_CONTROL_QW_WRITE;
		*cs++ = i915_scratch_offset(rq->i915) | PIPE_CONTROL_GLOBAL_GTT;
		*cs++ = intel_gt_scratch_offset(rq->engine->gt,
						INTEL_GT_SCRATCH_FIELD_DEFAULT) |
			PIPE_CONTROL_GLOBAL_GTT;
		*cs++ = 0;
		*cs++ = 0;

@@ -156,7 +159,9 @@ gen4_render_ring_flush(struct i915_request *rq, u32 mode)
		*cs++ = MI_FLUSH;

		*cs++ = GFX_OP_PIPE_CONTROL(4) | PIPE_CONTROL_QW_WRITE;
		*cs++ = i915_scratch_offset(rq->i915) | PIPE_CONTROL_GLOBAL_GTT;
		*cs++ = intel_gt_scratch_offset(rq->engine->gt,
						INTEL_GT_SCRATCH_FIELD_DEFAULT) |
			PIPE_CONTROL_GLOBAL_GTT;
		*cs++ = 0;
		*cs++ = 0;
	}

@@ -208,7 +213,9 @@ gen4_render_ring_flush(struct i915_request *rq, u32 mode)
static int
gen6_emit_post_sync_nonzero_flush(struct i915_request *rq)
{
	u32 scratch_addr = i915_scratch_offset(rq->i915) + 2 * CACHELINE_BYTES;
	u32 scratch_addr =
		intel_gt_scratch_offset(rq->engine->gt,
					INTEL_GT_SCRATCH_FIELD_RENDER_FLUSH);
	u32 *cs;

	cs = intel_ring_begin(rq, 6);

@@ -241,7 +248,9 @@ gen6_emit_post_sync_nonzero_flush(struct i915_request *rq)
static int
gen6_render_ring_flush(struct i915_request *rq, u32 mode)
{
	u32 scratch_addr = i915_scratch_offset(rq->i915) + 2 * CACHELINE_BYTES;
	u32 scratch_addr =
		intel_gt_scratch_offset(rq->engine->gt,
					INTEL_GT_SCRATCH_FIELD_RENDER_FLUSH);
	u32 *cs, flags = 0;
	int ret;

@@ -299,7 +308,9 @@ static u32 *gen6_rcs_emit_breadcrumb(struct i915_request *rq, u32 *cs)

	*cs++ = GFX_OP_PIPE_CONTROL(4);
	*cs++ = PIPE_CONTROL_QW_WRITE;
	*cs++ = i915_scratch_offset(rq->i915) | PIPE_CONTROL_GLOBAL_GTT;
	*cs++ = intel_gt_scratch_offset(rq->engine->gt,
					INTEL_GT_SCRATCH_FIELD_DEFAULT) |
		PIPE_CONTROL_GLOBAL_GTT;
	*cs++ = 0;

	/* Finally we can flush and with it emit the breadcrumb */

@@ -342,7 +353,9 @@ gen7_render_ring_cs_stall_wa(struct i915_request *rq)
static int
gen7_render_ring_flush(struct i915_request *rq, u32 mode)
{
	u32 scratch_addr = i915_scratch_offset(rq->i915) + 2 * CACHELINE_BYTES;
	u32 scratch_addr =
		intel_gt_scratch_offset(rq->engine->gt,
					INTEL_GT_SCRATCH_FIELD_RENDER_FLUSH);
	u32 *cs, flags = 0;

	/*

@@ -725,7 +738,45 @@ out:

static void reset_prepare(struct intel_engine_cs *engine)
{
	intel_engine_stop_cs(engine);
	struct intel_uncore *uncore = engine->uncore;
	const u32 base = engine->mmio_base;

	/*
	 * We stop engines, otherwise we might get failed reset and a
	 * dead gpu (on elk). Also as modern gpu as kbl can suffer
	 * from system hang if batchbuffer is progressing when
	 * the reset is issued, regardless of READY_TO_RESET ack.
	 * Thus assume it is best to stop engines on all gens
	 * where we have a gpu reset.
	 *
	 * WaKBLVECSSemaphoreWaitPoll:kbl (on ALL_ENGINES)
	 *
	 * WaMediaResetMainRingCleanup:ctg,elk (presumably)
	 *
	 * FIXME: Wa for more modern gens needs to be validated
	 */
	GEM_TRACE("%s\n", engine->name);

	if (intel_engine_stop_cs(engine))
		GEM_TRACE("%s: timed out on STOP_RING\n", engine->name);

	intel_uncore_write_fw(uncore,
			      RING_HEAD(base),
			      intel_uncore_read_fw(uncore, RING_TAIL(base)));
	intel_uncore_posting_read_fw(uncore, RING_HEAD(base)); /* paranoia */

	intel_uncore_write_fw(uncore, RING_HEAD(base), 0);
	intel_uncore_write_fw(uncore, RING_TAIL(base), 0);
	intel_uncore_posting_read_fw(uncore, RING_TAIL(base));

	/* The ring must be empty before it is disabled */
	intel_uncore_write_fw(uncore, RING_CTL(base), 0);

	/* Check acts as a post */
	if (intel_uncore_read_fw(uncore, RING_HEAD(base)))
		GEM_TRACE("%s: ring head [%x] not parked\n",
			  engine->name,
			  intel_uncore_read_fw(uncore, RING_HEAD(base)));
}

static void reset_ring(struct intel_engine_cs *engine, bool stalled)

@@ -781,7 +832,7 @@ static void reset_ring(struct intel_engine_cs *engine, bool stalled)
	 * If the request was innocent, we try to replay the request
	 * with the restored context.
	 */
	i915_reset_request(rq, stalled);
	__i915_request_reset(rq, stalled);

	GEM_BUG_ON(rq->ring != engine->buffer);
	head = rq->head;

@@ -797,21 +848,6 @@ static void reset_finish(struct intel_engine_cs *engine)
{
}

static int intel_rcs_ctx_init(struct i915_request *rq)
{
	int ret;

	ret = intel_engine_emit_ctx_wa(rq);
	if (ret != 0)
		return ret;

	ret = i915_gem_render_state_emit(rq);
	if (ret)
		return ret;

	return 0;
}

static int rcs_resume(struct intel_engine_cs *engine)
{
	struct drm_i915_private *dev_priv = engine->i915;

@@ -1033,14 +1069,14 @@ hsw_vebox_irq_enable(struct intel_engine_cs *engine)
	/* Flush/delay to ensure the RING_IMR is active before the GT IMR */
	ENGINE_POSTING_READ(engine, RING_IMR);

	gen6_unmask_pm_irq(engine->i915, engine->irq_enable_mask);
	gen6_unmask_pm_irq(engine->gt, engine->irq_enable_mask);
}

static void
hsw_vebox_irq_disable(struct intel_engine_cs *engine)
{
	ENGINE_WRITE(engine, RING_IMR, ~0);
	gen6_mask_pm_irq(engine->i915, engine->irq_enable_mask);
	gen6_mask_pm_irq(engine->gt, engine->irq_enable_mask);
}

static int

@@ -1071,9 +1107,11 @@ i830_emit_bb_start(struct i915_request *rq,
		   u64 offset, u32 len,
		   unsigned int dispatch_flags)
{
	u32 *cs, cs_offset = i915_scratch_offset(rq->i915);
	u32 *cs, cs_offset =
		intel_gt_scratch_offset(rq->engine->gt,
					INTEL_GT_SCRATCH_FIELD_DEFAULT);

	GEM_BUG_ON(rq->i915->gt.scratch->size < I830_WA_SIZE);
	GEM_BUG_ON(rq->engine->gt->scratch->size < I830_WA_SIZE);

	cs = intel_ring_begin(rq, 6);
	if (IS_ERR(cs))

@@ -1156,7 +1194,7 @@ int intel_ring_pin(struct intel_ring *ring)
	if (atomic_fetch_inc(&ring->pin_count))
		return 0;

	ret = i915_timeline_pin(ring->timeline);
	ret = intel_timeline_pin(ring->timeline);
	if (ret)
		goto err_unpin;

@@ -1189,12 +1227,13 @@ int intel_ring_pin(struct intel_ring *ring)
	GEM_BUG_ON(ring->vaddr);
	ring->vaddr = addr;

	GEM_TRACE("ring:%llx pin\n", ring->timeline->fence_context);
	return 0;

err_ring:
	i915_vma_unpin(vma);
err_timeline:
	i915_timeline_unpin(ring->timeline);
	intel_timeline_unpin(ring->timeline);
err_unpin:
	atomic_dec(&ring->pin_count);
	return ret;

@@ -1215,10 +1254,13 @@ void intel_ring_unpin(struct intel_ring *ring)
	if (!atomic_dec_and_test(&ring->pin_count))
		return;

	GEM_TRACE("ring:%llx unpin\n", ring->timeline->fence_context);

	/* Discard any unused bytes beyond that submitted to hw. */
	intel_ring_reset(ring, ring->tail);

	GEM_BUG_ON(!ring->vma);
	i915_vma_unset_ggtt_write(ring->vma);
	if (i915_vma_is_map_and_fenceable(ring->vma))
		i915_vma_unpin_iomap(ring->vma);
	else

@@ -1230,19 +1272,19 @@ void intel_ring_unpin(struct intel_ring *ring)
	ring->vma->obj->pin_global--;
	i915_vma_unpin(ring->vma);

	i915_timeline_unpin(ring->timeline);
	intel_timeline_unpin(ring->timeline);
}

static struct i915_vma *
intel_ring_create_vma(struct drm_i915_private *dev_priv, int size)
static struct i915_vma *create_ring_vma(struct i915_ggtt *ggtt, int size)
{
	struct i915_address_space *vm = &dev_priv->ggtt.vm;
	struct i915_address_space *vm = &ggtt->vm;
	struct drm_i915_private *i915 = vm->i915;
	struct drm_i915_gem_object *obj;
	struct i915_vma *vma;

	obj = i915_gem_object_create_stolen(dev_priv, size);
	obj = i915_gem_object_create_stolen(i915, size);
	if (!obj)
		obj = i915_gem_object_create_internal(dev_priv, size);
		obj = i915_gem_object_create_internal(i915, size);
	if (IS_ERR(obj))
		return ERR_CAST(obj);

@@ -1266,9 +1308,10 @@ err:

struct intel_ring *
intel_engine_create_ring(struct intel_engine_cs *engine,
			 struct i915_timeline *timeline,
			 struct intel_timeline *timeline,
			 int size)
{
	struct drm_i915_private *i915 = engine->i915;
	struct intel_ring *ring;
	struct i915_vma *vma;

@@ -1281,7 +1324,7 @@ intel_engine_create_ring(struct intel_engine_cs *engine,

	kref_init(&ring->ref);
	INIT_LIST_HEAD(&ring->request_list);
	ring->timeline = i915_timeline_get(timeline);
	ring->timeline = intel_timeline_get(timeline);

	ring->size = size;
	/* Workaround an erratum on the i830 which causes a hang if

@@ -1289,12 +1332,12 @@ intel_engine_create_ring(struct intel_engine_cs *engine,
	 * of the buffer.
	 */
	ring->effective_size = size;
	if (IS_I830(engine->i915) || IS_I845G(engine->i915))
	if (IS_I830(i915) || IS_I845G(i915))
		ring->effective_size -= 2 * CACHELINE_BYTES;

	intel_ring_update_space(ring);

	vma = intel_ring_create_vma(engine->i915, size);
	vma = create_ring_vma(engine->gt->ggtt, size);
	if (IS_ERR(vma)) {
		kfree(ring);
		return ERR_CAST(vma);

@@ -1311,13 +1354,12 @@ void intel_ring_free(struct kref *ref)
	i915_vma_close(ring->vma);
	i915_vma_put(ring->vma);

	i915_timeline_put(ring->timeline);
	intel_timeline_put(ring->timeline);
	kfree(ring);
}

static void __ring_context_fini(struct intel_context *ce)
{
	GEM_BUG_ON(i915_gem_object_is_active(ce->state->obj));
	i915_gem_object_put(ce->state->obj);
}

@@ -1330,33 +1372,45 @@ static void ring_context_destroy(struct kref *ref)
	if (ce->state)
		__ring_context_fini(ce);

	intel_context_fini(ce);
	intel_context_free(ce);
}

static int __context_pin_ppgtt(struct i915_gem_context *ctx)
static struct i915_address_space *vm_alias(struct intel_context *ce)
{
	struct i915_address_space *vm;

	vm = ce->vm;
	if (i915_is_ggtt(vm))
		vm = &i915_vm_to_ggtt(vm)->alias->vm;

	return vm;
}
static int __context_pin_ppgtt(struct intel_context *ce)
{
	struct i915_address_space *vm;
	int err = 0;

	vm = ctx->vm ?: &ctx->i915->mm.aliasing_ppgtt->vm;
	vm = vm_alias(ce);
	if (vm)
		err = gen6_ppgtt_pin(i915_vm_to_ppgtt((vm)));

	return err;
}

static void __context_unpin_ppgtt(struct i915_gem_context *ctx)
static void __context_unpin_ppgtt(struct intel_context *ce)
{
	struct i915_address_space *vm;

	vm = ctx->vm ?: &ctx->i915->mm.aliasing_ppgtt->vm;
	vm = vm_alias(ce);
	if (vm)
		gen6_ppgtt_unpin(i915_vm_to_ppgtt(vm));
}

static void ring_context_unpin(struct intel_context *ce)
{
	__context_unpin_ppgtt(ce->gem_context);
	__context_unpin_ppgtt(ce);
}

static struct i915_vma *

@@ -1412,7 +1466,7 @@ alloc_context_vma(struct intel_engine_cs *engine)
		i915_gem_object_unpin_map(obj);
	}

	vma = i915_vma_instance(obj, &i915->ggtt.vm, NULL);
	vma = i915_vma_instance(obj, &engine->gt->ggtt->vm, NULL);
	if (IS_ERR(vma)) {
		err = PTR_ERR(vma);
		goto err_obj;

@@ -1446,11 +1500,11 @@ static int ring_context_pin(struct intel_context *ce)
		ce->state = vma;
	}

	err = intel_context_active_acquire(ce, PIN_HIGH);
	err = intel_context_active_acquire(ce);
	if (err)
		return err;

	err = __context_pin_ppgtt(ce->gem_context);
	err = __context_pin_ppgtt(ce);
	if (err)
		goto err_active;

@@ -1492,7 +1546,7 @@ static int load_pd_dir(struct i915_request *rq, const struct i915_ppgtt *ppgtt)

	*cs++ = MI_LOAD_REGISTER_IMM(1);
	*cs++ = i915_mmio_reg_offset(RING_PP_DIR_BASE(engine->mmio_base));
	*cs++ = ppgtt->pd->base.ggtt_offset << 10;
	*cs++ = px_base(ppgtt->pd)->ggtt_offset << 10;

	intel_ring_advance(rq, cs);

@@ -1511,7 +1565,8 @@ static int flush_pd_dir(struct i915_request *rq)
	/* Stall until the page table load is complete */
	*cs++ = MI_STORE_REGISTER_MEM | MI_SRM_LRM_GLOBAL_GTT;
	*cs++ = i915_mmio_reg_offset(RING_PP_DIR_BASE(engine->mmio_base));
	*cs++ = i915_scratch_offset(rq->i915);
	*cs++ = intel_gt_scratch_offset(rq->engine->gt,
					INTEL_GT_SCRATCH_FIELD_DEFAULT);
	*cs++ = MI_NOOP;

	intel_ring_advance(rq, cs);

@@ -1627,7 +1682,8 @@ static inline int mi_set_context(struct i915_request *rq, u32 flags)
		/* Insert a delay before the next switch! */
		*cs++ = MI_STORE_REGISTER_MEM | MI_SRM_LRM_GLOBAL_GTT;
		*cs++ = i915_mmio_reg_offset(last_reg);
		*cs++ = i915_scratch_offset(rq->i915);
		*cs++ = intel_gt_scratch_offset(rq->engine->gt,
						INTEL_GT_SCRATCH_FIELD_DEFAULT);
		*cs++ = MI_NOOP;
	}
	*cs++ = MI_ARB_ON_OFF | MI_ARB_ENABLE;

@@ -1640,7 +1696,7 @@ static inline int mi_set_context(struct i915_request *rq, u32 flags)
	return 0;
}

static int remap_l3(struct i915_request *rq, int slice)
static int remap_l3_slice(struct i915_request *rq, int slice)
{
	u32 *cs, *remap_info = rq->i915->l3_parity.remap_info[slice];
	int i;

@@ -1668,15 +1724,34 @@ static int remap_l3(struct i915_request *rq, int slice)
	return 0;
}

static int remap_l3(struct i915_request *rq)
{
	struct i915_gem_context *ctx = rq->gem_context;
	int i, err;

	if (!ctx->remap_slice)
		return 0;

	for (i = 0; i < MAX_L3_SLICES; i++) {
		if (!(ctx->remap_slice & BIT(i)))
			continue;

		err = remap_l3_slice(rq, i);
		if (err)
			return err;
	}

	ctx->remap_slice = 0;
	return 0;
}

static int switch_context(struct i915_request *rq)
{
	struct intel_engine_cs *engine = rq->engine;
	struct i915_gem_context *ctx = rq->gem_context;
	struct i915_address_space *vm =
		ctx->vm ?: &rq->i915->mm.aliasing_ppgtt->vm;
	struct i915_address_space *vm = vm_alias(rq->hw_context);
	unsigned int unwind_mm = 0;
	u32 hw_flags = 0;
	int ret, i;
	int ret;

	GEM_BUG_ON(HAS_EXECLISTS(rq->i915));

@@ -1720,7 +1795,7 @@ static int switch_context(struct i915_request *rq)
	 * as nothing actually executes using the kernel context; it
	 * is purely used for flushing user contexts.
	 */
	if (i915_gem_context_is_kernel(ctx))
	if (i915_gem_context_is_kernel(rq->gem_context))
		hw_flags = MI_RESTORE_INHIBIT;

	ret = mi_set_context(rq, hw_flags);

@@ -1754,18 +1829,9 @@ static int switch_context(struct i915_request *rq)
		goto err_mm;
	}

	if (ctx->remap_slice) {
		for (i = 0; i < MAX_L3_SLICES; i++) {
			if (!(ctx->remap_slice & BIT(i)))
				continue;

			ret = remap_l3(rq, i);
			if (ret)
				goto err_mm;
		}

		ctx->remap_slice = 0;
	}
	ret = remap_l3(rq);
	if (ret)
		goto err_mm;

	return 0;

@@ -2166,11 +2232,9 @@ static void setup_rcs(struct intel_engine_cs *engine)
	engine->irq_enable_mask = GT_RENDER_USER_INTERRUPT;

	if (INTEL_GEN(i915) >= 7) {
		engine->init_context = intel_rcs_ctx_init;
		engine->emit_flush = gen7_render_ring_flush;
		engine->emit_fini_breadcrumb = gen7_rcs_emit_breadcrumb;
	} else if (IS_GEN(i915, 6)) {
		engine->init_context = intel_rcs_ctx_init;
		engine->emit_flush = gen6_render_ring_flush;
		engine->emit_fini_breadcrumb = gen6_rcs_emit_breadcrumb;
	} else if (IS_GEN(i915, 5)) {

@@ -2267,11 +2331,11 @@ int intel_ring_submission_setup(struct intel_engine_cs *engine)

int intel_ring_submission_init(struct intel_engine_cs *engine)
{
	struct i915_timeline *timeline;
	struct intel_timeline *timeline;
	struct intel_ring *ring;
	int err;

	timeline = i915_timeline_create(engine->i915, engine->status_page.vma);
	timeline = intel_timeline_create(engine->gt, engine->status_page.vma);
	if (IS_ERR(timeline)) {
		err = PTR_ERR(timeline);
		goto err;

@@ -2279,7 +2343,7 @@ int intel_ring_submission_init(struct intel_engine_cs *engine)
	GEM_BUG_ON(timeline->has_initial_breadcrumb);

	ring = intel_engine_create_ring(engine, timeline, 32 * PAGE_SIZE);
	i915_timeline_put(timeline);
	intel_timeline_put(timeline);
	if (IS_ERR(ring)) {
		err = PTR_ERR(ring);
		goto err;
@@ -4,38 +4,36 @@
 * Copyright © 2016-2018 Intel Corporation
 */

#include "gt/intel_gt_types.h"

#include "i915_drv.h"

#include "i915_active.h"
#include "i915_syncmap.h"
#include "i915_timeline.h"
#include "gt/intel_timeline.h"

#define ptr_set_bit(ptr, bit) ((typeof(ptr))((unsigned long)(ptr) | BIT(bit)))
#define ptr_test_bit(ptr, bit) ((unsigned long)(ptr) & BIT(bit))

struct i915_timeline_hwsp {
	struct i915_gt_timelines *gt;
struct intel_timeline_hwsp {
	struct intel_gt *gt;
	struct intel_gt_timelines *gt_timelines;
	struct list_head free_link;
	struct i915_vma *vma;
	u64 free_bitmap;
};

struct i915_timeline_cacheline {
struct intel_timeline_cacheline {
	struct i915_active active;
	struct i915_timeline_hwsp *hwsp;
	struct intel_timeline_hwsp *hwsp;
	void *vaddr;
#define CACHELINE_BITS 6
#define CACHELINE_FREE CACHELINE_BITS
};

static inline struct drm_i915_private *
hwsp_to_i915(struct i915_timeline_hwsp *hwsp)
{
	return container_of(hwsp->gt, struct drm_i915_private, gt.timelines);
}

static struct i915_vma *__hwsp_alloc(struct drm_i915_private *i915)
static struct i915_vma *__hwsp_alloc(struct intel_gt *gt)
{
	struct drm_i915_private *i915 = gt->i915;
	struct drm_i915_gem_object *obj;
	struct i915_vma *vma;

@@ -45,7 +43,7 @@ static struct i915_vma *__hwsp_alloc(struct drm_i915_private *i915)

	i915_gem_object_set_cache_coherency(obj, I915_CACHE_LLC);

	vma = i915_vma_instance(obj, &i915->ggtt.vm, NULL);
	vma = i915_vma_instance(obj, &gt->ggtt->vm, NULL);
	if (IS_ERR(vma))
		i915_gem_object_put(obj);

@@ -53,11 +51,10 @@ static struct i915_vma *__hwsp_alloc(struct drm_i915_private *i915)
}

static struct i915_vma *
hwsp_alloc(struct i915_timeline *timeline, unsigned int *cacheline)
hwsp_alloc(struct intel_timeline *timeline, unsigned int *cacheline)
{
	struct drm_i915_private *i915 = timeline->i915;
	struct i915_gt_timelines *gt = &i915->gt.timelines;
	struct i915_timeline_hwsp *hwsp;
	struct intel_gt_timelines *gt = &timeline->gt->timelines;
	struct intel_timeline_hwsp *hwsp;

	BUILD_BUG_ON(BITS_PER_TYPE(u64) * CACHELINE_BYTES > PAGE_SIZE);
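Aside: the BUILD_BUG_ON above encodes the packing invariant, since with 64-byte cachelines a 4 KiB page holds exactly 64 seqno slots, so a single u64 free_bitmap can track a whole HWSP page. A standalone arithmetic sketch of the slot bookkeeping, with a hypothetical helper:

#include <stdint.h>

#define EXAMPLE_PAGE_SIZE	4096
#define EXAMPLE_CACHELINE	64	/* 4096 / 64 = 64 slots = bits of a u64 */

/* Claim the lowest free cacheline slot; returns its byte offset
 * within the page, or -1 if the page is full.
 */
static int claim_slot(uint64_t *free_bitmap)
{
	if (!*free_bitmap)
		return -1;

	int slot = __builtin_ctzll(*free_bitmap);	/* lowest set bit */
	*free_bitmap &= ~(1ull << slot);
	return slot * EXAMPLE_CACHELINE;
}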
@@ -75,16 +72,17 @@ hwsp_alloc(struct i915_timeline *timeline, unsigned int *cacheline)
		if (!hwsp)
			return ERR_PTR(-ENOMEM);

		vma = __hwsp_alloc(i915);
		vma = __hwsp_alloc(timeline->gt);
		if (IS_ERR(vma)) {
			kfree(hwsp);
			return vma;
		}

		vma->private = hwsp;
		hwsp->gt = timeline->gt;
		hwsp->vma = vma;
		hwsp->free_bitmap = ~0ull;
		hwsp->gt = gt;
		hwsp->gt_timelines = gt;

		spin_lock_irq(&gt->hwsp_lock);
		list_add(&hwsp->free_link, &gt->hwsp_free_list);

@@ -102,9 +100,9 @@ hwsp_alloc(struct i915_timeline *timeline, unsigned int *cacheline)
	return hwsp->vma;
}

static void __idle_hwsp_free(struct i915_timeline_hwsp *hwsp, int cacheline)
static void __idle_hwsp_free(struct intel_timeline_hwsp *hwsp, int cacheline)
{
	struct i915_gt_timelines *gt = hwsp->gt;
	struct intel_gt_timelines *gt = hwsp->gt_timelines;
	unsigned long flags;

	spin_lock_irqsave(&gt->hwsp_lock, flags);

@@ -126,7 +124,7 @@ static void __idle_hwsp_free(struct i915_timeline_hwsp *hwsp, int cacheline)
	spin_unlock_irqrestore(&gt->hwsp_lock, flags);
}

static void __idle_cacheline_free(struct i915_timeline_cacheline *cl)
static void __idle_cacheline_free(struct intel_timeline_cacheline *cl)
{
	GEM_BUG_ON(!i915_active_is_idle(&cl->active));

@@ -140,7 +138,7 @@ static void __idle_cacheline_free(struct i915_timeline_cacheline *cl)

static void __cacheline_retire(struct i915_active *active)
{
	struct i915_timeline_cacheline *cl =
	struct intel_timeline_cacheline *cl =
		container_of(active, typeof(*cl), active);

	i915_vma_unpin(cl->hwsp->vma);

@@ -148,10 +146,19 @@ static void __cacheline_retire(struct i915_active *active)
	__idle_cacheline_free(cl);
}

static struct i915_timeline_cacheline *
cacheline_alloc(struct i915_timeline_hwsp *hwsp, unsigned int cacheline)
static int __cacheline_active(struct i915_active *active)
{
	struct i915_timeline_cacheline *cl;
	struct intel_timeline_cacheline *cl =
		container_of(active, typeof(*cl), active);

	__i915_vma_pin(cl->hwsp->vma);
	return 0;
}

static struct intel_timeline_cacheline *
cacheline_alloc(struct intel_timeline_hwsp *hwsp, unsigned int cacheline)
{
	struct intel_timeline_cacheline *cl;
	void *vaddr;

	GEM_BUG_ON(cacheline >= BIT(CACHELINE_BITS));

@@ -170,24 +177,25 @@ cacheline_alloc(struct i915_timeline_hwsp *hwsp, unsigned int cacheline)
	cl->hwsp = hwsp;
	cl->vaddr = page_pack_bits(vaddr, cacheline);

	i915_active_init(hwsp_to_i915(hwsp), &cl->active, __cacheline_retire);
	i915_active_init(hwsp->gt->i915, &cl->active,
			 __cacheline_active, __cacheline_retire);

	return cl;
}

static void cacheline_acquire(struct i915_timeline_cacheline *cl)
static void cacheline_acquire(struct intel_timeline_cacheline *cl)
{
	if (cl && i915_active_acquire(&cl->active))
		__i915_vma_pin(cl->hwsp->vma);
	if (cl)
		i915_active_acquire(&cl->active);
}

static void cacheline_release(struct i915_timeline_cacheline *cl)
static void cacheline_release(struct intel_timeline_cacheline *cl)
{
	if (cl)
		i915_active_release(&cl->active);
}

static void cacheline_free(struct i915_timeline_cacheline *cl)
static void cacheline_free(struct intel_timeline_cacheline *cl)
{
	GEM_BUG_ON(ptr_test_bit(cl->vaddr, CACHELINE_FREE));
	cl->vaddr = ptr_set_bit(cl->vaddr, CACHELINE_FREE);

@@ -196,29 +204,22 @@ static void cacheline_free(struct i915_timeline_cacheline *cl)
	__idle_cacheline_free(cl);
}

int i915_timeline_init(struct drm_i915_private *i915,
		       struct i915_timeline *timeline,
		       struct i915_vma *hwsp)
int intel_timeline_init(struct intel_timeline *timeline,
			struct intel_gt *gt,
			struct i915_vma *hwsp)
{
	void *vaddr;

	/*
	 * Ideally we want a set of engines on a single leaf as we expect
	 * to mostly be tracking synchronisation between engines. It is not
	 * a huge issue if this is not the case, but we may want to mitigate
	 * any page crossing penalties if they become an issue.
	 *
	 * Called during early_init before we know how many engines there are.
	 */
	BUILD_BUG_ON(KSYNCMAP < I915_NUM_ENGINES);
	kref_init(&timeline->kref);

	timeline->i915 = i915;
	timeline->gt = gt;
	timeline->pin_count = 0;

	timeline->has_initial_breadcrumb = !hwsp;
	timeline->hwsp_cacheline = NULL;

	if (!hwsp) {
		struct i915_timeline_cacheline *cl;
		struct intel_timeline_cacheline *cl;
		unsigned int cacheline;

		hwsp = hwsp_alloc(timeline, &cacheline);

@@ -261,55 +262,47 @@ int i915_timeline_init(struct drm_i915_private *i915,
	return 0;
}

void i915_timelines_init(struct drm_i915_private *i915)
static void timelines_init(struct intel_gt *gt)
{
	struct i915_gt_timelines *gt = &i915->gt.timelines;
	struct intel_gt_timelines *timelines = &gt->timelines;

	mutex_init(&gt->mutex);
	INIT_LIST_HEAD(&gt->active_list);
	mutex_init(&timelines->mutex);
	INIT_LIST_HEAD(&timelines->active_list);

	spin_lock_init(&gt->hwsp_lock);
	INIT_LIST_HEAD(&gt->hwsp_free_list);

	/* via i915_gem_wait_for_idle() */
	i915_gem_shrinker_taints_mutex(i915, &gt->mutex);
	spin_lock_init(&timelines->hwsp_lock);
	INIT_LIST_HEAD(&timelines->hwsp_free_list);
}

static void timeline_add_to_active(struct i915_timeline *tl)
void intel_timelines_init(struct drm_i915_private *i915)
{
	struct i915_gt_timelines *gt = &tl->i915->gt.timelines;
	timelines_init(&i915->gt);
}

static void timeline_add_to_active(struct intel_timeline *tl)
{
	struct intel_gt_timelines *gt = &tl->gt->timelines;

	mutex_lock(&gt->mutex);
	list_add(&tl->link, &gt->active_list);
	mutex_unlock(&gt->mutex);
}

static void timeline_remove_from_active(struct i915_timeline *tl)
static void timeline_remove_from_active(struct intel_timeline *tl)
{
	struct i915_gt_timelines *gt = &tl->i915->gt.timelines;
	struct intel_gt_timelines *gt = &tl->gt->timelines;

	mutex_lock(&gt->mutex);
	list_del(&tl->link);
	mutex_unlock(&gt->mutex);
}

/**
 * i915_timelines_park - called when the driver idles
 * @i915: the drm_i915_private device
 *
 * When the driver is completely idle, we know that all of our sync points
 * have been signaled and our tracking is then entirely redundant. Any request
 * to wait upon an older sync point will be completed instantly as we know
 * the fence is signaled and therefore we will not even look them up in the
 * sync point map.
 */
void i915_timelines_park(struct drm_i915_private *i915)
static void timelines_park(struct intel_gt *gt)
{
	struct i915_gt_timelines *gt = &i915->gt.timelines;
	struct i915_timeline *timeline;
	struct intel_gt_timelines *timelines = &gt->timelines;
	struct intel_timeline *timeline;

	mutex_lock(&gt->mutex);
	list_for_each_entry(timeline, &gt->active_list, link) {
	mutex_lock(&timelines->mutex);
	list_for_each_entry(timeline, &timelines->active_list, link) {
		/*
		 * All known fences are completed so we can scrap
		 * the current sync point tracking and start afresh,
|
||||
|
@ -318,10 +311,25 @@ void i915_timelines_park(struct drm_i915_private *i915)
|
|||
*/
|
||||
i915_syncmap_free(&timeline->sync);
|
||||
}
|
||||
mutex_unlock(>->mutex);
|
||||
mutex_unlock(&timelines->mutex);
|
||||
}
|
||||
|
||||
void i915_timeline_fini(struct i915_timeline *timeline)
|
||||
/**
|
||||
* intel_timelines_park - called when the driver idles
|
||||
* @i915: the drm_i915_private device
|
||||
*
|
||||
* When the driver is completely idle, we know that all of our sync points
|
||||
* have been signaled and our tracking is then entirely redundant. Any request
|
||||
* to wait upon an older sync point will be completed instantly as we know
|
||||
* the fence is signaled and therefore we will not even look them up in the
|
||||
* sync point map.
|
||||
*/
|
||||
void intel_timelines_park(struct drm_i915_private *i915)
|
||||
{
|
||||
timelines_park(&i915->gt);
|
||||
}
|
||||
|
||||
void intel_timeline_fini(struct intel_timeline *timeline)
|
||||
{
|
||||
GEM_BUG_ON(timeline->pin_count);
|
||||
GEM_BUG_ON(!list_empty(&timeline->requests));
|
||||
|
@ -336,29 +344,26 @@ void i915_timeline_fini(struct i915_timeline *timeline)
|
|||
i915_vma_put(timeline->hwsp_ggtt);
|
||||
}
|
||||
|
||||
struct i915_timeline *
|
||||
i915_timeline_create(struct drm_i915_private *i915,
|
||||
struct i915_vma *global_hwsp)
|
||||
struct intel_timeline *
|
||||
intel_timeline_create(struct intel_gt *gt, struct i915_vma *global_hwsp)
|
||||
{
|
||||
struct i915_timeline *timeline;
|
||||
struct intel_timeline *timeline;
|
||||
int err;
|
||||
|
||||
timeline = kzalloc(sizeof(*timeline), GFP_KERNEL);
|
||||
if (!timeline)
|
||||
return ERR_PTR(-ENOMEM);
|
||||
|
||||
err = i915_timeline_init(i915, timeline, global_hwsp);
|
||||
err = intel_timeline_init(timeline, gt, global_hwsp);
|
||||
if (err) {
|
||||
kfree(timeline);
|
||||
return ERR_PTR(err);
|
||||
}
|
||||
|
||||
kref_init(&timeline->kref);
|
||||
|
||||
return timeline;
|
||||
}
|
||||
|
||||
int i915_timeline_pin(struct i915_timeline *tl)
|
||||
int intel_timeline_pin(struct intel_timeline *tl)
|
||||
{
|
||||
int err;
|
||||
|
||||
|
@ -384,7 +389,7 @@ unpin:
|
|||
return err;
|
||||
}
|
||||
|
||||
static u32 timeline_advance(struct i915_timeline *tl)
|
||||
static u32 timeline_advance(struct intel_timeline *tl)
|
||||
{
|
||||
GEM_BUG_ON(!tl->pin_count);
|
||||
GEM_BUG_ON(tl->seqno & tl->has_initial_breadcrumb);
|
||||
|
@ -392,17 +397,17 @@ static u32 timeline_advance(struct i915_timeline *tl)
|
|||
return tl->seqno += 1 + tl->has_initial_breadcrumb;
|
||||
}
|
||||
|
||||
static void timeline_rollback(struct i915_timeline *tl)
|
||||
static void timeline_rollback(struct intel_timeline *tl)
|
||||
{
|
||||
tl->seqno -= 1 + tl->has_initial_breadcrumb;
|
||||
}
|
||||
|
||||
static noinline int
|
||||
__i915_timeline_get_seqno(struct i915_timeline *tl,
|
||||
struct i915_request *rq,
|
||||
u32 *seqno)
|
||||
__intel_timeline_get_seqno(struct intel_timeline *tl,
|
||||
struct i915_request *rq,
|
||||
u32 *seqno)
|
||||
{
|
||||
struct i915_timeline_cacheline *cl;
|
||||
struct intel_timeline_cacheline *cl;
|
||||
unsigned int cacheline;
|
||||
struct i915_vma *vma;
|
||||
void *vaddr;
|
||||
|
@ -488,31 +493,31 @@ err_rollback:
|
|||
return err;
|
||||
}
|
||||
|
||||
int i915_timeline_get_seqno(struct i915_timeline *tl,
|
||||
struct i915_request *rq,
|
||||
u32 *seqno)
|
||||
int intel_timeline_get_seqno(struct intel_timeline *tl,
|
||||
struct i915_request *rq,
|
||||
u32 *seqno)
|
||||
{
|
||||
*seqno = timeline_advance(tl);
|
||||
|
||||
/* Replace the HWSP on wraparound for HW semaphores */
|
||||
if (unlikely(!*seqno && tl->hwsp_cacheline))
|
||||
return __i915_timeline_get_seqno(tl, rq, seqno);
|
||||
return __intel_timeline_get_seqno(tl, rq, seqno);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int cacheline_ref(struct i915_timeline_cacheline *cl,
|
||||
static int cacheline_ref(struct intel_timeline_cacheline *cl,
|
||||
struct i915_request *rq)
|
||||
{
|
||||
return i915_active_ref(&cl->active, rq->fence.context, rq);
|
||||
}
|
||||
|
||||
int i915_timeline_read_hwsp(struct i915_request *from,
|
||||
struct i915_request *to,
|
||||
u32 *hwsp)
|
||||
int intel_timeline_read_hwsp(struct i915_request *from,
|
||||
struct i915_request *to,
|
||||
u32 *hwsp)
|
||||
{
|
||||
struct i915_timeline_cacheline *cl = from->hwsp_cacheline;
|
||||
struct i915_timeline *tl = from->timeline;
|
||||
struct intel_timeline_cacheline *cl = from->hwsp_cacheline;
|
||||
struct intel_timeline *tl = from->timeline;
|
||||
int err;
|
||||
|
||||
GEM_BUG_ON(to->timeline == tl);
|
||||
|
@ -535,7 +540,7 @@ int i915_timeline_read_hwsp(struct i915_request *from,
|
|||
return err;
|
||||
}
|
||||
|
||||
void i915_timeline_unpin(struct i915_timeline *tl)
|
||||
void intel_timeline_unpin(struct intel_timeline *tl)
|
||||
{
|
||||
GEM_BUG_ON(!tl->pin_count);
|
||||
if (--tl->pin_count)
|
||||
|
@ -554,26 +559,31 @@ void i915_timeline_unpin(struct i915_timeline *tl)
|
|||
__i915_vma_unpin(tl->hwsp_ggtt);
|
||||
}
|
||||
|
||||
void __i915_timeline_free(struct kref *kref)
|
||||
void __intel_timeline_free(struct kref *kref)
|
||||
{
|
||||
struct i915_timeline *timeline =
|
||||
struct intel_timeline *timeline =
|
||||
container_of(kref, typeof(*timeline), kref);
|
||||
|
||||
i915_timeline_fini(timeline);
|
||||
intel_timeline_fini(timeline);
|
||||
kfree(timeline);
|
||||
}
|
||||
|
||||
void i915_timelines_fini(struct drm_i915_private *i915)
|
||||
static void timelines_fini(struct intel_gt *gt)
|
||||
{
|
||||
struct i915_gt_timelines *gt = &i915->gt.timelines;
|
||||
struct intel_gt_timelines *timelines = >->timelines;
|
||||
|
||||
GEM_BUG_ON(!list_empty(>->active_list));
|
||||
GEM_BUG_ON(!list_empty(>->hwsp_free_list));
|
||||
GEM_BUG_ON(!list_empty(&timelines->active_list));
|
||||
GEM_BUG_ON(!list_empty(&timelines->hwsp_free_list));
|
||||
|
||||
mutex_destroy(>->mutex);
|
||||
mutex_destroy(&timelines->mutex);
|
||||
}
|
||||
|
||||
void intel_timelines_fini(struct drm_i915_private *i915)
|
||||
{
|
||||
timelines_fini(&i915->gt);
|
||||
}
|
||||
|
||||
#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
|
||||
#include "selftests/mock_timeline.c"
|
||||
#include "selftests/i915_timeline.c"
|
||||
#include "gt/selftests/mock_timeline.c"
|
||||
#include "gt/selftest_timeline.c"
|
||||
#endif
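
The refactor above follows one pattern throughout: the driver-wide entry points (intel_timelines_init/park/fini) keep their drm_i915_private signature but become thin wrappers that forward to static per-GT helpers, so the timeline bookkeeping now hangs off struct intel_gt rather than the device. The following stand-alone C sketch illustrates only that delegation pattern; it is not part of the patch, and the struct members are simplified stand-ins for the real i915 definitions.

/*
 * Illustrative sketch only -- not from the patch. Mirrors the shape of
 * the refactor: a driver-wide wrapper picks the GT and delegates, while
 * the per-GT helper owns the actual state. Types are hypothetical.
 */
#include <stdbool.h>
#include <stdio.h>

struct intel_gt_timelines {
	bool initialised;	/* stands in for the mutex and lists */
};

struct intel_gt {
	struct intel_gt_timelines timelines;
};

struct drm_i915_private {
	struct intel_gt gt;	/* one GT today; the split makes room for more */
};

/* Per-GT helper: this is where the real work now happens. */
static void timelines_init(struct intel_gt *gt)
{
	gt->timelines.initialised = true;
}

/* Driver-wide wrapper: unchanged signature, just selects the GT. */
static void intel_timelines_init(struct drm_i915_private *i915)
{
	timelines_init(&i915->gt);
}

int main(void)
{
	struct drm_i915_private i915 = { 0 };

	intel_timelines_init(&i915);
	printf("timelines initialised: %d\n", i915.gt.timelines.initialised);
	return 0;
}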