drm fixes for 5.18-rc2
dma-fence:
- fix warning about fence containers
- fix logic error in new fence merge code
- handle empty dma_fence_arrays gracefully

bridge:
- Try all possible cases for bridge/panel detection.

bindings:
- Don't require input port for MIPI-DSI, and make width/height mandatory.

fbdev:
- Fix unregistering of framebuffers without device.

nouveau:
- Fix a crash when booting with nouveau on tegra.

amdgpu:
- GFX 10.3.7 fixes
- noretry updates
- VCN fixes
- TMDS fix
- zstate fix for freesync video
- DCN 3.1.5 fix
- Display stack size fix
- Audio fix
- DCN 3.1 pstate fix
- TMZ VCN fix
- APU passthrough fix
- Misc other fixes
- VCN 3.0 fixes
- Misc display fixes
- GC 10.3 golden register fix
- Suspend fix
- SMU 10 fix

amdkfd:
- Error handling fix
- xgmi p2p fix
- HWS VMIDs fix
- Event fix

panel:
- ili9341: Fix optional regulator handling

imx:
- Catch an EDID allocation failure in imx-ldb
- fix a leaked drm display mode on DT parsing error in parallel-display
- properly remove the dw_hdmi bridge in case the component_add fails in dw_hdmi-imx
- fix the IPU clock frequency debug printout in ipu-di.
-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEEEKbZHaGwW9KfbeusDHTzWXnEhr4FAmJPfl8ACgkQDHTzWXnE
hr54mA/+LVf1UneJwvxSyhG1JI5DOD9h/+v2uAuqXEn0qPlqpIFQ6cmhtHoHyWBl
HVyIPkfOPo6n4UyXxVaBtbcomGcy3jLi/ZRf7kP6L0w7qOvrAvrIJKobkJkyyDId
fml2lQ1M6q4D/y2W07O0pwNsmI2UxclDB4b4GJ1lbgFCwiKWVfLvbSge3itznV+c
1pjByy9DPkkP3b93Q2m5nCJoIofnTONx72d3zZx0chV8jY+LwrXdKutTkjsm/En2
UYVA9ilgo0lnw8cK6TEyADFPCT2pTideFr8svlRUmpcUOZQj5GgiYYQ6zRKjqVjj
PDSBe1BLXO1I5JDKdz5FBz9gcPyLzS6IDE0dKKJnf/5mTLmq67tCc+f0Hkx/WKt5
xGAsP9U4hyA8I2EgSPuL7wya/TXV8wFxcU21Q8On/RshHpPBkguJspGIZE/JhiNf
RNTCwTXTQHt/OyJpXJzZ9XejyPf5w4q5cZOXVN6slKqKsMhCgf6PQw59YKlXaKdX
IIH4+f1ONAMjD8gcLm+f8/9cgtefNbNqmsN2daHMtT/6hKbqkFjOhu0b6Z+Is4el
JUN60sg9omn8f7gDeTe9klg4NiORh+X1D4ubw2lXytbIn9gODhbhHMCNuq3RsY/R
vlLOQ/797f57PKMlOkoTX9FWv0cklRqTZ1P6d52p0JAueJXSbew=
=D6WR
-----END PGP SIGNATURE-----

Merge tag 'drm-fixes-2022-04-08' of git://anongit.freedesktop.org/drm/drm

Pull drm fixes from Dave Airlie:
 "Main set of fixes for rc2, mostly amdgpu, but some dma-fence fixups as
  well, along with some other misc ones.

  dma-fence:
   - fix warning about fence containers
   - fix logic error in new fence merge code
   - handle empty dma_fence_arrays gracefully

  bridge:
   - Try all possible cases for bridge/panel detection.

  bindings:
   - Don't require input port for MIPI-DSI, and make width/height
     mandatory.

  fbdev:
   - Fix unregistering of framebuffers without device.

  nouveau:
   - Fix a crash when booting with nouveau on tegra.

  amdgpu:
   - GFX 10.3.7 fixes
   - noretry updates
   - VCN fixes
   - TMDS fix
   - zstate fix for freesync video
   - DCN 3.1.5 fix
   - Display stack size fix
   - Audio fix
   - DCN 3.1 pstate fix
   - TMZ VCN fix
   - APU passthrough fix
   - Misc other fixes
   - VCN 3.0 fixes
   - Misc display fixes
   - GC 10.3 golden register fix
   - Suspend fix
   - SMU 10 fix

  amdkfd:
   - Error handling fix
   - xgmi p2p fix
   - HWS VMIDs fix
   - Event fix

  panel:
   - ili9341: Fix optional regulator handling

  imx:
   - Catch an EDID allocation failure in imx-ldb
   - fix a leaked drm display mode on DT parsing error in
     parallel-display
   - properly remove the dw_hdmi bridge in case the component_add fails
     in dw_hdmi-imx
   - fix the IPU clock frequency debug printout in ipu-di"

* tag 'drm-fixes-2022-04-08' of git://anongit.freedesktop.org/drm/drm: (61 commits)
  dt-bindings: display: panel: mipi-dbi-spi: Make width-mm/height-mm mandatory
  fbdev: Fix unregistering of framebuffers without device
  drm/amdgpu/smu10: fix SoC/fclk units in auto mode
  drm/amd/display: update dcn315 clock table read
  drm/amdgpu/display: change pipe policy for DCN 2.1
  drm/amd/display: Add configuration options for AUX wake work around.
  drm/amd/display: remove assert for odm transition case
  drm/amdgpu: don't use BACO for reset in S3
  drm/amd/display: Fix by adding FPU protection for dcn30_internal_validate_bw
  drm/amdkfd: Create file descriptor after client is added to smi_clients list
  drm/amdgpu: Sync up header and implementation to use the same parameter names
  drm/amdgpu: fix incorrect GCR_GENERAL_CNTL address
  amd/display: set backlight only if required
  drm/amd/display: Fix allocate_mst_payload assert on resume
  drm/amd/display: Revert FEC check in validation
  drm/amd/display: Add work around for AUX failure on wake.
  drm/amd/display: Clear optc false state when disable otg
  drm/amd/display: Enable power gating before init_pipes
  drm/amd/display: Remove redundant dsc power gating from init_hw
  drm/amd/display: Correct Slice reset calculation
  ...
@@ -51,7 +51,6 @@ properties:
         Video port for MIPI DPI output (panel or connector).
 
     required:
-      - port@0
       - port@1
 
 required:
@@ -39,7 +39,6 @@ properties:
         Video port for MIPI DPI output (panel or connector).
 
     required:
-      - port@0
       - port@1
 
 required:
@@ -83,6 +83,8 @@ properties:
 required:
   - compatible
   - reg
+  - width-mm
+  - height-mm
   - panel-timing
 
 unevaluatedProperties: false
@@ -185,6 +185,12 @@ DMA Fence Chain
 .. kernel-doc:: include/linux/dma-fence-chain.h
    :internal:
 
+DMA Fence unwrap
+~~~~~~~~~~~~~~~~
+
+.. kernel-doc:: include/linux/dma-fence-unwrap.h
+   :internal:
+
 DMA Fence uABI/Sync File
 ~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -12,6 +12,7 @@ dmabuf_selftests-y := \
 	selftest.o \
 	st-dma-fence.o \
 	st-dma-fence-chain.o \
+	st-dma-fence-unwrap.o \
 	st-dma-resv.o
 
 obj-$(CONFIG_DMABUF_SELFTESTS) += dmabuf_selftests.o
@@ -159,6 +159,8 @@ struct dma_fence_array *dma_fence_array_create(int num_fences,
 	struct dma_fence_array *array;
 	size_t size = sizeof(*array);
 
+	WARN_ON(!num_fences || !fences);
+
 	/* Allocate the callback structures behind the array. */
 	size += num_fences * sizeof(struct dma_fence_array_cb);
 	array = kzalloc(size, GFP_KERNEL);
@@ -219,3 +221,33 @@ bool dma_fence_match_context(struct dma_fence *fence, u64 context)
 	return true;
 }
 EXPORT_SYMBOL(dma_fence_match_context);
+
+struct dma_fence *dma_fence_array_first(struct dma_fence *head)
+{
+	struct dma_fence_array *array;
+
+	if (!head)
+		return NULL;
+
+	array = to_dma_fence_array(head);
+	if (!array)
+		return head;
+
+	if (!array->num_fences)
+		return NULL;
+
+	return array->fences[0];
+}
+EXPORT_SYMBOL(dma_fence_array_first);
+
+struct dma_fence *dma_fence_array_next(struct dma_fence *head,
+				       unsigned int index)
+{
+	struct dma_fence_array *array = to_dma_fence_array(head);
+
+	if (!array || index >= array->num_fences)
+		return NULL;
+
+	return array->fences[index];
+}
+EXPORT_SYMBOL(dma_fence_array_next);
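The two helpers above give callers a uniform way to walk the fences behind a
pointer that may or may not be a dma_fence_array, including the new
empty-array case (dma_fence_array_first() returns NULL when num_fences is 0).
A minimal sketch of the intended calling pattern (hypothetical caller code,
not part of this pull):

	static void example_walk(struct dma_fence *head)
	{
		struct dma_fence *f;
		unsigned int i;

		/* Yields head itself for a plain fence, or each member
		 * in turn when head is a dma_fence_array. */
		for (i = 1, f = dma_fence_array_first(head); f;
		     f = dma_fence_array_next(head, i++))
			pr_info("fence context %llu seqno %llu\n",
				f->context, f->seqno);
	}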
@@ -12,4 +12,5 @@
 selftest(sanitycheck, __sanitycheck__) /* keep first (igt selfcheck) */
 selftest(dma_fence, dma_fence)
 selftest(dma_fence_chain, dma_fence_chain)
+selftest(dma_fence_unwrap, dma_fence_unwrap)
 selftest(dma_resv, dma_resv)
@@ -0,0 +1,261 @@
+// SPDX-License-Identifier: MIT
+
+/*
+ * Copyright (C) 2022 Advanced Micro Devices, Inc.
+ */
+
+#include <linux/dma-fence-unwrap.h>
+#if 0
+#include <linux/kernel.h>
+#include <linux/kthread.h>
+#include <linux/mm.h>
+#include <linux/sched/signal.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/random.h>
+#endif
+
+#include "selftest.h"
+
+#define CHAIN_SZ (4 << 10)
+
+static inline struct mock_fence {
+	struct dma_fence base;
+	spinlock_t lock;
+} *to_mock_fence(struct dma_fence *f) {
+	return container_of(f, struct mock_fence, base);
+}
+
+static const char *mock_name(struct dma_fence *f)
+{
+	return "mock";
+}
+
+static const struct dma_fence_ops mock_ops = {
+	.get_driver_name = mock_name,
+	.get_timeline_name = mock_name,
+};
+
+static struct dma_fence *mock_fence(void)
+{
+	struct mock_fence *f;
+
+	f = kmalloc(sizeof(*f), GFP_KERNEL);
+	if (!f)
+		return NULL;
+
+	spin_lock_init(&f->lock);
+	dma_fence_init(&f->base, &mock_ops, &f->lock, 0, 0);
+
+	return &f->base;
+}
+
+static struct dma_fence *mock_array(unsigned int num_fences, ...)
+{
+	struct dma_fence_array *array;
+	struct dma_fence **fences;
+	va_list valist;
+	int i;
+
+	fences = kcalloc(num_fences, sizeof(*fences), GFP_KERNEL);
+	if (!fences)
+		return NULL;
+
+	va_start(valist, num_fences);
+	for (i = 0; i < num_fences; ++i)
+		fences[i] = va_arg(valist, typeof(*fences));
+	va_end(valist);
+
+	array = dma_fence_array_create(num_fences, fences,
+				       dma_fence_context_alloc(1),
+				       1, false);
+	if (!array)
+		goto cleanup;
+	return &array->base;
+
+cleanup:
+	for (i = 0; i < num_fences; ++i)
+		dma_fence_put(fences[i]);
+	kfree(fences);
+	return NULL;
+}
+
+static struct dma_fence *mock_chain(struct dma_fence *prev,
+				    struct dma_fence *fence)
+{
+	struct dma_fence_chain *f;
+
+	f = dma_fence_chain_alloc();
+	if (!f) {
+		dma_fence_put(prev);
+		dma_fence_put(fence);
+		return NULL;
+	}
+
+	dma_fence_chain_init(f, prev, fence, 1);
+	return &f->base;
+}
+
+static int sanitycheck(void *arg)
+{
+	struct dma_fence *f, *chain, *array;
+	int err = 0;
+
+	f = mock_fence();
+	if (!f)
+		return -ENOMEM;
+
+	array = mock_array(1, f);
+	if (!array)
+		return -ENOMEM;
+
+	chain = mock_chain(NULL, array);
+	if (!chain)
+		return -ENOMEM;
+
+	dma_fence_signal(f);
+	dma_fence_put(chain);
+	return err;
+}
+
+static int unwrap_array(void *arg)
+{
+	struct dma_fence *fence, *f1, *f2, *array;
+	struct dma_fence_unwrap iter;
+	int err = 0;
+
+	f1 = mock_fence();
+	if (!f1)
+		return -ENOMEM;
+
+	f2 = mock_fence();
+	if (!f2) {
+		dma_fence_put(f1);
+		return -ENOMEM;
+	}
+
+	array = mock_array(2, f1, f2);
+	if (!array)
+		return -ENOMEM;
+
+	dma_fence_unwrap_for_each(fence, &iter, array) {
+		if (fence == f1) {
+			f1 = NULL;
+		} else if (fence == f2) {
+			f2 = NULL;
+		} else {
+			pr_err("Unexpected fence!\n");
+			err = -EINVAL;
+		}
+	}
+
+	if (f1 || f2) {
+		pr_err("Not all fences seen!\n");
+		err = -EINVAL;
+	}
+
+	dma_fence_signal(f1);
+	dma_fence_signal(f2);
+	dma_fence_put(array);
+	return 0;
+}
+
+static int unwrap_chain(void *arg)
+{
+	struct dma_fence *fence, *f1, *f2, *chain;
+	struct dma_fence_unwrap iter;
+	int err = 0;
+
+	f1 = mock_fence();
+	if (!f1)
+		return -ENOMEM;
+
+	f2 = mock_fence();
+	if (!f2) {
+		dma_fence_put(f1);
+		return -ENOMEM;
+	}
+
+	chain = mock_chain(f1, f2);
+	if (!chain)
+		return -ENOMEM;
+
+	dma_fence_unwrap_for_each(fence, &iter, chain) {
+		if (fence == f1) {
+			f1 = NULL;
+		} else if (fence == f2) {
+			f2 = NULL;
+		} else {
+			pr_err("Unexpected fence!\n");
+			err = -EINVAL;
+		}
+	}
+
+	if (f1 || f2) {
+		pr_err("Not all fences seen!\n");
+		err = -EINVAL;
+	}
+
+	dma_fence_signal(f1);
+	dma_fence_signal(f2);
+	dma_fence_put(chain);
+	return 0;
+}
+
+static int unwrap_chain_array(void *arg)
+{
+	struct dma_fence *fence, *f1, *f2, *array, *chain;
+	struct dma_fence_unwrap iter;
+	int err = 0;
+
+	f1 = mock_fence();
+	if (!f1)
+		return -ENOMEM;
+
+	f2 = mock_fence();
+	if (!f2) {
+		dma_fence_put(f1);
+		return -ENOMEM;
+	}
+
+	array = mock_array(2, f1, f2);
+	if (!array)
+		return -ENOMEM;
+
+	chain = mock_chain(NULL, array);
+	if (!chain)
+		return -ENOMEM;
+
+	dma_fence_unwrap_for_each(fence, &iter, chain) {
+		if (fence == f1) {
+			f1 = NULL;
+		} else if (fence == f2) {
+			f2 = NULL;
+		} else {
+			pr_err("Unexpected fence!\n");
+			err = -EINVAL;
+		}
+	}
+
+	if (f1 || f2) {
+		pr_err("Not all fences seen!\n");
+		err = -EINVAL;
+	}
+
+	dma_fence_signal(f1);
+	dma_fence_signal(f2);
+	dma_fence_put(chain);
+	return 0;
+}
+
+int dma_fence_unwrap(void)
+{
+	static const struct subtest tests[] = {
+		SUBTEST(sanitycheck),
+		SUBTEST(unwrap_array),
+		SUBTEST(unwrap_chain),
+		SUBTEST(unwrap_chain_array),
+	};
+
+	return subtests(tests, NULL);
+}
@@ -5,6 +5,7 @@
  * Copyright (C) 2012 Google, Inc.
  */
 
+#include <linux/dma-fence-unwrap.h>
 #include <linux/export.h>
 #include <linux/file.h>
 #include <linux/fs.h>
@@ -172,20 +173,6 @@ static int sync_file_set_fence(struct sync_file *sync_file,
 	return 0;
 }
 
-static struct dma_fence **get_fences(struct sync_file *sync_file,
-				     int *num_fences)
-{
-	if (dma_fence_is_array(sync_file->fence)) {
-		struct dma_fence_array *array = to_dma_fence_array(sync_file->fence);
-
-		*num_fences = array->num_fences;
-		return array->fences;
-	}
-
-	*num_fences = 1;
-	return &sync_file->fence;
-}
-
 static void add_fence(struct dma_fence **fences,
 		      int *i, struct dma_fence *fence)
 {
@@ -210,86 +197,97 @@ static void add_fence(struct dma_fence **fences,
 static struct sync_file *sync_file_merge(const char *name, struct sync_file *a,
 					 struct sync_file *b)
 {
+	struct dma_fence *a_fence, *b_fence, **fences;
+	struct dma_fence_unwrap a_iter, b_iter;
+	unsigned int index, num_fences;
 	struct sync_file *sync_file;
-	struct dma_fence **fences = NULL, **nfences, **a_fences, **b_fences;
-	int i = 0, i_a, i_b, num_fences, a_num_fences, b_num_fences;
 
 	sync_file = sync_file_alloc();
 	if (!sync_file)
 		return NULL;
 
-	a_fences = get_fences(a, &a_num_fences);
-	b_fences = get_fences(b, &b_num_fences);
-	if (a_num_fences > INT_MAX - b_num_fences)
-		goto err;
+	num_fences = 0;
+	dma_fence_unwrap_for_each(a_fence, &a_iter, a->fence)
+		++num_fences;
+	dma_fence_unwrap_for_each(b_fence, &b_iter, b->fence)
+		++num_fences;
 
-	num_fences = a_num_fences + b_num_fences;
+	if (num_fences > INT_MAX)
+		goto err_free_sync_file;
 
 	fences = kcalloc(num_fences, sizeof(*fences), GFP_KERNEL);
 	if (!fences)
-		goto err;
+		goto err_free_sync_file;
 
 	/*
-	 * Assume sync_file a and b are both ordered and have no
-	 * duplicates with the same context.
+	 * We can't guarantee that fences in both a and b are ordered, but it is
+	 * still quite likely.
 	 *
-	 * If a sync_file can only be created with sync_file_merge
-	 * and sync_file_create, this is a reasonable assumption.
+	 * So attempt to order the fences as we pass over them and merge fences
+	 * with the same context.
 	 */
-	for (i_a = i_b = 0; i_a < a_num_fences && i_b < b_num_fences; ) {
-		struct dma_fence *pt_a = a_fences[i_a];
-		struct dma_fence *pt_b = b_fences[i_b];
 
-		if (pt_a->context < pt_b->context) {
-			add_fence(fences, &i, pt_a);
+	index = 0;
+	for (a_fence = dma_fence_unwrap_first(a->fence, &a_iter),
+	     b_fence = dma_fence_unwrap_first(b->fence, &b_iter);
+	     a_fence || b_fence; ) {
 
-			i_a++;
-		} else if (pt_a->context > pt_b->context) {
-			add_fence(fences, &i, pt_b);
+		if (!b_fence) {
+			add_fence(fences, &index, a_fence);
+			a_fence = dma_fence_unwrap_next(&a_iter);
 
+		} else if (!a_fence) {
+			add_fence(fences, &index, b_fence);
+			b_fence = dma_fence_unwrap_next(&b_iter);
 
+		} else if (a_fence->context < b_fence->context) {
+			add_fence(fences, &index, a_fence);
+			a_fence = dma_fence_unwrap_next(&a_iter);
 
+		} else if (b_fence->context < a_fence->context) {
+			add_fence(fences, &index, b_fence);
+			b_fence = dma_fence_unwrap_next(&b_iter);
 
+		} else if (__dma_fence_is_later(a_fence->seqno, b_fence->seqno,
+						a_fence->ops)) {
+			add_fence(fences, &index, a_fence);
+			a_fence = dma_fence_unwrap_next(&a_iter);
+			b_fence = dma_fence_unwrap_next(&b_iter);
 
-			i_b++;
 		} else {
-			if (__dma_fence_is_later(pt_a->seqno, pt_b->seqno,
-						 pt_a->ops))
-				add_fence(fences, &i, pt_a);
-			else
-				add_fence(fences, &i, pt_b);
-
-			i_a++;
-			i_b++;
+			add_fence(fences, &index, b_fence);
+			a_fence = dma_fence_unwrap_next(&a_iter);
+			b_fence = dma_fence_unwrap_next(&b_iter);
 		}
 	}
 
-	for (; i_a < a_num_fences; i_a++)
-		add_fence(fences, &i, a_fences[i_a]);
+	if (index == 0)
+		fences[index++] = dma_fence_get_stub();
 
-	for (; i_b < b_num_fences; i_b++)
-		add_fence(fences, &i, b_fences[i_b]);
+	if (num_fences > index) {
+		struct dma_fence **tmp;
 
-	if (i == 0)
-		fences[i++] = dma_fence_get(a_fences[0]);
-
-	if (num_fences > i) {
-		nfences = krealloc_array(fences, i, sizeof(*fences), GFP_KERNEL);
-		if (!nfences)
-			goto err;
-
-		fences = nfences;
+		/* Keep going even when reducing the size failed */
+		tmp = krealloc_array(fences, index, sizeof(*fences),
+				     GFP_KERNEL);
+		if (tmp)
+			fences = tmp;
 	}
 
-	if (sync_file_set_fence(sync_file, fences, i) < 0)
-		goto err;
+	if (sync_file_set_fence(sync_file, fences, index) < 0)
+		goto err_put_fences;
 
 	strlcpy(sync_file->user_name, name, sizeof(sync_file->user_name));
 	return sync_file;
 
-err:
-	while (i)
-		dma_fence_put(fences[--i]);
+err_put_fences:
+	while (index)
+		dma_fence_put(fences[--index]);
 	kfree(fences);
+
+err_free_sync_file:
 	fput(sync_file->file);
 	return NULL;
 }
 
 static int sync_file_release(struct inode *inode, struct file *file)
@@ -398,11 +396,13 @@ static int sync_fill_fence_info(struct dma_fence *fence,
 static long sync_file_ioctl_fence_info(struct sync_file *sync_file,
 				       unsigned long arg)
 {
-	struct sync_file_info info;
 	struct sync_fence_info *fence_info = NULL;
-	struct dma_fence **fences;
+	struct dma_fence_unwrap iter;
+	struct sync_file_info info;
+	unsigned int num_fences;
+	struct dma_fence *fence;
+	int ret;
 	__u32 size;
-	int num_fences, ret, i;
 
 	if (copy_from_user(&info, (void __user *)arg, sizeof(info)))
 		return -EFAULT;
@@ -410,7 +410,9 @@ static long sync_file_ioctl_fence_info(struct sync_file *sync_file,
 	if (info.flags || info.pad)
 		return -EINVAL;
 
-	fences = get_fences(sync_file, &num_fences);
+	num_fences = 0;
+	dma_fence_unwrap_for_each(fence, &iter, sync_file->fence)
+		++num_fences;
 
 	/*
 	 * Passing num_fences = 0 means that userspace doesn't want to
@@ -433,8 +435,11 @@ static long sync_file_ioctl_fence_info(struct sync_file *sync_file,
 	if (!fence_info)
 		return -ENOMEM;
 
-	for (i = 0; i < num_fences; i++) {
-		int status = sync_fill_fence_info(fences[i], &fence_info[i]);
+	num_fences = 0;
+	dma_fence_unwrap_for_each(fence, &iter, sync_file->fence) {
+		int status;
 
+		status = sync_fill_fence_info(fence, &fence_info[num_fences++]);
 		info.status = info.status <= 0 ? info.status : status;
 	}
 
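The ioctl path above shows the consumption pattern the new unwrap API is
designed for: one dma_fence_unwrap_for_each() pass to count leaf fences, then
a second pass to act on each of them. A standalone sketch of the counting
idiom (illustrative only; example_count_fences is not part of the patch):

	static unsigned int example_count_fences(struct dma_fence *composite)
	{
		struct dma_fence_unwrap iter;
		struct dma_fence *f;
		unsigned int count = 0;

		/* Walks chains and arrays recursively, yielding only
		 * the leaf fences they contain. */
		dma_fence_unwrap_for_each(f, &iter, composite)
			++count;

		return count;
	}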
@@ -119,6 +119,7 @@
 #define CONNECTOR_OBJECT_ID_eDP                   0x14
 #define CONNECTOR_OBJECT_ID_MXM                   0x15
 #define CONNECTOR_OBJECT_ID_LVDS_eDP              0x16
+#define CONNECTOR_OBJECT_ID_USBC                  0x17
 
 /* deleted */
 
@@ -5733,7 +5733,7 @@ void amdgpu_device_flush_hdp(struct amdgpu_device *adev,
 		struct amdgpu_ring *ring)
 {
 #ifdef CONFIG_X86_64
-	if (adev->flags & AMD_IS_APU)
+	if ((adev->flags & AMD_IS_APU) && !amdgpu_passthrough(adev))
 		return;
 #endif
 	if (adev->gmc.xgmi.connected_to_cpu)
@@ -5749,7 +5749,7 @@ void amdgpu_device_invalidate_hdp(struct amdgpu_device *adev,
 		struct amdgpu_ring *ring)
 {
 #ifdef CONFIG_X86_64
-	if (adev->flags & AMD_IS_APU)
+	if ((adev->flags & AMD_IS_APU) && !amdgpu_passthrough(adev))
 		return;
 #endif
 	if (adev->gmc.xgmi.connected_to_cpu)
@@ -680,7 +680,7 @@ MODULE_PARM_DESC(sched_policy,
  * Maximum number of processes that HWS can schedule concurrently. The maximum is the
  * number of VMIDs assigned to the HWS, which is also the default.
  */
-int hws_max_conc_proc = 8;
+int hws_max_conc_proc = -1;
 module_param(hws_max_conc_proc, int, 0444);
 MODULE_PARM_DESC(hws_max_conc_proc,
 	"Max # processes HWS can execute concurrently when sched_policy=0 (0 = no concurrency, #VMIDs for KFD = Maximum(default))");
@@ -266,7 +266,7 @@ static int amdgpu_gfx_kiq_acquire(struct amdgpu_device *adev,
 		    * adev->gfx.mec.num_pipe_per_mec
 		    * adev->gfx.mec.num_queue_per_pipe;
 
-	while (queue_bit-- >= 0) {
+	while (--queue_bit >= 0) {
 		if (test_bit(queue_bit, adev->gfx.mec.queue_bitmap))
 			continue;
 
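The one-character change above fixes an off-by-one: with the post-decrement
form the comparison is made before the decrement, so the loop body still runs
once after queue_bit has passed zero. A hedged illustration of the boundary
step (use() stands in for the loop body):

	int queue_bit = 0;

	/* Old form: '0 >= 0' is true and queue_bit then becomes -1, so
	 * the body runs with queue_bit == -1 and test_bit(-1, bitmap)
	 * reads out of bounds. */
	if (queue_bit-- >= 0)
		use(queue_bit);		/* queue_bit == -1 here */

	/* New form: '--queue_bit >= 0' decrements first, so the body
	 * only ever sees indices from the valid range down to 0. */
	if (--queue_bit >= 0)
		use(queue_bit);		/* skipped once past 0 */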
@@ -561,9 +561,15 @@ void amdgpu_gmc_noretry_set(struct amdgpu_device *adev)
 
 	switch (adev->ip_versions[GC_HWIP][0]) {
 	case IP_VERSION(9, 0, 1):
+	case IP_VERSION(9, 3, 0):
 	case IP_VERSION(9, 4, 0):
 	case IP_VERSION(9, 4, 1):
 	case IP_VERSION(9, 4, 2):
+	case IP_VERSION(10, 3, 3):
+	case IP_VERSION(10, 3, 4):
+	case IP_VERSION(10, 3, 5):
+	case IP_VERSION(10, 3, 6):
+	case IP_VERSION(10, 3, 7):
 		/*
 		 * noretry = 0 will cause kfd page fault tests fail
 		 * for some ASICs, so set default to 1 for these ASICs.
@@ -1284,6 +1284,7 @@ void amdgpu_bo_get_memory(struct amdgpu_bo *bo, uint64_t *vram_mem,
  */
 void amdgpu_bo_release_notify(struct ttm_buffer_object *bo)
 {
+	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->bdev);
 	struct dma_fence *fence = NULL;
 	struct amdgpu_bo *abo;
 	int r;

@@ -1303,7 +1304,8 @@ void amdgpu_bo_release_notify(struct ttm_buffer_object *bo)
 	amdgpu_amdkfd_remove_fence_on_pt_pd_bos(abo);
 
 	if (bo->resource->mem_type != TTM_PL_VRAM ||
-	    !(abo->flags & AMDGPU_GEM_CREATE_VRAM_WIPE_ON_RELEASE))
+	    !(abo->flags & AMDGPU_GEM_CREATE_VRAM_WIPE_ON_RELEASE) ||
+	    adev->in_suspend || adev->shutdown)
 		return;
 
 	if (WARN_ON_ONCE(!dma_resv_trylock(bo->base.resv)))
@@ -300,8 +300,8 @@ void amdgpu_ring_generic_pad_ib(struct amdgpu_ring *ring, struct amdgpu_ib *ib);
 void amdgpu_ring_commit(struct amdgpu_ring *ring);
 void amdgpu_ring_undo(struct amdgpu_ring *ring);
 int amdgpu_ring_init(struct amdgpu_device *adev, struct amdgpu_ring *ring,
-		     unsigned int ring_size, struct amdgpu_irq_src *irq_src,
-		     unsigned int irq_type, unsigned int prio,
+		     unsigned int max_dw, struct amdgpu_irq_src *irq_src,
+		     unsigned int irq_type, unsigned int hw_prio,
 		     atomic_t *sched_score);
 void amdgpu_ring_fini(struct amdgpu_ring *ring);
 void amdgpu_ring_emit_reg_write_reg_wait_helper(struct amdgpu_ring *ring,
@@ -159,6 +159,7 @@
 #define AMDGPU_VCN_MULTI_QUEUE_FLAG	(1 << 8)
 #define AMDGPU_VCN_SW_RING_FLAG		(1 << 9)
 #define AMDGPU_VCN_FW_LOGGING_FLAG	(1 << 10)
+#define AMDGPU_VCN_SMU_VERSION_INFO_FLAG (1 << 11)
 
 #define AMDGPU_VCN_IB_FLAG_DECODE_BUFFER	0x00000001
 #define AMDGPU_VCN_CMD_FLAG_MSG_BUFFER		0x00000001

@@ -279,6 +280,11 @@ struct amdgpu_fw_shared_fw_logging {
 	uint32_t size;
 };
 
+struct amdgpu_fw_shared_smu_interface_info {
+	uint8_t smu_interface_type;
+	uint8_t padding[3];
+};
+
 struct amdgpu_fw_shared {
 	uint32_t present_flag_0;
 	uint8_t pad[44];

@@ -287,6 +293,7 @@ struct amdgpu_fw_shared {
 	struct amdgpu_fw_shared_multi_queue multi_queue;
 	struct amdgpu_fw_shared_sw_ring sw_ring;
 	struct amdgpu_fw_shared_fw_logging fw_log;
+	struct amdgpu_fw_shared_smu_interface_info smu_interface_info;
 };
 
 struct amdgpu_vcn_fwlog {
@@ -3293,7 +3293,7 @@ static const struct soc15_reg_golden golden_settings_gc_10_3_3[] =
 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmDB_DEBUG3, 0xffffffff, 0x00000280),
 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmDB_DEBUG4, 0xffffffff, 0x00800000),
 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmGB_ADDR_CONFIG, 0x0c1807ff, 0x00000242),
-	SOC15_REG_GOLDEN_VALUE(GC, 0, mmGCR_GENERAL_CNTL, 0x1ff1ffff, 0x00000500),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmGCR_GENERAL_CNTL_Vangogh, 0x1ff1ffff, 0x00000500),
 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmGL1_PIPE_STEER, 0x000000ff, 0x000000e4),
 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmGL2_PIPE_STEER_0, 0x77777777, 0x32103210),
 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmGL2_PIPE_STEER_1, 0x77777777, 0x32103210),

@@ -3429,7 +3429,7 @@ static const struct soc15_reg_golden golden_settings_gc_10_3_6[] =
 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmDB_DEBUG3, 0xffffffff, 0x00000280),
 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmDB_DEBUG4, 0xffffffff, 0x00800000),
 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmGB_ADDR_CONFIG, 0x0c1807ff, 0x00000042),
-	SOC15_REG_GOLDEN_VALUE(GC, 0, mmGCR_GENERAL_CNTL, 0x1ff1ffff, 0x00000500),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmGCR_GENERAL_CNTL_Vangogh, 0x1ff1ffff, 0x00000500),
 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmGL1_PIPE_STEER, 0x000000ff, 0x00000044),
 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmGL2_PIPE_STEER_0, 0x77777777, 0x32103210),
 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmGL2_PIPE_STEER_1, 0x77777777, 0x32103210),

@@ -3454,7 +3454,7 @@ static const struct soc15_reg_golden golden_settings_gc_10_3_7[] = {
 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmDB_DEBUG3, 0xffffffff, 0x00000280),
 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmDB_DEBUG4, 0xffffffff, 0x00800000),
 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmGB_ADDR_CONFIG, 0x0c1807ff, 0x00000041),
-	SOC15_REG_GOLDEN_VALUE(GC, 0, mmGCR_GENERAL_CNTL, 0x1ff1ffff, 0x00000500),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmGCR_GENERAL_CNTL_Vangogh, 0x1ff1ffff, 0x00000500),
 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmGL1_PIPE_STEER, 0x000000ff, 0x000000e4),
 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmGL2_PIPE_STEER_0, 0x77777777, 0x32103210),
 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmGL2_PIPE_STEER_1, 0x77777777, 0x32103210),

@@ -7689,6 +7689,7 @@ static uint64_t gfx_v10_0_get_gpu_clock_counter(struct amdgpu_device *adev)
 	switch (adev->ip_versions[GC_HWIP][0]) {
 	case IP_VERSION(10, 3, 1):
 	case IP_VERSION(10, 3, 3):
+	case IP_VERSION(10, 3, 7):
 		preempt_disable();
 		clock_hi = RREG32_SOC15_NO_KIQ(SMUIO, 0, mmGOLDEN_TSC_COUNT_UPPER_Vangogh);
 		clock_lo = RREG32_SOC15_NO_KIQ(SMUIO, 0, mmGOLDEN_TSC_COUNT_LOWER_Vangogh);
@@ -814,7 +814,7 @@ static int gmc_v10_0_mc_init(struct amdgpu_device *adev)
 	adev->gmc.aper_size = pci_resource_len(adev->pdev, 0);
 
 #ifdef CONFIG_X86_64
-	if (adev->flags & AMD_IS_APU) {
+	if ((adev->flags & AMD_IS_APU) && !amdgpu_passthrough(adev)) {
 		adev->gmc.aper_base = adev->gfxhub.funcs->get_mc_fb_offset(adev);
 		adev->gmc.aper_size = adev->gmc.real_vram_size;
 	}
@@ -381,8 +381,9 @@ static int gmc_v7_0_mc_init(struct amdgpu_device *adev)
 	adev->gmc.aper_size = pci_resource_len(adev->pdev, 0);
 
 #ifdef CONFIG_X86_64
-	if (adev->flags & AMD_IS_APU &&
-	    adev->gmc.real_vram_size > adev->gmc.aper_size) {
+	if ((adev->flags & AMD_IS_APU) &&
+	    adev->gmc.real_vram_size > adev->gmc.aper_size &&
+	    !amdgpu_passthrough(adev)) {
 		adev->gmc.aper_base = ((u64)RREG32(mmMC_VM_FB_OFFSET)) << 22;
 		adev->gmc.aper_size = adev->gmc.real_vram_size;
 	}
@@ -581,7 +581,7 @@ static int gmc_v8_0_mc_init(struct amdgpu_device *adev)
 	adev->gmc.aper_size = pci_resource_len(adev->pdev, 0);
 
 #ifdef CONFIG_X86_64
-	if (adev->flags & AMD_IS_APU) {
+	if ((adev->flags & AMD_IS_APU) && !amdgpu_passthrough(adev)) {
 		adev->gmc.aper_base = ((u64)RREG32(mmMC_VM_FB_OFFSET)) << 22;
 		adev->gmc.aper_size = adev->gmc.real_vram_size;
 	}
@@ -1456,7 +1456,7 @@ static int gmc_v9_0_mc_init(struct amdgpu_device *adev)
 	 */
 
 	/* check whether both host-gpu and gpu-gpu xgmi links exist */
-	if ((adev->flags & AMD_IS_APU) ||
+	if (((adev->flags & AMD_IS_APU) && !amdgpu_passthrough(adev)) ||
 	    (adev->gmc.xgmi.supported &&
 	     adev->gmc.xgmi.connected_to_cpu)) {
 		adev->gmc.aper_base =

@@ -1721,7 +1721,7 @@ static int gmc_v9_0_sw_fini(void *handle)
 	amdgpu_gem_force_release(adev);
 	amdgpu_vm_manager_fini(adev);
 	amdgpu_gart_table_vram_free(adev);
-	amdgpu_bo_unref(&adev->gmc.pdb0_bo);
+	amdgpu_bo_free_kernel(&adev->gmc.pdb0_bo, NULL, &adev->gmc.ptr_pdb0);
 	amdgpu_bo_fini(adev);
 
 	return 0;
@@ -24,6 +24,7 @@
 #include <linux/firmware.h>
 
 #include "amdgpu.h"
+#include "amdgpu_cs.h"
 #include "amdgpu_vcn.h"
 #include "amdgpu_pm.h"
 #include "soc15.h"

@@ -1900,6 +1901,75 @@ static const struct amd_ip_funcs vcn_v1_0_ip_funcs = {
 	.set_powergating_state = vcn_v1_0_set_powergating_state,
 };
 
+/*
+ * It is a hardware issue that VCN can't handle a GTT TMZ buffer on
+ * CHIP_RAVEN series ASIC. Move such a GTT TMZ buffer to VRAM domain
+ * before command submission as a workaround.
+ */
+static int vcn_v1_0_validate_bo(struct amdgpu_cs_parser *parser,
+				struct amdgpu_job *job,
+				uint64_t addr)
+{
+	struct ttm_operation_ctx ctx = { false, false };
+	struct amdgpu_fpriv *fpriv = parser->filp->driver_priv;
+	struct amdgpu_vm *vm = &fpriv->vm;
+	struct amdgpu_bo_va_mapping *mapping;
+	struct amdgpu_bo *bo;
+	int r;
+
+	addr &= AMDGPU_GMC_HOLE_MASK;
+	if (addr & 0x7) {
+		DRM_ERROR("VCN messages must be 8 byte aligned!\n");
+		return -EINVAL;
+	}
+
+	mapping = amdgpu_vm_bo_lookup_mapping(vm, addr/AMDGPU_GPU_PAGE_SIZE);
+	if (!mapping || !mapping->bo_va || !mapping->bo_va->base.bo)
+		return -EINVAL;
+
+	bo = mapping->bo_va->base.bo;
+	if (!(bo->flags & AMDGPU_GEM_CREATE_ENCRYPTED))
+		return 0;
+
+	amdgpu_bo_placement_from_domain(bo, AMDGPU_GEM_DOMAIN_VRAM);
+	r = ttm_bo_validate(&bo->tbo, &bo->placement, &ctx);
+	if (r) {
+		DRM_ERROR("Failed to validate the VCN message BO (%d)!\n", r);
+		return r;
+	}
+
+	return r;
+}
+
+static int vcn_v1_0_ring_patch_cs_in_place(struct amdgpu_cs_parser *p,
+					   struct amdgpu_job *job,
+					   struct amdgpu_ib *ib)
+{
+	uint32_t msg_lo = 0, msg_hi = 0;
+	int i, r;
+
+	if (!(ib->flags & AMDGPU_IB_FLAGS_SECURE))
+		return 0;
+
+	for (i = 0; i < ib->length_dw; i += 2) {
+		uint32_t reg = amdgpu_ib_get_value(ib, i);
+		uint32_t val = amdgpu_ib_get_value(ib, i + 1);
+
+		if (reg == PACKET0(p->adev->vcn.internal.data0, 0)) {
+			msg_lo = val;
+		} else if (reg == PACKET0(p->adev->vcn.internal.data1, 0)) {
+			msg_hi = val;
+		} else if (reg == PACKET0(p->adev->vcn.internal.cmd, 0)) {
+			r = vcn_v1_0_validate_bo(p, job,
+						 ((u64)msg_hi) << 32 | msg_lo);
+			if (r)
+				return r;
+		}
+	}
+
+	return 0;
+}
+
 static const struct amdgpu_ring_funcs vcn_v1_0_dec_ring_vm_funcs = {
 	.type = AMDGPU_RING_TYPE_VCN_DEC,
 	.align_mask = 0xf,

@@ -1910,6 +1980,7 @@ static const struct amdgpu_ring_funcs vcn_v1_0_dec_ring_vm_funcs = {
 	.get_rptr = vcn_v1_0_dec_ring_get_rptr,
 	.get_wptr = vcn_v1_0_dec_ring_get_wptr,
 	.set_wptr = vcn_v1_0_dec_ring_set_wptr,
+	.patch_cs_in_place = vcn_v1_0_ring_patch_cs_in_place,
 	.emit_frame_size =
 		6 + 6 + /* hdp invalidate / flush */
 		SOC15_FLUSH_GPU_TLB_NUM_WREG * 6 +
@@ -219,6 +219,11 @@ static int vcn_v3_0_sw_init(void *handle)
 			cpu_to_le32(AMDGPU_VCN_MULTI_QUEUE_FLAG) |
 			cpu_to_le32(AMDGPU_VCN_FW_SHARED_FLAG_0_RB);
 		fw_shared->sw_ring.is_enabled = cpu_to_le32(DEC_SW_RING_ENABLED);
+		fw_shared->present_flag_0 |= AMDGPU_VCN_SMU_VERSION_INFO_FLAG;
+		if (adev->ip_versions[UVD_HWIP][0] == IP_VERSION(3, 1, 2))
+			fw_shared->smu_interface_info.smu_interface_type = 2;
+		else if (adev->ip_versions[UVD_HWIP][0] == IP_VERSION(3, 1, 1))
+			fw_shared->smu_interface_info.smu_interface_type = 1;
 
 		if (amdgpu_vcnfw_log)
 			amdgpu_vcn_fwlog_init(&adev->vcn.inst[i]);

@@ -575,8 +580,8 @@ static void vcn_v3_0_mc_resume_dpg_mode(struct amdgpu_device *adev, int inst_idx
 			AMDGPU_GPU_PAGE_ALIGN(sizeof(struct amdgpu_fw_shared)), 0, indirect);
 
 	/* VCN global tiling registers */
-	WREG32_SOC15_DPG_MODE(0, SOC15_DPG_MODE_OFFSET(
-		UVD, 0, mmUVD_GFX10_ADDR_CONFIG), adev->gfx.config.gb_addr_config, 0, indirect);
+	WREG32_SOC15_DPG_MODE(inst_idx, SOC15_DPG_MODE_OFFSET(
+		UVD, inst_idx, mmUVD_GFX10_ADDR_CONFIG), adev->gfx.config.gb_addr_config, 0, indirect);
 }
 
 static void vcn_v3_0_disable_static_power_gating(struct amdgpu_device *adev, int inst)

@@ -1480,8 +1485,11 @@ static int vcn_v3_0_start_sriov(struct amdgpu_device *adev)
 
 static int vcn_v3_0_stop_dpg_mode(struct amdgpu_device *adev, int inst_idx)
 {
+	struct dpg_pause_state state = {.fw_based = VCN_DPG_STATE__UNPAUSE};
 	uint32_t tmp;
 
+	vcn_v3_0_pause_dpg_mode(adev, inst_idx, &state);
+
 	/* Wait for power status to be 1 */
 	SOC15_WAIT_ON_RREG(VCN, inst_idx, mmUVD_POWER_STATUS, 1,
 		UVD_POWER_STATUS__UVD_POWER_STATUS_MASK);
@@ -483,15 +483,10 @@ bool kgd2kfd_device_init(struct kfd_dev *kfd,
 	}
 
 	/* Verify module parameters regarding mapped process number*/
-	if ((hws_max_conc_proc < 0)
-		|| (hws_max_conc_proc > kfd->vm_info.vmid_num_kfd)) {
-		dev_err(kfd_device,
-			"hws_max_conc_proc %d must be between 0 and %d, use %d instead\n",
-			hws_max_conc_proc, kfd->vm_info.vmid_num_kfd,
-			kfd->vm_info.vmid_num_kfd);
+	if (hws_max_conc_proc >= 0)
+		kfd->max_proc_per_quantum = min((u32)hws_max_conc_proc, kfd->vm_info.vmid_num_kfd);
+	else
 		kfd->max_proc_per_quantum = kfd->vm_info.vmid_num_kfd;
-	} else
-		kfd->max_proc_per_quantum = hws_max_conc_proc;
 
 	/* calculate max size of mqds needed for queues */
 	size = max_num_of_queues_per_device *

@@ -536,7 +531,8 @@ bool kgd2kfd_device_init(struct kfd_dev *kfd,
 		goto kfd_doorbell_error;
 	}
 
-	kfd->hive_id = kfd->adev->gmc.xgmi.hive_id;
+	if (amdgpu_use_xgmi_p2p)
+		kfd->hive_id = kfd->adev->gmc.xgmi.hive_id;
 
 	kfd->noretry = kfd->adev->gmc.noretry;
 
@@ -749,6 +749,8 @@ static struct kfd_event_waiter *alloc_event_waiters(uint32_t num_events)
 	event_waiters = kmalloc_array(num_events,
 					sizeof(struct kfd_event_waiter),
 					GFP_KERNEL);
+	if (!event_waiters)
+		return NULL;
 
 	for (i = 0; (event_waiters) && (i < num_events) ; i++) {
 		init_wait(&event_waiters[i].wait);
@@ -247,15 +247,6 @@ int kfd_smi_event_open(struct kfd_dev *dev, uint32_t *fd)
 		return ret;
 	}
 
-	ret = anon_inode_getfd(kfd_smi_name, &kfd_smi_ev_fops, (void *)client,
-			       O_RDWR);
-	if (ret < 0) {
-		kfifo_free(&client->fifo);
-		kfree(client);
-		return ret;
-	}
-	*fd = ret;
-
 	init_waitqueue_head(&client->wait_queue);
 	spin_lock_init(&client->lock);
 	client->events = 0;

@@ -265,5 +256,20 @@ int kfd_smi_event_open(struct kfd_dev *dev, uint32_t *fd)
 	list_add_rcu(&client->list, &dev->smi_clients);
 	spin_unlock(&dev->smi_lock);
 
+	ret = anon_inode_getfd(kfd_smi_name, &kfd_smi_ev_fops, (void *)client,
+			       O_RDWR);
+	if (ret < 0) {
+		spin_lock(&dev->smi_lock);
+		list_del_rcu(&client->list);
+		spin_unlock(&dev->smi_lock);
+
+		synchronize_rcu();
+
+		kfifo_free(&client->fifo);
+		kfree(client);
+		return ret;
+	}
+	*fd = ret;
+
 	return 0;
 }
@@ -2714,7 +2714,8 @@ static int dm_resume(void *handle)
 		 * this is the case when traversing through already created
 		 * MST connectors, should be skipped
 		 */
-		if (aconnector->mst_port)
+		if (aconnector->dc_link &&
+		    aconnector->dc_link->type == dc_connection_mst_branch)
 			continue;
 
 		mutex_lock(&aconnector->hpd_lock);

@@ -3972,7 +3973,7 @@ static u32 convert_brightness_to_user(const struct amdgpu_dm_backlight_caps *cap
 			 max - min);
 }
 
-static int amdgpu_dm_backlight_set_level(struct amdgpu_display_manager *dm,
+static void amdgpu_dm_backlight_set_level(struct amdgpu_display_manager *dm,
 					 int bl_idx,
 					 u32 user_brightness)
 {

@@ -4003,7 +4004,8 @@ static int amdgpu_dm_backlight_set_level(struct amdgpu_display_manager *dm,
 		DRM_DEBUG("DM: Failed to update backlight on eDP[%d]\n", bl_idx);
 	}
 
-	return rc ? 0 : 1;
+	if (rc)
+		dm->actual_brightness[bl_idx] = user_brightness;
 }
 
 static int amdgpu_dm_backlight_update_status(struct backlight_device *bd)

@@ -9947,7 +9949,7 @@ static void amdgpu_dm_atomic_commit_tail(struct drm_atomic_state *state)
 	/* restore the backlight level */
 	for (i = 0; i < dm->num_of_edps; i++) {
 		if (dm->backlight_dev[i] &&
-		    (amdgpu_dm_backlight_get_level(dm, i) != dm->brightness[i]))
+		    (dm->actual_brightness[i] != dm->brightness[i]))
 			amdgpu_dm_backlight_set_level(dm, i, dm->brightness[i]);
 	}
 #endif
@@ -540,6 +540,12 @@ struct amdgpu_display_manager {
 	 * cached backlight values.
 	 */
 	u32 brightness[AMDGPU_DM_MAX_NUM_EDP];
+	/**
+	 * @actual_brightness:
+	 *
+	 * last successfully applied backlight values.
+	 */
+	u32 actual_brightness[AMDGPU_DM_MAX_NUM_EDP];
 };
 
 enum dsc_clock_force_state {
@@ -436,57 +436,84 @@ static void dcn315_clk_mgr_helper_populate_bw_params(
 		struct integrated_info *bios_info,
 		const DpmClocks_315_t *clock_table)
 {
-	int i, j;
+	int i;
 	struct clk_bw_params *bw_params = clk_mgr->base.bw_params;
-	uint32_t max_dispclk = 0, max_dppclk = 0;
+	uint32_t max_dispclk, max_dppclk, max_pstate, max_socclk, max_fclk = 0, min_pstate = 0;
+	struct clk_limit_table_entry def_max = bw_params->clk_table.entries[bw_params->clk_table.num_entries - 1];
 
-	j = -1;
+	max_dispclk = find_max_clk_value(clock_table->DispClocks, clock_table->NumDispClkLevelsEnabled);
+	max_dppclk = find_max_clk_value(clock_table->DppClocks, clock_table->NumDispClkLevelsEnabled);
+	max_socclk = find_max_clk_value(clock_table->SocClocks, clock_table->NumSocClkLevelsEnabled);
 
-	ASSERT(NUM_DF_PSTATE_LEVELS <= MAX_NUM_DPM_LVL);
-
-	/* Find lowest DPM, FCLK is filled in reverse order*/
-
-	for (i = NUM_DF_PSTATE_LEVELS - 1; i >= 0; i--) {
-		if (clock_table->DfPstateTable[i].FClk != 0) {
-			j = i;
-			break;
+	/* Find highest fclk pstate */
+	for (i = 0; i < clock_table->NumDfPstatesEnabled; i++) {
+		if (clock_table->DfPstateTable[i].FClk > max_fclk) {
+			max_fclk = clock_table->DfPstateTable[i].FClk;
+			max_pstate = i;
 		}
 	}
 
-	if (j == -1) {
-		/* clock table is all 0s, just use our own hardcode */
-		ASSERT(0);
-		return;
-	}
-
-	bw_params->clk_table.num_entries = j + 1;
-
-	/* dispclk and dppclk can be max at any voltage, same number of levels for both */
-	if (clock_table->NumDispClkLevelsEnabled <= NUM_DISPCLK_DPM_LEVELS &&
-	    clock_table->NumDispClkLevelsEnabled <= NUM_DPPCLK_DPM_LEVELS) {
-		max_dispclk = find_max_clk_value(clock_table->DispClocks, clock_table->NumDispClkLevelsEnabled);
-		max_dppclk = find_max_clk_value(clock_table->DppClocks, clock_table->NumDispClkLevelsEnabled);
-	} else {
-		ASSERT(0);
-	}
+	/* For 315 we want to base clock table on dcfclk, need at least one entry regardless of pmfw table */
+	for (i = 0; i < clock_table->NumDcfClkLevelsEnabled; i++) {
+		int j;
+		uint32_t min_fclk = clock_table->DfPstateTable[0].FClk;
 
-	for (i = 0; i < bw_params->clk_table.num_entries; i++, j--) {
-		int temp;
+		for (j = 1; j < clock_table->NumDfPstatesEnabled; j++) {
+			if (clock_table->DfPstateTable[j].Voltage <= clock_table->SocVoltage[i]
+					&& clock_table->DfPstateTable[j].FClk < min_fclk) {
+				min_fclk = clock_table->DfPstateTable[j].FClk;
+				min_pstate = j;
+			}
+		}
 
-		bw_params->clk_table.entries[i].fclk_mhz = clock_table->DfPstateTable[j].FClk;
-		bw_params->clk_table.entries[i].memclk_mhz = clock_table->DfPstateTable[j].MemClk;
-		bw_params->clk_table.entries[i].voltage = clock_table->DfPstateTable[j].Voltage;
-		bw_params->clk_table.entries[i].wck_ratio = 1;
-		temp = find_clk_for_voltage(clock_table, clock_table->DcfClocks, clock_table->DfPstateTable[j].Voltage);
-		if (temp)
-			bw_params->clk_table.entries[i].dcfclk_mhz = temp;
-		temp = find_clk_for_voltage(clock_table, clock_table->SocClocks, clock_table->DfPstateTable[j].Voltage);
-		if (temp)
-			bw_params->clk_table.entries[i].socclk_mhz = temp;
+		bw_params->clk_table.entries[i].fclk_mhz = min_fclk;
+		bw_params->clk_table.entries[i].memclk_mhz = clock_table->DfPstateTable[min_pstate].MemClk;
+		bw_params->clk_table.entries[i].voltage = clock_table->DfPstateTable[min_pstate].Voltage;
+		bw_params->clk_table.entries[i].dcfclk_mhz = clock_table->DcfClocks[i];
+		bw_params->clk_table.entries[i].socclk_mhz = clock_table->SocClocks[i];
 		bw_params->clk_table.entries[i].dispclk_mhz = max_dispclk;
 		bw_params->clk_table.entries[i].dppclk_mhz = max_dppclk;
-	}
+		bw_params->clk_table.entries[i].wck_ratio = 1;
+	};
+
+	/* Make sure to include at least one entry and highest pstate */
+	if (max_pstate != min_pstate) {
+		bw_params->clk_table.entries[i].fclk_mhz = max_fclk;
+		bw_params->clk_table.entries[i].memclk_mhz = clock_table->DfPstateTable[max_pstate].MemClk;
+		bw_params->clk_table.entries[i].voltage = clock_table->DfPstateTable[max_pstate].Voltage;
+		bw_params->clk_table.entries[i].dcfclk_mhz = find_clk_for_voltage(
+			clock_table, clock_table->DcfClocks, clock_table->DfPstateTable[max_pstate].Voltage);
+		bw_params->clk_table.entries[i].socclk_mhz = find_clk_for_voltage(
+			clock_table, clock_table->SocClocks, clock_table->DfPstateTable[max_pstate].Voltage);
+		bw_params->clk_table.entries[i].dispclk_mhz = max_dispclk;
+		bw_params->clk_table.entries[i].dppclk_mhz = max_dppclk;
+		bw_params->clk_table.entries[i].wck_ratio = 1;
+		i++;
+	}
+	bw_params->clk_table.num_entries = i;
+
+	/* Include highest socclk */
+	if (bw_params->clk_table.entries[i-1].socclk_mhz < max_socclk)
+		bw_params->clk_table.entries[i-1].socclk_mhz = max_socclk;
 
+	/* Set any 0 clocks to max default setting. Not an issue for
+	 * power since we aren't doing switching in such case anyway
+	 */
+	for (i = 0; i < bw_params->clk_table.num_entries; i++) {
+		if (!bw_params->clk_table.entries[i].fclk_mhz) {
+			bw_params->clk_table.entries[i].fclk_mhz = def_max.fclk_mhz;
+			bw_params->clk_table.entries[i].memclk_mhz = def_max.memclk_mhz;
+			bw_params->clk_table.entries[i].voltage = def_max.voltage;
+		}
+		if (!bw_params->clk_table.entries[i].dcfclk_mhz)
+			bw_params->clk_table.entries[i].dcfclk_mhz = def_max.dcfclk_mhz;
+		if (!bw_params->clk_table.entries[i].socclk_mhz)
+			bw_params->clk_table.entries[i].socclk_mhz = def_max.socclk_mhz;
+		if (!bw_params->clk_table.entries[i].dispclk_mhz)
+			bw_params->clk_table.entries[i].dispclk_mhz = def_max.dispclk_mhz;
+		if (!bw_params->clk_table.entries[i].dppclk_mhz)
+			bw_params->clk_table.entries[i].dppclk_mhz = def_max.dppclk_mhz;
+	}
 	bw_params->vram_type = bios_info->memory_type;
 	bw_params->num_channels = bios_info->ma_channel_number;
@@ -80,8 +80,8 @@ static const struct IP_BASE NBIO_BASE = { { { { 0x00000000, 0x00000014, 0x00000D
 #define VBIOSSMC_MSG_SetDppclkFreq                0x06 ///< Set DPP clock frequency in MHZ
 #define VBIOSSMC_MSG_SetHardMinDcfclkByFreq       0x07 ///< Set DCF clock frequency hard min in MHZ
 #define VBIOSSMC_MSG_SetMinDeepSleepDcfclk        0x08 ///< Set DCF clock minimum frequency in deep sleep in MHZ
-#define VBIOSSMC_MSG_SetPhyclkVoltageByFreq       0x09 ///< Set display phy clock frequency in MHZ in case VMIN does not support phy frequency
-#define VBIOSSMC_MSG_GetFclkFrequency             0x0A ///< Get FCLK frequency, return frequemcy in MHZ
+#define VBIOSSMC_MSG_GetDtbclkFreq                0x09 ///< Get display dtb clock frequency in MHZ in case VMIN does not support phy frequency
+#define VBIOSSMC_MSG_SetDtbClk                    0x0A ///< Set dtb clock frequency, return frequemcy in MHZ
 #define VBIOSSMC_MSG_SetDisplayCount              0x0B ///< Inform PMFW of number of display connected
 #define VBIOSSMC_MSG_EnableTmdp48MHzRefclkPwrDown 0x0C ///< To ask PMFW turn off TMDP 48MHz refclk during display off to save power
 #define VBIOSSMC_MSG_UpdatePmeRestore             0x0D ///< To ask PMFW to write into Azalia for PME wake up event

@@ -324,15 +324,26 @@ int dcn315_smu_get_dpref_clk(struct clk_mgr_internal *clk_mgr)
 	return (dprefclk_get_mhz * 1000);
 }
 
-int dcn315_smu_get_smu_fclk(struct clk_mgr_internal *clk_mgr)
+int dcn315_smu_get_dtbclk(struct clk_mgr_internal *clk_mgr)
 {
 	int fclk_get_mhz = -1;
 
 	if (clk_mgr->smu_present) {
 		fclk_get_mhz = dcn315_smu_send_msg_with_param(
 			clk_mgr,
-			VBIOSSMC_MSG_GetFclkFrequency,
+			VBIOSSMC_MSG_GetDtbclkFreq,
 			0);
 	}
 	return (fclk_get_mhz * 1000);
 }
+
+void dcn315_smu_set_dtbclk(struct clk_mgr_internal *clk_mgr, bool enable)
+{
+	if (!clk_mgr->smu_present)
+		return;
+
+	dcn315_smu_send_msg_with_param(
+			clk_mgr,
+			VBIOSSMC_MSG_SetDtbClk,
+			enable);
+}
@@ -37,6 +37,7 @@
 #define NUM_SOC_VOLTAGE_LEVELS  4
 #define NUM_DF_PSTATE_LEVELS    4
 
+
 typedef struct {
   uint16_t MinClock; // This is either DCFCLK or SOCCLK (in MHz)
   uint16_t MaxClock; // This is either DCFCLK or SOCCLK (in MHz)

@@ -124,5 +125,6 @@ void dcn315_smu_transfer_wm_table_dram_2_smu(struct clk_mgr_internal *clk_mgr);
 void dcn315_smu_request_voltage_via_phyclk(struct clk_mgr_internal *clk_mgr, int requested_phyclk_khz);
 void dcn315_smu_enable_pme_wa(struct clk_mgr_internal *clk_mgr);
 int dcn315_smu_get_dpref_clk(struct clk_mgr_internal *clk_mgr);
-int dcn315_smu_get_smu_fclk(struct clk_mgr_internal *clk_mgr);
+int dcn315_smu_get_dtbclk(struct clk_mgr_internal *clk_mgr);
+void dcn315_smu_set_dtbclk(struct clk_mgr_internal *clk_mgr, bool enable);
 #endif /* DAL_DC_315_SMU_H_ */
@@ -2389,6 +2389,8 @@ static enum surface_update_type check_update_surfaces_for_stream(
 
 	if (stream_update->mst_bw_update)
 		su_flags->bits.mst_bw = 1;
+	if (stream_update->crtc_timing_adjust && dc_extended_blank_supported(dc))
+		su_flags->bits.crtc_timing_adjust = 1;
 
 	if (su_flags->raw != 0)
 		overall_type = UPDATE_TYPE_FULL;

@@ -2650,6 +2652,9 @@ static void copy_stream_update_to_stream(struct dc *dc,
 	if (update->vrr_infopacket)
 		stream->vrr_infopacket = *update->vrr_infopacket;
 
+	if (update->crtc_timing_adjust)
+		stream->adjust = *update->crtc_timing_adjust;
+
 	if (update->dpms_off)
 		stream->dpms_off = *update->dpms_off;
 
@@ -4051,3 +4056,17 @@ void dc_notify_vsync_int_state(struct dc *dc, struct dc_stream_state *stream, bo
 		if (pipe->stream_res.abm && pipe->stream_res.abm->funcs->set_abm_pause)
 			pipe->stream_res.abm->funcs->set_abm_pause(pipe->stream_res.abm, !enable, i, pipe->stream_res.tg->inst);
 }
+/*
+ * dc_extended_blank_supported: Decide whether extended blank is supported
+ *
+ * Extended blank is a freesync optimization feature to be enabled in the future.
+ * During the extra vblank period gained from freesync, we have the ability to enter z9/z10.
+ *
+ * @param [in] dc: Current DC state
+ * @return: Indicate whether extended blank is supported (true or false)
+ */
+bool dc_extended_blank_supported(struct dc *dc)
+{
+	return dc->debug.extended_blank_optimization && !dc->debug.disable_z10
+		&& dc->caps.zstate_support && dc->caps.is_apu;
+}
@@ -983,8 +983,7 @@ static bool should_verify_link_capability_destructively(struct dc_link *link,
 			destrictive = false;
 		}
 	}
 	} else if (dc_is_hdmi_signal(link->local_sink->sink_signal))
 		destrictive = true;
-	}
 
 	return destrictive;
 }
@@ -5216,6 +5216,62 @@ static void retrieve_cable_id(struct dc_link *link)
 			&link->dpcd_caps.cable_id, &usbc_cable_id);
 }
 
+/* DPRX may take some time to respond to AUX messages after HPD asserted.
+ * If AUX read unsuccessful, try to wake unresponsive DPRX by toggling DPCD SET_POWER (0x600).
+ */
+static enum dc_status wa_try_to_wake_dprx(struct dc_link *link, uint64_t timeout_ms)
+{
+	enum dc_status status = DC_ERROR_UNEXPECTED;
+	uint8_t dpcd_data = 0;
+	uint64_t start_ts = 0;
+	uint64_t current_ts = 0;
+	uint64_t time_taken_ms = 0;
+	enum dc_connection_type type = dc_connection_none;
+
+	status = core_link_read_dpcd(
+			link,
+			DP_LT_TUNABLE_PHY_REPEATER_FIELD_DATA_STRUCTURE_REV,
+			&dpcd_data,
+			sizeof(dpcd_data));
+
+	if (status != DC_OK) {
+		DC_LOG_WARNING("%s: Read DPCD LTTPR_CAP failed - try to toggle DPCD SET_POWER for %lld ms.",
+				__func__,
+				timeout_ms);
+		start_ts = dm_get_timestamp(link->ctx);
+
+		do {
+			if (!dc_link_detect_sink(link, &type) || type == dc_connection_none)
+				break;
+
+			dpcd_data = DP_SET_POWER_D3;
+			status = core_link_write_dpcd(
+					link,
+					DP_SET_POWER,
+					&dpcd_data,
+					sizeof(dpcd_data));
+
+			dpcd_data = DP_SET_POWER_D0;
+			status = core_link_write_dpcd(
+					link,
+					DP_SET_POWER,
+					&dpcd_data,
+					sizeof(dpcd_data));
+
+			current_ts = dm_get_timestamp(link->ctx);
+			time_taken_ms = div_u64(dm_get_elapse_time_in_ns(link->ctx, current_ts, start_ts), 1000000);
+		} while (status != DC_OK && time_taken_ms < timeout_ms);
+
+		DC_LOG_WARNING("%s: DPCD SET_POWER %s after %lld ms%s",
+				__func__,
+				(status == DC_OK) ? "succeeded" : "failed",
+				time_taken_ms,
+				(type == dc_connection_none) ? ". Unplugged." : ".");
+	}
+
+	return status;
+}
+
 static bool retrieve_link_cap(struct dc_link *link)
 {
 	/* DP_ADAPTER_CAP - DP_DPCD_REV + 1 == 16 and also DP_DSC_BITS_PER_PIXEL_INC - DP_DSC_SUPPORT + 1 == 16,

@@ -5251,6 +5307,15 @@ static bool retrieve_link_cap(struct dc_link *link)
 	dc_link_aux_try_to_configure_timeout(link->ddc,
 			LINK_AUX_DEFAULT_LTTPR_TIMEOUT_PERIOD);
 
+	/* Try to ensure AUX channel active before proceeding. */
+	if (link->dc->debug.aux_wake_wa.bits.enable_wa) {
+		uint64_t timeout_ms = link->dc->debug.aux_wake_wa.bits.timeout_ms;
+
+		if (link->dc->debug.aux_wake_wa.bits.use_default_timeout)
+			timeout_ms = LINK_AUX_WAKE_TIMEOUT_MS;
+		status = wa_try_to_wake_dprx(link, timeout_ms);
+	}
+
 	is_lttpr_present = dp_retrieve_lttpr_cap(link);
 	/* Read DP tunneling information. */
 	status = dpcd_get_tunneling_device_data(link);
@@ -1685,8 +1685,8 @@ bool dc_is_stream_unchanged(
 	if (old_stream->ignore_msa_timing_param != stream->ignore_msa_timing_param)
 		return false;
 
-	// Only Have Audio left to check whether it is same or not. This is a corner case for Tiled sinks
-	if (old_stream->audio_info.mode_count != stream->audio_info.mode_count)
+	/*compare audio info*/
+	if (memcmp(&old_stream->audio_info, &stream->audio_info, sizeof(stream->audio_info)) != 0)
 		return false;
 
 	return true;
@@ -188,6 +188,7 @@ struct dc_caps {
 	bool psp_setup_panel_mode;
 	bool extended_aux_timeout_support;
 	bool dmcub_support;
+	bool zstate_support;
 	uint32_t num_of_internal_disp;
 	enum dp_protocol_version max_dp_protocol_version;
 	unsigned int mall_size_per_mem_channel;

@@ -525,6 +526,22 @@ union dpia_debug_options {
 	uint32_t raw;
 };
 
+/* AUX wake work around options
+ * 0: enable/disable work around
+ * 1: use default timeout LINK_AUX_WAKE_TIMEOUT_MS
+ * 15-2: reserved
+ * 31-16: timeout in ms
+ */
+union aux_wake_wa_options {
+	struct {
+		uint32_t enable_wa : 1;
+		uint32_t use_default_timeout : 1;
+		uint32_t rsvd: 14;
+		uint32_t timeout_ms : 16;
+	} bits;
+	uint32_t raw;
+};
+
 struct dc_debug_data {
 	uint32_t ltFailCount;
 	uint32_t i2cErrorCount;

@@ -703,13 +720,15 @@ struct dc_debug_options {
 	bool enable_driver_sequence_debug;
 	enum det_size crb_alloc_policy;
 	int crb_alloc_policy_min_disp_count;
-#if defined(CONFIG_DRM_AMD_DC_DCN)
 	bool disable_z10;
+#if defined(CONFIG_DRM_AMD_DC_DCN)
 	bool enable_z9_disable_interface;
 	bool enable_sw_cntl_psr;
 	union dpia_debug_options dpia_debug;
 #endif
 	bool apply_vendor_specific_lttpr_wa;
+	bool extended_blank_optimization;
+	union aux_wake_wa_options aux_wake_wa;
 	bool ignore_dpref_ss;
 	uint8_t psr_power_use_phy_fsm;
 };

@@ -1369,6 +1388,8 @@ struct dc_sink_init_data {
 	bool converter_disable_audio;
 };
 
+bool dc_extended_blank_supported(struct dc *dc);
+
 struct dc_sink *dc_sink_create(const struct dc_sink_init_data *init_params);
 
 /* Newer interfaces */
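The aux_wake_wa union added above packs all of the work-around knobs into a
single 32-bit debug option. A quick illustration of the layout (hypothetical
values; assumes the usual little-endian bitfield packing):

	union aux_wake_wa_options wa = { .raw = 0 };

	wa.bits.enable_wa = 1;			/* bit 0: enable the work around */
	wa.bits.use_default_timeout = 0;	/* bit 1: use the explicit timeout */
	wa.bits.timeout_ms = 50;		/* bits 31-16: 50 ms wake timeout */
	/* wa.raw now reads back as 0x00320001 under that layout */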
@@ -131,6 +131,7 @@ union stream_update_flags {
 		uint32_t wb_update:1;
 		uint32_t dsc_changed : 1;
 		uint32_t mst_bw : 1;
+		uint32_t crtc_timing_adjust : 1;
 	} bits;
 
 	uint32_t raw;

@@ -289,6 +290,7 @@ struct dc_stream_update {
 	struct dc_3dlut *lut3d_func;
 
 	struct test_pattern *pending_test_pattern;
+	struct dc_crtc_timing_adjust *crtc_timing_adjust;
 };
 
 bool dc_is_stream_unchanged(
@@ -1497,16 +1497,12 @@ void dcn10_init_hw(struct dc *dc)
 			link->link_status.link_active = true;
 		}
 
-	/* Power gate DSCs */
-	if (!is_optimized_init_done) {
-		for (i = 0; i < res_pool->res_cap->num_dsc; i++)
-			if (hws->funcs.dsc_pg_control != NULL)
-				hws->funcs.dsc_pg_control(hws, res_pool->dscs[i]->inst, false);
-	}
-
 	/* we want to turn off all dp displays before doing detection */
 	dc_link_blank_all_dp_displays(dc);
 
+	if (hws->funcs.enable_power_gating_plane)
+		hws->funcs.enable_power_gating_plane(dc->hwseq, true);
+
 	/* If taking control over from VBIOS, we may want to optimize our first
 	 * mode set, so we need to skip powering down pipes until we know which
 	 * pipes we want to use.

@@ -1559,8 +1555,6 @@ void dcn10_init_hw(struct dc *dc)
 
 		REG_UPDATE(DCFCLK_CNTL, DCFCLK_GATE_DIS, 0);
 	}
-	if (hws->funcs.enable_power_gating_plane)
-		hws->funcs.enable_power_gating_plane(dc->hwseq, true);
 
 	if (dc->clk_mgr->funcs->notify_wm_ranges)
 		dc->clk_mgr->funcs->notify_wm_ranges(dc->clk_mgr);

@@ -2056,7 +2050,7 @@ static int dcn10_align_pixel_clocks(struct dc *dc, int group_size,
 {
 	struct dc_context *dc_ctx = dc->ctx;
 	int i, master = -1, embedded = -1;
-	struct dc_crtc_timing hw_crtc_timing[MAX_PIPES] = {0};
+	struct dc_crtc_timing *hw_crtc_timing;
 	uint64_t phase[MAX_PIPES];
 	uint64_t modulo[MAX_PIPES];
 	unsigned int pclk;

@@ -2067,6 +2061,10 @@ static int dcn10_align_pixel_clocks(struct dc *dc, int group_size,
 	uint32_t dp_ref_clk_100hz =
 		dc->res_pool->dp_clock_source->ctx->dc->clk_mgr->dprefclk_khz*10;
 
+	hw_crtc_timing = kcalloc(MAX_PIPES, sizeof(*hw_crtc_timing), GFP_KERNEL);
+	if (!hw_crtc_timing)
+		return master;
+
 	if (dc->config.vblank_alignment_dto_params &&
 	    dc->res_pool->dp_clock_source->funcs->override_dp_pix_clk) {
 		embedded_h_total =

@@ -2130,6 +2128,8 @@ static int dcn10_align_pixel_clocks(struct dc *dc, int group_size,
 		}
 
 	}
+
+	kfree(hw_crtc_timing);
 	return master;
 }
 
@@ -1857,6 +1857,7 @@ void dcn20_optimize_bandwidth(
 		struct dc_state *context)
 {
 	struct hubbub *hubbub = dc->res_pool->hubbub;
+	int i;

 	/* program dchubbub watermarks */
 	hubbub->funcs->program_watermarks(hubbub,

@@ -1873,6 +1874,17 @@ void dcn20_optimize_bandwidth(
 			dc->clk_mgr,
 			context,
 			true);
+	if (dc_extended_blank_supported(dc) && context->bw_ctx.bw.dcn.clk.zstate_support == DCN_ZSTATE_SUPPORT_ALLOW) {
+		for (i = 0; i < dc->res_pool->pipe_count; ++i) {
+			struct pipe_ctx *pipe_ctx = &context->res_ctx.pipe_ctx[i];
+
+			if (pipe_ctx->stream && pipe_ctx->plane_res.hubp->funcs->program_extended_blank
+				&& pipe_ctx->stream->adjust.v_total_min == pipe_ctx->stream->adjust.v_total_max
+				&& pipe_ctx->stream->adjust.v_total_max > pipe_ctx->stream->timing.v_total)
+					pipe_ctx->plane_res.hubp->funcs->program_extended_blank(pipe_ctx->plane_res.hubp,
+						pipe_ctx->dlg_regs.optimized_min_dst_y_next_start);
+		}
+	}
 	/* increase compbuf size */
 	if (hubbub->funcs->program_compbuf_size)
 		hubbub->funcs->program_compbuf_size(hubbub, context->bw_ctx.bw.dcn.compbuf_size_kb, true);

@@ -1976,7 +1976,6 @@ int dcn20_validate_apply_pipe_split_flags(
 			/*If need split for odm but 4 way split already*/
 			if (split[i] == 2 && ((pipe->prev_odm_pipe && !pipe->prev_odm_pipe->prev_odm_pipe)
 					|| !pipe->next_odm_pipe)) {
-				ASSERT(0); /* NOT expected yet */
 				merge[i] = true; /* 4 -> 2 ODM */
 			} else if (split[i] == 0 && pipe->prev_odm_pipe) {
 				ASSERT(0); /* NOT expected yet */
@@ -644,7 +644,7 @@ static const struct dc_debug_options debug_defaults_drv = {
 		.clock_trace = true,
 		.disable_pplib_clock_request = true,
 		.min_disp_clk_khz = 100000,
-		.pipe_split_policy = MPC_SPLIT_DYNAMIC,
+		.pipe_split_policy = MPC_SPLIT_AVOID_MULT_DISP,
 		.force_single_disp_pipe_split = false,
 		.disable_dcc = DCC_ENABLE,
 		.vsr_support = true,
@@ -547,6 +547,9 @@ void dcn30_init_hw(struct dc *dc)
 	/* we want to turn off all dp displays before doing detection */
 	dc_link_blank_all_dp_displays(dc);

+	if (hws->funcs.enable_power_gating_plane)
+		hws->funcs.enable_power_gating_plane(dc->hwseq, true);
+
 	/* If taking control over from VBIOS, we may want to optimize our first
 	 * mode set, so we need to skip powering down pipes until we know which
 	 * pipes we want to use.

@@ -624,8 +627,6 @@ void dcn30_init_hw(struct dc *dc)

 		REG_UPDATE(DCFCLK_CNTL, DCFCLK_GATE_DIS, 0);
 	}
-	if (hws->funcs.enable_power_gating_plane)
-		hws->funcs.enable_power_gating_plane(dc->hwseq, true);

 	if (!dcb->funcs->is_accelerated_mode(dcb) && dc->res_pool->hubbub->funcs->init_watermarks)
 		dc->res_pool->hubbub->funcs->init_watermarks(dc->res_pool->hubbub);
@@ -1042,5 +1042,7 @@ void hubbub31_construct(struct dcn20_hubbub *hubbub31,
 	hubbub31->detile_buf_size = det_size_kb * 1024;
 	hubbub31->pixel_chunk_size = pixel_chunk_size_kb * 1024;
 	hubbub31->crb_size_segs = config_return_buffer_size_kb / DCN31_CRB_SEGMENT_SIZE_KB;
+
+	hubbub31->debug_test_index_pstate = 0x6;
 }

@@ -54,6 +54,13 @@ void hubp31_soft_reset(struct hubp *hubp, bool reset)
 	REG_UPDATE(DCHUBP_CNTL, HUBP_SOFT_RESET, reset);
 }

+void hubp31_program_extended_blank(struct hubp *hubp, unsigned int min_dst_y_next_start_optimized)
+{
+	struct dcn20_hubp *hubp2 = TO_DCN20_HUBP(hubp);
+
+	REG_SET(BLANK_OFFSET_1, 0, MIN_DST_Y_NEXT_START, min_dst_y_next_start_optimized);
+}
+
 static struct hubp_funcs dcn31_hubp_funcs = {
 	.hubp_enable_tripleBuffer = hubp2_enable_triplebuffer,
 	.hubp_is_triplebuffer_enabled = hubp2_is_triplebuffer_enabled,

@@ -80,6 +87,7 @@ static struct hubp_funcs dcn31_hubp_funcs = {
 	.set_unbounded_requesting = hubp31_set_unbounded_requesting,
 	.hubp_soft_reset = hubp31_soft_reset,
 	.hubp_in_blank = hubp1_in_blank,
+	.program_extended_blank = hubp31_program_extended_blank,
 };

 bool hubp31_construct(
@@ -199,6 +199,9 @@ void dcn31_init_hw(struct dc *dc)
 	/* we want to turn off all dp displays before doing detection */
 	dc_link_blank_all_dp_displays(dc);

+	if (hws->funcs.enable_power_gating_plane)
+		hws->funcs.enable_power_gating_plane(dc->hwseq, true);
+
 	/* If taking control over from VBIOS, we may want to optimize our first
 	 * mode set, so we need to skip powering down pipes until we know which
 	 * pipes we want to use.

@@ -248,8 +251,6 @@ void dcn31_init_hw(struct dc *dc)

 		REG_UPDATE(DCFCLK_CNTL, DCFCLK_GATE_DIS, 0);
 	}
-	if (hws->funcs.enable_power_gating_plane)
-		hws->funcs.enable_power_gating_plane(dc->hwseq, true);

 	if (!dcb->funcs->is_accelerated_mode(dcb) && dc->res_pool->hubbub->funcs->init_watermarks)
 		dc->res_pool->hubbub->funcs->init_watermarks(dc->res_pool->hubbub);

@@ -338,20 +339,20 @@ void dcn31_enable_power_gating_plane(
 	bool enable)
 {
 	bool force_on = true; /* disable power gating */
+	uint32_t org_ip_request_cntl = 0;

 	if (enable && !hws->ctx->dc->debug.disable_hubp_power_gate)
 		force_on = false;

+	REG_GET(DC_IP_REQUEST_CNTL, IP_REQUEST_EN, &org_ip_request_cntl);
+	if (org_ip_request_cntl == 0)
+		REG_SET(DC_IP_REQUEST_CNTL, 0, IP_REQUEST_EN, 1);
 	/* DCHUBP0/1/2/3/4/5 */
 	REG_UPDATE(DOMAIN0_PG_CONFIG, DOMAIN_POWER_FORCEON, force_on);
-	REG_WAIT(DOMAIN0_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, force_on, 1, 1000);
 	REG_UPDATE(DOMAIN2_PG_CONFIG, DOMAIN_POWER_FORCEON, force_on);
-	REG_WAIT(DOMAIN2_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, force_on, 1, 1000);
 	/* DPP0/1/2/3/4/5 */
 	REG_UPDATE(DOMAIN1_PG_CONFIG, DOMAIN_POWER_FORCEON, force_on);
-	REG_WAIT(DOMAIN1_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, force_on, 1, 1000);
 	REG_UPDATE(DOMAIN3_PG_CONFIG, DOMAIN_POWER_FORCEON, force_on);
-	REG_WAIT(DOMAIN3_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, force_on, 1, 1000);

 	force_on = true; /* disable power gating */
 	if (enable && !hws->ctx->dc->debug.disable_dsc_power_gate)

@@ -359,11 +360,11 @@ void dcn31_enable_power_gating_plane(

 	/* DCS0/1/2/3/4/5 */
 	REG_UPDATE(DOMAIN16_PG_CONFIG, DOMAIN_POWER_FORCEON, force_on);
-	REG_WAIT(DOMAIN16_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, force_on, 1, 1000);
 	REG_UPDATE(DOMAIN17_PG_CONFIG, DOMAIN_POWER_FORCEON, force_on);
-	REG_WAIT(DOMAIN17_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, force_on, 1, 1000);
 	REG_UPDATE(DOMAIN18_PG_CONFIG, DOMAIN_POWER_FORCEON, force_on);
-	REG_WAIT(DOMAIN18_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, force_on, 1, 1000);
+
+	if (org_ip_request_cntl == 0)
+		REG_SET(DC_IP_REQUEST_CNTL, 0, IP_REQUEST_EN, 0);
 }

 void dcn31_update_info_frame(struct pipe_ctx *pipe_ctx)
@@ -124,7 +124,6 @@ static bool optc31_enable_crtc(struct timing_generator *optc)
 static bool optc31_disable_crtc(struct timing_generator *optc)
 {
 	struct optc *optc1 = DCN10TG_FROM_TG(optc);
-
 	/* disable otg request until end of the first line
 	 * in the vertical blank region
 	 */

@@ -138,6 +137,7 @@ static bool optc31_disable_crtc(struct timing_generator *optc)
 	REG_WAIT(OTG_CLOCK_CONTROL,
 			OTG_BUSY, 0,
 			1, 100000);
+	optc1_clear_optc_underflow(optc);

 	return true;
 }

@@ -158,6 +158,9 @@ static bool optc31_immediate_disable_crtc(struct timing_generator *optc)
 			OTG_BUSY, 0,
 			1, 100000);

+	/* clear the false state */
+	optc1_clear_optc_underflow(optc);
+
 	return true;
 }
@@ -2032,7 +2032,9 @@ bool dcn31_validate_bandwidth(struct dc *dc,

 	BW_VAL_TRACE_COUNT();

+	DC_FP_START();
 	out = dcn30_internal_validate_bw(dc, context, pipes, &pipe_cnt, &vlevel, fast_validate);
+	DC_FP_END();

 	// Disable fast_validate to set min dcfclk in calculate_wm_and_dlg
 	if (pipe_cnt == 0)

@@ -2232,6 +2234,7 @@ static bool dcn31_resource_construct(
 	dc->caps.extended_aux_timeout_support = true;
 	dc->caps.dmcub_support = true;
 	dc->caps.is_apu = true;
+	dc->caps.zstate_support = true;

 	/* Color pipeline capabilities */
 	dc->caps.color.dpp.dcn_arch = 1;
@@ -722,8 +722,10 @@ static enum dcn_zstate_support_state decide_zstate_support(struct dc *dc, struc
 {
 	int plane_count;
 	int i;
+	unsigned int optimized_min_dst_y_next_start_us;

 	plane_count = 0;
+	optimized_min_dst_y_next_start_us = 0;
 	for (i = 0; i < dc->res_pool->pipe_count; i++) {
 		if (context->res_ctx.pipe_ctx[i].plane_state)
 			plane_count++;

@@ -744,11 +746,22 @@ static enum dcn_zstate_support_state decide_zstate_support(struct dc *dc, struc
 		struct dc_link *link = context->streams[0]->sink->link;
 		struct dc_stream_status *stream_status = &context->stream_status[0];

+		if (dc_extended_blank_supported(dc)) {
+			for (i = 0; i < dc->res_pool->pipe_count; i++) {
+				if (context->res_ctx.pipe_ctx[i].stream == context->streams[0]
+					&& context->res_ctx.pipe_ctx[i].stream->adjust.v_total_min == context->res_ctx.pipe_ctx[i].stream->adjust.v_total_max
+					&& context->res_ctx.pipe_ctx[i].stream->adjust.v_total_min > context->res_ctx.pipe_ctx[i].stream->timing.v_total) {
+						optimized_min_dst_y_next_start_us =
+							context->res_ctx.pipe_ctx[i].dlg_regs.optimized_min_dst_y_next_start_us;
+						break;
+				}
+			}
+		}
 		/* zstate only supported on PWRSEQ0 and when there's <2 planes*/
 		if (link->link_index != 0 || stream_status->plane_count > 1)
 			return DCN_ZSTATE_SUPPORT_DISALLOW;

-		if (context->bw_ctx.dml.vba.StutterPeriod > 5000.0)
+		if (context->bw_ctx.dml.vba.StutterPeriod > 5000.0 || optimized_min_dst_y_next_start_us > 5000)
 			return DCN_ZSTATE_SUPPORT_ALLOW;
 		else if (link->psr_settings.psr_version == DC_PSR_VERSION_1 && !dc->debug.disable_psr)
 			return DCN_ZSTATE_SUPPORT_ALLOW_Z10_ONLY;

@@ -786,8 +799,6 @@ void dcn20_calculate_dlg_params(
 		!= dm_dram_clock_change_unsupported;
 	context->bw_ctx.bw.dcn.clk.dppclk_khz = 0;

-	context->bw_ctx.bw.dcn.clk.zstate_support = decide_zstate_support(dc, context);
-
 	context->bw_ctx.bw.dcn.clk.dtbclk_en = is_dtbclk_required(dc, context);

 	if (context->bw_ctx.bw.dcn.clk.dispclk_khz < dc->debug.min_disp_clk_khz)

@@ -843,6 +854,7 @@ void dcn20_calculate_dlg_params(
 						&pipes[pipe_idx].pipe);
 		pipe_idx++;
 	}
+	context->bw_ctx.bw.dcn.clk.zstate_support = decide_zstate_support(dc, context);
 }

 static void swizzle_to_dml_params(
@@ -1055,6 +1055,7 @@ static void dml_rq_dlg_get_dlg_params(

 	float vba__refcyc_per_req_delivery_pre_l = get_refcyc_per_req_delivery_pre_l_in_us(mode_lib, e2e_pipe_param, num_pipes, pipe_idx) * refclk_freq_in_mhz; // From VBA
 	float vba__refcyc_per_req_delivery_l = get_refcyc_per_req_delivery_l_in_us(mode_lib, e2e_pipe_param, num_pipes, pipe_idx) * refclk_freq_in_mhz; // From VBA
+	int blank_lines;

 	memset(disp_dlg_regs, 0, sizeof(*disp_dlg_regs));
 	memset(disp_ttu_regs, 0, sizeof(*disp_ttu_regs));

@@ -1080,6 +1081,18 @@ static void dml_rq_dlg_get_dlg_params(
 	dlg_vblank_start = interlaced ? (vblank_start / 2) : vblank_start;

 	disp_dlg_regs->min_dst_y_next_start = (unsigned int) (((double) dlg_vblank_start) * dml_pow(2, 2));
+	blank_lines = (dst->vblank_end + dst->vtotal_min - dst->vblank_start - dst->vstartup_start - 1);
+	if (blank_lines < 0)
+		blank_lines = 0;
+	if (blank_lines != 0) {
+		disp_dlg_regs->optimized_min_dst_y_next_start_us =
+			((unsigned int) blank_lines * dst->hactive) / (unsigned int) dst->pixel_rate_mhz;
+		disp_dlg_regs->optimized_min_dst_y_next_start =
+			(unsigned int)(((double) (dlg_vblank_start + blank_lines)) * dml_pow(2, 2));
+	} else {
+		// use unoptimized value
+		disp_dlg_regs->optimized_min_dst_y_next_start = disp_dlg_regs->min_dst_y_next_start;
+	}
 	ASSERT(disp_dlg_regs->min_dst_y_next_start < (unsigned int)dml_pow(2, 18));

 	dml_print("DML_DLG: %s: min_ttu_vblank (us) = %3.2f\n", __func__, min_ttu_vblank);
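The microsecond conversion added above is lines times line-time: the pixel rate in MHz is pixels per microsecond, so dividing the active pixels per line by it (as the hunk does) yields microseconds per line. The register values themselves are stored with two fractional bits, hence the dml_pow(2, 2) scaling. A worked example with made-up timing numbers:

    /* Hypothetical: 300 extra blank lines, 3840 pixels per line,
     * 600 MHz pixel clock (600 pixels per microsecond).
     */
    unsigned int blank_lines = 300;
    unsigned int hactive = 3840;
    unsigned int pixel_rate_mhz = 600;

    unsigned int us = (blank_lines * hactive) / pixel_rate_mhz;  /* 300 * 3840 / 600 = 1920 us */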
@@ -446,6 +446,8 @@ struct _vcs_dpi_display_dlg_regs_st {
 	unsigned int refcyc_h_blank_end;
 	unsigned int dlg_vblank_end;
 	unsigned int min_dst_y_next_start;
+	unsigned int optimized_min_dst_y_next_start;
+	unsigned int optimized_min_dst_y_next_start_us;
 	unsigned int refcyc_per_htotal;
 	unsigned int refcyc_x_after_scaler;
 	unsigned int dst_y_after_scaler;
@@ -864,11 +864,11 @@ static bool setup_dsc_config(
 		min_slices_h = inc_num_slices(dsc_common_caps.slice_caps, min_slices_h);
 	}

+	is_dsc_possible = (min_slices_h <= max_slices_h);
+
 	if (pic_width % min_slices_h != 0)
 		min_slices_h = 0; // DSC TODO: Maybe try increasing the number of slices first?

-	is_dsc_possible = (min_slices_h <= max_slices_h);
-
 	if (min_slices_h == 0 && max_slices_h == 0)
 		is_dsc_possible = false;
@@ -33,6 +33,7 @@
 #define MAX_MTP_SLOT_COUNT 64
 #define DP_REPEATER_CONFIGURATION_AND_STATUS_SIZE 0x50
 #define TRAINING_AUX_RD_INTERVAL 100 //us
+#define LINK_AUX_WAKE_TIMEOUT_MS 1500 // Timeout when trying to wake unresponsive DPRX.

 struct dc_link;
 struct dc_stream_state;
@@ -195,6 +195,9 @@ struct hubp_funcs {

 	void (*hubp_set_flip_int)(struct hubp *hubp);

+	void (*program_extended_blank)(struct hubp *hubp,
+			unsigned int min_dst_y_next_start_optimized);
+
 	void (*hubp_wait_pipe_read_start)(struct hubp *hubp);
 };
@@ -100,7 +100,8 @@ enum vsc_packet_revision {
 //PB7 = MD0
 #define MASK_VTEM_MD0__VRR_EN 0x01
 #define MASK_VTEM_MD0__M_CONST 0x02
-#define MASK_VTEM_MD0__RESERVED2 0x0C
+#define MASK_VTEM_MD0__QMS_EN 0x04
+#define MASK_VTEM_MD0__RESERVED2 0x08
 #define MASK_VTEM_MD0__FVA_FACTOR_M1 0xF0

 //MD1

@@ -109,7 +110,7 @@ enum vsc_packet_revision {
 //MD2
 #define MASK_VTEM_MD2__BASE_REFRESH_RATE_98 0x03
 #define MASK_VTEM_MD2__RB 0x04
-#define MASK_VTEM_MD2__RESERVED3 0xF8
+#define MASK_VTEM_MD2__NEXT_TFR 0xF8

 //MD3
 #define MASK_VTEM_MD3__BASE_REFRESH_RATE_07 0xFF
@@ -173,6 +173,17 @@ bool amdgpu_dpm_is_baco_supported(struct amdgpu_device *adev)

 	if (!pp_funcs || !pp_funcs->get_asic_baco_capability)
 		return false;
+	/* Don't use baco for reset in S3.
+	 * This is a workaround for some platforms
+	 * where entering BACO during suspend
+	 * seems to cause reboots or hangs.
+	 * This might be related to the fact that BACO controls
+	 * power to the whole GPU including devices like audio and USB.
+	 * Powering down/up everything may adversely affect these other
+	 * devices. Needs more investigation.
+	 */
+	if (adev->in_s3)
+		return false;

 	mutex_lock(&adev->pm.mutex);

@@ -500,6 +511,9 @@ int amdgpu_dpm_send_hbm_bad_pages_num(struct amdgpu_device *adev, uint32_t size)
 	struct smu_context *smu = adev->powerplay.pp_handle;
 	int ret = 0;

+	if (!is_support_sw_smu(adev))
+		return -EOPNOTSUPP;
+
 	mutex_lock(&adev->pm.mutex);
 	ret = smu_send_hbm_bad_pages_num(smu, size);
 	mutex_unlock(&adev->pm.mutex);

@@ -512,6 +526,9 @@ int amdgpu_dpm_send_hbm_bad_channel_flag(struct amdgpu_device *adev, uint32_t si
 	struct smu_context *smu = adev->powerplay.pp_handle;
 	int ret = 0;

+	if (!is_support_sw_smu(adev))
+		return -EOPNOTSUPP;
+
 	mutex_lock(&adev->pm.mutex);
 	ret = smu_send_hbm_bad_channel_flag(smu, size);
 	mutex_unlock(&adev->pm.mutex);
@@ -773,13 +773,13 @@ static int smu10_dpm_force_dpm_level(struct pp_hwmgr *hwmgr,
 		smum_send_msg_to_smc_with_parameter(hwmgr,
 						PPSMC_MSG_SetHardMinFclkByFreq,
 						hwmgr->display_config->num_display > 3 ?
-						data->clock_vol_info.vdd_dep_on_fclk->entries[0].clk :
+						(data->clock_vol_info.vdd_dep_on_fclk->entries[0].clk / 100) :
 						min_mclk,
 						NULL);

 		smum_send_msg_to_smc_with_parameter(hwmgr,
 						PPSMC_MSG_SetHardMinSocclkByFreq,
-						data->clock_vol_info.vdd_dep_on_socclk->entries[0].clk,
+						data->clock_vol_info.vdd_dep_on_socclk->entries[0].clk / 100,
 						NULL);
 		smum_send_msg_to_smc_with_parameter(hwmgr,
 						PPSMC_MSG_SetHardMinVcn,

@@ -792,11 +792,11 @@ static int smu10_dpm_force_dpm_level(struct pp_hwmgr *hwmgr,
 						NULL);
 		smum_send_msg_to_smc_with_parameter(hwmgr,
 						PPSMC_MSG_SetSoftMaxFclkByFreq,
-						data->clock_vol_info.vdd_dep_on_fclk->entries[index_fclk].clk,
+						data->clock_vol_info.vdd_dep_on_fclk->entries[index_fclk].clk / 100,
 						NULL);
 		smum_send_msg_to_smc_with_parameter(hwmgr,
 						PPSMC_MSG_SetSoftMaxSocclkByFreq,
-						data->clock_vol_info.vdd_dep_on_socclk->entries[index_socclk].clk,
+						data->clock_vol_info.vdd_dep_on_socclk->entries[index_socclk].clk / 100,
 						NULL);
 		smum_send_msg_to_smc_with_parameter(hwmgr,
 						PPSMC_MSG_SetSoftMaxVcn,
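The /100 scaling above is the whole fix ("fix SoC/fclk units in auto mode"): the clock-voltage dependency tables and the SMC message apparently use different units, and the table values are two orders of magnitude larger than what the firmware expects. A quick sanity check of the conversion, with assumed units for illustration:

    /* Assumed: table entry in 10 kHz steps, SMC message in MHz. */
    unsigned int entry_clk = 120000;      /* 120000 * 10 kHz = 1200 MHz */
    unsigned int mhz = entry_clk / 100;   /* 1200, what PPSMC_MSG_Set*ByFreq takes */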
@@ -991,7 +991,7 @@ static int smu_v13_0_5_set_performance_level(struct smu_context *smu,
 		return -EINVAL;
 	}

-	if (sclk_min && sclk_max) {
+	if (sclk_min && sclk_max && smu_v13_0_5_clk_dpm_is_enabled(smu, SMU_SCLK)) {
 		ret = smu_v13_0_5_set_soft_freq_limited_range(smu,
 							      SMU_SCLK,
 							      sclk_min,
@@ -214,6 +214,29 @@ int drm_of_encoder_active_endpoint(struct device_node *node,
 }
 EXPORT_SYMBOL_GPL(drm_of_encoder_active_endpoint);

+static int find_panel_or_bridge(struct device_node *node,
+				struct drm_panel **panel,
+				struct drm_bridge **bridge)
+{
+	if (panel) {
+		*panel = of_drm_find_panel(node);
+		if (!IS_ERR(*panel))
+			return 0;
+
+		/* Clear the panel pointer in case of error. */
+		*panel = NULL;
+	}
+
+	/* No panel found yet, check for a bridge next. */
+	if (bridge) {
+		*bridge = of_drm_find_bridge(node);
+		if (*bridge)
+			return 0;
+	}
+
+	return -EPROBE_DEFER;
+}
+
 /**
  * drm_of_find_panel_or_bridge - return connected panel or bridge device
  * @np: device tree node containing encoder output ports

@@ -236,66 +259,44 @@ int drm_of_find_panel_or_bridge(const struct device_node *np,
 				struct drm_panel **panel,
 				struct drm_bridge **bridge)
 {
-	int ret = -EPROBE_DEFER;
-	struct device_node *remote;
+	struct device_node *node;
+	int ret;

 	if (!panel && !bridge)
 		return -EINVAL;

 	if (panel)
 		*panel = NULL;
+	if (bridge)
+		*bridge = NULL;

-	/**
-	 * Devices can also be child nodes when we also control that device
-	 * through the upstream device (ie, MIPI-DCS for a MIPI-DSI device).
-	 *
-	 * Lookup for a child node of the given parent that isn't either port
-	 * or ports.
-	 */
-	for_each_available_child_of_node(np, remote) {
-		if (of_node_name_eq(remote, "port") ||
-		    of_node_name_eq(remote, "ports"))
+	/* Check for a graph on the device node first. */
+	if (of_graph_is_present(np)) {
+		node = of_graph_get_remote_node(np, port, endpoint);
+		if (node) {
+			ret = find_panel_or_bridge(node, panel, bridge);
+			of_node_put(node);
+
+			if (!ret)
+				return 0;
+		}
+	}
+
+	/* Otherwise check for any child node other than port/ports. */
+	for_each_available_child_of_node(np, node) {
+		if (of_node_name_eq(node, "port") ||
+		    of_node_name_eq(node, "ports"))
 			continue;

-		goto of_find_panel_or_bridge;
+		ret = find_panel_or_bridge(node, panel, bridge);
+		of_node_put(node);
+
+		/* Stop at the first found occurrence. */
+		if (!ret)
+			return 0;
 	}

-	/*
-	 * of_graph_get_remote_node() produces a noisy error message if port
-	 * node isn't found and the absence of the port is a legit case here,
-	 * so at first we silently check whether graph presents in the
-	 * device-tree node.
-	 */
-	if (!of_graph_is_present(np))
-		return -ENODEV;
-
-	remote = of_graph_get_remote_node(np, port, endpoint);
-
-of_find_panel_or_bridge:
-	if (!remote)
-		return -ENODEV;
-
-	if (panel) {
-		*panel = of_drm_find_panel(remote);
-		if (!IS_ERR(*panel))
-			ret = 0;
-		else
-			*panel = NULL;
-	}
-
-	/* No panel found yet, check for a bridge next. */
-	if (bridge) {
-		if (ret) {
-			*bridge = of_drm_find_bridge(remote);
-			if (*bridge)
-				ret = 0;
-		} else {
-			*bridge = NULL;
-		}
-
-	}
-
-	of_node_put(remote);
-	return ret;
+	return -EPROBE_DEFER;
 }
 EXPORT_SYMBOL_GPL(drm_of_find_panel_or_bridge);
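The reordered lookup (OF graph first, then non-port child nodes) is transparent to callers. A hypothetical encoder probe keeps the usual pattern; the port and endpoint numbers here are illustrative:

    struct drm_panel *panel;
    struct drm_bridge *bridge;
    int ret;

    /* Try the OF graph on port 1, endpoint 0 first, then child nodes. */
    ret = drm_of_find_panel_or_bridge(dev->of_node, 1, 0, &panel, &bridge);
    if (ret)
        return ret;  /* usually -EPROBE_DEFER until the panel/bridge driver binds */

On success exactly one of panel or bridge is non-NULL, which is why the rewrite now clears both pointers up front.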
@@ -222,6 +222,7 @@ static int dw_hdmi_imx_probe(struct platform_device *pdev)
 	struct device_node *np = pdev->dev.of_node;
 	const struct of_device_id *match = of_match_node(dw_hdmi_imx_dt_ids, np);
 	struct imx_hdmi *hdmi;
+	int ret;

 	hdmi = devm_kzalloc(&pdev->dev, sizeof(*hdmi), GFP_KERNEL);
 	if (!hdmi)

@@ -243,10 +244,15 @@ static int dw_hdmi_imx_probe(struct platform_device *pdev)
 	hdmi->bridge = of_drm_find_bridge(np);
 	if (!hdmi->bridge) {
 		dev_err(hdmi->dev, "Unable to find bridge\n");
+		dw_hdmi_remove(hdmi->hdmi);
 		return -ENODEV;
 	}

-	return component_add(&pdev->dev, &dw_hdmi_imx_ops);
+	ret = component_add(&pdev->dev, &dw_hdmi_imx_ops);
+	if (ret)
+		dw_hdmi_remove(hdmi->hdmi);
+
+	return ret;
 }

 static int dw_hdmi_imx_remove(struct platform_device *pdev)
@@ -572,6 +572,8 @@ static int imx_ldb_panel_ddc(struct device *dev,
 	edidp = of_get_property(child, "edid", &edid_len);
 	if (edidp) {
 		channel->edid = kmemdup(edidp, edid_len, GFP_KERNEL);
+		if (!channel->edid)
+			return -ENOMEM;
 	} else if (!channel->panel) {
 		/* fallback to display-timings node */
 		ret = of_get_drm_display_mode(child,
@@ -75,8 +75,10 @@ static int imx_pd_connector_get_modes(struct drm_connector *connector)
 		ret = of_get_drm_display_mode(np, &imxpd->mode,
 					      &imxpd->bus_flags,
 					      OF_USE_NATIVE_MODE);
-		if (ret)
+		if (ret) {
+			drm_mode_destroy(connector->dev, mode);
 			return ret;
+		}

 		drm_mode_copy(mode, &imxpd->mode);
 		mode->type |= DRM_MODE_TYPE_DRIVER | DRM_MODE_TYPE_PREFERRED;
@@ -216,6 +216,7 @@ gm20b_pmu = {
 	.intr = gt215_pmu_intr,
 	.recv = gm20b_pmu_recv,
 	.initmsg = gm20b_pmu_initmsg,
+	.reset = gf100_pmu_reset,
 };

 #if IS_ENABLED(CONFIG_ARCH_TEGRA_210_SOC)

@@ -23,7 +23,7 @@
  */
 #include "priv.h"

-static void
+void
 gp102_pmu_reset(struct nvkm_pmu *pmu)
 {
 	struct nvkm_device *device = pmu->subdev.device;

@@ -83,6 +83,7 @@ gp10b_pmu = {
 	.intr = gt215_pmu_intr,
 	.recv = gm20b_pmu_recv,
 	.initmsg = gm20b_pmu_initmsg,
+	.reset = gp102_pmu_reset,
 };

 #if IS_ENABLED(CONFIG_ARCH_TEGRA_210_SOC)

@@ -41,6 +41,7 @@ int gt215_pmu_send(struct nvkm_pmu *, u32[2], u32, u32, u32, u32);

 bool gf100_pmu_enabled(struct nvkm_pmu *);
 void gf100_pmu_reset(struct nvkm_pmu *);
+void gp102_pmu_reset(struct nvkm_pmu *pmu);

 void gk110_pmu_pgob(struct nvkm_pmu *, bool);
@@ -612,8 +612,10 @@ static int ili9341_dbi_probe(struct spi_device *spi, struct gpio_desc *dc,
 	int ret;

 	vcc = devm_regulator_get_optional(dev, "vcc");
-	if (IS_ERR(vcc))
+	if (IS_ERR(vcc)) {
+		dev_err(dev, "get optional vcc failed\n");
 		vcc = NULL;
+	}

 	dbidev = devm_drm_dev_alloc(dev, &ili9341_dbi_driver,
 				    struct mipi_dbi_dev, drm);
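devm_regulator_get_optional() returns an ERR_PTR (typically -ENODEV) when the supply simply is not described, so falling back to NULL is the right behavior for an optional regulator. A slightly stricter variant of the pattern, sketched here only for comparison with this minimal fix, would still propagate probe deferral:

    vcc = devm_regulator_get_optional(dev, "vcc");
    if (IS_ERR(vcc)) {
        if (PTR_ERR(vcc) == -EPROBE_DEFER)
            return -EPROBE_DEFER;  /* assuming an int-returning probe */
        vcc = NULL;                /* supply genuinely absent: run unpowered */
    }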
@@ -447,8 +447,9 @@ static void ipu_di_config_clock(struct ipu_di *di,

 		error = rate / (sig->mode.pixelclock / 1000);

-		dev_dbg(di->ipu->dev, "  IPU clock can give %lu with divider %u, error %d.%u%%\n",
-			rate, div, (signed)(error - 1000) / 10, error % 10);
+		dev_dbg(di->ipu->dev, "  IPU clock can give %lu with divider %u, error %c%d.%d%%\n",
+			rate, div, error < 1000 ? '-' : '+',
+			abs(error - 1000) / 10, abs(error - 1000) % 10);

 		/* Allow a 1% error */
 		if (error < 1010 && error >= 990) {
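Here error is the achieved/requested clock ratio in per-mille (1000 means exact). The old format string broke for rates below the target: integer division truncated the sign away and the fraction came from the wrong expression. A small standalone demonstration:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int error = 998;  /* rate is 0.2% below the requested clock */

        /* old (broken): prints "0.8%" - sign lost, fraction wrong */
        printf("%d.%d%%\n", (error - 1000) / 10, error % 10);

        /* fixed: prints "-0.2%" */
        printf("%c%d.%d%%\n", error < 1000 ? '-' : '+',
               abs(error - 1000) / 10, abs(error - 1000) % 10);
        return 0;
    }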
@@ -1579,7 +1579,14 @@ static void do_remove_conflicting_framebuffers(struct apertures_struct *a,
 		 * If it's not a platform device, at least print a warning. A
 		 * fix would add code to remove the device from the system.
 		 */
-		if (dev_is_platform(device)) {
+		if (!device) {
+			/* TODO: Represent each OF framebuffer as its own
+			 * device in the device hierarchy. For now, offb
+			 * doesn't have such a device, so unregister the
+			 * framebuffer as before without warning.
+			 */
+			do_unregister_framebuffer(registered_fb[i]);
+		} else if (dev_is_platform(device)) {
 			registered_fb[i]->forced_out = true;
 			platform_device_unregister(to_platform_device(device));
 		} else {
@@ -61,6 +61,21 @@ to_dma_fence_array(struct dma_fence *fence)
 	return container_of(fence, struct dma_fence_array, base);
 }

+/**
+ * dma_fence_array_for_each - iterate over all fences in array
+ * @fence: current fence
+ * @index: index into the array
+ * @head: potential dma_fence_array object
+ *
+ * Test if @array is a dma_fence_array object and if yes iterate over all fences
+ * in the array. If not just iterate over the fence in @array itself.
+ *
+ * For a deep dive iterator see dma_fence_unwrap_for_each().
+ */
+#define dma_fence_array_for_each(fence, index, head)			\
+	for (index = 0, fence = dma_fence_array_first(head); fence;	\
+	     ++(index), fence = dma_fence_array_next(head, index))
+
 struct dma_fence_array *dma_fence_array_create(int num_fences,
 					       struct dma_fence **fences,
 					       u64 context, unsigned seqno,
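Typical use of the new iterator, sketched from a hypothetical caller; if head is a plain fence rather than a dma_fence_array, the loop simply runs once over head itself:

    struct dma_fence *fence;
    unsigned int index;
    unsigned int signaled = 0;

    /* 'head' may be a dma_fence_array or a plain dma_fence. */
    dma_fence_array_for_each(fence, index, head)
        if (dma_fence_is_signaled(fence))
            signaled++;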
@@ -68,4 +83,8 @@ struct dma_fence_array *dma_fence_array_create(int num_fences,

 bool dma_fence_match_context(struct dma_fence *fence, u64 context);

+struct dma_fence *dma_fence_array_first(struct dma_fence *head);
+struct dma_fence *dma_fence_array_next(struct dma_fence *head,
+				       unsigned int index);
+
 #endif /* __LINUX_DMA_FENCE_ARRAY_H */

@@ -112,6 +112,8 @@ static inline void dma_fence_chain_free(struct dma_fence_chain *chain)
  *
  * Iterate over all fences in the chain. We keep a reference to the current
  * fence while inside the loop which must be dropped when breaking out.
+ *
+ * For a deep dive iterator see dma_fence_unwrap_for_each().
  */
 #define dma_fence_chain_for_each(iter, head)	\
 	for (iter = dma_fence_get(head); iter; \
@@ -0,0 +1,95 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * fence-chain: chain fences together in a timeline
+ *
+ * Copyright (C) 2022 Advanced Micro Devices, Inc.
+ * Authors:
+ *	Christian König <christian.koenig@amd.com>
+ */
+
+#ifndef __LINUX_DMA_FENCE_UNWRAP_H
+#define __LINUX_DMA_FENCE_UNWRAP_H
+
+#include <linux/dma-fence-chain.h>
+#include <linux/dma-fence-array.h>
+
+/**
+ * struct dma_fence_unwrap - cursor into the container structure
+ *
+ * Should be used with dma_fence_unwrap_for_each() iterator macro.
+ */
+struct dma_fence_unwrap {
+	/**
+	 * @chain: potential dma_fence_chain, but can be other fence as well
+	 */
+	struct dma_fence *chain;
+	/**
+	 * @array: potential dma_fence_array, but can be other fence as well
+	 */
+	struct dma_fence *array;
+	/**
+	 * @index: last returned index if @array is really a dma_fence_array
+	 */
+	unsigned int index;
+};
+
+/* Internal helper to start new array iteration, don't use directly */
+static inline struct dma_fence *
+__dma_fence_unwrap_array(struct dma_fence_unwrap * cursor)
+{
+	cursor->array = dma_fence_chain_contained(cursor->chain);
+	cursor->index = 0;
+	return dma_fence_array_first(cursor->array);
+}
+
+/**
+ * dma_fence_unwrap_first - return the first fence from fence containers
+ * @head: the entrypoint into the containers
+ * @cursor: current position inside the containers
+ *
+ * Unwraps potential dma_fence_chain/dma_fence_array containers and return the
+ * first fence.
+ */
+static inline struct dma_fence *
+dma_fence_unwrap_first(struct dma_fence *head, struct dma_fence_unwrap *cursor)
+{
+	cursor->chain = dma_fence_get(head);
+	return __dma_fence_unwrap_array(cursor);
+}
+
+/**
+ * dma_fence_unwrap_next - return the next fence from a fence containers
+ * @cursor: current position inside the containers
+ *
+ * Continue unwrapping the dma_fence_chain/dma_fence_array containers and return
+ * the next fence from them.
+ */
+static inline struct dma_fence *
+dma_fence_unwrap_next(struct dma_fence_unwrap *cursor)
+{
+	struct dma_fence *tmp;
+
+	++cursor->index;
+	tmp = dma_fence_array_next(cursor->array, cursor->index);
+	if (tmp)
+		return tmp;
+
+	cursor->chain = dma_fence_chain_walk(cursor->chain);
+	return __dma_fence_unwrap_array(cursor);
+}
+
+/**
+ * dma_fence_unwrap_for_each - iterate over all fences in containers
+ * @fence: current fence
+ * @cursor: current position inside the containers
+ * @head: starting point for the iterator
+ *
+ * Unwrap dma_fence_chain and dma_fence_array containers and deep dive into all
+ * potential fences in them. If @head is just a normal fence only that one is
+ * returned.
+ */
+#define dma_fence_unwrap_for_each(fence, cursor, head)			\
+	for (fence = dma_fence_unwrap_first(head, cursor); fence;	\
+	     fence = dma_fence_unwrap_next(cursor))
+
+#endif
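A quick illustration of the deep-dive iterator from a hypothetical driver (sketch only; the macro manages the chain references itself as long as the loop runs to completion):

    #include <linux/dma-fence-unwrap.h>

    /* Count how many leaf fences inside 'fence' are still unsignaled,
     * whether 'fence' is a plain fence, a chain, or an array of chains.
     */
    static unsigned int count_pending(struct dma_fence *fence)
    {
        struct dma_fence_unwrap cursor;
        struct dma_fence *f;
        unsigned int pending = 0;

        dma_fence_unwrap_for_each(f, &cursor, fence)
            if (!dma_fence_is_signaled(f))
                pending++;

        return pending;
    }

Unlike dma_fence_array_for_each() and dma_fence_chain_for_each(), this iterator unwraps both container types recursively, which is what the new fence merge code relies on to handle nested and empty containers.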