drm/ttm: Clarify that TTM_PL_SYSTEM is under TTM's control

TTM takes full control over TTM_PL_SYSTEM placed buffers. This makes
driver-internal usage of TTM_PL_SYSTEM error prone, because it requires
the drivers to manually handle all interactions between the driver and
TTM, which can swap out those buffers whenever it thinks it's the right
thing to do.

CPU buffers which need to be fenced and shared with accelerators should
be placed in driver-specific placements that can explicitly handle
CPU/accelerator buffer fencing.
Currently, apart from things silently failing, nothing enforces that
requirement, which means it's easy for drivers and new developers to
get this wrong. To avoid the confusion, document this requirement and
clarify the solution.
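
For illustration only (not part of this patch), such a driver-private
placement could be built on top of TTM_PL_PRIV; the DRV_PL_FENCED_SYS
name below is made up for this sketch:

  #include <drm/ttm/ttm_placement.h>

  /* Hypothetical driver-private memory type for fenced system BOs. */
  #define DRV_PL_FENCED_SYS (TTM_PL_PRIV + 0)

  /* A single placement entry pointing at the private domain. */
  static const struct ttm_place fenced_sys_place = {
          .fpfn = 0,      /* no address range restriction */
          .lpfn = 0,
          .mem_type = DRV_PL_FENCED_SYS,
          .flags = 0,
  };

  static const struct ttm_placement fenced_sys_placement = {
          .num_placement = 1,
          .placement = &fenced_sys_place,
          .num_busy_placement = 1,
          .busy_placement = &fenced_sys_place,
  };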

This came up during a discussion on dri-devel:
https://lore.kernel.org/dri-devel/232f45e9-8748-1243-09bf-56763e6668b3@amd.com

Signed-off-by: Zack Rusin <zackr@vmware.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211110145034.487512-1-zackr@vmware.com

--- a/include/drm/ttm/ttm_placement.h
+++ b/include/drm/ttm/ttm_placement.h
@@ -35,6 +35,17 @@
 /*
  * Memory regions for data placement.
+ *
+ * Buffers placed in TTM_PL_SYSTEM are considered under TTM's control and can
+ * be swapped out whenever TTM thinks it is a good idea.
+ * In cases where drivers would like to use TTM_PL_SYSTEM as a valid
+ * placement they need to be able to handle the issues that arise due to the
+ * above manually.
+ *
+ * For BOs which reside in system memory but for which the accelerator
+ * requires direct access (i.e. their usage needs to be synchronized
+ * between the CPU and accelerator via fences) a new, driver-private
+ * placement that can handle such scenarios is a good idea.
  */
 #define TTM_PL_SYSTEM 0
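
For completeness, a hedged sketch (also not part of this patch) of how
a driver might back such a private type with a resource manager at
device init time; the function name and size parameter are illustrative:

  #include <drm/ttm/ttm_device.h>
  #include <drm/ttm/ttm_range_manager.h>

  /* Assumes DRV_PL_FENCED_SYS == TTM_PL_PRIV + 0 as sketched above. */
  static int drv_init_fenced_sys_man(struct ttm_device *bdev,
                                     unsigned long num_pages)
  {
          /*
           * Back the private domain with the generic range manager.
           * use_tt = true: resources here are backed by a struct ttm_tt,
           * i.e. system pages. Unlike TTM_PL_SYSTEM, this domain is not
           * under TTM's sole control, so the driver can handle
           * CPU/accelerator fencing for its buffers explicitly.
           */
          return ttm_range_man_init(bdev, DRV_PL_FENCED_SYS, true,
                                    num_pages);
  }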