drm/i915: expand on the kernel-doc for cache_dirty
Add some details around non-LLC platforms and clflushing when dealing
with the flush-on-acquire, which is potentially security sensitive.

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Daniel Vetter <daniel@ffwll.ch>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211018174508.2137279-7-matthew.auld@intel.com
commit df94fd05e6 (parent d70af57944)
@@ -1922,6 +1922,17 @@ static int eb_move_to_gpu(struct i915_execbuffer *eb)
 		 * !(obj->cache_coherent & I915_BO_CACHE_COHERENT_FOR_READ)
 		 * but gcc's optimiser doesn't handle that as well and emits
 		 * two jumps instead of one. Maybe one day...
+		 *
+		 * FIXME: There is also sync flushing in set_pages(), which
+		 * serves a different purpose(some of the time at least).
+		 *
+		 * We should consider:
+		 *
+		 * 1. Rip out the async flush code.
+		 *
+		 * 2. Or make the sync flushing use the async clflush path
+		 * using mandatory fences underneath. Currently the below
+		 * async flush happens after we bind the object.
 		 */
 		if (unlikely(obj->cache_dirty & ~obj->cache_coherent)) {
 			if (i915_gem_clflush_object(obj, 0))
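As a reading aid (not part of the patch): the comment above notes that the single test obj->cache_dirty & ~obj->cache_coherent is equivalent to the more explicit !(obj->cache_coherent & I915_BO_CACHE_COHERENT_FOR_READ) check, just friendlier to gcc's optimiser. The standalone sketch below brute-forces that equivalence; it assumes the FOR_READ flag sits in bit 0 of @cache_coherent and that @cache_dirty is a single bit, and the two macros are local stand-ins rather than the driver's definitions.

/* Standalone userspace sketch; compile with e.g. "cc -o check check.c". */
#include <assert.h>
#include <stdio.h>

#define COHERENT_FOR_READ  (1u << 0)	/* stand-in for I915_BO_CACHE_COHERENT_FOR_READ */
#define COHERENT_FOR_WRITE (1u << 1)	/* stand-in for I915_BO_CACHE_COHERENT_FOR_WRITE */

int main(void)
{
	unsigned int dirty, coherent;

	for (dirty = 0; dirty <= 1; dirty++) {
		for (coherent = 0;
		     coherent <= (COHERENT_FOR_READ | COHERENT_FOR_WRITE);
		     coherent++) {
			/* the compact form used in eb_move_to_gpu(): one test, one branch */
			int flush_compact = (dirty & ~coherent) != 0;
			/* the spelled-out form the comment mentions */
			int flush_readable = dirty && !(coherent & COHERENT_FOR_READ);

			assert(flush_compact == flush_readable);
			printf("dirty=%u coherent=%#x -> flush=%d\n",
			       dirty, coherent, flush_compact);
		}
	}
	return 0;
}

Both forms flush exactly when a dirty object is not coherent for GPU reads, which is the potentially security-sensitive case the commit message refers to.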
@@ -427,6 +427,33 @@ struct drm_i915_gem_object {
	 * can freely bypass the CPU cache when touching the pages with the GPU,
	 * where the kernel is completely unaware. On such platform we need
	 * apply the sledgehammer-on-acquire regardless of the @cache_coherent.
+	 *
+	 * Special care is taken on non-LLC platforms, to prevent potential
+	 * information leak. The driver currently ensures:
+	 *
+	 * 1. All userspace objects, by default, have @cache_level set as
+	 * I915_CACHE_NONE. The only exception is userptr objects, where we
+	 * instead force I915_CACHE_LLC, but we also don't allow userspace to
+	 * ever change the @cache_level for such objects. Another special case
+	 * is dma-buf, which doesn't rely on @cache_dirty, but there we
+	 * always do a forced flush when acquiring the pages, if there is a
+	 * chance that the pages can be read directly from main memory with
+	 * the GPU.
+	 *
+	 * 2. All I915_CACHE_NONE objects have @cache_dirty initially true.
+	 *
+	 * 3. All swapped-out objects(i.e shmem) have @cache_dirty set to
+	 * true.
+	 *
+	 * 4. The @cache_dirty is never freely reset before the initial
+	 * flush, even if userspace adjusts the @cache_level through the
+	 * i915_gem_set_caching_ioctl.
+	 *
+	 * 5. All @cache_dirty objects(including swapped-in) are initially
+	 * flushed with a synchronous call to drm_clflush_sg in
+	 * __i915_gem_object_set_pages. The @cache_dirty can be freely reset
+	 * at this point. All further asynchronous clfushes are never security
+	 * critical, i.e userspace is free to race against itself.
	 */
	unsigned int cache_dirty:1;
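Also as a reading aid rather than part of the patch, below is a minimal userspace model of rules 2-5 above. Every toy_* name is hypothetical; drm_clflush_sg() and __i915_gem_object_set_pages() are the real symbols the documentation names, and the stand-ins here only mimic their role in the protocol.

#include <stdbool.h>
#include <stdio.h>

enum toy_cache_level { TOY_CACHE_NONE, TOY_CACHE_LLC };

struct toy_object {
	enum toy_cache_level cache_level;
	bool cache_dirty;
	bool flushed;
};

/* stand-in for the synchronous drm_clflush_sg() call in rule 5 */
static void toy_clflush(struct toy_object *obj)
{
	obj->flushed = true;
}

/* rules 2 and 3: CACHE_NONE objects and swapped-in shmem pages start dirty */
static void toy_object_init(struct toy_object *obj, enum toy_cache_level level,
			    bool swapped_in)
{
	obj->cache_level = level;
	obj->cache_dirty = (level == TOY_CACHE_NONE) || swapped_in;
	obj->flushed = false;
}

/* rule 4: changing the cache level must not clear a pending initial flush */
static void toy_object_set_caching(struct toy_object *obj,
				   enum toy_cache_level level)
{
	obj->cache_level = level;
	/* obj->cache_dirty deliberately left untouched */
}

/* rule 5: stand-in for the flush-on-acquire in __i915_gem_object_set_pages */
static void toy_object_set_pages(struct toy_object *obj)
{
	if (obj->cache_dirty) {
		toy_clflush(obj);
		obj->cache_dirty = false;	/* only safe to reset here */
	}
}

int main(void)
{
	struct toy_object obj;

	toy_object_init(&obj, TOY_CACHE_NONE, false);
	toy_object_set_caching(&obj, TOY_CACHE_LLC);	/* rule 4: still dirty */
	toy_object_set_pages(&obj);			/* rule 5: flushed here */
	printf("flushed=%d dirty=%d\n", obj.flushed, obj.cache_dirty);
	return 0;
}

After the initial synchronous flush the pages are known clean in main memory, which is why the later asynchronous clflushes are no longer security critical.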