Merge tag 'drm-intel-gt-next-2023-01-18' of git://anongit.freedesktop.org/drm/drm-intel into drm-next

Driver Changes:

Fixes/improvements/new stuff:

- Fix workarounds on Gen2-3 (Tvrtko Ursulin)
- Fix HuC delayed load memory leaks (Daniele Ceraolo Spurio)
- Fix a BUG caused by impedance mismatch in dma_fence_wait_timeout and GuC (Janusz Krzysztofik)
- Add DG2 workarounds Wa_18018764978 and Wa_18019271663 (Matt Atwood)
- Apply recommended L3 hashing mask tuning parameters (Gen12+) (Matt Roper)
- Improve suspend / resume times with VT-d scanout workaround active (Andi Shyti, Chris Wilson)
- Silence misleading "mailbox access failed" warning in snb_pcode_read (Ashutosh Dixit)
- Fix null pointer dereference on HSW perf/OA (Umesh Nerlige Ramappa)
- Avoid trampling the ring during buffer migration (and selftests) (Chris Wilson, Matthew Auld)
- Fix DG2 visual corruption on small BAR systems by not forgetting to copy CCS aux state (Matthew Auld)
- More fixing of DG2 visual corruption by not forgetting to copy CCS aux state of backup objects (Matthew Auld)
- Fix TLB invalidation for Gen12.50 video and compute engines (Andrzej Hajda)
- Limit Wa_22012654132 to just specific steppings (Matt Roper)
- Fix userspace crashes due to eviction not working under lock contention after the object locking conversion (Matthew Auld)
- Avoid a double free if the user deploys a corrupt GuC firmware (John Harrison)
- Fix 32-bit builds by using "%zu" to format size_t (Nirmoy Das) (see the example after this list)
- Fix a possible BUG in TTM async unbind due to not reserving enough fence slots (Nirmoy Das)
- Fix potential use after free by not exposing the GEM context id to userspace too early (Rob Clark)
- Show clamped PL1 limit to the user (hwmon) (Ashutosh Dixit)
- Workaround unreliable reset on Jasperlake (Chris Wilson)
- Cover rest of SVG unit MCR registers (Gustavo Sousa)
- Avoid PXP log spam on platforms which do not support the feature (Alan Previn)
- Re-disable RC6p on Sandy Bridge to avoid GPU hangs and visual glitches (Sasa Dragic)
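
For illustration, the portability issue behind the "%zu" fix above: size_t
is unsigned int on 32-bit builds and unsigned long on 64-bit ones, so only
the dedicated "%zu" specifier is correct everywhere. A minimal,
self-contained C sketch (names are illustrative, not from the driver):

    #include <stdio.h>

    int main(void)
    {
        size_t nr_pages = 42;

        /* "%zu" matches size_t on both 32-bit and 64-bit targets;
         * "%lu" would trip -Wformat (and thus -Werror builds) on
         * 32-bit, where size_t is not unsigned long. */
        printf("Failed to DMA remap %zu pages\n", nr_pages);
        return 0;
    }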

Future platform enablement:

- Manage uncore->lock while waiting on MCR register (Matt Roper)
- Enable Idle Messaging for GSC CS (Vinay Belgaumkar)
- Only initialize GSC in tile 0 (José Roberto de Souza)
- Media GT and Render GT share common GGTT (Aravind Iddamsetty)
- Add dedicated MCR lock (Matt Roper)
- Implement recommended caching policy (PVC) (Wayne Boyer)
- Add hardware-level lock for steering (Matt Roper)
- Check full IP version when applying hw steering semaphore (Matt Roper)
- Enable GuC GGTT invalidation from the start (Daniele Ceraolo Spurio)
- MTL GSC firmware support (Daniele Ceraolo Spurio, Jonathan Cavitt)
- MTL OA support (Umesh Nerlige Ramappa)
- MTL initial gt workarounds (Matt Roper)

Driver refactors:

- Hold forcewake and MCR lock over PPAT setup (Matt Roper)
- Acquire fw before loop in intel_uncore_read64_2x32 (Umesh Nerlige Ramappa)
- GuC filename cleanups and use submission API version number (John Harrison)
- Promote pxp subsystem to top-level of i915 (Alan Previn)
- Finish proofing the code against object size overflows (Chris Wilson, Gwan-gyeong Mun)
- Start adding module oriented dmesg output (John Harrison)

Miscellaneous:

- Correct kerneldoc for intel_gt_mcr_wait_for_reg() (Matt Roper)
- Bump up sample period for busy stats selftest (Umesh Nerlige Ramappa)
- Make GuC default_lists const data (Jani Nikula)
- Fix table order verification to check all FW types (John Harrison)
- Remove some limited use register access wrappers (Jani Nikula)
- Remove struct_member macro (Andrzej Hajda)
- Remove hardcoded value with a macro (Nirmoy Das)
- Use helper func to find out map type (Nirmoy Das)
- Fix a static analysis warning (John Harrison)
- Consolidate VMA active tracking helpers (Andrzej Hajda)
- Do not cover all future platforms in TLB invalidation (Tvrtko Ursulin)
- Replace zero-length arrays with flexible-array members (Gustavo A. R. Silva)
- Unwind hugepages to drop wakeref on error (Chris Wilson)
- Remove a couple of superfluous i915_drm.h includes (Jani Nikula)

Merges:

- Merge drm/drm-next into drm-intel-gt-next (Rodrigo Vivi)

danvet: Fix up merge conflict in intel_uc_fw.c; we ended up with 2
copies of try_firmware_load() somehow.

Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
From: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/Y8fW2Ny1B1hZ5ZmF@tursulin-desk
Commit 045e8d102f by Daniel Vetter, 2023-01-24 16:06:38 +01:00
127 changed files with 2667 additions and 958 deletions


@@ -191,9 +191,9 @@ i915-y += \
i915_vma_resource.o
# general-purpose microcontroller (GuC) support
i915-y += gt/uc/intel_uc.o \
gt/uc/intel_uc_debugfs.o \
gt/uc/intel_uc_fw.o \
i915-y += \
gt/uc/intel_gsc_fw.o \
gt/uc/intel_gsc_uc.o \
gt/uc/intel_guc.o \
gt/uc/intel_guc_ads.o \
gt/uc/intel_guc_capture.o \
@@ -208,7 +208,10 @@ i915-y += gt/uc/intel_uc.o \
gt/uc/intel_guc_submission.o \
gt/uc/intel_huc.o \
gt/uc/intel_huc_debugfs.o \
gt/uc/intel_huc_fw.o
gt/uc/intel_huc_fw.o \
gt/uc/intel_uc.o \
gt/uc/intel_uc_debugfs.o \
gt/uc/intel_uc_fw.o
# graphics system controller (GSC) support
i915-y += gt/intel_gsc.o


@@ -91,7 +91,7 @@ intel_pin_fb_obj_dpt(struct drm_framebuffer *fb,
goto err;
}
vma->display_alignment = max_t(u64, vma->display_alignment, alignment);
vma->display_alignment = max(vma->display_alignment, alignment);
i915_gem_object_flush_if_display(obj);


@@ -286,7 +286,7 @@ static int intelfb_create(struct drm_fb_helper *helper,
/* Our framebuffer is the entirety of fbdev's system memory */
info->fix.smem_start =
(unsigned long)(ggtt->gmadr.start + vma->node.start);
(unsigned long)(ggtt->gmadr.start + i915_ggtt_offset(vma));
info->fix.smem_len = vma->size;
}


@@ -1848,7 +1848,7 @@ static bool bo_has_valid_encryption(struct drm_i915_gem_object *obj)
{
struct drm_i915_private *i915 = to_i915(obj->base.dev);
return intel_pxp_key_check(&to_gt(i915)->pxp, obj, false) == 0;
return intel_pxp_key_check(i915->pxp, obj, false) == 0;
}
static bool pxp_is_borked(struct drm_i915_gem_object *obj)


@@ -257,7 +257,7 @@ static int proto_context_set_protected(struct drm_i915_private *i915,
if (!protected) {
pc->uses_protected_content = false;
} else if (!intel_pxp_is_enabled(&to_gt(i915)->pxp)) {
} else if (!intel_pxp_is_enabled(i915->pxp)) {
ret = -ENODEV;
} else if ((pc->user_flags & BIT(UCONTEXT_RECOVERABLE)) ||
!(pc->user_flags & BIT(UCONTEXT_BANNABLE))) {
@@ -271,8 +271,8 @@ static int proto_context_set_protected(struct drm_i915_private *i915,
*/
pc->pxp_wakeref = intel_runtime_pm_get(&i915->runtime_pm);
if (!intel_pxp_is_active(&to_gt(i915)->pxp))
ret = intel_pxp_start(&to_gt(i915)->pxp);
if (!intel_pxp_is_active(i915->pxp))
ret = intel_pxp_start(i915->pxp);
}
return ret;
@@ -1688,6 +1688,10 @@ void i915_gem_init__contexts(struct drm_i915_private *i915)
init_contexts(&i915->gem.contexts);
}
/*
* Note that this implicitly consumes the ctx reference, by placing
* the ctx in the context_xa.
*/
static void gem_context_register(struct i915_gem_context *ctx,
struct drm_i915_file_private *fpriv,
u32 id)
@@ -1703,10 +1707,6 @@ static void gem_context_register(struct i915_gem_context *ctx,
snprintf(ctx->name, sizeof(ctx->name), "%s[%d]",
current->comm, pid_nr(ctx->pid));
/* And finally expose ourselves to userspace via the idr */
old = xa_store(&fpriv->context_xa, id, ctx, GFP_KERNEL);
WARN_ON(old);
spin_lock(&ctx->client->ctx_lock);
list_add_tail_rcu(&ctx->client_link, &ctx->client->ctx_list);
spin_unlock(&ctx->client->ctx_lock);
@@ -1714,6 +1714,10 @@ static void gem_context_register(struct i915_gem_context *ctx,
spin_lock(&i915->gem.contexts.lock);
list_add_tail(&ctx->link, &i915->gem.contexts.list);
spin_unlock(&i915->gem.contexts.lock);
/* And finally expose ourselves to userspace via the idr */
old = xa_store(&fpriv->context_xa, id, ctx, GFP_KERNEL);
WARN_ON(old);
}
int i915_gem_context_open(struct drm_i915_private *i915,
@@ -2199,14 +2203,22 @@ finalize_create_context_locked(struct drm_i915_file_private *file_priv,
if (IS_ERR(ctx))
return ctx;
/*
* One for the xarray and one for the caller. We need to grab
* the reference *prior* to making the ctx visible to userspace
* in gem_context_register(), as at any point after that
* userspace can try to race us with another thread destroying
* the context under our feet.
*/
i915_gem_context_get(ctx);
gem_context_register(ctx, file_priv, id);
old = xa_erase(&file_priv->proto_context_xa, id);
GEM_BUG_ON(old != pc);
proto_context_close(file_priv->dev_priv, pc);
/* One for the xarray and one for the caller */
return i915_gem_context_get(ctx);
return ctx;
}
struct i915_gem_context *
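
The reordering above follows a general rule for reference-counted objects:
take the reference that the lookup structure will own *before* the object
is published, because publication is the moment other threads can find and
free it. A reduced userspace sketch of the ordering (hypothetical names,
not driver code):

    #include <stdatomic.h>
    #include <stdio.h>

    struct ctx { atomic_int ref; };

    static void publish(struct ctx *c)
    {
        /* Reference owned by the registry, taken before the store
         * that makes c visible; once stored, another thread may look
         * the object up and drop its reference at any time. */
        atomic_fetch_add(&c->ref, 1);
        /* ... the xa_store()-equivalent would go here ... */
    }

    int main(void)
    {
        struct ctx c = { .ref = 1 }; /* caller's reference */
        publish(&c);
        printf("refcount after publish: %d\n", atomic_load(&c.ref));
        return 0;
    }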


@@ -384,7 +384,7 @@ static int ext_set_protected(struct i915_user_extension __user *base, void *data
if (ext.flags)
return -EINVAL;
if (!intel_pxp_is_enabled(&to_gt(ext_data->i915)->pxp))
if (!intel_pxp_is_enabled(ext_data->i915->pxp))
return -ENODEV;
ext_data->flags |= I915_BO_PROTECTED;


@@ -17,6 +17,8 @@
#include "i915_gem_object.h"
#include "i915_vma.h"
#define VTD_GUARD (168u * I915_GTT_PAGE_SIZE) /* 168 or tile-row PTE padding */
static bool gpu_write_needs_clflush(struct drm_i915_gem_object *obj)
{
struct drm_i915_private *i915 = to_i915(obj->base.dev);
@@ -424,6 +426,17 @@ i915_gem_object_pin_to_display_plane(struct drm_i915_gem_object *obj,
if (ret)
return ERR_PTR(ret);
/* VT-d may overfetch before/after the vma, so pad with scratch */
if (intel_scanout_needs_vtd_wa(i915)) {
unsigned int guard = VTD_GUARD;
if (i915_gem_object_is_tiled(obj))
guard = max(guard,
i915_gem_object_get_tile_row_size(obj));
flags |= PIN_OFFSET_GUARD | guard;
}
/*
* As the user may map the buffer once pinned in the display plane
* (e.g. libkms for the bootup splash), we have to ensure that we
@@ -444,7 +457,7 @@ i915_gem_object_pin_to_display_plane(struct drm_i915_gem_object *obj,
if (IS_ERR(vma))
return vma;
vma->display_alignment = max_t(u64, vma->display_alignment, alignment);
vma->display_alignment = max(vma->display_alignment, alignment);
i915_vma_mark_scanout(vma);
i915_gem_object_flush_if_display_locked(obj);


@@ -379,22 +379,25 @@ eb_vma_misplaced(const struct drm_i915_gem_exec_object2 *entry,
const struct i915_vma *vma,
unsigned int flags)
{
if (vma->node.size < entry->pad_to_size)
const u64 start = i915_vma_offset(vma);
const u64 size = i915_vma_size(vma);
if (size < entry->pad_to_size)
return true;
if (entry->alignment && !IS_ALIGNED(vma->node.start, entry->alignment))
if (entry->alignment && !IS_ALIGNED(start, entry->alignment))
return true;
if (flags & EXEC_OBJECT_PINNED &&
vma->node.start != entry->offset)
start != entry->offset)
return true;
if (flags & __EXEC_OBJECT_NEEDS_BIAS &&
vma->node.start < BATCH_OFFSET_BIAS)
start < BATCH_OFFSET_BIAS)
return true;
if (!(flags & EXEC_OBJECT_SUPPORTS_48B_ADDRESS) &&
(vma->node.start + vma->node.size + 4095) >> 32)
(start + size + 4095) >> 32)
return true;
if (flags & __EXEC_OBJECT_NEEDS_MAP &&
@@ -440,7 +443,7 @@ eb_pin_vma(struct i915_execbuffer *eb,
int err;
if (vma->node.size)
pin_flags = vma->node.start;
pin_flags = __i915_vma_offset(vma);
else
pin_flags = entry->offset & PIN_OFFSET_MASK;
@@ -663,8 +666,8 @@ static int eb_reserve_vma(struct i915_execbuffer *eb,
if (err)
return err;
if (entry->offset != vma->node.start) {
entry->offset = vma->node.start | UPDATE;
if (entry->offset != i915_vma_offset(vma)) {
entry->offset = i915_vma_offset(vma) | UPDATE;
eb->args->flags |= __EXEC_HAS_RELOC;
}
@@ -906,7 +909,7 @@ static struct i915_vma *eb_lookup_vma(struct i915_execbuffer *eb, u32 handle)
*/
if (i915_gem_context_uses_protected_content(eb->gem_context) &&
i915_gem_object_is_protected(obj)) {
err = intel_pxp_key_check(&vm->gt->pxp, obj, true);
err = intel_pxp_key_check(eb->i915->pxp, obj, true);
if (err) {
i915_gem_object_put(obj);
return ERR_PTR(err);
@@ -1021,8 +1024,8 @@ static int eb_validate_vmas(struct i915_execbuffer *eb)
return err;
if (!err) {
if (entry->offset != vma->node.start) {
entry->offset = vma->node.start | UPDATE;
if (entry->offset != i915_vma_offset(vma)) {
entry->offset = i915_vma_offset(vma) | UPDATE;
eb->args->flags |= __EXEC_HAS_RELOC;
}
} else {
@@ -1103,7 +1106,7 @@ static inline u64
relocation_target(const struct drm_i915_gem_relocation_entry *reloc,
const struct i915_vma *target)
{
return gen8_canonical_addr((int)reloc->delta + target->node.start);
return gen8_canonical_addr((int)reloc->delta + i915_vma_offset(target));
}
static void reloc_cache_init(struct reloc_cache *cache,
@@ -1312,7 +1315,7 @@ static void *reloc_iomap(struct i915_vma *batch,
if (err) /* no inactive aperture space, use cpu reloc */
return NULL;
} else {
cache->node.start = vma->node.start;
cache->node.start = i915_ggtt_offset(vma);
cache->node.mm = (void *)vma;
}
}
@@ -1475,7 +1478,7 @@ eb_relocate_entry(struct i915_execbuffer *eb,
* more work needs to be done.
*/
if (!DBG_FORCE_RELOC &&
gen8_canonical_addr(target->vma->node.start) == reloc->presumed_offset)
gen8_canonical_addr(i915_vma_offset(target->vma)) == reloc->presumed_offset)
return 0;
/* Check that the relocation address is valid... */
@@ -2405,7 +2408,7 @@ static int eb_request_submit(struct i915_execbuffer *eb,
}
err = rq->context->engine->emit_bb_start(rq,
batch->node.start +
i915_vma_offset(batch) +
eb->batch_start_offset,
batch_len,
eb->batch_flags);
@@ -2416,7 +2419,7 @@ static int eb_request_submit(struct i915_execbuffer *eb,
GEM_BUG_ON(intel_context_is_parallel(rq->context));
GEM_BUG_ON(eb->batch_start_offset);
err = rq->context->engine->emit_bb_start(rq,
eb->trampoline->node.start +
i915_vma_offset(eb->trampoline) +
batch_len, 0, 0);
if (err)
return err;
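
Context for the vma->node.start to i915_vma_offset()/i915_vma_size()
conversions above: once guard padding lives inside the drm_mm node,
node.start and node.size no longer describe the range a batch should use.
A compilable sketch of accessors consistent with the guard handling in the
GGTT hunks later in this diff (an assumption, not the verbatim i915
definitions):

    #include <stdint.h>

    struct drm_mm_node { uint64_t start, size; };
    struct i915_vma { struct drm_mm_node node; uint64_t guard; };

    /* The usable range starts after the leading guard pages and
     * loses a guard's worth of space at each end of the node. */
    static inline uint64_t i915_vma_offset(const struct i915_vma *vma)
    {
        return vma->node.start + vma->guard;
    }

    static inline uint64_t i915_vma_size(const struct i915_vma *vma)
    {
        return vma->node.size - 2 * vma->guard;
    }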


@@ -35,11 +35,15 @@ static int i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj)
struct drm_i915_private *i915 = to_i915(obj->base.dev);
struct sg_table *st;
struct scatterlist *sg;
unsigned int npages;
unsigned int npages; /* restricted by sg_alloc_table */
int max_order = MAX_ORDER;
unsigned int max_segment;
gfp_t gfp;
if (overflows_type(obj->base.size >> PAGE_SHIFT, npages))
return -E2BIG;
npages = obj->base.size >> PAGE_SHIFT;
max_segment = i915_sg_segment_size(i915->drm.dev) >> PAGE_SHIFT;
max_order = min(max_order, get_order(max_segment));
@@ -55,7 +59,6 @@ create_st:
if (!st)
return -ENOMEM;
npages = obj->base.size / PAGE_SIZE;
if (sg_alloc_table(st, npages, GFP_KERNEL)) {
kfree(st);
return -ENOMEM;
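
The overflows_type() check above is the pattern this series applies across
the page-allocation paths: object sizes are u64, but sg_alloc_table() takes
an unsigned int entry count, so oversized objects must fail with -E2BIG
before the narrowing assignment. A self-contained approximation (the real
helper lives in <linux/overflow.h>):

    #include <limits.h>
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12

    static int get_pages(uint64_t obj_size)
    {
        unsigned int npages; /* restricted by sg_alloc_table */

        /* stand-in for overflows_type(obj_size >> PAGE_SHIFT, npages) */
        if ((obj_size >> PAGE_SHIFT) > UINT_MAX)
            return -7; /* -E2BIG */

        npages = obj_size >> PAGE_SHIFT;
        printf("sg table for %u pages\n", npages);
        return 0;
    }

    int main(void)
    {
        get_pages(1ull << 20); /* fine: 256 pages */
        get_pages(1ull << 45); /* rejected: 2^33 pages */
        return 0;
    }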


@@ -395,7 +395,7 @@ retry:
/* Finally, remap it using the new GTT offset */
ret = remap_io_mapping(area,
area->vm_start + (vma->gtt_view.partial.offset << PAGE_SHIFT),
(ggtt->gmadr.start + vma->node.start) >> PAGE_SHIFT,
(ggtt->gmadr.start + i915_ggtt_offset(vma)) >> PAGE_SHIFT,
min_t(u64, vma->size, area->vm_end - area->vm_start),
&ggtt->iomap);
if (ret)


@@ -427,10 +427,11 @@ void __i915_gem_object_invalidate_frontbuffer(struct drm_i915_gem_object *obj,
static void
i915_gem_object_read_from_page_kmap(struct drm_i915_gem_object *obj, u64 offset, void *dst, int size)
{
pgoff_t idx = offset >> PAGE_SHIFT;
void *src_map;
void *src_ptr;
src_map = kmap_atomic(i915_gem_object_get_page(obj, offset >> PAGE_SHIFT));
src_map = kmap_atomic(i915_gem_object_get_page(obj, idx));
src_ptr = src_map + offset_in_page(offset);
if (!(obj->cache_coherent & I915_BO_CACHE_COHERENT_FOR_READ))
@@ -443,9 +444,10 @@ i915_gem_object_read_from_page_kmap(struct drm_i915_gem_object *obj, u64 offset,
static void
i915_gem_object_read_from_page_iomap(struct drm_i915_gem_object *obj, u64 offset, void *dst, int size)
{
pgoff_t idx = offset >> PAGE_SHIFT;
dma_addr_t dma = i915_gem_object_get_dma_address(obj, idx);
void __iomem *src_map;
void __iomem *src_ptr;
dma_addr_t dma = i915_gem_object_get_dma_address(obj, offset >> PAGE_SHIFT);
src_map = io_mapping_map_wc(&obj->mm.region->iomap,
dma - obj->mm.region->region.start,
@@ -484,6 +486,7 @@ static bool object_has_mappable_iomem(struct drm_i915_gem_object *obj)
*/
int i915_gem_object_read_from_page(struct drm_i915_gem_object *obj, u64 offset, void *dst, int size)
{
GEM_BUG_ON(overflows_type(offset >> PAGE_SHIFT, pgoff_t));
GEM_BUG_ON(offset >= obj->base.size);
GEM_BUG_ON(offset_in_page(offset) > PAGE_SIZE - size);
GEM_BUG_ON(!i915_gem_object_has_pinned_pages(obj));


@@ -20,26 +20,10 @@
enum intel_region_id;
/*
* XXX: There is a prevalence of the assumption that we fit the
* object's page count inside a 32bit _signed_ variable. Let's document
* this and catch if we ever need to fix it. In the meantime, if you do
* spot such a local variable, please consider fixing!
*
* Aside from our own locals (for which we have no excuse!):
* - sg_table embeds unsigned int for num_pages
* - get_user_pages*() mixed ints with longs
*/
#define GEM_CHECK_SIZE_OVERFLOW(sz) \
GEM_WARN_ON((sz) >> PAGE_SHIFT > INT_MAX)
static inline bool i915_gem_object_size_2big(u64 size)
{
struct drm_i915_gem_object *obj;
if (GEM_CHECK_SIZE_OVERFLOW(size))
return true;
if (overflows_type(size, obj->base.size))
return true;
@@ -363,44 +347,289 @@ i915_gem_object_get_tile_row_size(const struct drm_i915_gem_object *obj)
int i915_gem_object_set_tiling(struct drm_i915_gem_object *obj,
unsigned int tiling, unsigned int stride);
/**
* __i915_gem_object_page_iter_get_sg - helper to find the target scatterlist
* pointer and the target page position using pgoff_t n input argument and
* i915_gem_object_page_iter
* @obj: i915 GEM buffer object
* @iter: i915 GEM buffer object page iterator
* @n: page offset
* @offset: searched physical offset,
* it will be used for returning physical page offset value
*
* Context: Takes and releases the mutex lock of the i915_gem_object_page_iter.
* Takes and releases the RCU lock to search the radix_tree of
* i915_gem_object_page_iter.
*
* Returns:
* The target scatterlist pointer and the target page position.
*
* Recommended to use wrapper macro: i915_gem_object_page_iter_get_sg()
*/
struct scatterlist *
__i915_gem_object_get_sg(struct drm_i915_gem_object *obj,
struct i915_gem_object_page_iter *iter,
unsigned int n,
unsigned int *offset, bool dma);
__i915_gem_object_page_iter_get_sg(struct drm_i915_gem_object *obj,
struct i915_gem_object_page_iter *iter,
pgoff_t n,
unsigned int *offset);
/**
* i915_gem_object_page_iter_get_sg - wrapper macro for
* __i915_gem_object_page_iter_get_sg()
* @obj: i915 GEM buffer object
* @it: i915 GEM buffer object page iterator
* @n: page offset
* @offset: searched physical offset,
* it will be used for returning physical page offset value
*
* Context: Takes and releases the mutex lock of the i915_gem_object_page_iter.
* Takes and releases the RCU lock to search the radix_tree of
* i915_gem_object_page_iter.
*
* Returns:
* The target scatterlist pointer and the target page position.
*
* In order to avoid the truncation of the input parameter, it checks the page
* offset n's type from the input parameter before calling
* __i915_gem_object_page_iter_get_sg().
*/
#define i915_gem_object_page_iter_get_sg(obj, it, n, offset) ({ \
static_assert(castable_to_type(n, pgoff_t)); \
__i915_gem_object_page_iter_get_sg(obj, it, n, offset); \
})
/**
* __i915_gem_object_get_sg - helper to find the target scatterlist
* pointer and the target page position using pgoff_t n input argument and
* drm_i915_gem_object. It uses an internal shmem scatterlist lookup function.
* @obj: i915 GEM buffer object
* @n: page offset
* @offset: searched physical offset,
* it will be used for returning physical page offset value
*
* It uses drm_i915_gem_object's internal shmem scatterlist lookup function as
* i915_gem_object_page_iter and calls __i915_gem_object_page_iter_get_sg().
*
* Returns:
* The target scatterlist pointer and the target page position.
*
* Recommended to use wrapper macro: i915_gem_object_get_sg()
* See also __i915_gem_object_page_iter_get_sg()
*/
static inline struct scatterlist *
i915_gem_object_get_sg(struct drm_i915_gem_object *obj,
unsigned int n,
unsigned int *offset)
__i915_gem_object_get_sg(struct drm_i915_gem_object *obj, pgoff_t n,
unsigned int *offset)
{
return __i915_gem_object_get_sg(obj, &obj->mm.get_page, n, offset, false);
return __i915_gem_object_page_iter_get_sg(obj, &obj->mm.get_page, n, offset);
}
/**
* i915_gem_object_get_sg - wrapper macro for __i915_gem_object_get_sg()
* @obj: i915 GEM buffer object
* @n: page offset
* @offset: searched physical offset,
* it will be used for returning physical page offset value
*
* Returns:
* The target scatterlist pointer and the target page position.
*
* In order to avoid the truncation of the input parameter, it checks the page
* offset n's type from the input parameter before calling
* __i915_gem_object_get_sg().
* See also __i915_gem_object_page_iter_get_sg()
*/
#define i915_gem_object_get_sg(obj, n, offset) ({ \
static_assert(castable_to_type(n, pgoff_t)); \
__i915_gem_object_get_sg(obj, n, offset); \
})
/**
* __i915_gem_object_get_sg_dma - helper to find the target scatterlist
* pointer and the target page position using pgoff_t n input argument and
* drm_i915_gem_object. It uses an internal DMA mapped scatterlist lookup function
* @obj: i915 GEM buffer object
* @n: page offset
* @offset: searched physical offset,
* it will be used for returning physical page offset value
*
* It uses drm_i915_gem_object's internal DMA mapped scatterlist lookup function
* as i915_gem_object_page_iter and calls __i915_gem_object_page_iter_get_sg().
*
* Returns:
* The target scatterlist pointer and the target page position.
*
* Recommended to use wrapper macro: i915_gem_object_get_sg_dma()
* See also __i915_gem_object_page_iter_get_sg()
*/
static inline struct scatterlist *
i915_gem_object_get_sg_dma(struct drm_i915_gem_object *obj,
unsigned int n,
unsigned int *offset)
__i915_gem_object_get_sg_dma(struct drm_i915_gem_object *obj, pgoff_t n,
unsigned int *offset)
{
return __i915_gem_object_get_sg(obj, &obj->mm.get_dma_page, n, offset, true);
return __i915_gem_object_page_iter_get_sg(obj, &obj->mm.get_dma_page, n, offset);
}
/**
* i915_gem_object_get_sg_dma - wrapper macro for __i915_gem_object_get_sg_dma()
* @obj: i915 GEM buffer object
* @n: page offset
* @offset: searched physical offset,
* it will be used for returning physical page offset value
*
* Returns:
* The target scatterlist pointer and the target page position.
*
* In order to avoid the truncation of the input parameter, it checks the page
* offset n's type from the input parameter before calling
* __i915_gem_object_get_sg_dma().
* See also __i915_gem_object_page_iter_get_sg()
*/
#define i915_gem_object_get_sg_dma(obj, n, offset) ({ \
static_assert(castable_to_type(n, pgoff_t)); \
__i915_gem_object_get_sg_dma(obj, n, offset); \
})
/**
* __i915_gem_object_get_page - helper to find the target page with a page offset
* @obj: i915 GEM buffer object
* @n: page offset
*
* It uses drm_i915_gem_object's internal shmem scatterlist lookup function as
* i915_gem_object_page_iter and calls __i915_gem_object_page_iter_get_sg()
* internally.
*
* Returns:
* The target page pointer.
*
* Recommended to use wrapper macro: i915_gem_object_get_page()
* See also __i915_gem_object_page_iter_get_sg()
*/
struct page *
i915_gem_object_get_page(struct drm_i915_gem_object *obj,
unsigned int n);
__i915_gem_object_get_page(struct drm_i915_gem_object *obj, pgoff_t n);
/**
* i915_gem_object_get_page - wrapper macro for __i915_gem_object_get_page
* @obj: i915 GEM buffer object
* @n: page offset
*
* Returns:
* The target page pointer.
*
* In order to avoid the truncation of the input parameter, it checks the page
* offset n's type from the input parameter before calling
* __i915_gem_object_get_page().
* See also __i915_gem_object_page_iter_get_sg()
*/
#define i915_gem_object_get_page(obj, n) ({ \
static_assert(castable_to_type(n, pgoff_t)); \
__i915_gem_object_get_page(obj, n); \
})
/**
* __i915_gem_object_get_dirty_page - helper to find the target page with a page
* offset
* @obj: i915 GEM buffer object
* @n: page offset
*
* It works like i915_gem_object_get_page(), but it marks the returned page dirty.
*
* Returns:
* The target page pointer.
*
* Recommended to use wrapper macro: i915_gem_object_get_dirty_page()
* See also __i915_gem_object_page_iter_get_sg() and __i915_gem_object_get_page()
*/
struct page *
i915_gem_object_get_dirty_page(struct drm_i915_gem_object *obj,
unsigned int n);
__i915_gem_object_get_dirty_page(struct drm_i915_gem_object *obj, pgoff_t n);
dma_addr_t
i915_gem_object_get_dma_address_len(struct drm_i915_gem_object *obj,
unsigned long n,
unsigned int *len);
/**
* i915_gem_object_get_dirty_page - wrapper macro for __i915_gem_object_get_dirty_page
* @obj: i915 GEM buffer object
* @n: page offset
*
* Returns:
* The target page pointer.
*
* In order to avoid the truncation of the input parameter, it checks the page
* offset n's type from the input parameter before calling
* __i915_gem_object_get_dirty_page().
* See also __i915_gem_object_page_iter_get_sg() and __i915_gem_object_get_page()
*/
#define i915_gem_object_get_dirty_page(obj, n) ({ \
static_assert(castable_to_type(n, pgoff_t)); \
__i915_gem_object_get_dirty_page(obj, n); \
})
/**
* __i915_gem_object_get_dma_address_len - helper to get bus addresses of
* targeted DMA mapped scatterlist from i915 GEM buffer object and it's length
* @obj: i915 GEM buffer object
* @n: page offset
* @len: DMA mapped scatterlist's DMA bus addresses length to return
*
* Returns:
* Bus addresses of targeted DMA mapped scatterlist
*
* Recommended to use wrapper macro: i915_gem_object_get_dma_address_len()
* See also __i915_gem_object_page_iter_get_sg() and __i915_gem_object_get_sg_dma()
*/
dma_addr_t
i915_gem_object_get_dma_address(struct drm_i915_gem_object *obj,
unsigned long n);
__i915_gem_object_get_dma_address_len(struct drm_i915_gem_object *obj, pgoff_t n,
unsigned int *len);
/**
* i915_gem_object_get_dma_address_len - wrapper macro for
* __i915_gem_object_get_dma_address_len
* @obj: i915 GEM buffer object
* @n: page offset
* @len: DMA mapped scatterlist's DMA bus addresses length to return
*
* Returns:
* Bus addresses of targeted DMA mapped scatterlist
*
* In order to avoid the truncation of the input parameter, it checks the page
* offset n's type from the input parameter before calling
* __i915_gem_object_get_dma_address_len().
* See also __i915_gem_object_page_iter_get_sg() and
* __i915_gem_object_get_dma_address_len()
*/
#define i915_gem_object_get_dma_address_len(obj, n, len) ({ \
static_assert(castable_to_type(n, pgoff_t)); \
__i915_gem_object_get_dma_address_len(obj, n, len); \
})
/**
* __i915_gem_object_get_dma_address - helper to get bus addresses of
* targeted DMA mapped scatterlist from i915 GEM buffer object
* @obj: i915 GEM buffer object
* @n: page offset
*
* Returns:
* Bus addresses of targeted DMA mapped scatterlist
*
* Recommended to use wrapper macro: i915_gem_object_get_dma_address()
* See also __i915_gem_object_page_iter_get_sg() and __i915_gem_object_get_sg_dma()
*/
dma_addr_t
__i915_gem_object_get_dma_address(struct drm_i915_gem_object *obj, pgoff_t n);
/**
* i915_gem_object_get_dma_address - wrapper macro for
* __i915_gem_object_get_dma_address
* @obj: i915 GEM buffer object
* @n: page offset
*
* Returns:
* Bus addresses of targeted DMA mapped scatterlist
*
* In order to avoid the truncation of the input parameter, it checks the page
* offset n's type from the input parameter before calling
* __i915_gem_object_get_dma_address().
* See also __i915_gem_object_page_iter_get_sg() and
* __i915_gem_object_get_dma_address()
*/
#define i915_gem_object_get_dma_address(obj, n) ({ \
static_assert(castable_to_type(n, pgoff_t)); \
__i915_gem_object_get_dma_address(obj, n); \
})
void __i915_gem_object_set_pages(struct drm_i915_gem_object *obj,
struct sg_table *pages);
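
The wrapper macros above pair a statement expression with
static_assert(castable_to_type(n, pgoff_t)) so that any caller passing a
page offset in a type pgoff_t cannot represent fails to compile instead of
silently truncating. A reduced GNU C sketch of the same trick (the sizeof
test is a weaker stand-in for castable_to_type(), which also checks
signedness and value range):

    #include <assert.h>
    #include <stdio.h>

    typedef unsigned long pgoff_t;

    static void lookup_page(pgoff_t n)
    {
        printf("looking up page %lu\n", n);
    }

    /* Reject, at compile time, index types wider than pgoff_t. */
    #define lookup_page_checked(n) ({                        \
        static_assert(sizeof(n) <= sizeof(pgoff_t),          \
                      "page index may be truncated");        \
        lookup_page(n);                                      \
    })

    int main(void)
    {
        lookup_page_checked(42ul); /* ok */
        /* lookup_page_checked((__int128)1) would fail to build */
        return 0;
    }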


@@ -521,14 +521,16 @@ void __i915_gem_object_release_map(struct drm_i915_gem_object *obj)
}
struct scatterlist *
__i915_gem_object_get_sg(struct drm_i915_gem_object *obj,
struct i915_gem_object_page_iter *iter,
unsigned int n,
unsigned int *offset,
bool dma)
__i915_gem_object_page_iter_get_sg(struct drm_i915_gem_object *obj,
struct i915_gem_object_page_iter *iter,
pgoff_t n,
unsigned int *offset)
{
struct scatterlist *sg;
const bool dma = iter == &obj->mm.get_dma_page ||
iter == &obj->ttm.get_io_page;
unsigned int idx, count;
struct scatterlist *sg;
might_sleep();
GEM_BUG_ON(n >= obj->base.size >> PAGE_SHIFT);
@@ -636,7 +638,7 @@ lookup:
}
struct page *
i915_gem_object_get_page(struct drm_i915_gem_object *obj, unsigned int n)
__i915_gem_object_get_page(struct drm_i915_gem_object *obj, pgoff_t n)
{
struct scatterlist *sg;
unsigned int offset;
@@ -649,8 +651,7 @@ i915_gem_object_get_page(struct drm_i915_gem_object *obj, unsigned int n)
/* Like i915_gem_object_get_page(), but mark the returned page dirty */
struct page *
i915_gem_object_get_dirty_page(struct drm_i915_gem_object *obj,
unsigned int n)
__i915_gem_object_get_dirty_page(struct drm_i915_gem_object *obj, pgoff_t n)
{
struct page *page;
@@ -662,9 +663,8 @@ i915_gem_object_get_dirty_page(struct drm_i915_gem_object *obj,
}
dma_addr_t
i915_gem_object_get_dma_address_len(struct drm_i915_gem_object *obj,
unsigned long n,
unsigned int *len)
__i915_gem_object_get_dma_address_len(struct drm_i915_gem_object *obj,
pgoff_t n, unsigned int *len)
{
struct scatterlist *sg;
unsigned int offset;
@@ -678,8 +678,7 @@ i915_gem_object_get_dma_address_len(struct drm_i915_gem_object *obj,
}
dma_addr_t
i915_gem_object_get_dma_address(struct drm_i915_gem_object *obj,
unsigned long n)
__i915_gem_object_get_dma_address(struct drm_i915_gem_object *obj, pgoff_t n)
{
return i915_gem_object_get_dma_address_len(obj, n, NULL);
}


@@ -28,6 +28,10 @@ static int i915_gem_object_get_pages_phys(struct drm_i915_gem_object *obj)
void *dst;
int i;
/* Contiguous chunk, with a single scatterlist element */
if (overflows_type(obj->base.size, sg->length))
return -E2BIG;
if (GEM_WARN_ON(i915_gem_object_needs_bit17_swizzle(obj)))
return -EINVAL;


@@ -60,7 +60,7 @@ int shmem_sg_alloc_table(struct drm_i915_private *i915, struct sg_table *st,
struct address_space *mapping,
unsigned int max_segment)
{
const unsigned long page_count = size / PAGE_SIZE;
unsigned int page_count; /* restricted by sg_alloc_table */
unsigned long i;
struct scatterlist *sg;
struct page *page;
@@ -68,6 +68,10 @@ int shmem_sg_alloc_table(struct drm_i915_private *i915, struct sg_table *st,
gfp_t noreclaim;
int ret;
if (overflows_type(size / PAGE_SIZE, page_count))
return -E2BIG;
page_count = size / PAGE_SIZE;
/*
* If there's no chance of allocating enough pages for the whole
* object, bail early.
@@ -193,7 +197,6 @@ static int shmem_get_pages(struct drm_i915_gem_object *obj)
struct drm_i915_private *i915 = to_i915(obj->base.dev);
struct intel_memory_region *mem = obj->mm.region;
struct address_space *mapping = obj->base.filp->f_mapping;
const unsigned long page_count = obj->base.size / PAGE_SIZE;
unsigned int max_segment = i915_sg_segment_size(i915->drm.dev);
struct sg_table *st;
struct sgt_iter sgt_iter;
@@ -235,8 +238,8 @@ rebuild_st:
goto rebuild_st;
} else {
dev_warn(i915->drm.dev,
"Failed to DMA remap %lu pages\n",
page_count);
"Failed to DMA remap %zu pages\n",
obj->base.size >> PAGE_SHIFT);
goto err_pages;
}
}
@@ -538,6 +541,20 @@ static int __create_shmem(struct drm_i915_private *i915,
drm_gem_private_object_init(&i915->drm, obj, size);
/* XXX: The __shmem_file_setup() function returns -EINVAL if size is
* greater than MAX_LFS_FILESIZE.
* To handle the same error as other code that returns -E2BIG when
* the size is too large, we add a code that returns -E2BIG when the
* size is larger than the size that can be handled.
* If BITS_PER_LONG is 32, size > MAX_LFS_FILESIZE is always false,
* so we only need to check when BITS_PER_LONG is 64.
* If BITS_PER_LONG is 32, E2BIG checks are processed when
* i915_gem_object_size_2big() is called before init_object() callback
* is called.
*/
if (BITS_PER_LONG == 64 && size > MAX_LFS_FILESIZE)
return -E2BIG;
if (i915->mm.gemfs)
filp = shmem_file_setup_with_mnt(i915->mm.gemfs, "i915", size,
flags);


@@ -400,7 +400,7 @@ i915_gem_shrinker_vmap(struct notifier_block *nb, unsigned long event, void *ptr
mutex_lock(&to_gt(i915)->ggtt->vm.mutex);
list_for_each_entry_safe(vma, next,
&to_gt(i915)->ggtt->vm.bound_list, vm_link) {
unsigned long count = vma->node.size >> PAGE_SHIFT;
unsigned long count = i915_vma_size(vma) >> PAGE_SHIFT;
struct drm_i915_gem_object *obj = vma->obj;
if (!vma->iomap || i915_vma_is_active(vma))


@@ -168,11 +168,11 @@ static bool i915_vma_fence_prepare(struct i915_vma *vma,
return true;
size = i915_gem_fence_size(i915, vma->size, tiling_mode, stride);
if (vma->node.size < size)
if (i915_vma_size(vma) < size)
return false;
alignment = i915_gem_fence_alignment(i915, vma->size, tiling_mode, stride);
if (!IS_ALIGNED(vma->node.start, alignment))
if (!IS_ALIGNED(i915_ggtt_offset(vma), alignment))
return false;
return true;


@@ -140,13 +140,16 @@ i915_ttm_place_from_region(const struct intel_memory_region *mr,
if (flags & I915_BO_ALLOC_CONTIGUOUS)
place->flags |= TTM_PL_FLAG_CONTIGUOUS;
if (offset != I915_BO_INVALID_OFFSET) {
WARN_ON(overflows_type(offset >> PAGE_SHIFT, place->fpfn));
place->fpfn = offset >> PAGE_SHIFT;
WARN_ON(overflows_type(place->fpfn + (size >> PAGE_SHIFT), place->lpfn));
place->lpfn = place->fpfn + (size >> PAGE_SHIFT);
} else if (mr->io_size && mr->io_size < mr->total) {
if (flags & I915_BO_ALLOC_GPU_ONLY) {
place->flags |= TTM_PL_FLAG_TOPDOWN;
} else {
place->fpfn = 0;
WARN_ON(overflows_type(mr->io_size >> PAGE_SHIFT, place->lpfn));
place->lpfn = mr->io_size >> PAGE_SHIFT;
}
}
@@ -692,7 +695,7 @@ static unsigned long i915_ttm_io_mem_pfn(struct ttm_buffer_object *bo,
GEM_WARN_ON(bo->ttm);
base = obj->mm.region->iomap.base - obj->mm.region->region.start;
sg = __i915_gem_object_get_sg(obj, &obj->ttm.get_io_page, page_offset, &ofs, true);
sg = i915_gem_object_page_iter_get_sg(obj, &obj->ttm.get_io_page, page_offset, &ofs);
return ((base + sg_dma_address(sg)) >> PAGE_SHIFT) + ofs;
}
@@ -835,6 +838,10 @@ static int i915_ttm_get_pages(struct drm_i915_gem_object *obj)
struct ttm_place requested, busy[I915_TTM_MAX_PLACEMENTS];
struct ttm_placement placement;
/* restricted by sg_alloc_table */
if (overflows_type(obj->base.size >> PAGE_SHIFT, unsigned int))
return -E2BIG;
GEM_BUG_ON(obj->mm.n_placements > I915_TTM_MAX_PLACEMENTS);
/* Move to the requested placement. */
@@ -1305,6 +1312,17 @@ int __i915_gem_ttm_object_init(struct intel_memory_region *mem,
ret = ttm_bo_init_reserved(&i915->bdev, i915_gem_to_ttm(obj), bo_type,
&i915_sys_placement, page_size >> PAGE_SHIFT,
&ctx, NULL, NULL, i915_ttm_bo_destroy);
/*
* XXX: The ttm_bo_init_reserved() function returns -ENOSPC if the size
* is too big to add a vma. The direct function that returns -ENOSPC is
* drm_mm_insert_node_in_range(). To handle the same error as other code
* that returns -E2BIG when the size is too large, it converts -ENOSPC to
* -E2BIG.
*/
if (size >> PAGE_SHIFT > INT_MAX && ret == -ENOSPC)
ret = -E2BIG;
if (ret)
return i915_ttm_err_to_gem(ret);


@@ -128,12 +128,16 @@ static void i915_gem_object_userptr_drop_ref(struct drm_i915_gem_object *obj)
static int i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj)
{
const unsigned long num_pages = obj->base.size >> PAGE_SHIFT;
unsigned int max_segment = i915_sg_segment_size(obj->base.dev->dev);
struct sg_table *st;
struct page **pvec;
unsigned int num_pages; /* limited by sg_alloc_table_from_pages_segment */
int ret;
if (overflows_type(obj->base.size >> PAGE_SHIFT, num_pages))
return -E2BIG;
num_pages = obj->base.size >> PAGE_SHIFT;
st = kmalloc(sizeof(*st), GFP_KERNEL);
if (!st)
return -ENOMEM;


@@ -29,11 +29,15 @@ static int huge_get_pages(struct drm_i915_gem_object *obj)
{
#define GFP (GFP_KERNEL | __GFP_NOWARN | __GFP_RETRY_MAYFAIL)
const unsigned long nreal = obj->scratch / PAGE_SIZE;
const unsigned long npages = obj->base.size / PAGE_SIZE;
unsigned int npages; /* restricted by sg_alloc_table */
struct scatterlist *sg, *src, *end;
struct sg_table *pages;
unsigned long n;
if (overflows_type(obj->base.size / PAGE_SIZE, npages))
return -E2BIG;
npages = obj->base.size / PAGE_SIZE;
pages = kmalloc(sizeof(*pages), GFP);
if (!pages)
return -ENOMEM;


@@ -84,6 +84,10 @@ static int get_huge_pages(struct drm_i915_gem_object *obj)
unsigned int sg_page_sizes;
u64 rem;
/* restricted by sg_alloc_table */
if (overflows_type(obj->base.size >> PAGE_SHIFT, unsigned int))
return -E2BIG;
st = kmalloc(sizeof(*st), GFP);
if (!st)
return -ENOMEM;
@@ -212,6 +216,10 @@ static int fake_get_huge_pages(struct drm_i915_gem_object *obj)
struct scatterlist *sg;
u64 rem;
/* restricted by sg_alloc_table */
if (overflows_type(obj->base.size >> PAGE_SHIFT, unsigned int))
return -E2BIG;
st = kmalloc(sizeof(*st), GFP);
if (!st)
return -ENOMEM;
@@ -400,7 +408,7 @@ static int igt_check_page_sizes(struct i915_vma *vma)
* Maintaining alignment is required to utilise huge pages in the ppGGT.
*/
if (i915_gem_object_is_lmem(obj) &&
IS_ALIGNED(vma->node.start, SZ_2M) &&
IS_ALIGNED(i915_vma_offset(vma), SZ_2M) &&
vma->page_sizes.sg & SZ_2M &&
vma->resource->page_sizes_gtt < SZ_2M) {
pr_err("gtt pages mismatch for LMEM, expected 2M GTT pages, sg(%u), gtt(%u)\n",
@@ -1847,7 +1855,7 @@ static int igt_shrink_thp(void *arg)
I915_SHRINK_ACTIVE);
i915_vma_unpin(vma);
if (err)
goto out_put;
goto out_wf;
/*
* Now that the pages are *unpinned* shrinking should invoke
@@ -1863,19 +1871,19 @@ static int igt_shrink_thp(void *arg)
pr_err("unexpected pages mismatch, should_swap=%s\n",
str_yes_no(should_swap));
err = -EINVAL;
goto out_put;
goto out_wf;
}
if (should_swap == (obj->mm.page_sizes.sg || obj->mm.page_sizes.phys)) {
pr_err("unexpected residual page-size bits, should_swap=%s\n",
str_yes_no(should_swap));
err = -EINVAL;
goto out_put;
goto out_wf;
}
err = i915_vma_pin(vma, 0, 0, flags);
if (err)
goto out_put;
goto out_wf;
while (n--) {
err = cpu_check(obj, n, 0xdeadbeaf);


@@ -194,12 +194,12 @@ static int prepare_blit(const struct tiled_blits *t,
*cs++ = src_4t | dst_4t | BLT_DEPTH_32 | dst_pitch;
*cs++ = 0;
*cs++ = t->height << 16 | t->width;
*cs++ = lower_32_bits(dst->vma->node.start);
*cs++ = upper_32_bits(dst->vma->node.start);
*cs++ = lower_32_bits(i915_vma_offset(dst->vma));
*cs++ = upper_32_bits(i915_vma_offset(dst->vma));
*cs++ = 0;
*cs++ = src_pitch;
*cs++ = lower_32_bits(src->vma->node.start);
*cs++ = upper_32_bits(src->vma->node.start);
*cs++ = lower_32_bits(i915_vma_offset(src->vma));
*cs++ = upper_32_bits(i915_vma_offset(src->vma));
} else {
if (ver >= 6) {
*cs++ = MI_LOAD_REGISTER_IMM(1);
@@ -240,14 +240,14 @@ static int prepare_blit(const struct tiled_blits *t,
*cs++ = BLT_DEPTH_32 | BLT_ROP_SRC_COPY | dst_pitch;
*cs++ = 0;
*cs++ = t->height << 16 | t->width;
*cs++ = lower_32_bits(dst->vma->node.start);
*cs++ = lower_32_bits(i915_vma_offset(dst->vma));
if (use_64b_reloc)
*cs++ = upper_32_bits(dst->vma->node.start);
*cs++ = upper_32_bits(i915_vma_offset(dst->vma));
*cs++ = 0;
*cs++ = src_pitch;
*cs++ = lower_32_bits(src->vma->node.start);
*cs++ = lower_32_bits(i915_vma_offset(src->vma));
if (use_64b_reloc)
*cs++ = upper_32_bits(src->vma->node.start);
*cs++ = upper_32_bits(i915_vma_offset(src->vma));
}
*cs++ = MI_BATCH_BUFFER_END;
@@ -462,7 +462,7 @@ static int pin_buffer(struct i915_vma *vma, u64 addr)
{
int err;
if (drm_mm_node_allocated(&vma->node) && vma->node.start != addr) {
if (drm_mm_node_allocated(&vma->node) && i915_vma_offset(vma) != addr) {
err = i915_vma_unbind_unlocked(vma);
if (err)
return err;
@@ -472,6 +472,7 @@ static int pin_buffer(struct i915_vma *vma, u64 addr)
if (err)
return err;
GEM_BUG_ON(i915_vma_offset(vma) != addr);
return 0;
}
@@ -518,8 +519,8 @@ tiled_blit(struct tiled_blits *t,
err = igt_vma_move_to_active_unlocked(dst->vma, rq, 0);
if (!err)
err = rq->engine->emit_bb_start(rq,
t->batch->node.start,
t->batch->node.size,
i915_vma_offset(t->batch),
i915_vma_size(t->batch),
0);
i915_request_get(rq);
i915_request_add(rq);


@@ -222,7 +222,7 @@ static int gpu_set(struct context *ctx, unsigned long offset, u32 v)
}
if (GRAPHICS_VER(ctx->engine->i915) >= 8) {
*cs++ = MI_STORE_DWORD_IMM_GEN4 | 1 << 22;
*cs++ = MI_STORE_DWORD_IMM_GEN4 | MI_USE_GGTT;
*cs++ = lower_32_bits(i915_ggtt_offset(vma) + offset);
*cs++ = upper_32_bits(i915_ggtt_offset(vma) + offset);
*cs++ = v;


@@ -469,7 +469,8 @@ static int gpu_fill(struct intel_context *ce,
static int cpu_fill(struct drm_i915_gem_object *obj, u32 value)
{
const bool has_llc = HAS_LLC(to_i915(obj->base.dev));
unsigned int n, m, need_flush;
unsigned int need_flush;
unsigned long n, m;
int err;
i915_gem_object_lock(obj, NULL);
@@ -499,7 +500,8 @@ out:
static noinline int cpu_check(struct drm_i915_gem_object *obj,
unsigned int idx, unsigned int max)
{
unsigned int n, m, needs_flush;
unsigned int needs_flush;
unsigned long n;
int err;
i915_gem_object_lock(obj, NULL);
@@ -508,7 +510,7 @@ static noinline int cpu_check(struct drm_i915_gem_object *obj,
goto out_unlock;
for (n = 0; n < real_page_count(obj); n++) {
u32 *map;
u32 *map, m;
map = kmap_atomic(i915_gem_object_get_page(obj, n));
if (needs_flush & CLFLUSH_BEFORE)
@@ -516,7 +518,7 @@ static noinline int cpu_check(struct drm_i915_gem_object *obj,
for (m = 0; m < max; m++) {
if (map[m] != m) {
pr_err("%pS: Invalid value at object %d page %d/%ld, offset %d/%d: found %x expected %x\n",
pr_err("%pS: Invalid value at object %d page %ld/%ld, offset %d/%d: found %x expected %x\n",
__builtin_return_address(0), idx,
n, real_page_count(obj), m, max,
map[m], m);
@@ -527,7 +529,7 @@ static noinline int cpu_check(struct drm_i915_gem_object *obj,
for (; m < DW_PER_PAGE; m++) {
if (map[m] != STACK_MAGIC) {
pr_err("%pS: Invalid value at object %d page %d, offset %d: found %x expected %x (uninitialised)\n",
pr_err("%pS: Invalid value at object %d page %ld, offset %d: found %x expected %x (uninitialised)\n",
__builtin_return_address(0), idx, n, m,
map[m], STACK_MAGIC);
err = -EINVAL;
@@ -914,8 +916,8 @@ static int rpcs_query_batch(struct drm_i915_gem_object *rpcs,
*cmd++ = MI_STORE_REGISTER_MEM_GEN8;
*cmd++ = i915_mmio_reg_offset(GEN8_R_PWR_CLK_STATE(engine->mmio_base));
*cmd++ = lower_32_bits(vma->node.start);
*cmd++ = upper_32_bits(vma->node.start);
*cmd++ = lower_32_bits(i915_vma_offset(vma));
*cmd++ = upper_32_bits(i915_vma_offset(vma));
*cmd = MI_BATCH_BUFFER_END;
__i915_gem_object_flush_map(rpcs, 0, 64);
@@ -999,7 +1001,8 @@ retry:
}
err = rq->engine->emit_bb_start(rq,
batch->node.start, batch->node.size,
i915_vma_offset(batch),
i915_vma_size(batch),
0);
if (err)
goto skip_request;
@@ -1548,9 +1551,7 @@ static int write_to_scratch(struct i915_gem_context *ctx,
goto err_unpin;
}
i915_vma_lock(vma);
err = i915_vma_move_to_active(vma, rq, 0);
i915_vma_unlock(vma);
err = igt_vma_move_to_active_unlocked(vma, rq, 0);
if (err)
goto skip_request;
@@ -1560,7 +1561,8 @@ static int write_to_scratch(struct i915_gem_context *ctx,
goto skip_request;
}
err = engine->emit_bb_start(rq, vma->node.start, vma->node.size, 0);
err = engine->emit_bb_start(rq, i915_vma_offset(vma),
i915_vma_size(vma), 0);
if (err)
goto skip_request;
@@ -1665,7 +1667,7 @@ static int read_from_scratch(struct i915_gem_context *ctx,
*cmd++ = offset;
*cmd++ = MI_STORE_REGISTER_MEM | MI_USE_GGTT;
*cmd++ = reg;
*cmd++ = vma->node.start + result;
*cmd++ = i915_vma_offset(vma) + result;
*cmd = MI_BATCH_BUFFER_END;
i915_gem_object_flush_map(obj);
@@ -1682,9 +1684,7 @@ static int read_from_scratch(struct i915_gem_context *ctx,
goto err_unpin;
}
i915_vma_lock(vma);
err = i915_vma_move_to_active(vma, rq, EXEC_OBJECT_WRITE);
i915_vma_unlock(vma);
err = igt_vma_move_to_active_unlocked(vma, rq, EXEC_OBJECT_WRITE);
if (err)
goto skip_request;
@@ -1694,7 +1694,8 @@ static int read_from_scratch(struct i915_gem_context *ctx,
goto skip_request;
}
err = engine->emit_bb_start(rq, vma->node.start, vma->node.size, flags);
err = engine->emit_bb_start(rq, i915_vma_offset(vma),
i915_vma_size(vma), flags);
if (err)
goto skip_request;


@@ -97,11 +97,11 @@ static int check_partial_mapping(struct drm_i915_gem_object *obj,
struct drm_i915_private *i915 = to_i915(obj->base.dev);
struct i915_gtt_view view;
struct i915_vma *vma;
unsigned long offset;
unsigned long page;
u32 __iomem *io;
struct page *p;
unsigned int n;
u64 offset;
u32 *cpu;
int err;
@@ -158,7 +158,7 @@ static int check_partial_mapping(struct drm_i915_gem_object *obj,
cpu = kmap(p) + offset_in_page(offset);
drm_clflush_virt_range(cpu, sizeof(*cpu));
if (*cpu != (u32)page) {
pr_err("Partial view for %lu [%u] (offset=%llu, size=%u [%llu, row size %u], fence=%d, tiling=%d, stride=%d) misalignment, expected write to page (%llu + %u [0x%llx]) of 0x%x, found 0x%x\n",
pr_err("Partial view for %lu [%u] (offset=%llu, size=%u [%llu, row size %u], fence=%d, tiling=%d, stride=%d) misalignment, expected write to page (%lu + %u [0x%lx]) of 0x%x, found 0x%x\n",
page, n,
view.partial.offset,
view.partial.size,
@@ -214,10 +214,10 @@ static int check_partial_mappings(struct drm_i915_gem_object *obj,
for_each_prime_number_from(page, 1, npages) {
struct i915_gtt_view view =
compute_partial_view(obj, page, MIN_CHUNK_PAGES);
unsigned long offset;
u32 __iomem *io;
struct page *p;
unsigned int n;
u64 offset;
u32 *cpu;
GEM_BUG_ON(view.partial.size > nreal);
@@ -254,7 +254,7 @@ static int check_partial_mappings(struct drm_i915_gem_object *obj,
cpu = kmap(p) + offset_in_page(offset);
drm_clflush_virt_range(cpu, sizeof(*cpu));
if (*cpu != (u32)page) {
pr_err("Partial view for %lu [%u] (offset=%llu, size=%u [%llu, row size %u], fence=%d, tiling=%d, stride=%d) misalignment, expected write to page (%llu + %u [0x%llx]) of 0x%x, found 0x%x\n",
pr_err("Partial view for %lu [%u] (offset=%llu, size=%u [%llu, row size %u], fence=%d, tiling=%d, stride=%d) misalignment, expected write to page (%lu + %u [0x%lx]) of 0x%x, found 0x%x\n",
page, n,
view.partial.offset,
view.partial.size,
@@ -1609,7 +1609,7 @@ retry:
err = i915_vma_move_to_active(vma, rq, 0);
err = engine->emit_bb_start(rq, vma->node.start, 0, 0);
err = engine->emit_bb_start(rq, i915_vma_offset(vma), 0, 0);
i915_request_get(rq);
i915_request_add(rq);


@@ -33,10 +33,10 @@ out:
static int igt_gem_huge(void *arg)
{
const unsigned int nreal = 509; /* just to be awkward */
const unsigned long nreal = 509; /* just to be awkward */
struct drm_i915_private *i915 = arg;
struct drm_i915_gem_object *obj;
unsigned int n;
unsigned long n;
int err;
/* Basic sanitycheck of our huge fake object allocation */
@@ -49,7 +49,7 @@ static int igt_gem_huge(void *arg)
err = i915_gem_object_pin_pages_unlocked(obj);
if (err) {
pr_err("Failed to allocate %u pages (%lu total), err=%d\n",
pr_err("Failed to allocate %lu pages (%lu total), err=%d\n",
nreal, obj->base.size / PAGE_SIZE, err);
goto out;
}
@@ -57,7 +57,7 @@ static int igt_gem_huge(void *arg)
for (n = 0; n < obj->base.size / PAGE_SIZE; n++) {
if (i915_gem_object_get_page(obj, n) !=
i915_gem_object_get_page(obj, n % nreal)) {
pr_err("Page lookup mismatch at index %u [%u]\n",
pr_err("Page lookup mismatch at index %lu [%lu]\n",
n, n % nreal);
err = -EINVAL;
goto out_unpin;


@@ -62,8 +62,8 @@ igt_emit_store_dw(struct i915_vma *vma,
goto err;
}
GEM_BUG_ON(offset + (count - 1) * PAGE_SIZE > vma->node.size);
offset += vma->node.start;
GEM_BUG_ON(offset + (count - 1) * PAGE_SIZE > i915_vma_size(vma));
offset += i915_vma_offset(vma);
for (n = 0; n < count; n++) {
if (ver >= 8) {
@@ -130,15 +130,11 @@ int igt_gpu_fill_dw(struct intel_context *ce,
goto err_batch;
}
i915_vma_lock(batch);
err = i915_vma_move_to_active(batch, rq, 0);
i915_vma_unlock(batch);
err = igt_vma_move_to_active_unlocked(batch, rq, 0);
if (err)
goto skip_request;
i915_vma_lock(vma);
err = i915_vma_move_to_active(vma, rq, EXEC_OBJECT_WRITE);
i915_vma_unlock(vma);
err = igt_vma_move_to_active_unlocked(vma, rq, EXEC_OBJECT_WRITE);
if (err)
goto skip_request;
@@ -147,7 +143,8 @@ int igt_gpu_fill_dw(struct intel_context *ce,
flags |= I915_DISPATCH_SECURE;
err = rq->engine->emit_bb_start(rq,
batch->node.start, batch->node.size,
i915_vma_offset(batch),
i915_vma_size(batch),
flags);
skip_request:


@@ -38,7 +38,7 @@ igt_vma_move_to_active_unlocked(struct i915_vma *vma, struct i915_request *rq,
int err;
i915_vma_lock(vma);
err = _i915_vma_move_to_active(vma, rq, &rq->fence, flags);
err = i915_vma_move_to_active(vma, rq, flags);
i915_vma_unlock(vma);
return err;
}


@@ -106,7 +106,7 @@ static u32 batch_offset(const struct batch_chunk *bc, u32 *cs)
static u32 batch_addr(const struct batch_chunk *bc)
{
return bc->vma->node.start;
return i915_vma_offset(bc->vma);
}
static void batch_add(struct batch_chunk *bc, const u32 d)


@@ -172,6 +172,8 @@ intel_write_status_page(struct intel_engine_cs *engine, int reg, u32 value)
#define I915_GEM_HWS_MIGRATE (0x42 * sizeof(u32))
#define I915_GEM_HWS_PXP 0x60
#define I915_GEM_HWS_PXP_ADDR (I915_GEM_HWS_PXP * sizeof(u32))
#define I915_GEM_HWS_GSC 0x62
#define I915_GEM_HWS_GSC_ADDR (I915_GEM_HWS_GSC * sizeof(u32))
#define I915_GEM_HWS_SCRATCH 0x80
#define I915_HWS_CSB_BUF0_INDEX 0x10


@@ -894,6 +894,24 @@ static intel_engine_mask_t init_engine_mask(struct intel_gt *gt)
engine_mask_apply_compute_fuses(gt);
engine_mask_apply_copy_fuses(gt);
/*
* The only use of the GSC CS is to load and communicate with the GSC
* FW, so we have no use for it if we don't have the FW.
*
* IMPORTANT: in cases where we don't have the GSC FW, we have a
* catch-22 situation that breaks media C6 due to 2 requirements:
* 1) once turned on, the GSC power well will not go to sleep unless the
* GSC FW is loaded.
* 2) to enable idling (which is required for media C6) we need to
* initialize the IDLE_MSG register for the GSC CS and do at least 1
* submission, which will wake up the GSC power well.
*/
if (__HAS_ENGINE(info->engine_mask, GSC0) && !intel_uc_wants_gsc_uc(&gt->uc)) {
drm_notice(&gt->i915->drm,
"No GSC FW selected, disabling GSC CS and media C6\n");
info->engine_mask &= ~BIT(GSC0);
}
return info->engine_mask;
}
@@ -1476,10 +1494,12 @@ static int __intel_engine_stop_cs(struct intel_engine_cs *engine,
intel_uncore_write_fw(uncore, mode, _MASKED_BIT_ENABLE(STOP_RING));
/*
* Wa_22011802037 : gen11, gen12, Prior to doing a reset, ensure CS is
* Wa_22011802037: Prior to doing a reset, ensure CS is
* stopped, set ring stop bit and prefetch disable bit to halt CS
*/
if (IS_GRAPHICS_VER(engine->i915, 11, 12))
if (IS_MTL_GRAPHICS_STEP(engine->i915, M, STEP_A0, STEP_B0) ||
(GRAPHICS_VER(engine->i915) >= 11 &&
GRAPHICS_VER_FULL(engine->i915) < IP_VER(12, 70)))
intel_uncore_write_fw(uncore, RING_MODE_GEN7(engine->mmio_base),
_MASKED_BIT_ENABLE(GEN12_GFX_PREFETCH_DISABLE));
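
The version test above works because GRAPHICS_VER_FULL() yields one
comparable integer; i915's IP_VER() packs the major version into the high
byte and the release into the low byte. A tiny restatement:

    #include <stdio.h>

    /* Same layout as i915's IP_VER(): major << 8 | release. */
    #define IP_VER(ver, rel) (((ver) << 8) | (rel))

    int main(void)
    {
        /* Xe_HP (12.55) sorts below Xe_LPG (12.70), so it falls
         * inside the "full version < IP_VER(12, 70)" bound. */
        printf("12.55: %d\n", IP_VER(12, 55) < IP_VER(12, 70));
        printf("12.70: %d\n", IP_VER(12, 70) < IP_VER(12, 70));
        return 0;
    }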


@@ -15,6 +15,22 @@
#include "intel_rc6.h"
#include "intel_ring.h"
#include "shmem_utils.h"
#include "intel_gt_regs.h"
static void intel_gsc_idle_msg_enable(struct intel_engine_cs *engine)
{
struct drm_i915_private *i915 = engine->i915;
if (IS_METEORLAKE(i915) && engine->id == GSC0) {
intel_uncore_write(engine->gt->uncore,
RC_PSMI_CTRL_GSCCS,
_MASKED_BIT_DISABLE(IDLE_MSG_DISABLE));
/* hysteresis 0xA=5us as recommended in spec */
intel_uncore_write(engine->gt->uncore,
PWRCTX_MAXCNT_GSCCS,
0xA);
}
}
static void dbg_poison_ce(struct intel_context *ce)
{
@@ -275,6 +291,8 @@ void intel_engine_init__pm(struct intel_engine_cs *engine)
intel_wakeref_init(&engine->wakeref, rpm, &wf_ops);
intel_engine_init_heartbeat(engine);
intel_gsc_idle_msg_enable(engine);
}
/**
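
The _MASKED_BIT_DISABLE() write in intel_gsc_idle_msg_enable() above
targets a masked register: the top 16 bits of the written value select
which bits the hardware may change, so no read-modify-write is needed. A
self-contained restatement of the helper semantics (the IDLE_MSG_DISABLE
bit position here is illustrative):

    #include <stdint.h>
    #include <stdio.h>

    /* i915-style masked-register helpers: high half is the mask. */
    #define MASKED_FIELD(mask, value) (((mask) << 16) | (value))
    #define MASKED_BIT_ENABLE(a)  MASKED_FIELD((a), (a))
    #define MASKED_BIT_DISABLE(a) MASKED_FIELD((a), 0)

    int main(void)
    {
        uint32_t idle_msg_disable = 1u << 0; /* illustrative bit */

        printf("enable:  %#x\n", MASKED_BIT_ENABLE(idle_msg_disable));
        printf("disable: %#x\n", MASKED_BIT_DISABLE(idle_msg_disable));
        return 0;
    }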


@@ -2989,10 +2989,12 @@ static void execlists_reset_prepare(struct intel_engine_cs *engine)
intel_engine_stop_cs(engine);
/*
* Wa_22011802037:gen11/gen12: In addition to stopping the cs, we need
* Wa_22011802037: In addition to stopping the cs, we need
* to wait for any pending mi force wakeups
*/
if (IS_GRAPHICS_VER(engine->i915, 11, 12))
if (IS_MTL_GRAPHICS_STEP(engine->i915, M, STEP_A0, STEP_B0) ||
(GRAPHICS_VER(engine->i915) >= 11 &&
GRAPHICS_VER_FULL(engine->i915) < IP_VER(12, 70)))
intel_engine_wait_for_pending_mi_fw(engine);
engine->execlists.reset_ccid = active_ccid(engine);


@@ -8,6 +8,7 @@
#include <linux/types.h>
#include <linux/stop_machine.h>
#include <drm/drm_managed.h>
#include <drm/i915_drm.h>
#include <drm/intel-gtt.h>
@@ -26,13 +27,6 @@
#include "intel_gtt.h"
#include "gen8_ppgtt.h"
static inline bool suspend_retains_ptes(struct i915_address_space *vm)
{
return GRAPHICS_VER(vm->i915) >= 8 &&
!HAS_LMEM(vm->i915) &&
vm->is_ggtt;
}
static void i915_ggtt_color_adjust(const struct drm_mm_node *node,
unsigned long color,
u64 *start,
@@ -104,23 +98,6 @@ int i915_ggtt_init_hw(struct drm_i915_private *i915)
return 0;
}
/*
* Return the value of the last GGTT pte cast to an u64, if
* the system is supposed to retain ptes across resume. 0 otherwise.
*/
static u64 read_last_pte(struct i915_address_space *vm)
{
struct i915_ggtt *ggtt = i915_vm_to_ggtt(vm);
gen8_pte_t __iomem *ptep;
if (!suspend_retains_ptes(vm))
return 0;
GEM_BUG_ON(GRAPHICS_VER(vm->i915) < 8);
ptep = (typeof(ptep))ggtt->gsm + (ggtt_total_entries(ggtt) - 1);
return readq(ptep);
}
/**
* i915_ggtt_suspend_vm - Suspend the memory mappings for a GGTT or DPT VM
* @vm: The VM to suspend the mappings for
@@ -184,10 +161,7 @@ retry:
i915_gem_object_unlock(obj);
}
if (!suspend_retains_ptes(vm))
vm->clear_range(vm, 0, vm->total);
else
i915_vm_to_ggtt(vm)->probed_pte = read_last_pte(vm);
vm->clear_range(vm, 0, vm->total);
vm->skip_pte_rewrite = save_skip_rewrite;
@@ -196,10 +170,13 @@
void i915_ggtt_suspend(struct i915_ggtt *ggtt)
{
struct intel_gt *gt;
i915_ggtt_suspend_vm(&ggtt->vm);
ggtt->invalidate(ggtt);
intel_gt_check_and_clear_faults(ggtt->vm.gt);
list_for_each_entry(gt, &ggtt->gt_list, ggtt_link)
intel_gt_check_and_clear_faults(gt);
}
void gen6_ggtt_invalidate(struct i915_ggtt *ggtt)
@@ -225,16 +202,21 @@ static void gen8_ggtt_invalidate(struct i915_ggtt *ggtt)
static void guc_ggtt_invalidate(struct i915_ggtt *ggtt)
{
struct intel_uncore *uncore = ggtt->vm.gt->uncore;
struct drm_i915_private *i915 = ggtt->vm.i915;
gen8_ggtt_invalidate(ggtt);
if (GRAPHICS_VER(i915) >= 12)
intel_uncore_write_fw(uncore, GEN12_GUC_TLB_INV_CR,
GEN12_GUC_TLB_INV_CR_INVALIDATE);
else
intel_uncore_write_fw(uncore, GEN8_GTCR, GEN8_GTCR_INVALIDATE);
if (GRAPHICS_VER(i915) >= 12) {
struct intel_gt *gt;
list_for_each_entry(gt, &ggtt->gt_list, ggtt_link)
intel_uncore_write_fw(gt->uncore,
GEN12_GUC_TLB_INV_CR,
GEN12_GUC_TLB_INV_CR_INVALIDATE);
} else {
intel_uncore_write_fw(ggtt->vm.gt->uncore,
GEN8_GTCR, GEN8_GTCR_INVALIDATE);
}
}
u64 gen8_ggtt_pte_encode(dma_addr_t addr,
@@ -287,8 +269,11 @@ static void gen8_ggtt_insert_entries(struct i915_address_space *vm,
*/
gte = (gen8_pte_t __iomem *)ggtt->gsm;
gte += vma_res->start / I915_GTT_PAGE_SIZE;
end = gte + vma_res->node_size / I915_GTT_PAGE_SIZE;
gte += (vma_res->start - vma_res->guard) / I915_GTT_PAGE_SIZE;
end = gte + vma_res->guard / I915_GTT_PAGE_SIZE;
while (gte < end)
gen8_set_pte(gte++, vm->scratch[0]->encode);
end += (vma_res->node_size + vma_res->guard) / I915_GTT_PAGE_SIZE;
for_each_sgt_daddr(addr, iter, vma_res->bi.pages)
gen8_set_pte(gte++, pte_encode | addr);
@@ -338,9 +323,12 @@ static void gen6_ggtt_insert_entries(struct i915_address_space *vm,
dma_addr_t addr;
gte = (gen6_pte_t __iomem *)ggtt->gsm;
gte += vma_res->start / I915_GTT_PAGE_SIZE;
end = gte + vma_res->node_size / I915_GTT_PAGE_SIZE;
gte += (vma_res->start - vma_res->guard) / I915_GTT_PAGE_SIZE;
end = gte + vma_res->guard / I915_GTT_PAGE_SIZE;
while (gte < end)
iowrite32(vm->scratch[0]->encode, gte++);
end += (vma_res->node_size + vma_res->guard) / I915_GTT_PAGE_SIZE;
for_each_sgt_daddr(addr, iter, vma_res->bi.pages)
iowrite32(vm->pte_encode(addr, level, flags), gte++);
GEM_BUG_ON(gte > end);
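
The reworked insert-entries paths above interleave guard and payload PTE
writes; since this rendering lost the +/- diff markers, here is a
compilable restatement of the resulting flow as read from the hunks (an
interpretation, not verbatim driver code): scratch PTEs over the leading
guard, real PTEs over the payload, scratch again over the trailing guard.

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE 4096ull

    static void insert_entries(uint64_t *gsm, uint64_t start,
                               uint64_t guard, uint64_t node_size,
                               uint64_t scratch)
    {
        /* start/node_size describe the payload; guard pages sit on
         * both sides of it inside the same drm_mm node. */
        uint64_t *gte = gsm + (start - guard) / PAGE;
        uint64_t *end = gte + guard / PAGE;

        while (gte < end)                  /* leading guard */
            *gte++ = scratch;

        end += (node_size + guard) / PAGE; /* payload + trailing guard */
        for (uint64_t i = 0; i < node_size / PAGE; i++)
            *gte++ = 0x1000 + i;           /* stand-in for real PTEs */

        while (gte < end)                  /* trailing guard */
            *gte++ = scratch;
    }

    int main(void)
    {
        uint64_t gsm[8] = { 0 };
        int i;

        /* payload of 4 pages at page 1, one guard page per side */
        insert_entries(gsm, 1 * PAGE, 1 * PAGE, 4 * PAGE, 0x5C);
        for (i = 0; i < 8; i++)
            printf("pte[%d] = %#llx\n", i, (unsigned long long)gsm[i]);
        return 0;
    }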
@@ -361,27 +349,6 @@ static void nop_clear_range(struct i915_address_space *vm,
{
}
static void gen8_ggtt_clear_range(struct i915_address_space *vm,
u64 start, u64 length)
{
struct i915_ggtt *ggtt = i915_vm_to_ggtt(vm);
unsigned int first_entry = start / I915_GTT_PAGE_SIZE;
unsigned int num_entries = length / I915_GTT_PAGE_SIZE;
const gen8_pte_t scratch_pte = vm->scratch[0]->encode;
gen8_pte_t __iomem *gtt_base =
(gen8_pte_t __iomem *)ggtt->gsm + first_entry;
const int max_entries = ggtt_total_entries(ggtt) - first_entry;
int i;
if (WARN(num_entries > max_entries,
"First entry = %d; Num entries = %d (max=%d)\n",
first_entry, num_entries, max_entries))
num_entries = max_entries;
for (i = 0; i < num_entries; i++)
gen8_set_pte(&gtt_base[i], scratch_pte);
}
static void bxt_vtd_ggtt_wa(struct i915_address_space *vm)
{
/*
@@ -551,8 +518,6 @@ static int init_ggtt(struct i915_ggtt *ggtt)
struct drm_mm_node *entry;
int ret;
ggtt->pte_lost = true;
/*
* GuC requires all resources that we're sharing with it to be placed in
* non-WOPCM memory. If GuC is not present or not in use we still need a
@@ -953,8 +918,6 @@ static int gen8_gmch_probe(struct i915_ggtt *ggtt)
ggtt->vm.cleanup = gen6_gmch_remove;
ggtt->vm.insert_page = gen8_ggtt_insert_page;
ggtt->vm.clear_range = nop_clear_range;
if (intel_scanout_needs_vtd_wa(i915))
ggtt->vm.clear_range = gen8_ggtt_clear_range;
ggtt->vm.insert_entries = gen8_ggtt_insert_entries;
@@ -979,15 +942,16 @@ static int gen8_gmch_probe(struct i915_ggtt *ggtt)
I915_VMA_GLOBAL_BIND | I915_VMA_LOCAL_BIND;
}
ggtt->invalidate = gen8_ggtt_invalidate;
if (intel_uc_wants_guc(&ggtt->vm.gt->uc))
ggtt->invalidate = guc_ggtt_invalidate;
else
ggtt->invalidate = gen8_ggtt_invalidate;
ggtt->vm.vma_ops.bind_vma = intel_ggtt_bind_vma;
ggtt->vm.vma_ops.unbind_vma = intel_ggtt_unbind_vma;
ggtt->vm.pte_encode = gen8_ggtt_pte_encode;
setup_private_pat(ggtt->vm.gt);
return ggtt_probe_common(ggtt, size);
}
@@ -1115,7 +1079,7 @@ static int gen6_gmch_probe(struct i915_ggtt *ggtt)
ggtt->vm.alloc_scratch_dma = alloc_pt_dma;
ggtt->vm.clear_range = nop_clear_range;
if (!HAS_FULL_PPGTT(i915) || intel_scanout_needs_vtd_wa(i915))
if (!HAS_FULL_PPGTT(i915))
ggtt->vm.clear_range = gen6_ggtt_clear_range;
ggtt->vm.insert_page = gen6_ggtt_insert_page;
ggtt->vm.insert_entries = gen6_ggtt_insert_entries;
@@ -1196,7 +1160,14 @@ static int ggtt_probe_hw(struct i915_ggtt *ggtt, struct intel_gt *gt)
*/
int i915_ggtt_probe_hw(struct drm_i915_private *i915)
{
int ret;
struct intel_gt *gt;
int ret, i;
for_each_gt(gt, i915, i) {
ret = intel_gt_assign_ggtt(gt);
if (ret)
return ret;
}
ret = ggtt_probe_hw(to_gt(i915)->ggtt, to_gt(i915));
if (ret)
@@ -1208,6 +1179,19 @@ int i915_ggtt_probe_hw(struct drm_i915_private *i915)
return 0;
}
struct i915_ggtt *i915_ggtt_create(struct drm_i915_private *i915)
{
struct i915_ggtt *ggtt;
ggtt = drmm_kzalloc(&i915->drm, sizeof(*ggtt), GFP_KERNEL);
if (!ggtt)
return ERR_PTR(-ENOMEM);
INIT_LIST_HEAD(&ggtt->gt_list);
return ggtt;
}
int i915_ggtt_enable_hw(struct drm_i915_private *i915)
{
if (GRAPHICS_VER(i915) < 6)
@@ -1216,29 +1200,6 @@ int i915_ggtt_enable_hw(struct drm_i915_private *i915)
return 0;
}
void i915_ggtt_enable_guc(struct i915_ggtt *ggtt)
{
GEM_BUG_ON(ggtt->invalidate != gen8_ggtt_invalidate);
ggtt->invalidate = guc_ggtt_invalidate;
ggtt->invalidate(ggtt);
}
void i915_ggtt_disable_guc(struct i915_ggtt *ggtt)
{
/* XXX Temporary pardon for error unload */
if (ggtt->invalidate == gen8_ggtt_invalidate)
return;
/* We should only be called after i915_ggtt_enable_guc() */
GEM_BUG_ON(ggtt->invalidate != guc_ggtt_invalidate);
ggtt->invalidate = gen8_ggtt_invalidate;
ggtt->invalidate(ggtt);
}
/**
* i915_ggtt_resume_vm - Restore the memory mappings for a GGTT or DPT VM
* @vm: The VM to restore the mappings for
@@ -1253,20 +1214,11 @@ bool i915_ggtt_resume_vm(struct i915_address_space *vm)
{
struct i915_vma *vma;
bool write_domain_objs = false;
bool retained_ptes;
drm_WARN_ON(&vm->i915->drm, !vm->is_ggtt && !vm->is_dpt);
/*
* First fill our portion of the GTT with scratch pages if
* they were not retained across suspend.
*/
retained_ptes = suspend_retains_ptes(vm) &&
!i915_vm_to_ggtt(vm)->pte_lost &&
!GEM_WARN_ON(i915_vm_to_ggtt(vm)->probed_pte != read_last_pte(vm));
if (!retained_ptes)
vm->clear_range(vm, 0, vm->total);
/* First fill our portion of the GTT with scratch pages */
vm->clear_range(vm, 0, vm->total);
/* clflush objects bound into the GGTT and rebind them. */
list_for_each_entry(vma, &vm->bound_list, vm_link) {
@@ -1275,16 +1227,16 @@ bool i915_ggtt_resume_vm(struct i915_address_space *vm)
atomic_read(&vma->flags) & I915_VMA_BIND_MASK;
GEM_BUG_ON(!was_bound);
if (!retained_ptes) {
/*
* Clear the bound flags of the vma resource to allow
* ptes to be repopulated.
*/
vma->resource->bound_flags = 0;
vma->ops->bind_vma(vm, NULL, vma->resource,
obj ? obj->cache_level : 0,
was_bound);
}
/*
* Clear the bound flags of the vma resource to allow
* ptes to be repopulated.
*/
vma->resource->bound_flags = 0;
vma->ops->bind_vma(vm, NULL, vma->resource,
obj ? obj->cache_level : 0,
was_bound);
if (obj) { /* only used during resume => exclusive access */
write_domain_objs |= fetch_and_zero(&obj->write_domain);
obj->read_domains |= I915_GEM_DOMAIN_GTT;
@@ -1296,9 +1248,11 @@ bool i915_ggtt_resume_vm(struct i915_address_space *vm)
void i915_ggtt_resume(struct i915_ggtt *ggtt)
{
struct intel_gt *gt;
bool flush;
intel_gt_check_and_clear_faults(ggtt->vm.gt);
list_for_each_entry(gt, &ggtt->gt_list, ggtt_link)
intel_gt_check_and_clear_faults(gt);
flush = i915_ggtt_resume_vm(&ggtt->vm);
@@ -1307,13 +1261,5 @@ void i915_ggtt_resume(struct i915_ggtt *ggtt)
if (flush)
wbinvd_on_all_cpus();
if (GRAPHICS_VER(ggtt->vm.i915) >= 8)
setup_private_pat(ggtt->vm.gt);
intel_ggtt_restore_fences(ggtt);
}
void i915_ggtt_mark_pte_lost(struct drm_i915_private *i915, bool val)
{
to_gt(i915)->ggtt->pte_lost = val;
}

View File

@@ -220,7 +220,8 @@ static int fence_update(struct i915_fence_reg *fence,
return ret;
}
fence->start = vma->node.start;
GEM_BUG_ON(vma->fence_size > i915_vma_size(vma));
fence->start = i915_ggtt_offset(vma);
fence->size = vma->fence_size;
fence->stride = i915_gem_object_get_stride(vma->obj);
fence->tiling = i915_gem_object_get_tiling(vma->obj);

View File

@@ -6,7 +6,6 @@
#include "intel_ggtt_gmch.h"
#include <drm/intel-gtt.h>
#include <drm/i915_drm.h>
#include <linux/agp_backend.h>

View File

@@ -21,6 +21,7 @@
#define INSTR_CLIENT_SHIFT 29
#define INSTR_MI_CLIENT 0x0
#define INSTR_BC_CLIENT 0x2
#define INSTR_GSC_CLIENT 0x2 /* MTL+ */
#define INSTR_RC_CLIENT 0x3
#define INSTR_SUBCLIENT_SHIFT 27
#define INSTR_SUBCLIENT_MASK 0x18000000
@@ -432,6 +433,12 @@
#define COLOR_BLT ((0x2<<29)|(0x40<<22))
#define SRC_COPY_BLT ((0x2<<29)|(0x43<<22))
#define GSC_INSTR(opcode, data, flags) \
(__INSTR(INSTR_GSC_CLIENT) | (opcode) << 22 | (data) << 9 | (flags))
#define GSC_FW_LOAD GSC_INSTR(1, 0, 2)
#define HECI1_FW_LIMIT_VALID (1 << 31)
/*
* Used to convert any address to canonical form.
* Starting from gen8, some commands (e.g. STATE_BASE_ADDRESS,

View File

@@ -174,6 +174,14 @@ static void gsc_init_one(struct drm_i915_private *i915, struct intel_gsc *gsc,
intf->irq = -1;
intf->id = intf_id;
/*
* On multi-tile setups the GSC is functional on the first tile only
*/
if (gsc_to_gt(gsc)->info.id != 0) {
drm_dbg(&i915->drm, "Not initializing gsc for remote tiles\n");
return;
}
if (intf_id == 0 && !HAS_HECI_PXP(i915))
return;

View File

@@ -8,7 +8,6 @@
#include "gem/i915_gem_internal.h"
#include "gem/i915_gem_lmem.h"
#include "pxp/intel_pxp.h"
#include "i915_drv.h"
#include "i915_perf_oa_regs.h"
@@ -23,6 +22,7 @@
#include "intel_gt_debugfs.h"
#include "intel_gt_mcr.h"
#include "intel_gt_pm.h"
#include "intel_gt_print.h"
#include "intel_gt_regs.h"
#include "intel_gt_requests.h"
#include "intel_migrate.h"
@@ -90,9 +90,8 @@ static int intel_gt_probe_lmem(struct intel_gt *gt)
if (err == -ENODEV)
return 0;
drm_err(&i915->drm,
"Failed to setup region(%d) type=%d\n",
err, INTEL_MEMORY_LOCAL);
gt_err(gt, "Failed to setup region(%d) type=%d\n",
err, INTEL_MEMORY_LOCAL);
return err;
}
@@ -110,9 +109,18 @@
int intel_gt_assign_ggtt(struct intel_gt *gt)
{
gt->ggtt = drmm_kzalloc(&gt->i915->drm, sizeof(*gt->ggtt), GFP_KERNEL);
/* Media GT shares primary GT's GGTT */
if (gt->type == GT_MEDIA) {
gt->ggtt = to_gt(gt->i915)->ggtt;
} else {
gt->ggtt = i915_ggtt_create(gt->i915);
if (IS_ERR(gt->ggtt))
return PTR_ERR(gt->ggtt);
}
return gt->ggtt ? 0 : -ENOMEM;
list_add_tail(&gt->ggtt_link, &gt->ggtt->gt_list);
return 0;
}
int intel_gt_init_mmio(struct intel_gt *gt)
@@ -192,14 +200,14 @@ int intel_gt_init_hw(struct intel_gt *gt)
ret = i915_ppgtt_init_hw(gt);
if (ret) {
drm_err(&i915->drm, "Enabling PPGTT failed (%d)\n", ret);
gt_err(gt, "Enabling PPGTT failed (%d)\n", ret);
goto out;
}
/* We can't enable contexts until all firmware is loaded */
ret = intel_uc_init_hw(&gt->uc);
if (ret) {
i915_probe_error(i915, "Enabling uc failed (%d)\n", ret);
gt_probe_error(gt, "Enabling uc failed (%d)\n", ret);
goto out;
}
@@ -210,21 +218,6 @@ out:
return ret;
}
static void rmw_set(struct intel_uncore *uncore, i915_reg_t reg, u32 set)
{
intel_uncore_rmw(uncore, reg, 0, set);
}
static void rmw_clear(struct intel_uncore *uncore, i915_reg_t reg, u32 clr)
{
intel_uncore_rmw(uncore, reg, clr, 0);
}
static void clear_register(struct intel_uncore *uncore, i915_reg_t reg)
{
intel_uncore_rmw(uncore, reg, 0, 0);
}
static void gen6_clear_engine_error_register(struct intel_engine_cs *engine)
{
GEN6_RING_FAULT_REG_RMW(engine, RING_FAULT_VALID, 0);
@@ -250,22 +243,22 @@ intel_gt_clear_error_registers(struct intel_gt *gt,
u32 eir;
if (GRAPHICS_VER(i915) != 2)
clear_register(uncore, PGTBL_ER);
intel_uncore_write(uncore, PGTBL_ER, 0);
if (GRAPHICS_VER(i915) < 4)
clear_register(uncore, IPEIR(RENDER_RING_BASE));
intel_uncore_write(uncore, IPEIR(RENDER_RING_BASE), 0);
else
clear_register(uncore, IPEIR_I965);
intel_uncore_write(uncore, IPEIR_I965, 0);
clear_register(uncore, EIR);
intel_uncore_write(uncore, EIR, 0);
eir = intel_uncore_read(uncore, EIR);
if (eir) {
/*
* some errors might have become stuck,
* mask them.
*/
drm_dbg(&gt->i915->drm, "EIR stuck: 0x%08x, masking\n", eir);
rmw_set(uncore, EMR, eir);
gt_dbg(gt, "EIR stuck: 0x%08x, masking\n", eir);
intel_uncore_rmw(uncore, EMR, 0, eir);
intel_uncore_write(uncore, GEN2_IIR,
I915_MASTER_ERROR_INTERRUPT);
}
@@ -275,10 +268,10 @@ intel_gt_clear_error_registers(struct intel_gt *gt,
RING_FAULT_VALID, 0);
intel_gt_mcr_read_any(gt, XEHP_RING_FAULT_REG);
} else if (GRAPHICS_VER(i915) >= 12) {
rmw_clear(uncore, GEN12_RING_FAULT_REG, RING_FAULT_VALID);
intel_uncore_rmw(uncore, GEN12_RING_FAULT_REG, RING_FAULT_VALID, 0);
intel_uncore_posting_read(uncore, GEN12_RING_FAULT_REG);
} else if (GRAPHICS_VER(i915) >= 8) {
rmw_clear(uncore, GEN8_RING_FAULT_REG, RING_FAULT_VALID);
intel_uncore_rmw(uncore, GEN8_RING_FAULT_REG, RING_FAULT_VALID, 0);
intel_uncore_posting_read(uncore, GEN8_RING_FAULT_REG);
} else if (GRAPHICS_VER(i915) >= 6) {
struct intel_engine_cs *engine;
@@ -298,16 +291,16 @@ static void gen6_check_faults(struct intel_gt *gt)
for_each_engine(engine, gt, id) {
fault = GEN6_RING_FAULT_REG_READ(engine);
if (fault & RING_FAULT_VALID) {
drm_dbg(&engine->i915->drm, "Unexpected fault\n"
"\tAddr: 0x%08lx\n"
"\tAddress space: %s\n"
"\tSource ID: %d\n"
"\tType: %d\n",
fault & PAGE_MASK,
fault & RING_FAULT_GTTSEL_MASK ?
"GGTT" : "PPGTT",
RING_FAULT_SRCID(fault),
RING_FAULT_FAULT_TYPE(fault));
gt_dbg(gt, "Unexpected fault\n"
"\tAddr: 0x%08lx\n"
"\tAddress space: %s\n"
"\tSource ID: %d\n"
"\tType: %d\n",
fault & PAGE_MASK,
fault & RING_FAULT_GTTSEL_MASK ?
"GGTT" : "PPGTT",
RING_FAULT_SRCID(fault),
RING_FAULT_FAULT_TYPE(fault));
}
}
}
@@ -334,17 +327,17 @@ static void xehp_check_faults(struct intel_gt *gt)
fault_addr = ((u64)(fault_data1 & FAULT_VA_HIGH_BITS) << 44) |
((u64)fault_data0 << 12);
drm_dbg(&gt->i915->drm, "Unexpected fault\n"
"\tAddr: 0x%08x_%08x\n"
"\tAddress space: %s\n"
"\tEngine ID: %d\n"
"\tSource ID: %d\n"
"\tType: %d\n",
upper_32_bits(fault_addr), lower_32_bits(fault_addr),
fault_data1 & FAULT_GTT_SEL ? "GGTT" : "PPGTT",
GEN8_RING_FAULT_ENGINE_ID(fault),
RING_FAULT_SRCID(fault),
RING_FAULT_FAULT_TYPE(fault));
gt_dbg(gt, "Unexpected fault\n"
"\tAddr: 0x%08x_%08x\n"
"\tAddress space: %s\n"
"\tEngine ID: %d\n"
"\tSource ID: %d\n"
"\tType: %d\n",
upper_32_bits(fault_addr), lower_32_bits(fault_addr),
fault_data1 & FAULT_GTT_SEL ? "GGTT" : "PPGTT",
GEN8_RING_FAULT_ENGINE_ID(fault),
RING_FAULT_SRCID(fault),
RING_FAULT_FAULT_TYPE(fault));
}
}
@@ -375,17 +368,17 @@ static void gen8_check_faults(struct intel_gt *gt)
fault_addr = ((u64)(fault_data1 & FAULT_VA_HIGH_BITS) << 44) |
((u64)fault_data0 << 12);
drm_dbg(&uncore->i915->drm, "Unexpected fault\n"
"\tAddr: 0x%08x_%08x\n"
"\tAddress space: %s\n"
"\tEngine ID: %d\n"
"\tSource ID: %d\n"
"\tType: %d\n",
upper_32_bits(fault_addr), lower_32_bits(fault_addr),
fault_data1 & FAULT_GTT_SEL ? "GGTT" : "PPGTT",
GEN8_RING_FAULT_ENGINE_ID(fault),
RING_FAULT_SRCID(fault),
RING_FAULT_FAULT_TYPE(fault));
gt_dbg(gt, "Unexpected fault\n"
"\tAddr: 0x%08x_%08x\n"
"\tAddress space: %s\n"
"\tEngine ID: %d\n"
"\tSource ID: %d\n"
"\tType: %d\n",
upper_32_bits(fault_addr), lower_32_bits(fault_addr),
fault_data1 & FAULT_GTT_SEL ? "GGTT" : "PPGTT",
GEN8_RING_FAULT_ENGINE_ID(fault),
RING_FAULT_SRCID(fault),
RING_FAULT_FAULT_TYPE(fault));
}
}
@@ -479,7 +472,7 @@ static int intel_gt_init_scratch(struct intel_gt *gt, unsigned int size)
if (IS_ERR(obj))
obj = i915_gem_object_create_internal(i915, size);
if (IS_ERR(obj)) {
drm_err(&i915->drm, "Failed to allocate scratch page\n");
gt_err(gt, "Failed to allocate scratch page\n");
return PTR_ERR(obj);
}
@@ -734,8 +727,7 @@ int intel_gt_init(struct intel_gt *gt)
err = intel_gt_init_hwconfig(gt);
if (err)
drm_err(&gt->i915->drm, "Failed to retrieve hwconfig table: %pe\n",
ERR_PTR(err));
gt_err(gt, "Failed to retrieve hwconfig table: %pe\n", ERR_PTR(err));
err = __engines_record_defaults(gt);
if (err)
@@ -753,8 +745,6 @@ int intel_gt_init(struct intel_gt *gt)
intel_migrate_init(&gt->migrate, gt);
intel_pxp_init(&gt->pxp);
goto out_fw;
err_gt:
__intel_gt_disable(gt);
@@ -794,8 +784,6 @@ void intel_gt_driver_unregister(struct intel_gt *gt)
intel_rps_driver_unregister(&gt->rps);
intel_gsc_fini(&gt->gsc);
intel_pxp_fini(&gt->pxp);
/*
* Upon unregistering the device to prevent any new users, cancel
* all in-flight requests so that we can quickly unbind the active
@@ -896,7 +884,7 @@ int intel_gt_probe_all(struct drm_i915_private *i915)
gt->name = "Primary GT";
gt->info.engine_mask = RUNTIME_INFO(i915)->platform_engine_mask;
drm_dbg(&i915->drm, "Setting up %s\n", gt->name);
gt_dbg(gt, "Setting up %s\n", gt->name);
ret = intel_gt_tile_setup(gt, phys_addr);
if (ret)
return ret;
@@ -921,7 +909,7 @@ int intel_gt_probe_all(struct drm_i915_private *i915)
gt->info.engine_mask = gtdef->engine_mask;
gt->info.id = i;
drm_dbg(&i915->drm, "Setting up %s\n", gt->name);
gt_dbg(gt, "Setting up %s\n", gt->name);
if (GEM_WARN_ON(range_overflows_t(resource_size_t,
gtdef->mapping_base,
SZ_16M,
@@ -1009,8 +997,7 @@ get_reg_and_bit(const struct intel_engine_cs *engine, const bool gen8,
const unsigned int class = engine->class;
struct reg_and_bit rb = { };
if (drm_WARN_ON_ONCE(&engine->i915->drm,
class >= num || !regs[class].reg))
if (gt_WARN_ON_ONCE(engine->gt, class >= num || !regs[class].reg))
return rb;
rb.reg = regs[class];
@@ -1079,11 +1066,25 @@ static void mmio_invalidate_full(struct intel_gt *gt)
enum intel_engine_id id;
const i915_reg_t *regs;
unsigned int num = 0;
unsigned long flags;
if (GRAPHICS_VER_FULL(i915) >= IP_VER(12, 50)) {
/*
* New platforms should not be added with a catch-all-newer (>=)
* condition, so that any platform added later triggers the warning below
* and in turn mandates a human cross-check of whether the invalidation
* flows have compatible semantics.
*
* For instance with the 11.00 -> 12.00 transition three out of five
* respective engine registers were moved to masked type. Then after the
* 12.00 -> 12.50 transition multicast handling is required too.
*/
if (GRAPHICS_VER_FULL(i915) == IP_VER(12, 50) ||
GRAPHICS_VER_FULL(i915) == IP_VER(12, 55)) {
regs = NULL;
num = ARRAY_SIZE(xehp_regs);
} else if (GRAPHICS_VER(i915) == 12) {
} else if (GRAPHICS_VER_FULL(i915) == IP_VER(12, 0) ||
GRAPHICS_VER_FULL(i915) == IP_VER(12, 10)) {
regs = gen12_regs;
num = ARRAY_SIZE(gen12_regs);
} else if (GRAPHICS_VER(i915) >= 8 && GRAPHICS_VER(i915) <= 11) {
@@ -1093,13 +1094,13 @@ static void mmio_invalidate_full(struct intel_gt *gt)
return;
}
if (drm_WARN_ONCE(&i915->drm, !num,
"Platform does not implement TLB invalidation!"))
if (gt_WARN_ONCE(gt, !num, "Platform does not implement TLB invalidation!"))
return;
intel_uncore_forcewake_get(uncore, FORCEWAKE_ALL);
spin_lock_irq(&uncore->lock); /* serialise invalidate with GT reset */
intel_gt_mcr_lock(gt, &flags);
spin_lock(&uncore->lock); /* serialise invalidate with GT reset */
awake = 0;
for_each_engine(engine, gt, id) {
@@ -1144,7 +1145,8 @@ static void mmio_invalidate_full(struct intel_gt *gt)
IS_ALDERLAKE_P(i915)))
intel_uncore_write_fw(uncore, GEN12_OA_TLB_INV_CR, 1);
spin_unlock_irq(&uncore->lock);
spin_unlock(&uncore->lock);
intel_gt_mcr_unlock(gt, flags);
for_each_engine_masked(engine, gt, awake, tmp) {
struct reg_and_bit rb;
@@ -1157,9 +1159,8 @@ static void mmio_invalidate_full(struct intel_gt *gt)
}
if (wait_for_invalidate(gt, rb))
drm_err_ratelimited(&gt->i915->drm,
"%s TLB invalidation did not complete in %ums!\n",
engine->name, TLB_INVAL_TIMEOUT_MS);
gt_err_ratelimited(gt, "%s TLB invalidation did not complete in %ums!\n",
engine->name, TLB_INVAL_TIMEOUT_MS);
}
/*

View File

@@ -39,6 +39,11 @@ static inline struct intel_gt *huc_to_gt(struct intel_huc *huc)
return container_of(huc, struct intel_gt, uc.huc);
}
static inline struct intel_gt *gsc_uc_to_gt(struct intel_gsc_uc *gsc_uc)
{
return container_of(gsc_uc, struct intel_gt, uc.gsc);
}
static inline struct intel_gt *gsc_to_gt(struct intel_gsc *gsc)
{
return container_of(gsc, struct intel_gt, gsc);

View File

@@ -7,6 +7,7 @@
#include "i915_reg.h"
#include "intel_gt.h"
#include "intel_gt_clock_utils.h"
#include "intel_gt_print.h"
#include "intel_gt_regs.h"
static u32 read_reference_ts_freq(struct intel_uncore *uncore)
@@ -193,10 +194,9 @@ void intel_gt_init_clock_frequency(struct intel_gt *gt)
void intel_gt_check_clock_frequency(const struct intel_gt *gt)
{
if (gt->clock_frequency != read_clock_frequency(gt->uncore)) {
dev_err(gt->i915->drm.dev,
"GT clock frequency changed, was %uHz, now %uHz!\n",
gt->clock_frequency,
read_clock_frequency(gt->uncore));
gt_err(gt, "GT clock frequency changed, was %uHz, now %uHz!\n",
gt->clock_frequency,
read_clock_frequency(gt->uncore));
}
}
#endif

View File

@@ -12,7 +12,6 @@
#include "intel_gt_mcr.h"
#include "intel_gt_pm_debugfs.h"
#include "intel_sseu_debugfs.h"
#include "pxp/intel_pxp_debugfs.h"
#include "uc/intel_uc_debugfs.h"
int intel_gt_debugfs_reset_show(struct intel_gt *gt, u64 *val)
@@ -99,7 +98,6 @@ void intel_gt_debugfs_register(struct intel_gt *gt)
intel_sseu_debugfs_register(gt, root);
intel_uc_debugfs_register(&gt->uc, root);
intel_pxp_debugfs_register(&gt->pxp, root);
}
void intel_gt_debugfs_register_files(struct dentry *root,

View File

@@ -10,6 +10,7 @@
#include "intel_breadcrumbs.h"
#include "intel_gt.h"
#include "intel_gt_irq.h"
#include "intel_gt_print.h"
#include "intel_gt_regs.h"
#include "intel_uncore.h"
#include "intel_rps.h"
@@ -47,9 +48,8 @@ gen11_gt_engine_identity(struct intel_gt *gt,
!time_after32(local_clock() >> 10, timeout_ts));
if (unlikely(!(ident & GEN11_INTR_DATA_VALID))) {
drm_err(&gt->i915->drm,
"INTR_IDENTITY_REG%u:%u 0x%08x not valid!\n",
bank, bit, ident);
gt_err(gt, "INTR_IDENTITY_REG%u:%u 0x%08x not valid!\n",
bank, bit, ident);
return 0;
}
@@ -76,7 +76,7 @@ gen11_other_irq_handler(struct intel_gt *gt, const u8 instance,
return gen11_rps_irq_handler(&media_gt->rps, iir);
if (instance == OTHER_KCR_INSTANCE)
return intel_pxp_irq_handler(&gt->pxp, iir);
return intel_pxp_irq_handler(gt->i915->pxp, iir);
if (instance == OTHER_GSC_INSTANCE)
return intel_gsc_irq_handler(gt, iir);
@@ -378,8 +378,7 @@ void gen6_gt_irq_handler(struct intel_gt *gt, u32 gt_iir)
if (gt_iir & (GT_BLT_CS_ERROR_INTERRUPT |
GT_BSD_CS_ERROR_INTERRUPT |
GT_CS_MASTER_ERROR_INTERRUPT))
drm_dbg(&gt->i915->drm, "Command parser error, gt_iir 0x%08x\n",
gt_iir);
gt_dbg(gt, "Command parser error, gt_iir 0x%08x\n", gt_iir);
if (gt_iir & GT_PARITY_ERROR(gt->i915))
gen7_parity_error_irq_handler(gt, gt_iir);

View File

@@ -6,6 +6,7 @@
#include "i915_drv.h"
#include "intel_gt_mcr.h"
#include "intel_gt_print.h"
#include "intel_gt_regs.h"
/**
@@ -143,6 +144,8 @@ void intel_gt_mcr_init(struct intel_gt *gt)
unsigned long fuse;
int i;
spin_lock_init(&gt->mcr_lock);
/*
* An mslice is unavailable only if both the meml3 for the slice is
* disabled *and* all of the DSS in the slice (quadrant) are disabled.
@@ -156,14 +159,21 @@ void intel_gt_mcr_init(struct intel_gt *gt)
GEN12_MEML3_EN_MASK);
if (!gt->info.mslice_mask) /* should be impossible! */
drm_warn(&i915->drm, "mslice mask all zero!\n");
gt_warn(gt, "mslice mask all zero!\n");
}
if (MEDIA_VER(i915) >= 13 && gt->type == GT_MEDIA) {
gt->steering_table[OADDRM] = xelpmp_oaddrm_steering_table;
} else if (GRAPHICS_VER_FULL(i915) >= IP_VER(12, 70)) {
fuse = REG_FIELD_GET(GT_L3_EXC_MASK,
intel_uncore_read(gt->uncore, XEHP_FUSE4));
/* Wa_14016747170 */
if (IS_MTL_GRAPHICS_STEP(i915, M, STEP_A0, STEP_B0) ||
IS_MTL_GRAPHICS_STEP(i915, P, STEP_A0, STEP_B0))
fuse = REG_FIELD_GET(MTL_GT_L3_EXC_MASK,
intel_uncore_read(gt->uncore,
MTL_GT_ACTIVITY_FACTOR));
else
fuse = REG_FIELD_GET(GT_L3_EXC_MASK,
intel_uncore_read(gt->uncore, XEHP_FUSE4));
/*
* Despite the register field being named "exclude mask" the
@@ -196,7 +206,7 @@ void intel_gt_mcr_init(struct intel_gt *gt)
~intel_uncore_read(gt->uncore, GEN10_MIRROR_FUSE3) &
GEN10_L3BANK_MASK;
if (!gt->info.l3bank_mask) /* should be impossible! */
drm_warn(&i915->drm, "L3 bank mask is all zero!\n");
gt_warn(gt, "L3 bank mask is all zero!\n");
} else if (GRAPHICS_VER(i915) >= 11) {
/*
* We expect all modern platforms to have at least some
@@ -221,24 +231,26 @@ static i915_reg_t mcr_reg_cast(const i915_mcr_reg_t mcr)
/*
* rw_with_mcr_steering_fw - Access a register with specific MCR steering
* @uncore: pointer to struct intel_uncore
* @gt: GT to read register from
* @reg: register being accessed
* @rw_flag: FW_REG_READ for read access or FW_REG_WRITE for write access
* @group: group number (documented as "sliceid" on older platforms)
* @instance: instance number (documented as "subsliceid" on older platforms)
* @value: register value to be written (ignored for read)
*
* Context: The caller must hold the MCR lock
* Return: 0 for write access, register value for read access.
*
* Caller needs to make sure the relevant forcewake wells are up.
*/
static u32 rw_with_mcr_steering_fw(struct intel_uncore *uncore,
static u32 rw_with_mcr_steering_fw(struct intel_gt *gt,
i915_mcr_reg_t reg, u8 rw_flag,
int group, int instance, u32 value)
{
struct intel_uncore *uncore = gt->uncore;
u32 mcr_mask, mcr_ss, mcr, old_mcr, val = 0;
lockdep_assert_held(&uncore->lock);
lockdep_assert_held(&gt->mcr_lock);
if (GRAPHICS_VER_FULL(uncore->i915) >= IP_VER(12, 70)) {
/*
@@ -308,12 +320,14 @@ static u32 rw_with_mcr_steering_fw(struct intel_uncore *uncore,
return val;
}
static u32 rw_with_mcr_steering(struct intel_uncore *uncore,
static u32 rw_with_mcr_steering(struct intel_gt *gt,
i915_mcr_reg_t reg, u8 rw_flag,
int group, int instance,
u32 value)
{
struct intel_uncore *uncore = gt->uncore;
enum forcewake_domains fw_domains;
unsigned long flags;
u32 val;
fw_domains = intel_uncore_forcewake_for_reg(uncore, mcr_reg_cast(reg),
@@ -322,17 +336,87 @@ static u32 rw_with_mcr_steering(struct intel_uncore *uncore,
GEN8_MCR_SELECTOR,
FW_REG_READ | FW_REG_WRITE);
spin_lock_irq(&uncore->lock);
intel_gt_mcr_lock(gt, &flags);
spin_lock(&uncore->lock);
intel_uncore_forcewake_get__locked(uncore, fw_domains);
val = rw_with_mcr_steering_fw(uncore, reg, rw_flag, group, instance, value);
val = rw_with_mcr_steering_fw(gt, reg, rw_flag, group, instance, value);
intel_uncore_forcewake_put__locked(uncore, fw_domains);
spin_unlock_irq(&uncore->lock);
spin_unlock(&uncore->lock);
intel_gt_mcr_unlock(gt, flags);
return val;
}
/**
* intel_gt_mcr_lock - Acquire MCR steering lock
* @gt: GT structure
* @flags: storage to save IRQ flags to
*
* Performs locking to protect the steering for the duration of an MCR
* operation. On MTL and beyond, a hardware lock will also be taken to
* serialize access not only for the driver, but also for external hardware and
* firmware agents.
*
* Context: Takes gt->mcr_lock. uncore->lock should *not* be held when this
* function is called, although it may be acquired after this
* function call.
*/
void intel_gt_mcr_lock(struct intel_gt *gt, unsigned long *flags)
{
unsigned long __flags;
int err = 0;
lockdep_assert_not_held(&gt->uncore->lock);
/*
* Starting with MTL, we need to coordinate not only with other
* driver threads, but also with hardware/firmware agents. A dedicated
* locking register is used.
*/
if (GRAPHICS_VER_FULL(gt->i915) >= IP_VER(12, 70))
err = wait_for(intel_uncore_read_fw(gt->uncore,
MTL_STEER_SEMAPHORE) == 0x1, 100);
/*
* Even on platforms with a hardware lock, we'll continue to grab
* a software spinlock too for lockdep purposes. If the hardware lock
* was already acquired, there should never be contention on the
* software lock.
*/
spin_lock_irqsave(&gt->mcr_lock, __flags);
*flags = __flags;
/*
* In theory we should never fail to acquire the HW semaphore; this
* would indicate some hardware/firmware is misbehaving and not
* releasing it properly.
*/
if (err == -ETIMEDOUT) {
gt_err_ratelimited(gt, "hardware MCR steering semaphore timed out");
add_taint_for_CI(gt->i915, TAINT_WARN); /* CI is now unreliable */
}
}
/**
* intel_gt_mcr_unlock - Release MCR steering lock
* @gt: GT structure
* @flags: IRQ flags to restore
*
* Releases the lock acquired by intel_gt_mcr_lock().
*
* Context: Releases gt->mcr_lock
*/
void intel_gt_mcr_unlock(struct intel_gt *gt, unsigned long flags)
{
spin_unlock_irqrestore(&gt->mcr_lock, flags);
if (GRAPHICS_VER_FULL(gt->i915) >= IP_VER(12, 70))
intel_uncore_write_fw(gt->uncore, MTL_STEER_SEMAPHORE, 0x1);
}
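For orientation, a hedged sketch of the calling convention these two helpers establish, mirroring the xehp_setup_private_ppat() and init_l3cc_table() hunks later in this series. The function name example_steered_write is hypothetical, and forcewake is assumed to be held already, as the _fw suffix requires:

static void example_steered_write(struct intel_gt *gt)
{
	unsigned long flags;

	/* takes the MTL+ hardware semaphore, then the software spinlock */
	intel_gt_mcr_lock(gt, &flags);
	intel_gt_mcr_multicast_write_fw(gt, XEHP_PAT_INDEX(0), GEN8_PPAT_WB);
	intel_gt_mcr_unlock(gt, flags);
}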
/**
* intel_gt_mcr_read - read a specific instance of an MCR register
* @gt: GT structure
@@ -340,6 +424,8 @@ static u32 rw_with_mcr_steering(struct intel_uncore *uncore,
* @group: the MCR group
* @instance: the MCR instance
*
* Context: Takes and releases gt->mcr_lock
*
* Returns the value read from an MCR register after steering toward a specific
* group/instance.
*/
@@ -347,7 +433,7 @@ u32 intel_gt_mcr_read(struct intel_gt *gt,
i915_mcr_reg_t reg,
int group, int instance)
{
return rw_with_mcr_steering(gt->uncore, reg, FW_REG_READ, group, instance, 0);
return rw_with_mcr_steering(gt, reg, FW_REG_READ, group, instance, 0);
}
/**
@@ -360,11 +446,13 @@ u32 intel_gt_mcr_read(struct intel_gt *gt,
*
* Write an MCR register in unicast mode after steering toward a specific
* group/instance.
*
* Context: Calls a function that takes and releases gt->mcr_lock
*/
void intel_gt_mcr_unicast_write(struct intel_gt *gt, i915_mcr_reg_t reg, u32 value,
int group, int instance)
{
rw_with_mcr_steering(gt->uncore, reg, FW_REG_WRITE, group, instance, value);
rw_with_mcr_steering(gt, reg, FW_REG_WRITE, group, instance, value);
}
/**
@@ -374,10 +462,16 @@ void intel_gt_mcr_unicast_write(struct intel_gt *gt, i915_mcr_reg_t reg, u32 val
* @value: value to write
*
* Write an MCR register in multicast mode to update all instances.
*
* Context: Takes and releases gt->mcr_lock
*/
void intel_gt_mcr_multicast_write(struct intel_gt *gt,
i915_mcr_reg_t reg, u32 value)
{
unsigned long flags;
intel_gt_mcr_lock(gt, &flags);
/*
* Ensure we have multicast behavior, just in case some non-i915 agent
* left the hardware in unicast mode.
@@ -386,6 +480,8 @@ void intel_gt_mcr_multicast_write(struct intel_gt *gt,
intel_uncore_write_fw(gt->uncore, MTL_MCR_SELECTOR, GEN11_MCR_MULTICAST);
intel_uncore_write(gt->uncore, mcr_reg_cast(reg), value);
intel_gt_mcr_unlock(gt, flags);
}
/**
@@ -398,9 +494,13 @@ void intel_gt_mcr_multicast_write(struct intel_gt *gt,
* function assumes the caller is already holding any necessary forcewake
* domains; use intel_gt_mcr_multicast_write() in cases where forcewake should
* be obtained automatically.
*
* Context: The caller must hold gt->mcr_lock.
*/
void intel_gt_mcr_multicast_write_fw(struct intel_gt *gt, i915_mcr_reg_t reg, u32 value)
{
lockdep_assert_held(&gt->mcr_lock);
/*
* Ensure we have multicast behavior, just in case some non-i915 agent
* left the hardware in unicast mode.
@@ -427,6 +527,8 @@ void intel_gt_mcr_multicast_write_fw(struct intel_gt *gt, i915_mcr_reg_t reg, u3
* domains; use intel_gt_mcr_multicast_rmw() in cases where forcewake should
* be obtained automatically.
*
* Context: Calls functions that take and release gt->mcr_lock
*
* Returns the old (unmodified) value read.
*/
u32 intel_gt_mcr_multicast_rmw(struct intel_gt *gt, i915_mcr_reg_t reg,
@@ -578,6 +680,8 @@ void intel_gt_mcr_get_nonterminated_steering(struct intel_gt *gt,
* domains; use intel_gt_mcr_read_any() in cases where forcewake should be
* obtained automatically.
*
* Context: The caller must hold gt->mcr_lock.
*
* Returns the value from a non-terminated instance of @reg.
*/
u32 intel_gt_mcr_read_any_fw(struct intel_gt *gt, i915_mcr_reg_t reg)
@@ -585,10 +689,12 @@ u32 intel_gt_mcr_read_any_fw(struct intel_gt *gt, i915_mcr_reg_t reg)
int type;
u8 group, instance;
lockdep_assert_held(&gt->mcr_lock);
for (type = 0; type < NUM_STEERING_TYPES; type++) {
if (reg_needs_read_steering(gt, reg, type)) {
get_nonterminated_steering(gt, type, &group, &instance);
return rw_with_mcr_steering_fw(gt->uncore, reg,
return rw_with_mcr_steering_fw(gt, reg,
FW_REG_READ,
group, instance, 0);
}
@@ -605,6 +711,8 @@ u32 intel_gt_mcr_read_any_fw(struct intel_gt *gt, i915_mcr_reg_t reg)
* Reads a GT MCR register. The read will be steered to a non-terminated
* instance (i.e., one that isn't fused off or powered down by power gating).
*
* Context: Calls a function that takes and releases gt->mcr_lock.
*
* Returns the value from a non-terminated instance of @reg.
*/
u32 intel_gt_mcr_read_any(struct intel_gt *gt, i915_mcr_reg_t reg)
@@ -615,7 +723,7 @@ u32 intel_gt_mcr_read_any(struct intel_gt *gt, i915_mcr_reg_t reg)
for (type = 0; type < NUM_STEERING_TYPES; type++) {
if (reg_needs_read_steering(gt, reg, type)) {
get_nonterminated_steering(gt, type, &group, &instance);
return rw_with_mcr_steering(gt->uncore, reg,
return rw_with_mcr_steering(gt, reg,
FW_REG_READ,
group, instance, 0);
}
@@ -728,6 +836,7 @@ void intel_gt_mcr_get_ss_steering(struct intel_gt *gt, unsigned int dss,
* Note that this routine assumes the caller holds forcewake asserted; it is
* not suitable for very long waits.
*
* Context: Calls a function that takes and releases gt->mcr_lock
* Return: 0 if the register matches the desired condition, or -ETIMEDOUT.
*/
int intel_gt_mcr_wait_for_reg(struct intel_gt *gt,
@@ -739,7 +848,7 @@
{
int ret;
lockdep_assert_not_held(&gt->uncore->lock);
lockdep_assert_not_held(&gt->mcr_lock);
#define done ((intel_gt_mcr_read_any(gt, reg) & mask) == value)

View File

@@ -9,6 +9,8 @@
#include "intel_gt_types.h"
void intel_gt_mcr_init(struct intel_gt *gt);
void intel_gt_mcr_lock(struct intel_gt *gt, unsigned long *flags);
void intel_gt_mcr_unlock(struct intel_gt *gt, unsigned long flags);
u32 intel_gt_mcr_read(struct intel_gt *gt,
i915_mcr_reg_t reg,

View File

@@ -14,6 +14,7 @@
#include "intel_gt.h"
#include "intel_gt_clock_utils.h"
#include "intel_gt_pm.h"
#include "intel_gt_print.h"
#include "intel_gt_requests.h"
#include "intel_llc.h"
#include "intel_pm.h"
@@ -275,8 +276,7 @@ int intel_gt_resume(struct intel_gt *gt)
/* Only when the HW is re-initialised, can we replay the requests */
err = intel_gt_init_hw(gt);
if (err) {
i915_probe_error(gt->i915,
"Failed to initialize GPU, declaring it wedged!\n");
gt_probe_error(gt, "Failed to initialize GPU, declaring it wedged!\n");
goto err_wedged;
}
@@ -293,9 +293,8 @@ int intel_gt_resume(struct intel_gt *gt)
intel_engine_pm_put(engine);
if (err) {
drm_err(&gt->i915->drm,
"Failed to restart %s (%d)\n",
engine->name, err);
gt_err(gt, "Failed to restart %s (%d)\n",
engine->name, err);
goto err_wedged;
}
}
@@ -304,8 +303,6 @@ int intel_gt_resume(struct intel_gt *gt)
intel_uc_resume(&gt->uc);
intel_pxp_resume(&gt->pxp);
user_forcewake(gt, false);
out_fw:
@@ -339,8 +336,6 @@ void intel_gt_suspend_prepare(struct intel_gt *gt)
{
user_forcewake(gt, true);
wait_for_suspend(gt);
intel_pxp_suspend_prepare(&gt->pxp);
}
static suspend_state_t pm_suspend_target(void)
@@ -365,7 +360,6 @@ void intel_gt_suspend_late(struct intel_gt *gt)
GEM_BUG_ON(gt->awake);
intel_uc_suspend(&gt->uc);
intel_pxp_suspend(&gt->pxp);
/*
* On disabling the device, we want to turn off HW access to memory
@@ -393,7 +387,6 @@ void intel_gt_runtime_suspend(struct intel_gt *gt)
void intel_gt_runtime_suspend(struct intel_gt *gt)
{
intel_pxp_runtime_suspend(&gt->pxp);
intel_uc_runtime_suspend(&gt->uc);
GT_TRACE(gt, "\n");
@@ -411,8 +404,6 @@ int intel_gt_runtime_resume(struct intel_gt *gt)
if (ret)
return ret;
intel_pxp_runtime_resume(&gt->pxp);
return 0;
}

View File

@@ -0,0 +1,51 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#ifndef __INTEL_GT_PRINT__
#define __INTEL_GT_PRINT__
#include <drm/drm_print.h>
#include "intel_gt_types.h"
#include "i915_utils.h"
#define gt_err(_gt, _fmt, ...) \
drm_err(&(_gt)->i915->drm, "GT%u: " _fmt, (_gt)->info.id, ##__VA_ARGS__)
#define gt_warn(_gt, _fmt, ...) \
drm_warn(&(_gt)->i915->drm, "GT%u: " _fmt, (_gt)->info.id, ##__VA_ARGS__)
#define gt_notice(_gt, _fmt, ...) \
drm_notice(&(_gt)->i915->drm, "GT%u: " _fmt, (_gt)->info.id, ##__VA_ARGS__)
#define gt_info(_gt, _fmt, ...) \
drm_info(&(_gt)->i915->drm, "GT%u: " _fmt, (_gt)->info.id, ##__VA_ARGS__)
#define gt_dbg(_gt, _fmt, ...) \
drm_dbg(&(_gt)->i915->drm, "GT%u: " _fmt, (_gt)->info.id, ##__VA_ARGS__)
#define gt_err_ratelimited(_gt, _fmt, ...) \
drm_err_ratelimited(&(_gt)->i915->drm, "GT%u: " _fmt, (_gt)->info.id, ##__VA_ARGS__)
#define gt_probe_error(_gt, _fmt, ...) \
do { \
if (i915_error_injected()) \
gt_dbg(_gt, _fmt, ##__VA_ARGS__); \
else \
gt_err(_gt, _fmt, ##__VA_ARGS__); \
} while (0)
#define gt_WARN(_gt, _condition, _fmt, ...) \
drm_WARN(&(_gt)->i915->drm, _condition, "GT%u: " _fmt, (_gt)->info.id, ##__VA_ARGS__)
#define gt_WARN_ONCE(_gt, _condition, _fmt, ...) \
drm_WARN_ONCE(&(_gt)->i915->drm, _condition, "GT%u: " _fmt, (_gt)->info.id, ##__VA_ARGS__)
#define gt_WARN_ON(_gt, _condition) \
gt_WARN(_gt, _condition, "%s", "gt_WARN_ON(" __stringify(_condition) ")")
#define gt_WARN_ON_ONCE(_gt, _condition) \
gt_WARN_ONCE(_gt, _condition, "%s", "gt_WARN_ONCE(" __stringify(_condition) ")")
#endif /* __INTEL_GT_PRINT__ */
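Usage example, as seen throughout this series (here from intel_gt_resume()): the gt_* wrappers stamp each message with the GT instance, so per-tile logs stay distinguishable. The expansion shown in the comment uses hypothetical values:

gt_err(gt, "Failed to restart %s (%d)\n", engine->name, err);
/* expands to drm_err(&gt->i915->drm, "GT%u: ..."), e.g. "GT0: Failed to restart rcs0 (-5)" */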

View File

@@ -67,6 +67,7 @@
#define GMD_ID_MEDIA _MMIO(MTL_MEDIA_GSI_BASE + 0xd8c)
#define MCFG_MCR_SELECTOR _MMIO(0xfd0)
#define MTL_STEER_SEMAPHORE _MMIO(0xfd0)
#define MTL_MCR_SELECTOR _MMIO(0xfd4)
#define SF_MCR_SELECTOR _MMIO(0xfd8)
#define GEN8_MCR_SELECTOR _MMIO(0xfdc)
@@ -406,13 +407,14 @@
#define GEN9_WM_CHICKEN3 _MMIO(0x5588)
#define GEN9_FACTOR_IN_CLR_VAL_HIZ (1 << 9)
#define CHICKEN_RASTER_1 _MMIO(0x6204)
#define CHICKEN_RASTER_1 MCR_REG(0x6204)
#define DIS_SF_ROUND_NEAREST_EVEN REG_BIT(8)
#define CHICKEN_RASTER_2 _MMIO(0x6208)
#define CHICKEN_RASTER_2 MCR_REG(0x6208)
#define TBIMR_FAST_CLIP REG_BIT(5)
#define VFLSKPD MCR_REG(0x62a8)
#define VF_PREFETCH_TLB_DIS REG_BIT(5)
#define DIS_OVER_FETCH_CACHE REG_BIT(1)
#define DIS_MULT_MISS_RD_SQUASH REG_BIT(0)
@@ -429,9 +431,10 @@
#define RC_OP_FLUSH_ENABLE (1 << 0)
#define HIZ_RAW_STALL_OPT_DISABLE (1 << 2)
#define CACHE_MODE_1 _MMIO(0x7004) /* IVB+ */
#define PIXEL_SUBSPAN_COLLECT_OPT_DISABLE (1 << 6)
#define GEN8_4x4_STC_OPTIMIZATION_DISABLE (1 << 6)
#define GEN9_PARTIAL_RESOLVE_IN_VC_DISABLE (1 << 1)
#define MSAA_OPTIMIZATION_REDUC_DISABLE REG_BIT(11)
#define PIXEL_SUBSPAN_COLLECT_OPT_DISABLE REG_BIT(6)
#define GEN8_4x4_STC_OPTIMIZATION_DISABLE REG_BIT(6)
#define GEN9_PARTIAL_RESOLVE_IN_VC_DISABLE REG_BIT(1)
#define GEN7_GT_MODE _MMIO(0x7008)
#define GEN9_IZ_HASHING_MASK(slice) (0x3 << ((slice) * 2))
@@ -457,6 +460,9 @@
#define GEN8_L3CNTLREG _MMIO(0x7034)
#define GEN8_ERRDETBCTRL (1 << 9)
#define PSS_MODE2 _MMIO(0x703c)
#define SCOREBOARD_STALL_FLUSH_CONTROL REG_BIT(5)
#define GEN7_SC_INSTDONE _MMIO(0x7100)
#define GEN12_SC_INSTDONE_EXTRA _MMIO(0x7104)
#define GEN12_SC_INSTDONE_EXTRA2 _MMIO(0x7108)
@@ -917,6 +923,10 @@
#define MSG_IDLE_FW_MASK REG_GENMASK(13, 9)
#define MSG_IDLE_FW_SHIFT 9
#define RC_PSMI_CTRL_GSCCS _MMIO(0x11a050)
#define IDLE_MSG_DISABLE REG_BIT(0)
#define PWRCTX_MAXCNT_GSCCS _MMIO(0x11a054)
#define FORCEWAKE_MEDIA_GEN9 _MMIO(0xa270)
#define FORCEWAKE_RENDER_GEN9 _MMIO(0xa278)
@@ -949,10 +959,11 @@
#define GEN7_DISABLE_SAMPLER_PREFETCH (1 << 30)
#define GEN8_GARBCNTL _MMIO(0xb004)
#define GEN9_GAPS_TSV_CREDIT_DISABLE (1 << 7)
#define GEN11_ARBITRATION_PRIO_ORDER_MASK (0x3f << 22)
#define GEN11_HASH_CTRL_EXCL_MASK (0x7f << 0)
#define GEN11_HASH_CTRL_EXCL_BIT0 (1 << 0)
#define GEN11_ARBITRATION_PRIO_ORDER_MASK REG_GENMASK(27, 22)
#define GEN12_BUS_HASH_CTL_BIT_EXC REG_BIT(7)
#define GEN9_GAPS_TSV_CREDIT_DISABLE REG_BIT(7)
#define GEN11_HASH_CTRL_EXCL_MASK REG_GENMASK(6, 0)
#define GEN11_HASH_CTRL_EXCL_BIT0 REG_FIELD_PREP(GEN11_HASH_CTRL_EXCL_MASK, 0x1)
#define GEN9_SCRATCH_LNCF1 _MMIO(0xb008)
#define GEN9_LNCF_NONIA_COHERENT_ATOMICS_ENABLE REG_BIT(0)
@@ -965,6 +976,7 @@
#define GEN7_L3AGDIS (1 << 19)
#define XEHPC_LNCFMISCCFGREG0 _MMIO(0xb01c)
#define XEHPC_HOSTCACHEEN REG_BIT(1)
#define XEHPC_OVRLSCCC REG_BIT(0)
#define GEN7_L3CNTLREG2 _MMIO(0xb020)
@@ -1524,6 +1536,9 @@
#define MTL_MEDIA_MC6 _MMIO(0x138048)
#define MTL_GT_ACTIVITY_FACTOR _MMIO(0x138010)
#define MTL_GT_L3_EXC_MASK REG_GENMASK(5, 3)
#define GEN6_GT_THREAD_STATUS_REG _MMIO(0x13805c)
#define GEN6_GT_THREAD_STATUS_CORE_MASK 0x7

View File

@@ -12,6 +12,7 @@
#include "i915_drv.h"
#include "i915_sysfs.h"
#include "intel_gt.h"
#include "intel_gt_print.h"
#include "intel_gt_sysfs.h"
#include "intel_gt_sysfs_pm.h"
#include "intel_gt_types.h"
@@ -105,8 +106,7 @@ void intel_gt_sysfs_register(struct intel_gt *gt)
exit_fail:
kobject_put(&gt->sysfs_gt);
drm_warn(&gt->i915->drm,
"failed to initialize gt%d sysfs root\n", gt->info.id);
gt_warn(gt, "failed to initialize sysfs root\n");
}
void intel_gt_sysfs_unregister(struct intel_gt *gt)

View File

@@ -11,6 +11,7 @@
#include "i915_reg.h"
#include "i915_sysfs.h"
#include "intel_gt.h"
#include "intel_gt_print.h"
#include "intel_gt_regs.h"
#include "intel_gt_sysfs.h"
#include "intel_gt_sysfs_pm.h"
@@ -304,9 +305,7 @@ static void intel_sysfs_rc6_init(struct intel_gt *gt, struct kobject *kobj)
ret = __intel_gt_sysfs_create_group(kobj, rc6_attr_group);
if (ret)
drm_warn(&gt->i915->drm,
"failed to create gt%u RC6 sysfs files (%pe)\n",
gt->info.id, ERR_PTR(ret));
gt_warn(gt, "failed to create RC6 sysfs files (%pe)\n", ERR_PTR(ret));
/*
* cannot use the is_visible() attribute because
@@ -315,17 +314,13 @@ static void intel_sysfs_rc6_init(struct intel_gt *gt, struct kobject *kobj)
if (HAS_RC6p(gt->i915)) {
ret = __intel_gt_sysfs_create_group(kobj, rc6p_attr_group);
if (ret)
drm_warn(&gt->i915->drm,
"failed to create gt%u RC6p sysfs files (%pe)\n",
gt->info.id, ERR_PTR(ret));
gt_warn(gt, "failed to create RC6p sysfs files (%pe)\n", ERR_PTR(ret));
}
if (IS_VALLEYVIEW(gt->i915) || IS_CHERRYVIEW(gt->i915)) {
ret = __intel_gt_sysfs_create_group(kobj, media_rc6_attr_group);
if (ret)
drm_warn(&gt->i915->drm,
"failed to create media %u RC6 sysfs files (%pe)\n",
gt->info.id, ERR_PTR(ret));
gt_warn(gt, "failed to create media RC6 sysfs files (%pe)\n", ERR_PTR(ret));
}
}
@@ -739,9 +734,7 @@ void intel_gt_sysfs_pm_init(struct intel_gt *gt, struct kobject *kobj)
ret = intel_sysfs_rps_init(gt, kobj);
if (ret)
drm_warn(&gt->i915->drm,
"failed to create gt%u RPS sysfs files (%pe)",
gt->info.id, ERR_PTR(ret));
gt_warn(gt, "failed to create RPS sysfs files (%pe)", ERR_PTR(ret));
/* end of the legacy interfaces */
if (!is_object_gt(kobj))
@@ -749,29 +742,22 @@ void intel_gt_sysfs_pm_init(struct intel_gt *gt, struct kobject *kobj)
ret = sysfs_create_file(kobj, &attr_punit_req_freq_mhz.attr);
if (ret)
drm_warn(&gt->i915->drm,
"failed to create gt%u punit_req_freq_mhz sysfs (%pe)",
gt->info.id, ERR_PTR(ret));
gt_warn(gt, "failed to create punit_req_freq_mhz sysfs (%pe)", ERR_PTR(ret));
if (i915_mmio_reg_valid(intel_gt_perf_limit_reasons_reg(gt))) {
ret = sysfs_create_files(kobj, throttle_reason_attrs);
if (ret)
drm_warn(&gt->i915->drm,
"failed to create gt%u throttle sysfs files (%pe)",
gt->info.id, ERR_PTR(ret));
gt_warn(gt, "failed to create throttle sysfs files (%pe)", ERR_PTR(ret));
}
if (HAS_MEDIA_RATIO_MODE(gt->i915) && intel_uc_uses_guc_slpc(&gt->uc)) {
ret = sysfs_create_files(kobj, media_perf_power_attrs);
if (ret)
drm_warn(&gt->i915->drm,
"failed to create gt%u media_perf_power_attrs sysfs (%pe)\n",
gt->info.id, ERR_PTR(ret));
gt_warn(gt, "failed to create media_perf_power_attrs sysfs (%pe)\n",
ERR_PTR(ret));
}
ret = sysfs_create_files(gt->sysfs_defaults, rps_defaults_attrs);
if (ret)
drm_warn(&gt->i915->drm,
"failed to add gt%u rps defaults (%pe)\n",
gt->info.id, ERR_PTR(ret));
gt_warn(gt, "failed to add rps defaults (%pe)\n", ERR_PTR(ret));
}

View File

@@ -30,7 +30,6 @@
#include "intel_rps_types.h"
#include "intel_migrate_types.h"
#include "intel_wakeref.h"
#include "pxp/intel_pxp_types.h"
#include "intel_wopcm.h"
struct drm_i915_private;
@@ -233,6 +232,14 @@ struct intel_gt {
u8 instanceid;
} default_steering;
/**
* @mcr_lock: Protects the MCR steering register
*
* Protects the MCR steering register (e.g., GEN8_MCR_SELECTOR).
* Should be taken before uncore->lock in cases where both are desired.
*/
spinlock_t mcr_lock;
/*
* Base of per-tile GTTMMADR where we can derive the MMIO and the GGTT.
*/
@@ -267,8 +274,6 @@ struct intel_gt {
u8 wb_index; /* Only used on HAS_L3_CCS_READ() platforms */
} mocs;
struct intel_pxp pxp;
/* gt/gtN sysfs */
struct kobject sysfs_gt;
@@ -277,6 +282,9 @@ struct intel_gt {
struct kobject *sysfs_defaults;
struct i915_perf_gt perf;
/** link: &ggtt.gt_list */
struct list_head ggtt_link;
};
struct intel_gt_definition {
@@ -296,12 +304,6 @@ enum intel_gt_scratch_field {
/* 8 bytes */
INTEL_GT_SCRATCH_FIELD_COHERENTL3_WA = 256,
/* 6 * 8 bytes */
INTEL_GT_SCRATCH_FIELD_PERF_CS_GPR = 2048,
/* 4 bytes */
INTEL_GT_SCRATCH_FIELD_PERF_PREDICATE_RESULT_1 = 2096,
};
#endif /* __INTEL_GT_TYPES_H__ */

View File

@@ -17,6 +17,7 @@
#include "i915_utils.h"
#include "intel_gt.h"
#include "intel_gt_mcr.h"
#include "intel_gt_print.h"
#include "intel_gt_regs.h"
#include "intel_gtt.h"
@@ -461,9 +462,9 @@ void gtt_write_workarounds(struct intel_gt *gt)
intel_uncore_write(uncore,
HSW_GTT_CACHE_EN,
can_use_gtt_cache ? GTT_CACHE_EN_ALL : 0);
drm_WARN_ON_ONCE(&i915->drm, can_use_gtt_cache &&
intel_uncore_read(uncore,
HSW_GTT_CACHE_EN) == 0);
gt_WARN_ON_ONCE(gt, can_use_gtt_cache &&
intel_uncore_read(uncore,
HSW_GTT_CACHE_EN) == 0);
}
}
@@ -482,14 +483,25 @@ static void tgl_setup_private_ppat(struct intel_uncore *uncore)
static void xehp_setup_private_ppat(struct intel_gt *gt)
{
intel_gt_mcr_multicast_write(gt, XEHP_PAT_INDEX(0), GEN8_PPAT_WB);
intel_gt_mcr_multicast_write(gt, XEHP_PAT_INDEX(1), GEN8_PPAT_WC);
intel_gt_mcr_multicast_write(gt, XEHP_PAT_INDEX(2), GEN8_PPAT_WT);
intel_gt_mcr_multicast_write(gt, XEHP_PAT_INDEX(3), GEN8_PPAT_UC);
intel_gt_mcr_multicast_write(gt, XEHP_PAT_INDEX(4), GEN8_PPAT_WB);
intel_gt_mcr_multicast_write(gt, XEHP_PAT_INDEX(5), GEN8_PPAT_WB);
intel_gt_mcr_multicast_write(gt, XEHP_PAT_INDEX(6), GEN8_PPAT_WB);
intel_gt_mcr_multicast_write(gt, XEHP_PAT_INDEX(7), GEN8_PPAT_WB);
enum forcewake_domains fw;
unsigned long flags;
fw = intel_uncore_forcewake_for_reg(gt->uncore, _MMIO(XEHP_PAT_INDEX(0).reg),
FW_REG_WRITE);
intel_uncore_forcewake_get(gt->uncore, fw);
intel_gt_mcr_lock(gt, &flags);
intel_gt_mcr_multicast_write_fw(gt, XEHP_PAT_INDEX(0), GEN8_PPAT_WB);
intel_gt_mcr_multicast_write_fw(gt, XEHP_PAT_INDEX(1), GEN8_PPAT_WC);
intel_gt_mcr_multicast_write_fw(gt, XEHP_PAT_INDEX(2), GEN8_PPAT_WT);
intel_gt_mcr_multicast_write_fw(gt, XEHP_PAT_INDEX(3), GEN8_PPAT_UC);
intel_gt_mcr_multicast_write_fw(gt, XEHP_PAT_INDEX(4), GEN8_PPAT_WB);
intel_gt_mcr_multicast_write_fw(gt, XEHP_PAT_INDEX(5), GEN8_PPAT_WB);
intel_gt_mcr_multicast_write_fw(gt, XEHP_PAT_INDEX(6), GEN8_PPAT_WB);
intel_gt_mcr_multicast_write_fw(gt, XEHP_PAT_INDEX(7), GEN8_PPAT_WB);
intel_gt_mcr_unlock(gt, flags);
intel_uncore_forcewake_put(gt->uncore, fw);
}
static void icl_setup_private_ppat(struct intel_uncore *uncore)

View File

@@ -355,19 +355,6 @@ struct i915_ggtt {
bool do_idle_maps;
/**
* @pte_lost: Are ptes lost on resume?
*
* Whether the system was recently restored from hibernate and
* thus may have lost pte content.
*/
bool pte_lost;
/**
* @probed_pte: Probed pte value on suspend. Re-checked on resume.
*/
u64 probed_pte;
int mtrr;
/** Bit 6 swizzling required for X tiling */
@@ -390,6 +377,9 @@ struct i915_ggtt {
struct mutex error_mutex;
struct drm_mm_node error_capture;
struct drm_mm_node uc_fw;
/** List of GTs mapping this GGTT */
struct list_head gt_list;
};
struct i915_ppgtt {
@@ -579,11 +569,10 @@ void intel_ggtt_unbind_vma(struct i915_address_space *vm,
int i915_ggtt_probe_hw(struct drm_i915_private *i915);
int i915_ggtt_init_hw(struct drm_i915_private *i915);
int i915_ggtt_enable_hw(struct drm_i915_private *i915);
void i915_ggtt_enable_guc(struct i915_ggtt *ggtt);
void i915_ggtt_disable_guc(struct i915_ggtt *ggtt);
int i915_init_ggtt(struct drm_i915_private *i915);
void i915_ggtt_driver_release(struct drm_i915_private *i915);
void i915_ggtt_driver_late_release(struct drm_i915_private *i915);
struct i915_ggtt *i915_ggtt_create(struct drm_i915_private *i915);
static inline bool i915_ggtt_has_aperture(const struct i915_ggtt *ggtt)
{
@@ -600,17 +589,6 @@ bool i915_ggtt_resume_vm(struct i915_address_space *vm);
void i915_ggtt_suspend(struct i915_ggtt *gtt);
void i915_ggtt_resume(struct i915_ggtt *ggtt);
/**
* i915_ggtt_mark_pte_lost - Mark ggtt ptes as lost or clear such a marking
* @i915 The device private.
* @val whether the ptes should be marked as lost.
*
* In some cases pte content is retained across suspend, but typically lost
* across hibernate. Typically they should be marked as lost on
* hibernation restore and such marking cleared on suspend.
*/
void i915_ggtt_mark_pte_lost(struct drm_i915_private *i915, bool val);
void
fill_page_dma(struct drm_i915_gem_object *p, const u64 val, unsigned int count);

View File

@@ -352,6 +352,8 @@ static int max_pte_pkt_size(struct i915_request *rq, int pkt)
return pkt;
}
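/* dwords reserved by each intel_ring_begin() call in emit_pte() below */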
#define I915_EMIT_PTE_NUM_DWORDS 6
static int emit_pte(struct i915_request *rq,
struct sgt_dma *it,
enum i915_cache_level cache_level,
@@ -393,7 +395,7 @@ static int emit_pte(struct i915_request *rq,
offset += (u64)rq->engine->instance << 32;
cs = intel_ring_begin(rq, 6);
cs = intel_ring_begin(rq, I915_EMIT_PTE_NUM_DWORDS);
if (IS_ERR(cs))
return PTR_ERR(cs);
@@ -416,7 +418,7 @@ static int emit_pte(struct i915_request *rq,
intel_ring_advance(rq, cs);
intel_ring_update_space(ring);
cs = intel_ring_begin(rq, 6);
cs = intel_ring_begin(rq, I915_EMIT_PTE_NUM_DWORDS);
if (IS_ERR(cs))
return PTR_ERR(cs);

View File

@@ -613,14 +613,17 @@ static u32 l3cc_combine(u16 low, u16 high)
static void init_l3cc_table(struct intel_gt *gt,
const struct drm_i915_mocs_table *table)
{
unsigned long flags;
unsigned int i;
u32 l3cc;
intel_gt_mcr_lock(gt, &flags);
for_each_l3cc(l3cc, table, i)
if (GRAPHICS_VER_FULL(gt->i915) >= IP_VER(12, 50))
intel_gt_mcr_multicast_write_fw(gt, XEHP_LNCFCMOCS(i), l3cc);
else
intel_uncore_write_fw(gt->uncore, GEN9_LNCFCMOCS(i), l3cc);
intel_gt_mcr_unlock(gt, flags);
}
void intel_mocs_init_engine(struct intel_engine_cs *engine)

View File

@@ -63,7 +63,7 @@ static int render_state_setup(struct intel_renderstate *so,
u32 s = rodata->batch[i];
if (i * 4 == rodata->reloc[reloc_index]) {
u64 r = s + so->vma->node.start;
u64 r = s + i915_vma_offset(so->vma);
s = lower_32_bits(r);
if (HAS_64BIT_RELOC(i915)) {

View File

@@ -35,16 +35,6 @@
/* XXX How to handle concurrent GGTT updates using tiling registers? */
#define RESET_UNDER_STOP_MACHINE 0
static void rmw_set_fw(struct intel_uncore *uncore, i915_reg_t reg, u32 set)
{
intel_uncore_rmw_fw(uncore, reg, 0, set);
}
static void rmw_clear_fw(struct intel_uncore *uncore, i915_reg_t reg, u32 clr)
{
intel_uncore_rmw_fw(uncore, reg, clr, 0);
}
static void client_mark_guilty(struct i915_gem_context *ctx, bool banned)
{
struct drm_i915_file_private *file_priv = ctx->file_priv;
@@ -212,7 +202,7 @@ static int g4x_do_reset(struct intel_gt *gt,
int ret;
/* WaVcpClkGateDisableForMediaReset:ctg,elk */
rmw_set_fw(uncore, VDECCLK_GATE_D, VCP_UNIT_CLOCK_GATE_DISABLE);
intel_uncore_rmw_fw(uncore, VDECCLK_GATE_D, 0, VCP_UNIT_CLOCK_GATE_DISABLE);
intel_uncore_posting_read_fw(uncore, VDECCLK_GATE_D);
pci_write_config_byte(pdev, I915_GDRST,
@@ -234,7 +224,7 @@ static int g4x_do_reset(struct intel_gt *gt,
out:
pci_write_config_byte(pdev, I915_GDRST, 0);
rmw_clear_fw(uncore, VDECCLK_GATE_D, VCP_UNIT_CLOCK_GATE_DISABLE);
intel_uncore_rmw_fw(uncore, VDECCLK_GATE_D, VCP_UNIT_CLOCK_GATE_DISABLE, 0);
intel_uncore_posting_read_fw(uncore, VDECCLK_GATE_D);
return ret;
@@ -278,6 +268,7 @@ out:
static int gen6_hw_domain_reset(struct intel_gt *gt, u32 hw_domain_mask)
{
struct intel_uncore *uncore = gt->uncore;
int loops = 2;
int err;
/*
@@ -285,18 +276,39 @@ static int gen6_hw_domain_reset(struct intel_gt *gt, u32 hw_domain_mask)
* for fifo space for the write or forcewake the chip for
* the read
*/
intel_uncore_write_fw(uncore, GEN6_GDRST, hw_domain_mask);
do {
intel_uncore_write_fw(uncore, GEN6_GDRST, hw_domain_mask);
/* Wait for the device to ack the reset requests */
err = __intel_wait_for_register_fw(uncore,
GEN6_GDRST, hw_domain_mask, 0,
500, 0,
NULL);
/*
* Wait for the device to ack the reset requests.
*
* On some platforms, e.g. Jasperlake, we see that the
* engine register state is not cleared until shortly after
* GDRST reports completion, causing a failure as we try
* to immediately resume while the internal state is still
* in flux. If we immediately repeat the reset, the second
* reset appears to serialise with the first, and since
* it is a no-op, the registers should retain their reset
* value. However, there is still a concern that upon
* leaving the second reset, the internal engine state
* is still in flux and not ready for resuming.
*/
err = __intel_wait_for_register_fw(uncore, GEN6_GDRST,
hw_domain_mask, 0,
2000, 0,
NULL);
} while (err == 0 && --loops);
if (err)
GT_TRACE(gt,
"Wait for 0x%08x engines reset failed\n",
hw_domain_mask);
/*
* As we have observed that the engine state is still volatile
* after GDRST is acked, impose a small delay to let everything settle.
*/
udelay(50);
return err;
}
@@ -448,7 +460,7 @@ static int gen11_lock_sfc(struct intel_engine_cs *engine,
* to reset it as well (we will unlock it once the reset sequence is
* completed).
*/
rmw_set_fw(uncore, sfc_lock.lock_reg, sfc_lock.lock_bit);
intel_uncore_rmw_fw(uncore, sfc_lock.lock_reg, 0, sfc_lock.lock_bit);
ret = __intel_wait_for_register_fw(uncore,
sfc_lock.ack_reg,
@@ -498,7 +510,7 @@ static void gen11_unlock_sfc(struct intel_engine_cs *engine)
get_sfc_forced_lock_data(engine, &sfc_lock);
rmw_clear_fw(uncore, sfc_lock.lock_reg, sfc_lock.lock_bit);
intel_uncore_rmw_fw(uncore, sfc_lock.lock_reg, sfc_lock.lock_bit, 0);
}
static int __gen11_reset_engines(struct intel_gt *gt,

View File

@@ -897,7 +897,7 @@ static int clear_residuals(struct i915_request *rq)
}
ret = engine->emit_bb_start(rq,
engine->wa_ctx.vma->node.start, 0,
i915_vma_offset(engine->wa_ctx.vma), 0,
0);
if (ret)
return ret;

View File

@@ -645,7 +645,7 @@ static void icl_ctx_workarounds_init(struct intel_engine_cs *engine,
static void dg2_ctx_gt_tuning_init(struct intel_engine_cs *engine,
struct i915_wa_list *wal)
{
wa_masked_en(wal, CHICKEN_RASTER_2, TBIMR_FAST_CLIP);
wa_mcr_masked_en(wal, CHICKEN_RASTER_2, TBIMR_FAST_CLIP);
wa_mcr_write_clr_set(wal, XEHP_L3SQCREG5, L3_PWM_TIMER_INIT_VAL_MASK,
REG_FIELD_PREP(L3_PWM_TIMER_INIT_VAL_MASK, 0x7f));
wa_mcr_add(wal,
@@ -771,11 +771,45 @@ static void dg2_ctx_workarounds_init(struct intel_engine_cs *engine,
/* Wa_14014947963:dg2 */
if (IS_DG2_GRAPHICS_STEP(engine->i915, G10, STEP_B0, STEP_FOREVER) ||
IS_DG2_G11(engine->i915) || IS_DG2_G12(engine->i915))
IS_DG2_G11(engine->i915) || IS_DG2_G12(engine->i915))
wa_masked_field_set(wal, VF_PREEMPTION, PREEMPTION_VERTEX_COUNT, 0x4000);
/* Wa_18018764978:dg2 */
if (IS_DG2_GRAPHICS_STEP(engine->i915, G10, STEP_C0, STEP_FOREVER) ||
IS_DG2_G11(engine->i915) || IS_DG2_G12(engine->i915))
wa_masked_en(wal, PSS_MODE2, SCOREBOARD_STALL_FLUSH_CONTROL);
/* Wa_15010599737:dg2 */
wa_masked_en(wal, CHICKEN_RASTER_1, DIS_SF_ROUND_NEAREST_EVEN);
wa_mcr_masked_en(wal, CHICKEN_RASTER_1, DIS_SF_ROUND_NEAREST_EVEN);
/* Wa_18019271663:dg2 */
wa_masked_en(wal, CACHE_MODE_1, MSAA_OPTIMIZATION_REDUC_DISABLE);
}
static void mtl_ctx_workarounds_init(struct intel_engine_cs *engine,
struct i915_wa_list *wal)
{
struct drm_i915_private *i915 = engine->i915;
if (IS_MTL_GRAPHICS_STEP(i915, M, STEP_A0, STEP_B0) ||
IS_MTL_GRAPHICS_STEP(i915, P, STEP_A0, STEP_B0)) {
/* Wa_14014947963 */
wa_masked_field_set(wal, VF_PREEMPTION,
PREEMPTION_VERTEX_COUNT, 0x4000);
/* Wa_16013271637 */
wa_mcr_masked_en(wal, XEHP_SLICE_COMMON_ECO_CHICKEN1,
MSC_MSAA_REODER_BUF_BYPASS_DISABLE);
/* Wa_18019627453 */
wa_mcr_masked_en(wal, VFLSKPD, VF_PREFETCH_TLB_DIS);
/* Wa_18018764978 */
wa_masked_en(wal, PSS_MODE2, SCOREBOARD_STALL_FLUSH_CONTROL);
}
/* Wa_18019271663 */
wa_masked_en(wal, CACHE_MODE_1, MSAA_OPTIMIZATION_REDUC_DISABLE);
}
static void fakewa_disable_nestedbb_mode(struct intel_engine_cs *engine,
@@ -864,7 +898,9 @@ __intel_engine_init_ctx_wa(struct intel_engine_cs *engine,
if (engine->class != RENDER_CLASS)
goto done;
if (IS_PONTEVECCHIO(i915))
if (IS_METEORLAKE(i915))
mtl_ctx_workarounds_init(engine, wal);
else if (IS_PONTEVECCHIO(i915))
; /* noop; none at this time */
else if (IS_DG2(i915))
dg2_ctx_workarounds_init(engine, wal);
@@ -1620,7 +1656,10 @@ pvc_gt_workarounds_init(struct intel_gt *gt, struct i915_wa_list *wal)
static void
xelpg_gt_workarounds_init(struct intel_gt *gt, struct i915_wa_list *wal)
{
/* FIXME: Actual workarounds will be added in future patch(es) */
/* Wa_14014830051 */
if (IS_MTL_GRAPHICS_STEP(gt->i915, M, STEP_A0, STEP_B0) ||
IS_MTL_GRAPHICS_STEP(gt->i915, P, STEP_A0, STEP_B0))
wa_mcr_write_clr(wal, SARB_CHICKEN1, COMP_CKN_IN);
/*
* Unlike older platforms, we no longer setup implicit steering here;
@@ -1752,7 +1791,8 @@ static void wa_list_apply(const struct i915_wa_list *wal)
fw = wal_get_fw_for_rmw(uncore, wal);
spin_lock_irqsave(&uncore->lock, flags);
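/* mcr_lock is taken ahead of uncore->lock, per the @mcr_lock kerneldoc */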
intel_gt_mcr_lock(gt, &flags);
spin_lock(&uncore->lock);
intel_uncore_forcewake_get__locked(uncore, fw);
for (i = 0, wa = wal->list; i < wal->count; i++, wa++) {
@@ -1781,7 +1821,8 @@ static void wa_list_apply(const struct i915_wa_list *wal)
}
intel_uncore_forcewake_put__locked(uncore, fw);
spin_unlock_irqrestore(&uncore->lock, flags);
spin_unlock(&uncore->lock);
intel_gt_mcr_unlock(gt, flags);
}
void intel_gt_apply_workarounds(struct intel_gt *gt)
@@ -1802,7 +1843,8 @@ static bool wa_list_verify(struct intel_gt *gt,
fw = wal_get_fw_for_rmw(uncore, wal);
spin_lock_irqsave(&uncore->lock, flags);
intel_gt_mcr_lock(gt, &flags);
spin_lock(&uncore->lock);
intel_uncore_forcewake_get__locked(uncore, fw);
for (i = 0, wa = wal->list; i < wal->count; i++, wa++)
@@ -1812,7 +1854,8 @@ static bool wa_list_verify(struct intel_gt *gt,
wal->name, from);
intel_uncore_forcewake_put__locked(uncore, fw);
spin_unlock_irqrestore(&uncore->lock, flags);
spin_unlock(&uncore->lock);
intel_gt_mcr_unlock(gt, flags);
return ok;
}
@@ -2156,7 +2199,9 @@ void intel_engine_init_whitelist(struct intel_engine_cs *engine)
wa_init_start(w, engine->gt, "whitelist", engine->name);
if (IS_PONTEVECCHIO(i915))
if (IS_METEORLAKE(i915))
; /* noop; none at this time */
else if (IS_PONTEVECCHIO(i915))
pvc_whitelist_build(engine);
else if (IS_DG2(i915))
dg2_whitelist_build(engine);
@@ -2266,6 +2311,34 @@ rcs_engine_wa_init(struct intel_engine_cs *engine, struct i915_wa_list *wal)
{
struct drm_i915_private *i915 = engine->i915;
if (IS_MTL_GRAPHICS_STEP(i915, M, STEP_A0, STEP_B0) ||
IS_MTL_GRAPHICS_STEP(i915, P, STEP_A0, STEP_B0)) {
/* Wa_22014600077 */
wa_mcr_masked_en(wal, GEN10_CACHE_MODE_SS,
ENABLE_EU_COUNT_FOR_TDL_FLUSH);
}
if (IS_MTL_GRAPHICS_STEP(i915, M, STEP_A0, STEP_B0) ||
IS_MTL_GRAPHICS_STEP(i915, P, STEP_A0, STEP_B0) ||
IS_DG2_GRAPHICS_STEP(i915, G10, STEP_B0, STEP_FOREVER) ||
IS_DG2_G11(i915) || IS_DG2_G12(i915)) {
/* Wa_1509727124 */
wa_mcr_masked_en(wal, GEN10_SAMPLER_MODE,
SC_DISABLE_POWER_OPTIMIZATION_EBB);
/* Wa_22013037850 */
wa_mcr_write_or(wal, LSC_CHICKEN_BIT_0_UDW,
DISABLE_128B_EVICTION_COMMAND_UDW);
}
if (IS_DG2_GRAPHICS_STEP(i915, G10, STEP_B0, STEP_FOREVER) ||
IS_DG2_G11(i915) || IS_DG2_G12(i915) ||
IS_MTL_GRAPHICS_STEP(i915, M, STEP_A0, STEP_B0)) {
/* Wa_22012856258 */
wa_mcr_masked_en(wal, GEN8_ROW_CHICKEN2,
GEN12_DISABLE_READ_SUPPRESSION);
}
if (IS_DG2(i915)) {
/* Wa_1509235366:dg2 */
wa_write_or(wal, GEN12_GAMCNTRL_CTRL, INVALIDATION_BROADCAST_MODE_DIS |
@@ -2277,13 +2350,6 @@ rcs_engine_wa_init(struct intel_engine_cs *engine, struct i915_wa_list *wal)
wa_mcr_masked_en(wal, GEN8_ROW_CHICKEN2, GEN12_ENABLE_LARGE_GRF_MODE);
}
if (IS_DG2_GRAPHICS_STEP(i915, G10, STEP_B0, STEP_FOREVER) ||
IS_DG2_G11(i915) || IS_DG2_G12(i915)) {
/* Wa_1509727124:dg2 */
wa_mcr_masked_en(wal, GEN10_SAMPLER_MODE,
SC_DISABLE_POWER_OPTIMIZATION_EBB);
}
if (IS_DG2_GRAPHICS_STEP(i915, G10, STEP_A0, STEP_B0) ||
IS_DG2_GRAPHICS_STEP(i915, G11, STEP_A0, STEP_B0)) {
/* Wa_14012419201:dg2 */
@@ -2315,14 +2381,6 @@ rcs_engine_wa_init(struct intel_engine_cs *engine, struct i915_wa_list *wal)
if (IS_DG2_GRAPHICS_STEP(i915, G10, STEP_B0, STEP_FOREVER) ||
IS_DG2_G11(i915) || IS_DG2_G12(i915)) {
/* Wa_22013037850:dg2 */
wa_mcr_write_or(wal, LSC_CHICKEN_BIT_0_UDW,
DISABLE_128B_EVICTION_COMMAND_UDW);
/* Wa_22012856258:dg2 */
wa_mcr_masked_en(wal, GEN8_ROW_CHICKEN2,
GEN12_DISABLE_READ_SUPPRESSION);
/*
* Wa_22010960976:dg2
* Wa_14013347512:dg2
@ -2895,25 +2953,12 @@ add_render_compute_tuning_settings(struct drm_i915_private *i915,
if (IS_PONTEVECCHIO(i915)) {
wa_write(wal, XEHPC_L3SCRUB,
SCRUB_CL_DWNGRADE_SHARED | SCRUB_RATE_4B_PER_CLK);
wa_masked_en(wal, XEHPC_LNCFMISCCFGREG0, XEHPC_HOSTCACHEEN);
}
if (IS_DG2(i915)) {
wa_mcr_write_or(wal, XEHP_L3SCQREG7, BLEND_FILL_CACHING_OPT_DIS);
wa_mcr_write_clr_set(wal, RT_CTRL, STACKID_CTRL, STACKID_CTRL_512);
/*
* This is also listed as Wa_22012654132 for certain DG2
* steppings, but the tuning setting programming is a superset
* since it applies to all DG2 variants and steppings.
*
* Note that register 0xE420 is write-only and cannot be read
* back for verification on DG2 (due to Wa_14012342262), so
* we need to explicitly skip the readback.
*/
wa_mcr_add(wal, GEN10_CACHE_MODE_SS, 0,
_MASKED_BIT_ENABLE(ENABLE_PREFETCH_INTO_IC),
0 /* write-only, so skip validation */,
true);
}
/*
@ -2924,6 +2969,9 @@ add_render_compute_tuning_settings(struct drm_i915_private *i915,
if (INTEL_INFO(i915)->tuning_thread_rr_after_dep)
wa_mcr_masked_field_set(wal, GEN9_ROW_CHICKEN4, THREAD_EX_ARB_MODE,
THREAD_EX_ARB_MODE_RR_AFTER_DEP);
if (GRAPHICS_VER(i915) == 12 && GRAPHICS_VER_FULL(i915) < IP_VER(12, 50))
wa_write_clr(wal, GEN8_GARBCNTL, GEN12_BUS_HASH_CTL_BIT_EXC);
}
/*
@ -2942,6 +2990,27 @@ general_render_compute_wa_init(struct intel_engine_cs *engine, struct i915_wa_li
add_render_compute_tuning_settings(i915, wal);
if (IS_MTL_GRAPHICS_STEP(i915, M, STEP_A0, STEP_B0) ||
IS_MTL_GRAPHICS_STEP(i915, P, STEP_A0, STEP_B0) ||
IS_PONTEVECCHIO(i915) ||
IS_DG2(i915)) {
/* Wa_18018781329 */
wa_mcr_write_or(wal, RENDER_MOD_CTRL, FORCE_MISS_FTLB);
wa_mcr_write_or(wal, COMP_MOD_CTRL, FORCE_MISS_FTLB);
wa_mcr_write_or(wal, VDBX_MOD_CTRL, FORCE_MISS_FTLB);
wa_mcr_write_or(wal, VEBX_MOD_CTRL, FORCE_MISS_FTLB);
/* Wa_22014226127 */
wa_mcr_write_or(wal, LSC_CHICKEN_BIT_0, DISABLE_D8_D16_COASLESCE);
}
if (IS_MTL_GRAPHICS_STEP(i915, M, STEP_A0, STEP_B0) ||
IS_MTL_GRAPHICS_STEP(i915, P, STEP_A0, STEP_B0) ||
IS_DG2(i915)) {
/* Wa_18017747507 */
wa_masked_en(wal, VFG_PREEMPTION_CHICKEN, POLYGON_TRIFAN_LINELOOP_DISABLE);
}
if (IS_PONTEVECCHIO(i915)) {
/* Wa_16016694945 */
wa_masked_en(wal, XEHPC_LNCFMISCCFGREG0, XEHPC_OVRLSCCC);
@ -2983,17 +3052,8 @@ general_render_compute_wa_init(struct intel_engine_cs *engine, struct i915_wa_li
/* Wa_14015227452:dg2,pvc */
wa_mcr_masked_en(wal, GEN9_ROW_CHICKEN4, XEHP_DIS_BBL_SYSPIPE);
/* Wa_22014226127:dg2,pvc */
wa_mcr_write_or(wal, LSC_CHICKEN_BIT_0, DISABLE_D8_D16_COASLESCE);
/* Wa_16015675438:dg2,pvc */
wa_masked_en(wal, FF_SLICE_CS_CHICKEN2, GEN12_PERF_FIX_BALANCING_CFE_DISABLE);
/* Wa_18018781329:dg2,pvc */
wa_mcr_write_or(wal, RENDER_MOD_CTRL, FORCE_MISS_FTLB);
wa_mcr_write_or(wal, COMP_MOD_CTRL, FORCE_MISS_FTLB);
wa_mcr_write_or(wal, VDBX_MOD_CTRL, FORCE_MISS_FTLB);
wa_mcr_write_or(wal, VEBX_MOD_CTRL, FORCE_MISS_FTLB);
}
if (IS_DG2(i915)) {
@ -3002,10 +3062,20 @@ general_render_compute_wa_init(struct intel_engine_cs *engine, struct i915_wa_li
* Wa_22015475538:dg2
*/
wa_mcr_write_or(wal, LSC_CHICKEN_BIT_0_UDW, DIS_CHAIN_2XSIMD8);
/* Wa_18017747507:dg2 */
wa_masked_en(wal, VFG_PREEMPTION_CHICKEN, POLYGON_TRIFAN_LINELOOP_DISABLE);
}
if (IS_DG2_GRAPHICS_STEP(i915, G10, STEP_A0, STEP_C0) || IS_DG2_G11(i915))
/*
* Wa_22012654132
*
* Note that register 0xE420 is write-only and cannot be read
* back for verification on DG2 (due to Wa_14012342262), so
* we need to explicitly skip the readback.
*/
wa_mcr_add(wal, GEN10_CACHE_MODE_SS, 0,
_MASKED_BIT_ENABLE(ENABLE_PREFETCH_INTO_IC),
0 /* write-only, so skip validation */,
true);
}
static void

View File

@ -178,7 +178,7 @@ static int perf_mi_bb_start(void *arg)
goto out;
err = rq->engine->emit_bb_start(rq,
batch->node.start, 8,
i915_vma_offset(batch), 8,
0);
if (err)
goto out;
@ -321,7 +321,7 @@ static int perf_mi_noop(void *arg)
goto out;
err = rq->engine->emit_bb_start(rq,
base->node.start, 8,
i915_vma_offset(base), 8,
0);
if (err)
goto out;
@ -331,8 +331,8 @@ static int perf_mi_noop(void *arg)
goto out;
err = rq->engine->emit_bb_start(rq,
nop->node.start,
nop->node.size,
i915_vma_offset(nop),
i915_vma_size(nop),
0);
if (err)
goto out;

View File

@ -2737,11 +2737,11 @@ static int create_gang(struct intel_engine_cs *engine,
MI_SEMAPHORE_POLL |
MI_SEMAPHORE_SAD_EQ_SDD;
*cs++ = 0;
*cs++ = lower_32_bits(vma->node.start);
*cs++ = upper_32_bits(vma->node.start);
*cs++ = lower_32_bits(i915_vma_offset(vma));
*cs++ = upper_32_bits(i915_vma_offset(vma));
if (*prev) {
u64 offset = (*prev)->batch->node.start;
u64 offset = i915_vma_offset((*prev)->batch);
/* Terminate the spinner in the next lower priority batch. */
*cs++ = MI_STORE_DWORD_IMM_GEN4;
@ -2763,13 +2763,11 @@ static int create_gang(struct intel_engine_cs *engine,
rq->batch = i915_vma_get(vma);
i915_request_get(rq);
i915_vma_lock(vma);
err = i915_vma_move_to_active(vma, rq, 0);
err = igt_vma_move_to_active_unlocked(vma, rq, 0);
if (!err)
err = rq->engine->emit_bb_start(rq,
vma->node.start,
i915_vma_offset(vma),
PAGE_SIZE, 0);
i915_vma_unlock(vma);
i915_request_add(rq);
if (err)
goto err_rq;
@ -3095,7 +3093,7 @@ create_gpr_user(struct intel_engine_cs *engine,
*cs++ = MI_MATH_ADD;
*cs++ = MI_MATH_STORE(MI_MATH_REG(i), MI_MATH_REG_ACCU);
addr = result->node.start + offset + i * sizeof(*cs);
addr = i915_vma_offset(result) + offset + i * sizeof(*cs);
*cs++ = MI_STORE_REGISTER_MEM_GEN8;
*cs++ = CS_GPR(engine, 2 * i);
*cs++ = lower_32_bits(addr);
@ -3105,8 +3103,8 @@ create_gpr_user(struct intel_engine_cs *engine,
MI_SEMAPHORE_POLL |
MI_SEMAPHORE_SAD_GTE_SDD;
*cs++ = i;
*cs++ = lower_32_bits(result->node.start);
*cs++ = upper_32_bits(result->node.start);
*cs++ = lower_32_bits(i915_vma_offset(result));
*cs++ = upper_32_bits(i915_vma_offset(result));
}
*cs++ = MI_BATCH_BUFFER_END;
@ -3177,16 +3175,14 @@ create_gpr_client(struct intel_engine_cs *engine,
goto out_batch;
}
i915_vma_lock(vma);
err = i915_vma_move_to_active(vma, rq, 0);
i915_vma_unlock(vma);
err = igt_vma_move_to_active_unlocked(vma, rq, 0);
i915_vma_lock(batch);
if (!err)
err = i915_vma_move_to_active(batch, rq, 0);
if (!err)
err = rq->engine->emit_bb_start(rq,
batch->node.start,
i915_vma_offset(batch),
PAGE_SIZE, 0);
i915_vma_unlock(batch);
i915_vma_unpin(batch);
@ -3514,13 +3510,11 @@ static int smoke_submit(struct preempt_smoke *smoke,
}
if (vma) {
i915_vma_lock(vma);
err = i915_vma_move_to_active(vma, rq, 0);
err = igt_vma_move_to_active_unlocked(vma, rq, 0);
if (!err)
err = rq->engine->emit_bb_start(rq,
vma->node.start,
i915_vma_offset(vma),
PAGE_SIZE, 0);
i915_vma_unlock(vma);
}
i915_request_add(rq);

View File

@ -96,7 +96,8 @@ err_ctx:
static u64 hws_address(const struct i915_vma *hws,
const struct i915_request *rq)
{
return hws->node.start + offset_in_page(sizeof(u32)*rq->fence.context);
return i915_vma_offset(hws) +
offset_in_page(sizeof(u32) * rq->fence.context);
}
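/*
 * Worked example (illustrative, not part of the diff): a 4 KiB page holds
 * 1024 u32 slots and the slot index wraps within the page, so
 * fence.context 1025 lands at offset_in_page(4 * 1025) == 4.
 */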
static struct i915_request *
@ -180,8 +181,8 @@ hang_create_request(struct hang *h, struct intel_engine_cs *engine)
*batch++ = MI_NOOP;
*batch++ = MI_BATCH_BUFFER_START | 1 << 8 | 1;
*batch++ = lower_32_bits(vma->node.start);
*batch++ = upper_32_bits(vma->node.start);
*batch++ = lower_32_bits(i915_vma_offset(vma));
*batch++ = upper_32_bits(i915_vma_offset(vma));
} else if (GRAPHICS_VER(gt->i915) >= 6) {
*batch++ = MI_STORE_DWORD_IMM_GEN4;
*batch++ = 0;
@ -194,7 +195,7 @@ hang_create_request(struct hang *h, struct intel_engine_cs *engine)
*batch++ = MI_NOOP;
*batch++ = MI_BATCH_BUFFER_START | 1 << 8;
*batch++ = lower_32_bits(vma->node.start);
*batch++ = lower_32_bits(i915_vma_offset(vma));
} else if (GRAPHICS_VER(gt->i915) >= 4) {
*batch++ = MI_STORE_DWORD_IMM_GEN4 | MI_USE_GGTT;
*batch++ = 0;
@ -207,7 +208,7 @@ hang_create_request(struct hang *h, struct intel_engine_cs *engine)
*batch++ = MI_NOOP;
*batch++ = MI_BATCH_BUFFER_START | 2 << 6;
*batch++ = lower_32_bits(vma->node.start);
*batch++ = lower_32_bits(i915_vma_offset(vma));
} else {
*batch++ = MI_STORE_DWORD_IMM | MI_MEM_VIRTUAL;
*batch++ = lower_32_bits(hws_address(hws, rq));
@ -219,7 +220,7 @@ hang_create_request(struct hang *h, struct intel_engine_cs *engine)
*batch++ = MI_NOOP;
*batch++ = MI_BATCH_BUFFER_START | 2 << 6;
*batch++ = lower_32_bits(vma->node.start);
*batch++ = lower_32_bits(i915_vma_offset(vma));
}
*batch++ = MI_BATCH_BUFFER_END; /* not reached */
intel_gt_chipset_flush(engine->gt);
@ -234,7 +235,7 @@ hang_create_request(struct hang *h, struct intel_engine_cs *engine)
if (GRAPHICS_VER(gt->i915) <= 5)
flags |= I915_DISPATCH_SECURE;
err = rq->engine->emit_bb_start(rq, vma->node.start, PAGE_SIZE, flags);
err = rq->engine->emit_bb_start(rq, i915_vma_offset(vma), PAGE_SIZE, flags);
cancel_rq:
if (err) {

View File

@ -599,9 +599,7 @@ __gpr_read(struct intel_context *ce, struct i915_vma *scratch, u32 *slot)
*cs++ = 0;
}
i915_vma_lock(scratch);
err = i915_vma_move_to_active(scratch, rq, EXEC_OBJECT_WRITE);
i915_vma_unlock(scratch);
err = igt_vma_move_to_active_unlocked(scratch, rq, EXEC_OBJECT_WRITE);
i915_request_get(rq);
i915_request_add(rq);
@ -1030,8 +1028,8 @@ store_context(struct intel_context *ce, struct i915_vma *scratch)
while (len--) {
*cs++ = MI_STORE_REGISTER_MEM_GEN8;
*cs++ = hw[dw];
*cs++ = lower_32_bits(scratch->node.start + x);
*cs++ = upper_32_bits(scratch->node.start + x);
*cs++ = lower_32_bits(i915_vma_offset(scratch) + x);
*cs++ = upper_32_bits(i915_vma_offset(scratch) + x);
dw += 2;
x += 4;
@ -1098,8 +1096,8 @@ record_registers(struct intel_context *ce,
*cs++ = MI_ARB_ON_OFF | MI_ARB_DISABLE;
*cs++ = MI_BATCH_BUFFER_START_GEN8 | BIT(8);
*cs++ = lower_32_bits(b_before->node.start);
*cs++ = upper_32_bits(b_before->node.start);
*cs++ = lower_32_bits(i915_vma_offset(b_before));
*cs++ = upper_32_bits(i915_vma_offset(b_before));
*cs++ = MI_ARB_ON_OFF | MI_ARB_ENABLE;
*cs++ = MI_SEMAPHORE_WAIT |
@ -1114,8 +1112,8 @@ record_registers(struct intel_context *ce,
*cs++ = MI_ARB_ON_OFF | MI_ARB_DISABLE;
*cs++ = MI_BATCH_BUFFER_START_GEN8 | BIT(8);
*cs++ = lower_32_bits(b_after->node.start);
*cs++ = upper_32_bits(b_after->node.start);
*cs++ = lower_32_bits(i915_vma_offset(b_after));
*cs++ = upper_32_bits(i915_vma_offset(b_after));
intel_ring_advance(rq, cs);
@ -1236,8 +1234,8 @@ static int poison_registers(struct intel_context *ce, u32 poison, u32 *sema)
*cs++ = MI_ARB_ON_OFF | MI_ARB_DISABLE;
*cs++ = MI_BATCH_BUFFER_START_GEN8 | BIT(8);
*cs++ = lower_32_bits(batch->node.start);
*cs++ = upper_32_bits(batch->node.start);
*cs++ = lower_32_bits(i915_vma_offset(batch));
*cs++ = upper_32_bits(i915_vma_offset(batch));
*cs++ = MI_STORE_DWORD_IMM_GEN4 | MI_USE_GGTT;
*cs++ = i915_ggtt_offset(ce->engine->status_page.vma) +

View File

@ -8,6 +8,7 @@
#include "gem/i915_gem_internal.h"
#include "gem/i915_gem_lmem.h"
#include "selftests/igt_spinner.h"
#include "selftests/i915_random.h"
static const unsigned int sizes[] = {
@ -486,7 +487,8 @@ global_clear(struct intel_migrate *migrate, u32 sz, struct rnd_state *prng)
static int live_migrate_copy(void *arg)
{
struct intel_migrate *migrate = arg;
struct intel_gt *gt = arg;
struct intel_migrate *migrate = &gt->migrate;
struct drm_i915_private *i915 = migrate->context->engine->i915;
I915_RND_STATE(prng);
int i;
@ -507,7 +509,8 @@ static int live_migrate_copy(void *arg)
static int live_migrate_clear(void *arg)
{
struct intel_migrate *migrate = arg;
struct intel_gt *gt = arg;
struct intel_migrate *migrate = &gt->migrate;
struct drm_i915_private *i915 = migrate->context->engine->i915;
I915_RND_STATE(prng);
int i;
@ -527,6 +530,149 @@ static int live_migrate_clear(void *arg)
return 0;
}
struct spinner_timer {
struct timer_list timer;
struct igt_spinner spin;
};
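/*
 * spinner_kill() fires from a timer so the test can bound how long the
 * spinner blocks the ring; from_timer() is the container_of() idiom,
 * recovering the enclosing spinner_timer from its embedded timer_list.
 */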
static void spinner_kill(struct timer_list *timer)
{
struct spinner_timer *st = from_timer(st, timer, timer);
igt_spinner_end(&st->spin);
pr_info("%s\n", __func__);
}
static int live_emit_pte_full_ring(void *arg)
{
struct intel_gt *gt = arg;
struct intel_migrate *migrate = &gt->migrate;
struct drm_i915_private *i915 = migrate->context->engine->i915;
struct drm_i915_gem_object *obj;
struct intel_context *ce;
struct i915_request *rq, *prev;
struct spinner_timer st;
struct sgt_dma it;
int len, sz, err;
u32 *cs;
/*
* Simple regression test to check that we don't trample the
* rq->reserved_space when returning from emit_pte(), if the ring is
* nearly full.
*/
if (igt_spinner_init(&st.spin, to_gt(i915)))
return -ENOMEM;
obj = i915_gem_object_create_internal(i915, 2 * PAGE_SIZE);
if (IS_ERR(obj)) {
err = PTR_ERR(obj);
goto out_spinner;
}
err = i915_gem_object_pin_pages_unlocked(obj);
if (err)
goto out_obj;
ce = intel_migrate_create_context(migrate);
if (IS_ERR(ce)) {
err = PTR_ERR(ce);
goto out_obj;
}
ce->ring_size = SZ_4K; /* Not too big */
err = intel_context_pin(ce);
if (err)
goto out_put;
rq = igt_spinner_create_request(&st.spin, ce, MI_ARB_CHECK);
if (IS_ERR(rq)) {
err = PTR_ERR(rq);
goto out_unpin;
}
i915_request_add(rq);
if (!igt_wait_for_spinner(&st.spin, rq)) {
err = -EIO;
goto out_unpin;
}
/*
* Fill the rest of the ring leaving I915_EMIT_PTE_NUM_DWORDS +
* ring->reserved_space at the end. To actually emit the PTEs we require
* slightly more than I915_EMIT_PTE_NUM_DWORDS, since our object size is
* greater than PAGE_SIZE. The correct behaviour is to wait for more
* ring space in emit_pte(), otherwise we trample on the reserved_space
* resulting in crashes when later submitting the rq.
*/
prev = NULL;
do {
if (prev)
i915_request_add(rq);
rq = i915_request_create(ce);
if (IS_ERR(rq)) {
err = PTR_ERR(rq);
goto out_unpin;
}
sz = (rq->ring->space - rq->reserved_space) / sizeof(u32) -
I915_EMIT_PTE_NUM_DWORDS;
sz = min_t(u32, sz, (SZ_1K - rq->reserved_space) / sizeof(u32) -
I915_EMIT_PTE_NUM_DWORDS);
cs = intel_ring_begin(rq, sz);
if (IS_ERR(cs)) {
err = PTR_ERR(cs);
goto out_rq;
}
memset32(cs, MI_NOOP, sz);
cs += sz;
intel_ring_advance(rq, cs);
pr_info("%s emit=%u sz=%d\n", __func__, rq->ring->emit, sz);
prev = rq;
} while (rq->ring->space > (rq->reserved_space +
I915_EMIT_PTE_NUM_DWORDS * sizeof(u32)));
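/*
 * The loop above exits with at most reserved_space +
 * I915_EMIT_PTE_NUM_DWORDS dwords of ring left, so the emit_pte()
 * below must wait for space rather than eat into reserved_space.
 */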
timer_setup_on_stack(&st.timer, spinner_kill, 0);
mod_timer(&st.timer, jiffies + 2 * HZ);
/*
* This should wait for the spinner to be killed, otherwise we should go
* down in flames when doing i915_request_add().
*/
pr_info("%s emite_pte ring space=%u\n", __func__, rq->ring->space);
it = sg_sgt(obj->mm.pages->sgl);
len = emit_pte(rq, &it, obj->cache_level, false, 0, CHUNK_SZ);
if (!len) {
err = -EINVAL;
goto out_rq;
}
if (len < 0) {
err = len;
goto out_rq;
}
out_rq:
i915_request_add(rq); /* GEM_BUG_ON(rq->reserved_space > ring->space)? */
del_timer_sync(&st.timer);
destroy_timer_on_stack(&st.timer);
out_unpin:
intel_context_unpin(ce);
out_put:
intel_context_put(ce);
out_obj:
i915_gem_object_put(obj);
out_spinner:
igt_spinner_fini(&st.spin);
return err;
}
struct threaded_migrate {
struct intel_migrate *migrate;
struct task_struct *tsk;
@ -593,7 +739,10 @@ static int __thread_migrate_copy(void *arg)
static int thread_migrate_copy(void *arg)
{
return threaded_migrate(arg, __thread_migrate_copy, 0);
struct intel_gt *gt = arg;
struct intel_migrate *migrate = &gt->migrate;
return threaded_migrate(migrate, __thread_migrate_copy, 0);
}
static int __thread_global_copy(void *arg)
@ -605,7 +754,10 @@ static int __thread_global_copy(void *arg)
static int thread_global_copy(void *arg)
{
return threaded_migrate(arg, __thread_global_copy, 0);
struct intel_gt *gt = arg;
struct intel_migrate *migrate = &gt->migrate;
return threaded_migrate(migrate, __thread_global_copy, 0);
}
static int __thread_migrate_clear(void *arg)
@ -624,12 +776,18 @@ static int __thread_global_clear(void *arg)
static int thread_migrate_clear(void *arg)
{
return threaded_migrate(arg, __thread_migrate_clear, 0);
struct intel_gt *gt = arg;
struct intel_migrate *migrate = &gt->migrate;
return threaded_migrate(migrate, __thread_migrate_clear, 0);
}
static int thread_global_clear(void *arg)
{
return threaded_migrate(arg, __thread_global_clear, 0);
struct intel_gt *gt = arg;
struct intel_migrate *migrate = &gt->migrate;
return threaded_migrate(migrate, __thread_global_clear, 0);
}
int intel_migrate_live_selftests(struct drm_i915_private *i915)
@ -637,6 +795,7 @@ int intel_migrate_live_selftests(struct drm_i915_private *i915)
static const struct i915_subtest tests[] = {
SUBTEST(live_migrate_copy),
SUBTEST(live_migrate_clear),
SUBTEST(live_emit_pte_full_ring),
SUBTEST(thread_migrate_copy),
SUBTEST(thread_migrate_clear),
SUBTEST(thread_global_copy),
@ -647,7 +806,7 @@ int intel_migrate_live_selftests(struct drm_i915_private *i915)
if (!gt->migrate.context)
return 0;
return i915_subtests(tests, &gt->migrate);
return intel_gt_live_subtests(tests, gt);
}
static struct drm_i915_gem_object *

View File

@ -228,9 +228,7 @@ static int check_mocs_engine(struct live_mocs *arg,
if (IS_ERR(rq))
return PTR_ERR(rq);
i915_vma_lock(vma);
err = i915_vma_move_to_active(vma, rq, EXEC_OBJECT_WRITE);
i915_vma_unlock(vma);
err = igt_vma_move_to_active_unlocked(vma, rq, EXEC_OBJECT_WRITE);
/* Read the mocs tables back using SRM */
offset = i915_ggtt_offset(vma);

View File

@ -50,7 +50,7 @@ static struct i915_vma *create_wally(struct intel_engine_cs *engine)
} else {
*cs++ = MI_STORE_DWORD_IMM | MI_MEM_VIRTUAL;
}
*cs++ = vma->node.start + 4000;
*cs++ = i915_vma_offset(vma) + 4000;
*cs++ = STACK_MAGIC;
*cs++ = MI_BATCH_BUFFER_END;

View File

@ -122,14 +122,14 @@ create_spin_counter(struct intel_engine_cs *engine,
if (srm) {
*cs++ = MI_STORE_REGISTER_MEM_GEN8;
*cs++ = i915_mmio_reg_offset(CS_GPR(COUNT));
*cs++ = lower_32_bits(vma->node.start + end * sizeof(*cs));
*cs++ = upper_32_bits(vma->node.start + end * sizeof(*cs));
*cs++ = lower_32_bits(i915_vma_offset(vma) + end * sizeof(*cs));
*cs++ = upper_32_bits(i915_vma_offset(vma) + end * sizeof(*cs));
}
}
*cs++ = MI_BATCH_BUFFER_START_GEN8;
*cs++ = lower_32_bits(vma->node.start + loop * sizeof(*cs));
*cs++ = upper_32_bits(vma->node.start + loop * sizeof(*cs));
*cs++ = lower_32_bits(i915_vma_offset(vma) + loop * sizeof(*cs));
*cs++ = upper_32_bits(i915_vma_offset(vma) + loop * sizeof(*cs));
GEM_BUG_ON(cs - base > end);
i915_gem_object_flush_map(obj);
@ -655,7 +655,7 @@ int live_rps_frequency_cs(void *arg)
err = i915_vma_move_to_active(vma, rq, 0);
if (!err)
err = rq->engine->emit_bb_start(rq,
vma->node.start,
i915_vma_offset(vma),
PAGE_SIZE, 0);
i915_request_add(rq);
if (err)
@ -794,7 +794,7 @@ int live_rps_frequency_srm(void *arg)
err = i915_vma_move_to_active(vma, rq, 0);
if (!err)
err = rq->engine->emit_bb_start(rq,
vma->node.start,
i915_vma_offset(vma),
PAGE_SIZE, 0);
i915_request_add(rq);
if (err)

View File

@ -138,9 +138,7 @@ read_nonprivs(struct intel_context *ce)
goto err_pin;
}
i915_vma_lock(vma);
err = i915_vma_move_to_active(vma, rq, EXEC_OBJECT_WRITE);
i915_vma_unlock(vma);
err = igt_vma_move_to_active_unlocked(vma, rq, EXEC_OBJECT_WRITE);
if (err)
goto err_req;
@ -521,7 +519,7 @@ static int check_dirty_whitelist(struct intel_context *ce)
for (i = 0; i < engine->whitelist.count; i++) {
u32 reg = i915_mmio_reg_offset(engine->whitelist.list[i].reg);
struct i915_gem_ww_ctx ww;
u64 addr = scratch->node.start;
u64 addr = i915_vma_offset(scratch);
struct i915_request *rq;
u32 srm, lrm, rsvd;
u32 expect;
@ -640,7 +638,7 @@ retry:
goto err_request;
err = engine->emit_bb_start(rq,
batch->node.start, PAGE_SIZE,
i915_vma_offset(batch), PAGE_SIZE,
0);
if (err)
goto err_request;
@ -853,9 +851,7 @@ static int read_whitelisted_registers(struct intel_context *ce,
if (IS_ERR(rq))
return PTR_ERR(rq);
i915_vma_lock(results);
err = i915_vma_move_to_active(results, rq, EXEC_OBJECT_WRITE);
i915_vma_unlock(results);
err = igt_vma_move_to_active_unlocked(results, rq, EXEC_OBJECT_WRITE);
if (err)
goto err_req;
@ -870,7 +866,7 @@ static int read_whitelisted_registers(struct intel_context *ce,
}
for (i = 0; i < engine->whitelist.count; i++) {
u64 offset = results->node.start + sizeof(u32) * i;
u64 offset = i915_vma_offset(results) + sizeof(u32) * i;
u32 reg = i915_mmio_reg_offset(engine->whitelist.list[i].reg);
/* Clear non priv flags */
@ -935,14 +931,12 @@ static int scrub_whitelisted_registers(struct intel_context *ce)
goto err_request;
}
i915_vma_lock(batch);
err = i915_vma_move_to_active(batch, rq, 0);
i915_vma_unlock(batch);
err = igt_vma_move_to_active_unlocked(batch, rq, 0);
if (err)
goto err_request;
/* Perform the writes from an unprivileged "user" batch */
err = engine->emit_bb_start(rq, batch->node.start, 0, 0);
err = engine->emit_bb_start(rq, i915_vma_offset(batch), 0, 0);
err_request:
err = request_add_sync(rq, err);

View File

@ -8,6 +8,7 @@
#include <linux/pagemap.h>
#include <linux/shmem_fs.h>
#include "i915_drv.h"
#include "gem/i915_gem_object.h"
#include "gem/i915_gem_lmem.h"
#include "shmem_utils.h"
@ -32,6 +33,8 @@ struct file *shmem_create_from_data(const char *name, void *data, size_t len)
struct file *shmem_create_from_object(struct drm_i915_gem_object *obj)
{
struct drm_i915_private *i915 = to_i915(obj->base.dev);
enum i915_map_type map_type;
struct file *file;
void *ptr;
@ -41,8 +44,8 @@ struct file *shmem_create_from_object(struct drm_i915_gem_object *obj)
return file;
}
ptr = i915_gem_object_pin_map_unlocked(obj, i915_gem_object_is_lmem(obj) ?
I915_MAP_WC : I915_MAP_WB);
map_type = i915_coherent_map_type(i915, obj, true);
ptr = i915_gem_object_pin_map_unlocked(obj, map_type);
if (IS_ERR(ptr))
return ERR_CAST(ptr);

View File

@ -73,7 +73,7 @@ struct guc_debug_capture_list_header {
struct guc_debug_capture_list {
struct guc_debug_capture_list_header header;
struct guc_mmio_reg regs[0];
struct guc_mmio_reg regs[];
} __packed;
/**
@ -125,7 +125,7 @@ struct guc_state_capture_header_t {
struct guc_state_capture_t {
struct guc_state_capture_header_t header;
struct guc_mmio_reg mmio_entries[0];
struct guc_mmio_reg mmio_entries[];
} __packed;
enum guc_capture_group_types {
@ -145,7 +145,7 @@ struct guc_state_capture_group_header_t {
/* this is the top level structure where an error-capture dump starts */
struct guc_state_capture_group_t {
struct guc_state_capture_group_header_t grp_header;
struct guc_state_capture_t capture_entries[0];
struct guc_state_capture_t capture_entries[];
} __packed;
/**

View File

@ -0,0 +1,210 @@
// SPDX-License-Identifier: MIT
/*
* Copyright © 2022 Intel Corporation
*/
#include "gt/intel_engine_pm.h"
#include "gt/intel_gpu_commands.h"
#include "gt/intel_gt.h"
#include "gt/intel_ring.h"
#include "intel_gsc_fw.h"
#define GSC_FW_STATUS_REG _MMIO(0x116C40)
#define GSC_FW_CURRENT_STATE REG_GENMASK(3, 0)
#define GSC_FW_CURRENT_STATE_RESET 0
#define GSC_FW_INIT_COMPLETE_BIT REG_BIT(9)
static bool gsc_is_in_reset(struct intel_uncore *uncore)
{
u32 fw_status = intel_uncore_read(uncore, GSC_FW_STATUS_REG);
return REG_FIELD_GET(GSC_FW_CURRENT_STATE, fw_status) ==
GSC_FW_CURRENT_STATE_RESET;
}
bool intel_gsc_uc_fw_init_done(struct intel_gsc_uc *gsc)
{
struct intel_uncore *uncore = gsc_uc_to_gt(gsc)->uncore;
u32 fw_status = intel_uncore_read(uncore, GSC_FW_STATUS_REG);
return fw_status & GSC_FW_INIT_COMPLETE_BIT;
}
static int emit_gsc_fw_load(struct i915_request *rq, struct intel_gsc_uc *gsc)
{
u32 offset = i915_ggtt_offset(gsc->local);
u32 *cs;
cs = intel_ring_begin(rq, 4);
if (IS_ERR(cs))
return PTR_ERR(cs);
*cs++ = GSC_FW_LOAD;
*cs++ = lower_32_bits(offset);
*cs++ = upper_32_bits(offset);
*cs++ = (gsc->local->size / SZ_4K) | HECI1_FW_LIMIT_VALID;
intel_ring_advance(rq, cs);
return 0;
}
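/*
 * Worked example (derived from this series): gsc->local is allocated with
 * SZ_8M (see intel_gsc_uc_init()), so the final dword programs a limit of
 * SZ_8M / SZ_4K == 2048 4K pages, OR'ed with HECI1_FW_LIMIT_VALID.
 */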
static int gsc_fw_load(struct intel_gsc_uc *gsc)
{
struct intel_context *ce = gsc->ce;
struct i915_request *rq;
int err;
if (!ce)
return -ENODEV;
rq = i915_request_create(ce);
if (IS_ERR(rq))
return PTR_ERR(rq);
if (ce->engine->emit_init_breadcrumb) {
err = ce->engine->emit_init_breadcrumb(rq);
if (err)
goto out_rq;
}
err = emit_gsc_fw_load(rq, gsc);
if (err)
goto out_rq;
err = ce->engine->emit_flush(rq, 0);
out_rq:
i915_request_get(rq);
if (unlikely(err))
i915_request_set_error_once(rq, err);
i915_request_add(rq);
if (!err && i915_request_wait(rq, 0, msecs_to_jiffies(500)) < 0)
err = -ETIME;
i915_request_put(rq);
if (err)
drm_err(&gsc_uc_to_gt(gsc)->i915->drm,
"Request submission for GSC load failed (%d)\n",
err);
return err;
}
static int gsc_fw_load_prepare(struct intel_gsc_uc *gsc)
{
struct intel_gt *gt = gsc_uc_to_gt(gsc);
struct drm_i915_private *i915 = gt->i915;
struct drm_i915_gem_object *obj;
void *src, *dst;
if (!gsc->local)
return -ENODEV;
obj = gsc->local->obj;
if (obj->base.size < gsc->fw.size)
return -ENOSPC;
dst = i915_gem_object_pin_map_unlocked(obj,
i915_coherent_map_type(i915, obj, true));
if (IS_ERR(dst))
return PTR_ERR(dst);
src = i915_gem_object_pin_map_unlocked(gsc->fw.obj,
i915_coherent_map_type(i915, gsc->fw.obj, true));
if (IS_ERR(src)) {
i915_gem_object_unpin_map(obj);
return PTR_ERR(src);
}
memset(dst, 0, obj->base.size);
memcpy(dst, src, gsc->fw.size);
i915_gem_object_unpin_map(gsc->fw.obj);
i915_gem_object_unpin_map(obj);
return 0;
}
static int gsc_fw_wait(struct intel_gt *gt)
{
return intel_wait_for_register(gt->uncore,
GSC_FW_STATUS_REG,
GSC_FW_INIT_COMPLETE_BIT,
GSC_FW_INIT_COMPLETE_BIT,
500);
}
int intel_gsc_uc_fw_upload(struct intel_gsc_uc *gsc)
{
struct intel_gt *gt = gsc_uc_to_gt(gsc);
struct intel_uc_fw *gsc_fw = &gsc->fw;
int err;
/* check current fw status */
if (intel_gsc_uc_fw_init_done(gsc)) {
if (GEM_WARN_ON(!intel_uc_fw_is_loaded(gsc_fw)))
intel_uc_fw_change_status(gsc_fw, INTEL_UC_FIRMWARE_TRANSFERRED);
return -EEXIST;
}
if (!intel_uc_fw_is_loadable(gsc_fw))
return -ENOEXEC;
/* FW blob is ok, so clean the status */
intel_uc_fw_sanitize(&gsc->fw);
if (!gsc_is_in_reset(gt->uncore))
return -EIO;
err = gsc_fw_load_prepare(gsc);
if (err)
goto fail;
/*
* GSC is only killed by an FLR, so we need to trigger one on unload to
* make sure we stop it. This is because we assign a chunk of memory to
* the GSC as part of the FW load, so we need to make sure it stops
* using it when we release it to the system on driver unload. Note that
* this is not a problem of the unload per-se, because the GSC will not
* touch that memory unless there are requests for it coming from the
* driver; therefore, no accesses will happen while i915 is not loaded,
* but if we re-load the driver then the GSC might wake up and try to
* access that old memory location again.
* Given that an FLR is a very disruptive action (see the FLR function
* for details), we want to do it as the last action before releasing
* the access to the MMIO bar, which means we need to do it as part of
* the primary uncore cleanup.
* An alternative approach to the FLR would be to use a memory location
* that survives driver unload, like e.g. stolen memory, and keep the
* GSC loaded across reloads. However, this requires us to make sure we
* preserve that memory location on unload and then determine and
* reserve its offset on each subsequent load, which is not trivial, so
* it is easier to just kill everything and start fresh.
*/
intel_uncore_set_flr_on_fini(&gt->i915->uncore);
err = gsc_fw_load(gsc);
if (err)
goto fail;
err = gsc_fw_wait(gt);
if (err)
goto fail;
/* FW is not fully operational until we enable SW proxy */
intel_uc_fw_change_status(gsc_fw, INTEL_UC_FIRMWARE_TRANSFERRED);
drm_info(&gt->i915->drm, "Loaded GSC firmware %s\n",
gsc_fw->file_selected.path);
return 0;
fail:
return intel_uc_fw_mark_load_failed(gsc_fw, err);
}

View File

@ -0,0 +1,15 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2022 Intel Corporation
*/
#ifndef _INTEL_GSC_FW_H_
#define _INTEL_GSC_FW_H_
#include <linux/types.h>
struct intel_gsc_uc;
int intel_gsc_uc_fw_upload(struct intel_gsc_uc *gsc);
bool intel_gsc_uc_fw_init_done(struct intel_gsc_uc *gsc);
#endif

View File

@ -0,0 +1,137 @@
// SPDX-License-Identifier: MIT
/*
* Copyright © 2022 Intel Corporation
*/
#include <linux/types.h>
#include "gt/intel_gt.h"
#include "intel_gsc_uc.h"
#include "intel_gsc_fw.h"
#include "i915_drv.h"
static void gsc_work(struct work_struct *work)
{
struct intel_gsc_uc *gsc = container_of(work, typeof(*gsc), work);
struct intel_gt *gt = gsc_uc_to_gt(gsc);
intel_wakeref_t wakeref;
with_intel_runtime_pm(gt->uncore->rpm, wakeref)
intel_gsc_uc_fw_upload(gsc);
}
static bool gsc_engine_supported(struct intel_gt *gt)
{
intel_engine_mask_t mask;
/*
* We reach here from i915_driver_early_probe for the primary GT before
* its engine mask is set, so we use the device info engine mask for it.
* For other GTs we expect the GT-specific mask to be set before we
* call this function.
*/
GEM_BUG_ON(!gt_is_root(gt) && !gt->info.engine_mask);
if (gt_is_root(gt))
mask = RUNTIME_INFO(gt->i915)->platform_engine_mask;
else
mask = gt->info.engine_mask;
return __HAS_ENGINE(mask, GSC0);
}
void intel_gsc_uc_init_early(struct intel_gsc_uc *gsc)
{
intel_uc_fw_init_early(&gsc->fw, INTEL_UC_FW_TYPE_GSC);
INIT_WORK(&gsc->work, gsc_work);
/*
 * We can arrive here from i915_driver_early_probe for the primary
 * GT before it is fully set up, hence we check the device info's
 * engine mask.
 */
if (!gsc_engine_supported(gsc_uc_to_gt(gsc))) {
intel_uc_fw_change_status(&gsc->fw, INTEL_UC_FIRMWARE_NOT_SUPPORTED);
return;
}
}
int intel_gsc_uc_init(struct intel_gsc_uc *gsc)
{
static struct lock_class_key gsc_lock;
struct intel_gt *gt = gsc_uc_to_gt(gsc);
struct drm_i915_private *i915 = gt->i915;
struct intel_engine_cs *engine = gt->engine[GSC0];
struct intel_context *ce;
struct i915_vma *vma;
int err;
err = intel_uc_fw_init(&gsc->fw);
if (err)
goto out;
vma = intel_guc_allocate_vma(&gt->uc.guc, SZ_8M);
if (IS_ERR(vma)) {
err = PTR_ERR(vma);
goto out_fw;
}
gsc->local = vma;
ce = intel_engine_create_pinned_context(engine, engine->gt->vm, SZ_4K,
I915_GEM_HWS_GSC_ADDR,
&gsc_lock, "gsc_context");
if (IS_ERR(ce)) {
drm_err(&gt->i915->drm,
"failed to create GSC CS ctx for FW communication\n");
err = PTR_ERR(ce);
goto out_vma;
}
gsc->ce = ce;
intel_uc_fw_change_status(&gsc->fw, INTEL_UC_FIRMWARE_LOADABLE);
return 0;
out_vma:
i915_vma_unpin_and_release(&gsc->local, 0);
out_fw:
intel_uc_fw_fini(&gsc->fw);
out:
i915_probe_error(i915, "failed with %d\n", err);
return err;
}
void intel_gsc_uc_fini(struct intel_gsc_uc *gsc)
{
if (!intel_uc_fw_is_loadable(&gsc->fw))
return;
flush_work(&gsc->work);
if (gsc->ce)
intel_engine_destroy_pinned_context(fetch_and_zero(&gsc->ce));
i915_vma_unpin_and_release(&gsc->local, 0);
intel_uc_fw_fini(&gsc->fw);
}
void intel_gsc_uc_suspend(struct intel_gsc_uc *gsc)
{
if (!intel_uc_fw_is_loadable(&gsc->fw))
return;
flush_work(&gsc->work);
}
void intel_gsc_uc_load_start(struct intel_gsc_uc *gsc)
{
if (!intel_uc_fw_is_loadable(&gsc->fw))
return;
if (intel_gsc_uc_fw_init_done(gsc))
return;
queue_work(system_unbound_wq, &gsc->work);
}
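/*
 * Load flow as wired up in this series: intel_uc_init_early() calls
 * intel_gsc_uc_init_early(), __uc_init() calls intel_gsc_uc_init(), and
 * __uc_init_hw() calls intel_gsc_uc_load_start(), which queues gsc_work()
 * to run intel_gsc_uc_fw_upload() under a runtime-PM wakeref.
 */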

View File

@ -0,0 +1,47 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2022 Intel Corporation
*/
#ifndef _INTEL_GSC_UC_H_
#define _INTEL_GSC_UC_H_
#include "intel_uc_fw.h"
struct i915_vma;
struct intel_context;
struct intel_gsc_uc {
/* Generic uC firmware management */
struct intel_uc_fw fw;
/* GSC-specific additions */
struct i915_vma *local; /* private memory for GSC usage */
struct intel_context *ce; /* for submission to GSC FW via GSC engine */
struct work_struct work; /* for delayed load */
};
void intel_gsc_uc_init_early(struct intel_gsc_uc *gsc);
int intel_gsc_uc_init(struct intel_gsc_uc *gsc);
void intel_gsc_uc_fini(struct intel_gsc_uc *gsc);
void intel_gsc_uc_suspend(struct intel_gsc_uc *gsc);
void intel_gsc_uc_load_start(struct intel_gsc_uc *gsc);
static inline bool intel_gsc_uc_is_supported(struct intel_gsc_uc *gsc)
{
return intel_uc_fw_is_supported(&gsc->fw);
}
static inline bool intel_gsc_uc_is_wanted(struct intel_gsc_uc *gsc)
{
return intel_uc_fw_is_enabled(&gsc->fw);
}
static inline bool intel_gsc_uc_is_used(struct intel_gsc_uc *gsc)
{
GEM_BUG_ON(__intel_uc_fw_status(&gsc->fw) == INTEL_UC_FIRMWARE_SELECTED);
return intel_uc_fw_is_available(&gsc->fw);
}
#endif

View File

@ -274,8 +274,9 @@ static u32 guc_ctl_wa_flags(struct intel_guc *guc)
if (IS_DG2_GRAPHICS_STEP(gt->i915, G10, STEP_A0, STEP_B0))
flags |= GUC_WA_GAM_CREDITS;
/* Wa_14014475959:dg2 */
if (IS_DG2(gt->i915))
/* Wa_14014475959 */
if (IS_MTL_GRAPHICS_STEP(gt->i915, M, STEP_A0, STEP_B0) ||
IS_DG2(gt->i915))
flags |= GUC_WA_HOLD_CCS_SWITCHOUT;
/*
@ -289,7 +290,9 @@ static u32 guc_ctl_wa_flags(struct intel_guc *guc)
flags |= GUC_WA_DUAL_QUEUE;
/* Wa_22011802037: graphics version 11/12 */
if (IS_GRAPHICS_VER(gt->i915, 11, 12))
if (IS_MTL_GRAPHICS_STEP(gt->i915, M, STEP_A0, STEP_B0) ||
(GRAPHICS_VER(gt->i915) >= 11 &&
GRAPHICS_VER_FULL(gt->i915) < IP_VER(12, 70)))
flags |= GUC_WA_PRE_PARSER;
/* Wa_16011777198:dg2 */
@ -430,9 +433,6 @@ int intel_guc_init(struct intel_guc *guc)
/* now that everything is perma-pinned, initialize the parameters */
guc_init_params(guc);
/* We need to notify the guc whenever we change the GGTT */
i915_ggtt_enable_guc(gt->ggtt);
intel_uc_fw_change_status(&guc->fw, INTEL_UC_FIRMWARE_LOADABLE);
return 0;
@ -457,13 +457,9 @@ out:
void intel_guc_fini(struct intel_guc *guc)
{
struct intel_gt *gt = guc_to_gt(guc);
if (!intel_uc_fw_is_loadable(&guc->fw))
return;
i915_ggtt_disable_guc(gt->ggtt);
if (intel_guc_slpc_is_used(guc))
intel_guc_slpc_fini(&guc->slpc);

View File

@ -158,6 +158,9 @@ struct intel_guc {
bool submission_selected;
/** @submission_initialized: tracks whether GuC submission has been initialised */
bool submission_initialized;
/** @submission_version: Submission API version of the currently loaded firmware */
struct intel_uc_fw_ver submission_version;
/**
* @rc_supported: tracks whether we support GuC rc on the current platform
*/
@ -268,6 +271,14 @@ struct intel_guc {
#endif
};
/*
* GuC version number components are only 8-bit, so converting to a 32-bit 8.8.8
* integer works.
*/
#define MAKE_GUC_VER(maj, min, pat) (((maj) << 16) | ((min) << 8) | (pat))
#define MAKE_GUC_VER_STRUCT(ver) MAKE_GUC_VER((ver).major, (ver).minor, (ver).patch)
#define GUC_SUBMIT_VER(guc) MAKE_GUC_VER_STRUCT((guc)->submission_version)
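/*
 * Illustrative sketch: with 8-bit components the packed values order
 * correctly as plain integers, e.g. MAKE_GUC_VER(1, 1, 0) == 0x010100
 * compares greater than MAKE_GUC_VER(1, 0, 3) == 0x010003, which is what
 * checks such as GUC_SUBMIT_VER(guc) < MAKE_GUC_VER(1, 0, 0) rely on.
 */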
static inline struct intel_guc *log_to_guc(struct intel_guc_log *log)
{
return container_of(log, struct intel_guc, log);

View File

@ -1621,7 +1621,7 @@ static void guc_engine_reset_prepare(struct intel_engine_cs *engine)
intel_engine_stop_cs(engine);
/*
* Wa_22011802037:gen11/gen12: In addition to stopping the cs, we need
* Wa_22011802037: In addition to stopping the cs, we need
* to wait for any pending mi force wakeups
*/
intel_engine_wait_for_pending_mi_fw(engine);
@ -1890,7 +1890,7 @@ int intel_guc_submission_init(struct intel_guc *guc)
if (guc->submission_initialized)
return 0;
if (GET_UC_VER(guc) < MAKE_UC_VER(70, 0, 0)) {
if (GUC_SUBMIT_VER(guc) < MAKE_GUC_VER(1, 0, 0)) {
ret = guc_lrc_desc_pool_create_v69(guc);
if (ret)
return ret;
@ -2330,7 +2330,7 @@ static int register_context(struct intel_context *ce, bool loop)
GEM_BUG_ON(intel_context_is_child(ce));
trace_intel_context_register(ce);
if (GET_UC_VER(guc) >= MAKE_UC_VER(70, 0, 0))
if (GUC_SUBMIT_VER(guc) >= MAKE_GUC_VER(1, 0, 0))
ret = register_context_v70(guc, ce, loop);
else
ret = register_context_v69(guc, ce, loop);
@ -2342,7 +2342,7 @@ static int register_context(struct intel_context *ce, bool loop)
set_context_registered(ce);
spin_unlock_irqrestore(&ce->guc_state.lock, flags);
if (GET_UC_VER(guc) >= MAKE_UC_VER(70, 0, 0))
if (GUC_SUBMIT_VER(guc) >= MAKE_GUC_VER(1, 0, 0))
guc_context_policy_init_v70(ce, loop);
}
@ -2534,6 +2534,7 @@ static void prepare_context_registration_info_v69(struct intel_context *ce)
i915_gem_object_is_lmem(ce->ring->vma->obj));
desc = __get_lrc_desc_v69(guc, ctx_id);
GEM_BUG_ON(!desc);
desc->engine_class = engine_class_to_guc_class(engine->class);
desc->engine_submit_mask = engine->logical_mask;
desc->hw_context_desc = ce->lrc.lrca;
@ -2956,7 +2957,7 @@ static void __guc_context_set_preemption_timeout(struct intel_guc *guc,
u16 guc_id,
u32 preemption_timeout)
{
if (GET_UC_VER(guc) >= MAKE_UC_VER(70, 0, 0)) {
if (GUC_SUBMIT_VER(guc) >= MAKE_GUC_VER(1, 0, 0)) {
struct context_policy policy;
__guc_context_policy_start_klv(&policy, guc_id);
@ -3283,7 +3284,7 @@ static int guc_context_alloc(struct intel_context *ce)
static void __guc_context_set_prio(struct intel_guc *guc,
struct intel_context *ce)
{
if (GET_UC_VER(guc) >= MAKE_UC_VER(70, 0, 0)) {
if (GUC_SUBMIT_VER(guc) >= MAKE_GUC_VER(1, 0, 0)) {
struct context_policy policy;
__guc_context_policy_start_klv(&policy, ce->guc_id.id);
@ -4202,8 +4203,10 @@ static void guc_default_vfuncs(struct intel_engine_cs *engine)
engine->flags |= I915_ENGINE_HAS_TIMESLICES;
/* Wa_14014475959:dg2 */
if (IS_DG2(engine->i915) && engine->class == COMPUTE_CLASS)
engine->flags |= I915_ENGINE_USES_WA_HOLD_CCS_SWITCHOUT;
if (engine->class == COMPUTE_CLASS)
if (IS_MTL_GRAPHICS_STEP(engine->i915, M, STEP_A0, STEP_B0) ||
IS_DG2(engine->i915))
engine->flags |= I915_ENGINE_USES_WA_HOLD_CCS_SWITCHOUT;
/*
* TODO: GuC supports timeslicing and semaphores as well, but they're
@ -4366,7 +4369,7 @@ static int guc_init_global_schedule_policy(struct intel_guc *guc)
intel_wakeref_t wakeref;
int ret = 0;
if (GET_UC_VER(guc) < MAKE_UC_VER(70, 3, 0))
if (GUC_SUBMIT_VER(guc) < MAKE_GUC_VER(1, 1, 0))
return 0;
__guc_scheduling_policy_start_klv(&policy);
@ -4905,6 +4908,9 @@ void intel_guc_submission_print_info(struct intel_guc *guc,
if (!sched_engine)
return;
drm_printf(p, "GuC Submission API Version: %d.%d.%d\n",
guc->submission_version.major, guc->submission_version.minor,
guc->submission_version.patch);
drm_printf(p, "GuC Number Outstanding Submission G2H: %u\n",
atomic_read(&guc->outstanding_submission_g2h));
drm_printf(p, "GuC tasklet count: %u\n",

View File

@ -32,7 +32,7 @@ int intel_huc_fw_load_and_auth_via_gsc(struct intel_huc *huc)
GEM_WARN_ON(intel_uc_fw_is_loaded(&huc->fw));
ret = intel_pxp_huc_load_and_auth(&huc_to_gt(huc)->pxp);
ret = intel_pxp_huc_load_and_auth(huc_to_gt(huc)->i915->pxp);
if (ret)
return ret;

View File

@ -7,6 +7,8 @@
#include "gt/intel_gt.h"
#include "gt/intel_reset.h"
#include "intel_gsc_fw.h"
#include "intel_gsc_uc.h"
#include "intel_guc.h"
#include "intel_guc_ads.h"
#include "intel_guc_submission.h"
@ -126,6 +128,7 @@ void intel_uc_init_early(struct intel_uc *uc)
intel_guc_init_early(&uc->guc);
intel_huc_init_early(&uc->huc);
intel_gsc_uc_init_early(&uc->gsc);
__confirm_options(uc);
@ -296,15 +299,26 @@ static void __uc_fetch_firmwares(struct intel_uc *uc)
INTEL_UC_FIRMWARE_ERROR);
}
if (intel_uc_wants_gsc_uc(uc)) {
drm_dbg(&uc_to_gt(uc)->i915->drm,
"Failed to fetch GuC: %d disabling GSC\n", err);
intel_uc_fw_change_status(&uc->gsc.fw,
INTEL_UC_FIRMWARE_ERROR);
}
return;
}
if (intel_uc_wants_huc(uc))
intel_uc_fw_fetch(&uc->huc.fw);
if (intel_uc_wants_gsc_uc(uc))
intel_uc_fw_fetch(&uc->gsc.fw);
}
static void __uc_cleanup_firmwares(struct intel_uc *uc)
{
intel_uc_fw_cleanup_fetch(&uc->gsc.fw);
intel_uc_fw_cleanup_fetch(&uc->huc.fw);
intel_uc_fw_cleanup_fetch(&uc->guc.fw);
}
@ -330,11 +344,15 @@ static int __uc_init(struct intel_uc *uc)
if (intel_uc_uses_huc(uc))
intel_huc_init(huc);
if (intel_uc_uses_gsc_uc(uc))
intel_gsc_uc_init(&uc->gsc);
return 0;
}
static void __uc_fini(struct intel_uc *uc)
{
intel_gsc_uc_fini(&uc->gsc);
intel_huc_fini(&uc->huc);
intel_guc_fini(&uc->guc);
}
@ -437,9 +455,9 @@ static void print_fw_ver(struct intel_uc *uc, struct intel_uc_fw *fw)
drm_info(&i915->drm, "%s firmware %s version %u.%u.%u\n",
intel_uc_fw_type_repr(fw->type), fw->file_selected.path,
fw->file_selected.major_ver,
fw->file_selected.minor_ver,
fw->file_selected.patch_ver);
fw->file_selected.ver.major,
fw->file_selected.ver.minor,
fw->file_selected.ver.patch);
}
static int __uc_init_hw(struct intel_uc *uc)
@ -531,6 +549,8 @@ static int __uc_init_hw(struct intel_uc *uc)
intel_rps_lower_unslice(&uc_to_gt(uc)->rps);
}
intel_gsc_uc_load_start(&uc->gsc);
drm_info(&i915->drm, "GuC submission %s\n",
str_enabled_disabled(intel_uc_uses_guc_submission(uc)));
drm_info(&i915->drm, "GuC SLPC %s\n",
@ -659,6 +679,9 @@ void intel_uc_suspend(struct intel_uc *uc)
intel_wakeref_t wakeref;
int err;
/* flush the GSC worker */
intel_gsc_uc_suspend(&uc->gsc);
if (!intel_guc_is_ready(guc)) {
guc->interrupts.enabled = false;
return;

View File

@ -6,6 +6,7 @@
#ifndef _INTEL_UC_H_
#define _INTEL_UC_H_
#include "intel_gsc_uc.h"
#include "intel_guc.h"
#include "intel_guc_rc.h"
#include "intel_guc_submission.h"
@ -27,6 +28,7 @@ struct intel_uc_ops {
struct intel_uc {
struct intel_uc_ops const *ops;
struct intel_gsc_uc gsc;
struct intel_guc guc;
struct intel_huc huc;
@ -87,6 +89,7 @@ uc_state_checkers(huc, huc);
uc_state_checkers(guc, guc_submission);
uc_state_checkers(guc, guc_slpc);
uc_state_checkers(guc, guc_rc);
uc_state_checkers(gsc, gsc_uc);
#undef uc_state_checkers
#undef __uc_state_checker

View File

@ -19,11 +19,18 @@
static inline struct intel_gt *
____uc_fw_to_gt(struct intel_uc_fw *uc_fw, enum intel_uc_fw_type type)
{
if (type == INTEL_UC_FW_TYPE_GUC)
return container_of(uc_fw, struct intel_gt, uc.guc.fw);
GEM_BUG_ON(type >= INTEL_UC_FW_NUM_TYPES);
GEM_BUG_ON(type != INTEL_UC_FW_TYPE_HUC);
return container_of(uc_fw, struct intel_gt, uc.huc.fw);
switch (type) {
case INTEL_UC_FW_TYPE_GUC:
return container_of(uc_fw, struct intel_gt, uc.guc.fw);
case INTEL_UC_FW_TYPE_HUC:
return container_of(uc_fw, struct intel_gt, uc.huc.fw);
case INTEL_UC_FW_TYPE_GSC:
return container_of(uc_fw, struct intel_gt, uc.gsc.fw);
}
return NULL;
}
static inline struct intel_gt *__uc_fw_to_gt(struct intel_uc_fw *uc_fw)
@ -118,35 +125,35 @@ void intel_uc_fw_change_status(struct intel_uc_fw *uc_fw,
*/
#define __MAKE_UC_FW_PATH_BLANK(prefix_, name_) \
"i915/" \
__stringify(prefix_) name_ ".bin"
__stringify(prefix_) "_" name_ ".bin"
#define __MAKE_UC_FW_PATH_MAJOR(prefix_, name_, major_) \
"i915/" \
__stringify(prefix_) name_ \
__stringify(prefix_) "_" name_ "_" \
__stringify(major_) ".bin"
#define __MAKE_UC_FW_PATH_MMP(prefix_, name_, major_, minor_, patch_) \
"i915/" \
__stringify(prefix_) name_ \
__stringify(prefix_) "_" name_ "_" \
__stringify(major_) "." \
__stringify(minor_) "." \
__stringify(patch_) ".bin"
/* Minor for internal driver use, not part of file name */
#define MAKE_GUC_FW_PATH_MAJOR(prefix_, major_, minor_) \
__MAKE_UC_FW_PATH_MAJOR(prefix_, "_guc_", major_)
__MAKE_UC_FW_PATH_MAJOR(prefix_, "guc", major_)
#define MAKE_GUC_FW_PATH_MMP(prefix_, major_, minor_, patch_) \
__MAKE_UC_FW_PATH_MMP(prefix_, "_guc_", major_, minor_, patch_)
__MAKE_UC_FW_PATH_MMP(prefix_, "guc", major_, minor_, patch_)
#define MAKE_HUC_FW_PATH_BLANK(prefix_) \
__MAKE_UC_FW_PATH_BLANK(prefix_, "_huc")
__MAKE_UC_FW_PATH_BLANK(prefix_, "huc")
#define MAKE_HUC_FW_PATH_GSC(prefix_) \
__MAKE_UC_FW_PATH_BLANK(prefix_, "_huc_gsc")
__MAKE_UC_FW_PATH_BLANK(prefix_, "huc_gsc")
#define MAKE_HUC_FW_PATH_MMP(prefix_, major_, minor_, patch_) \
__MAKE_UC_FW_PATH_MMP(prefix_, "_huc_", major_, minor_, patch_)
__MAKE_UC_FW_PATH_MMP(prefix_, "huc", major_, minor_, patch_)
/*
* All blobs need to be declared via MODULE_FIRMWARE().
@ -238,7 +245,7 @@ __uc_fw_auto_select(struct drm_i915_private *i915, struct intel_uc_fw *uc_fw)
[INTEL_UC_FW_TYPE_GUC] = { blobs_guc, ARRAY_SIZE(blobs_guc) },
[INTEL_UC_FW_TYPE_HUC] = { blobs_huc, ARRAY_SIZE(blobs_huc) },
};
static bool verified;
static bool verified[INTEL_UC_FW_NUM_TYPES];
const struct uc_fw_platform_requirement *fw_blobs;
enum intel_platform p = INTEL_INFO(i915)->platform;
u32 fw_count;
@ -246,6 +253,14 @@ __uc_fw_auto_select(struct drm_i915_private *i915, struct intel_uc_fw *uc_fw)
int i;
bool found;
/*
* GSC FW support is still not fully in place, so we're not defining
* the FW blob yet because we don't want the driver to attempt to load
* it until we're ready for it.
*/
if (uc_fw->type == INTEL_UC_FW_TYPE_GSC)
return;
/*
* The only difference between the ADL GuC FWs is the HWConfig support.
* ADL-N does not support HWConfig, so we should use the same binary as
@ -278,8 +293,8 @@ __uc_fw_auto_select(struct drm_i915_private *i915, struct intel_uc_fw *uc_fw)
uc_fw->file_selected.path = blob->path;
uc_fw->file_wanted.path = blob->path;
uc_fw->file_wanted.major_ver = blob->major;
uc_fw->file_wanted.minor_ver = blob->minor;
uc_fw->file_wanted.ver.major = blob->major;
uc_fw->file_wanted.ver.minor = blob->minor;
uc_fw->loaded_via_gsc = blob->loaded_via_gsc;
found = true;
break;
@ -291,8 +306,8 @@ __uc_fw_auto_select(struct drm_i915_private *i915, struct intel_uc_fw *uc_fw)
}
/* make sure the list is ordered as expected */
if (IS_ENABLED(CONFIG_DRM_I915_SELFTEST) && !verified) {
verified = true;
if (IS_ENABLED(CONFIG_DRM_I915_SELFTEST) && !verified[uc_fw->type]) {
verified[uc_fw->type] = true;
for (i = 1; i < fw_count; i++) {
/* Next platform is good: */
@ -343,7 +358,8 @@ __uc_fw_auto_select(struct drm_i915_private *i915, struct intel_uc_fw *uc_fw)
continue;
bad:
drm_err(&i915->drm, "Invalid FW blob order: %s r%u %s%d.%d.%d comes before %s r%u %s%d.%d.%d\n",
drm_err(&i915->drm, "Invalid %s blob order: %s r%u %s%d.%d.%d comes before %s r%u %s%d.%d.%d\n",
intel_uc_fw_type_repr(uc_fw->type),
intel_platform_name(fw_blobs[i - 1].p), fw_blobs[i - 1].rev,
fw_blobs[i - 1].blob.legacy ? "L" : "v",
fw_blobs[i - 1].blob.major,
@ -374,6 +390,11 @@ static const char *__override_huc_firmware_path(struct drm_i915_private *i915)
return "";
}
static const char *__override_gsc_firmware_path(struct drm_i915_private *i915)
{
return i915->params.gsc_firmware_path;
}
static void __uc_fw_user_override(struct drm_i915_private *i915, struct intel_uc_fw *uc_fw)
{
const char *path = NULL;
@ -385,6 +406,9 @@ static void __uc_fw_user_override(struct drm_i915_private *i915, struct intel_uc
case INTEL_UC_FW_TYPE_HUC:
path = __override_huc_firmware_path(i915);
break;
case INTEL_UC_FW_TYPE_GSC:
path = __override_gsc_firmware_path(i915);
break;
}
if (unlikely(path)) {
@ -438,28 +462,28 @@ static void __force_fw_fetch_failures(struct intel_uc_fw *uc_fw, int e)
uc_fw->user_overridden = user;
} else if (i915_inject_probe_error(i915, e)) {
/* require next major version */
uc_fw->file_wanted.major_ver += 1;
uc_fw->file_wanted.minor_ver = 0;
uc_fw->file_wanted.ver.major += 1;
uc_fw->file_wanted.ver.minor = 0;
uc_fw->user_overridden = user;
} else if (i915_inject_probe_error(i915, e)) {
/* require next minor version */
uc_fw->file_wanted.minor_ver += 1;
uc_fw->file_wanted.ver.minor += 1;
uc_fw->user_overridden = user;
} else if (uc_fw->file_wanted.major_ver &&
} else if (uc_fw->file_wanted.ver.major &&
i915_inject_probe_error(i915, e)) {
/* require prev major version */
uc_fw->file_wanted.major_ver -= 1;
uc_fw->file_wanted.minor_ver = 0;
uc_fw->file_wanted.ver.major -= 1;
uc_fw->file_wanted.ver.minor = 0;
uc_fw->user_overridden = user;
} else if (uc_fw->file_wanted.minor_ver &&
} else if (uc_fw->file_wanted.ver.minor &&
i915_inject_probe_error(i915, e)) {
/* require prev minor version - hey, this should work! */
uc_fw->file_wanted.minor_ver -= 1;
uc_fw->file_wanted.ver.minor -= 1;
uc_fw->user_overridden = user;
} else if (user && i915_inject_probe_error(i915, e)) {
/* officially unsupported platform */
uc_fw->file_wanted.major_ver = 0;
uc_fw->file_wanted.minor_ver = 0;
uc_fw->file_wanted.ver.major = 0;
uc_fw->file_wanted.ver.minor = 0;
uc_fw->user_overridden = true;
}
}
@ -471,13 +495,69 @@ static int check_gsc_manifest(const struct firmware *fw,
u32 version_hi = dw[HUC_GSC_VERSION_HI_DW];
u32 version_lo = dw[HUC_GSC_VERSION_LO_DW];
uc_fw->file_selected.major_ver = FIELD_GET(HUC_GSC_MAJOR_VER_HI_MASK, version_hi);
uc_fw->file_selected.minor_ver = FIELD_GET(HUC_GSC_MINOR_VER_HI_MASK, version_hi);
uc_fw->file_selected.patch_ver = FIELD_GET(HUC_GSC_PATCH_VER_LO_MASK, version_lo);
uc_fw->file_selected.ver.major = FIELD_GET(HUC_GSC_MAJOR_VER_HI_MASK, version_hi);
uc_fw->file_selected.ver.minor = FIELD_GET(HUC_GSC_MINOR_VER_HI_MASK, version_hi);
uc_fw->file_selected.ver.patch = FIELD_GET(HUC_GSC_PATCH_VER_LO_MASK, version_lo);
return 0;
}
static void uc_unpack_css_version(struct intel_uc_fw_ver *ver, u32 css_value)
{
/* Get version numbers from the CSS header */
ver->major = FIELD_GET(CSS_SW_VERSION_UC_MAJOR, css_value);
ver->minor = FIELD_GET(CSS_SW_VERSION_UC_MINOR, css_value);
ver->patch = FIELD_GET(CSS_SW_VERSION_UC_PATCH, css_value);
}
static void guc_read_css_info(struct intel_uc_fw *uc_fw, struct uc_css_header *css)
{
struct intel_guc *guc = container_of(uc_fw, struct intel_guc, fw);
/*
* The GuC firmware includes an extra version number to specify the
* submission API level. This allows submission code to work with
* multiple GuC versions without having to know the absolute firmware
* version number (there are likely to be multiple firmware releases
* which all support the same submission API level).
*
* Note that the spec for the CSS header defines this version number
* as 'vf_version' as it was originally intended for virtualisation.
* However, it is applicable to native submission as well.
*
* Unfortunately, due to an oversight, this version number was only
* exposed in the CSS header from v70.6.0.
*/
if (uc_fw->file_selected.ver.major >= 70) {
if (uc_fw->file_selected.ver.minor >= 6) {
/* v70.6.0 adds CSS header support */
uc_unpack_css_version(&guc->submission_version, css->vf_version);
} else if (uc_fw->file_selected.ver.minor >= 3) {
/* v70.3.0 introduced v1.1.0 */
guc->submission_version.major = 1;
guc->submission_version.minor = 1;
guc->submission_version.patch = 0;
} else {
/* v70.0.0 introduced v1.0.0 */
guc->submission_version.major = 1;
guc->submission_version.minor = 0;
guc->submission_version.patch = 0;
}
} else if (uc_fw->file_selected.ver.major >= 69) {
/* v69.0.0 introduced v0.10.0 */
guc->submission_version.major = 0;
guc->submission_version.minor = 10;
guc->submission_version.patch = 0;
} else {
/* Prior versions were v0.1.0 */
guc->submission_version.major = 0;
guc->submission_version.minor = 1;
guc->submission_version.patch = 0;
}
uc_fw->private_data_size = css->private_data_size;
}
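/*
 * Resulting mapping (summarised from the code above):
 *
 *	GuC firmware		submission API
 *	< 69.0.0		0.1.0
 *	69.x			0.10.0
 *	70.0.0 - 70.2.x		1.0.0
 *	70.3.0 - 70.5.x		1.1.0
 *	>= 70.6.0		taken from css->vf_version
 */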
static int check_ccs_header(struct intel_gt *gt,
const struct firmware *fw,
struct intel_uc_fw *uc_fw)
@ -531,16 +611,66 @@ static int check_ccs_header(struct intel_gt *gt,
return -E2BIG;
}
/* Get version numbers from the CSS header */
uc_fw->file_selected.major_ver = FIELD_GET(CSS_SW_VERSION_UC_MAJOR,
css->sw_version);
uc_fw->file_selected.minor_ver = FIELD_GET(CSS_SW_VERSION_UC_MINOR,
css->sw_version);
uc_fw->file_selected.patch_ver = FIELD_GET(CSS_SW_VERSION_UC_PATCH,
css->sw_version);
uc_unpack_css_version(&uc_fw->file_selected.ver, css->sw_version);
if (uc_fw->type == INTEL_UC_FW_TYPE_GUC)
uc_fw->private_data_size = css->private_data_size;
guc_read_css_info(uc_fw, css);
return 0;
}
static bool is_ver_8bit(struct intel_uc_fw_ver *ver)
{
return ver->major < 0xFF && ver->minor < 0xFF && ver->patch < 0xFF;
}
static bool guc_check_version_range(struct intel_uc_fw *uc_fw)
{
struct intel_guc *guc = container_of(uc_fw, struct intel_guc, fw);
/*
* GuC version number components are defined as being 8 bits wide.
* The submission code relies on this to optimise version comparison
* tests. So enforce the restriction here.
*/
if (!is_ver_8bit(&uc_fw->file_selected.ver)) {
drm_warn(&__uc_fw_to_gt(uc_fw)->i915->drm, "%s firmware: invalid file version: 0x%02X:%02X:%02X\n",
intel_uc_fw_type_repr(uc_fw->type),
uc_fw->file_selected.ver.major,
uc_fw->file_selected.ver.minor,
uc_fw->file_selected.ver.patch);
return false;
}
if (!is_ver_8bit(&guc->submission_version)) {
drm_warn(&__uc_fw_to_gt(uc_fw)->i915->drm, "%s firmware: invalid submit version: 0x%02X:%02X:%02X\n",
intel_uc_fw_type_repr(uc_fw->type),
guc->submission_version.major,
guc->submission_version.minor,
guc->submission_version.patch);
return false;
}
return true;
}
static int check_fw_header(struct intel_gt *gt,
const struct firmware *fw,
struct intel_uc_fw *uc_fw)
{
int err = 0;
/* GSC FW version is queried after the FW is loaded */
if (uc_fw->type == INTEL_UC_FW_TYPE_GSC)
return 0;
if (uc_fw->loaded_via_gsc)
err = check_gsc_manifest(fw, uc_fw);
else
err = check_ccs_header(gt, fw, uc_fw);
if (err)
return err;
return 0;
}
@ -628,31 +758,31 @@ int intel_uc_fw_fetch(struct intel_uc_fw *uc_fw)
if (err)
goto fail;
if (uc_fw->loaded_via_gsc)
err = check_gsc_manifest(fw, uc_fw);
else
err = check_ccs_header(gt, fw, uc_fw);
err = check_fw_header(gt, fw, uc_fw);
if (err)
goto fail;
if (uc_fw->file_wanted.major_ver) {
if (uc_fw->type == INTEL_UC_FW_TYPE_GUC && !guc_check_version_range(uc_fw))
goto fail;
if (uc_fw->file_wanted.ver.major && uc_fw->file_selected.ver.major) {
/* Check the file's major version was as it claimed */
if (uc_fw->file_selected.major_ver != uc_fw->file_wanted.major_ver) {
if (uc_fw->file_selected.ver.major != uc_fw->file_wanted.ver.major) {
drm_notice(&i915->drm, "%s firmware %s: unexpected version: %u.%u != %u.%u\n",
intel_uc_fw_type_repr(uc_fw->type), uc_fw->file_selected.path,
uc_fw->file_selected.major_ver, uc_fw->file_selected.minor_ver,
uc_fw->file_wanted.major_ver, uc_fw->file_wanted.minor_ver);
uc_fw->file_selected.ver.major, uc_fw->file_selected.ver.minor,
uc_fw->file_wanted.ver.major, uc_fw->file_wanted.ver.minor);
if (!intel_uc_fw_is_overridden(uc_fw)) {
err = -ENOEXEC;
goto fail;
}
} else {
if (uc_fw->file_selected.minor_ver < uc_fw->file_wanted.minor_ver)
if (uc_fw->file_selected.ver.minor < uc_fw->file_wanted.ver.minor)
old_ver = true;
}
}
if (old_ver) {
if (old_ver && uc_fw->file_selected.ver.major) {
/* Preserve the version that was really wanted */
memcpy(&uc_fw->file_wanted, &file_ideal, sizeof(uc_fw->file_wanted));
@ -660,9 +790,9 @@ int intel_uc_fw_fetch(struct intel_uc_fw *uc_fw)
"%s firmware %s (%d.%d) is recommended, but only %s (%d.%d) was found\n",
intel_uc_fw_type_repr(uc_fw->type),
uc_fw->file_wanted.path,
uc_fw->file_wanted.major_ver, uc_fw->file_wanted.minor_ver,
uc_fw->file_wanted.ver.major, uc_fw->file_wanted.ver.minor,
uc_fw->file_selected.path,
uc_fw->file_selected.major_ver, uc_fw->file_selected.minor_ver);
uc_fw->file_selected.ver.major, uc_fw->file_selected.ver.minor);
drm_info(&i915->drm,
"Consider updating your linux-firmware pkg or downloading from %s\n",
INTEL_UC_FIRMWARE_URL);
@ -814,6 +944,20 @@ static int uc_fw_xfer(struct intel_uc_fw *uc_fw, u32 dst_offset, u32 dma_flags)
return ret;
}
int intel_uc_fw_mark_load_failed(struct intel_uc_fw *uc_fw, int err)
{
struct intel_gt *gt = __uc_fw_to_gt(uc_fw);
GEM_BUG_ON(!intel_uc_fw_is_loadable(uc_fw));
i915_probe_error(gt->i915, "Failed to load %s firmware %s (%d)\n",
intel_uc_fw_type_repr(uc_fw->type), uc_fw->file_selected.path,
err);
intel_uc_fw_change_status(uc_fw, INTEL_UC_FIRMWARE_LOAD_FAIL);
return err;
}
/**
* intel_uc_fw_upload - load uC firmware using custom loader
* @uc_fw: uC firmware
@ -850,11 +994,7 @@ int intel_uc_fw_upload(struct intel_uc_fw *uc_fw, u32 dst_offset, u32 dma_flags)
return 0;
fail:
i915_probe_error(gt->i915, "Failed to load %s firmware %s (%d)\n",
intel_uc_fw_type_repr(uc_fw->type), uc_fw->file_selected.path,
err);
intel_uc_fw_change_status(uc_fw, INTEL_UC_FIRMWARE_LOAD_FAIL);
return err;
return intel_uc_fw_mark_load_failed(uc_fw, err);
}
static inline bool uc_fw_need_rsa_in_memory(struct intel_uc_fw *uc_fw)
@ -1068,7 +1208,7 @@ size_t intel_uc_fw_copy_rsa(struct intel_uc_fw *uc_fw, void *dst, u32 max_len)
*/
void intel_uc_fw_dump(const struct intel_uc_fw *uc_fw, struct drm_printer *p)
{
u32 ver_sel, ver_want;
bool got_wanted;
drm_printf(p, "%s firmware: %s\n",
intel_uc_fw_type_repr(uc_fw->type), uc_fw->file_selected.path);
@ -1077,25 +1217,32 @@ void intel_uc_fw_dump(const struct intel_uc_fw *uc_fw, struct drm_printer *p)
intel_uc_fw_type_repr(uc_fw->type), uc_fw->file_wanted.path);
drm_printf(p, "\tstatus: %s\n",
intel_uc_fw_status_repr(uc_fw->status));
ver_sel = MAKE_UC_VER(uc_fw->file_selected.major_ver,
uc_fw->file_selected.minor_ver,
uc_fw->file_selected.patch_ver);
ver_want = MAKE_UC_VER(uc_fw->file_wanted.major_ver,
uc_fw->file_wanted.minor_ver,
uc_fw->file_wanted.patch_ver);
if (ver_sel < ver_want)
if (uc_fw->file_selected.ver.major < uc_fw->file_wanted.ver.major)
got_wanted = false;
else if ((uc_fw->file_selected.ver.major == uc_fw->file_wanted.ver.major) &&
(uc_fw->file_selected.ver.minor < uc_fw->file_wanted.ver.minor))
got_wanted = false;
else if ((uc_fw->file_selected.ver.major == uc_fw->file_wanted.ver.major) &&
(uc_fw->file_selected.ver.minor == uc_fw->file_wanted.ver.minor) &&
(uc_fw->file_selected.ver.patch < uc_fw->file_wanted.ver.patch))
got_wanted = false;
else
got_wanted = true;
if (!got_wanted)
drm_printf(p, "\tversion: wanted %u.%u.%u, found %u.%u.%u\n",
uc_fw->file_wanted.major_ver,
uc_fw->file_wanted.minor_ver,
uc_fw->file_wanted.patch_ver,
uc_fw->file_selected.major_ver,
uc_fw->file_selected.minor_ver,
uc_fw->file_selected.patch_ver);
uc_fw->file_wanted.ver.major,
uc_fw->file_wanted.ver.minor,
uc_fw->file_wanted.ver.patch,
uc_fw->file_selected.ver.major,
uc_fw->file_selected.ver.minor,
uc_fw->file_selected.ver.patch);
else
drm_printf(p, "\tversion: found %u.%u.%u\n",
uc_fw->file_selected.major_ver,
uc_fw->file_selected.minor_ver,
uc_fw->file_selected.patch_ver);
uc_fw->file_selected.ver.major,
uc_fw->file_selected.ver.minor,
uc_fw->file_selected.ver.patch);
drm_printf(p, "\tuCode: %u bytes\n", uc_fw->ucode_size);
drm_printf(p, "\tRSA: %u bytes\n", uc_fw->rsa_size);
}
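
For reference, the chained checks above amount to a plain lexicographic compare of (major, minor, patch). An equivalent standalone helper could look like this (hypothetical; not part of the patch):

static bool uc_ver_at_least(const struct intel_uc_fw_ver *sel,
			    const struct intel_uc_fw_ver *want)
{
	/* Compare major, then minor, then patch, in order. */
	if (sel->major != want->major)
		return sel->major > want->major;
	if (sel->minor != want->minor)
		return sel->minor > want->minor;
	return sel->patch >= want->patch;
}

so that got_wanted == uc_ver_at_least(&uc_fw->file_selected.ver, &uc_fw->file_wanted.ver).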


@@ -61,9 +61,16 @@ enum intel_uc_fw_status {
enum intel_uc_fw_type {
INTEL_UC_FW_TYPE_GUC = 0,
INTEL_UC_FW_TYPE_HUC
INTEL_UC_FW_TYPE_HUC,
INTEL_UC_FW_TYPE_GSC,
};
#define INTEL_UC_FW_NUM_TYPES 3
struct intel_uc_fw_ver {
u32 major;
u32 minor;
u32 patch;
};
#define INTEL_UC_FW_NUM_TYPES 2
/*
* The firmware build process will generate a version header file with major and
@@ -72,9 +79,7 @@ enum intel_uc_fw_type {
*/
struct intel_uc_fw_file {
const char *path;
u16 major_ver;
u16 minor_ver;
u16 patch_ver;
struct intel_uc_fw_ver ver;
};
/*
@@ -110,11 +115,6 @@ struct intel_uc_fw {
bool loaded_via_gsc;
};
#define MAKE_UC_VER(maj, min, pat) ((pat) | ((min) << 8) | ((maj) << 16))
#define GET_UC_VER(uc) (MAKE_UC_VER((uc)->fw.file_selected.major_ver, \
(uc)->fw.file_selected.minor_ver, \
(uc)->fw.file_selected.patch_ver))
/*
* When we load the uC binaries, we pin them in a reserved section at the top of
* the GGTT, which is ~18 MBs. On multi-GT systems where the GTs share the GGTT,
@@ -205,6 +205,8 @@ static inline const char *intel_uc_fw_type_repr(enum intel_uc_fw_type type)
return "GuC";
case INTEL_UC_FW_TYPE_HUC:
return "HuC";
case INTEL_UC_FW_TYPE_GSC:
return "GSC";
}
return "uC";
}
@@ -287,6 +289,7 @@ int intel_uc_fw_upload(struct intel_uc_fw *uc_fw, u32 offset, u32 dma_flags);
int intel_uc_fw_init(struct intel_uc_fw *uc_fw);
void intel_uc_fw_fini(struct intel_uc_fw *uc_fw);
size_t intel_uc_fw_copy_rsa(struct intel_uc_fw *uc_fw, void *dst, u32 max_len);
int intel_uc_fw_mark_load_failed(struct intel_uc_fw *uc_fw, int err);
void intel_uc_fw_dump(const struct intel_uc_fw *uc_fw, struct drm_printer *p);
#endif


@@ -74,7 +74,8 @@ struct uc_css_header {
#define CSS_SW_VERSION_UC_MAJOR (0xFF << 16)
#define CSS_SW_VERSION_UC_MINOR (0xFF << 8)
#define CSS_SW_VERSION_UC_PATCH (0xFF << 0)
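/*
 * Aside: these masks decode the CSS header's sw_version word, e.g.
 * (a sketch, assuming FIELD_GET from <linux/bitfield.h> and the usual
 * css->sw_version field):
 *   major = FIELD_GET(CSS_SW_VERSION_UC_MAJOR, css->sw_version);
 *   minor = FIELD_GET(CSS_SW_VERSION_UC_MINOR, css->sw_version);
 *   patch = FIELD_GET(CSS_SW_VERSION_UC_PATCH, css->sw_version);
 */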
u32 reserved0[13];
u32 vf_version;
u32 reserved0[12];
union {
u32 private_data_size; /* only applies to GuC */
u32 reserved1;


@@ -42,8 +42,7 @@
#define GEN8_DECODE_PTE(pte) (pte & GENMASK_ULL(63, 12))
static int vgpu_gem_get_pages(
struct drm_i915_gem_object *obj)
static int vgpu_gem_get_pages(struct drm_i915_gem_object *obj)
{
struct drm_i915_private *dev_priv = to_i915(obj->base.dev);
struct intel_vgpu *vgpu;
@@ -52,8 +51,12 @@ static int vgpu_gem_get_pages(
int i, j, ret;
gen8_pte_t __iomem *gtt_entries;
struct intel_vgpu_fb_info *fb_info;
u32 page_num;
unsigned int page_num; /* limited by sg_alloc_table */
if (overflows_type(obj->base.size >> PAGE_SHIFT, page_num))
return -E2BIG;
page_num = obj->base.size >> PAGE_SHIFT;
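/*
 * overflows_type(n, v) is true when n cannot be represented by
 * typeof(v): oversized objects now fail with -E2BIG up front rather
 * than overflowing the unsigned int page count sg_alloc_table() takes.
 */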
fb_info = (struct intel_vgpu_fb_info *)obj->gvt_info;
if (drm_WARN_ON(&dev_priv->drm, !fb_info))
return -ENODEV;
@@ -66,7 +69,6 @@ static int vgpu_gem_get_pages(
if (unlikely(!st))
return -ENOMEM;
page_num = obj->base.size >> PAGE_SHIFT;
ret = sg_alloc_table(st, page_num, GFP_KERNEL);
if (ret) {
kfree(st);


@@ -1471,8 +1471,8 @@ int intel_engine_cmd_parser(struct intel_engine_cs *engine,
/* Defer failure until attempted use */
jump_whitelist = alloc_whitelist(batch_length);
shadow_addr = gen8_canonical_addr(shadow->node.start);
batch_addr = gen8_canonical_addr(batch->node.start + batch_offset);
shadow_addr = gen8_canonical_addr(i915_vma_offset(shadow));
batch_addr = gen8_canonical_addr(i915_vma_offset(batch) + batch_offset);
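/*
 * i915_vma_offset()/i915_vma_size() replace raw vma->node.start/size
 * throughout this series: with guard pages the drm_mm node is larger
 * than the object, and only node.start + guard may be handed to the
 * hardware (see the i915_vma_insert() changes later in this diff).
 */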
/*
* We use the batch length as size because the shadow object is as


@@ -183,7 +183,7 @@ i915_debugfs_describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj)
seq_printf(m, " (%s offset: %08llx, size: %08llx, pages: %s",
stringify_vma_type(vma),
vma->node.start, vma->node.size,
i915_vma_offset(vma), i915_vma_size(vma),
stringify_page_sizes(vma->resource->page_sizes_gtt,
NULL, 0));
if (i915_vma_is_ggtt(vma) || i915_vma_is_dpt(vma)) {


@@ -73,6 +73,8 @@
#include "gt/intel_gt_pm.h"
#include "gt/intel_rc6.h"
#include "pxp/intel_pxp.h"
#include "pxp/intel_pxp_debugfs.h"
#include "pxp/intel_pxp_pm.h"
#include "soc/intel_dram.h"
@@ -103,9 +105,6 @@
#include "intel_region_ttm.h"
#include "vlv_suspend.h"
/* Intel Rapid Start Technology ACPI device name */
static const char irst_name[] = "INT3392";
static const struct drm_driver i915_drm_driver;
static void i915_release_bridge_dev(struct drm_device *dev,
@@ -613,10 +612,6 @@ static int i915_driver_hw_probe(struct drm_i915_private *dev_priv)
i915_perf_init(dev_priv);
ret = intel_gt_assign_ggtt(to_gt(dev_priv));
if (ret)
goto err_perf;
ret = i915_ggtt_probe_hw(dev_priv);
if (ret)
goto err_perf;
@@ -764,6 +759,8 @@ static void i915_driver_register(struct drm_i915_private *dev_priv)
for_each_gt(gt, dev_priv, i)
intel_gt_driver_register(gt);
intel_pxp_debugfs_register(dev_priv->pxp);
i915_hwmon_register(dev_priv);
intel_display_driver_register(dev_priv);
@@ -795,6 +792,8 @@ static void i915_driver_unregister(struct drm_i915_private *dev_priv)
intel_display_driver_unregister(dev_priv);
intel_pxp_fini(dev_priv);
for_each_gt(gt, dev_priv, i)
intel_gt_driver_unregister(gt);
@@ -938,6 +937,8 @@ int i915_driver_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
if (ret)
goto out_cleanup_modeset2;
intel_pxp_init(i915);
ret = intel_modeset_init(i915);
if (ret)
goto out_cleanup_gem;
@@ -1173,6 +1174,8 @@ static int i915_drm_prepare(struct drm_device *dev)
{
struct drm_i915_private *i915 = to_i915(dev);
intel_pxp_suspend_prepare(i915->pxp);
/*
* NB intel_display_suspend() may issue new requests after we've
* ostensibly marked the GPU as ready-to-sleep here. We need to
@@ -1253,6 +1256,8 @@ static int i915_drm_suspend_late(struct drm_device *dev, bool hibernation)
disable_rpm_wakeref_asserts(rpm);
intel_pxp_suspend(dev_priv->pxp);
i915_gem_suspend_late(dev_priv);
for_each_gt(gt, dev_priv, i)
@@ -1317,7 +1322,8 @@ int i915_driver_suspend_switcheroo(struct drm_i915_private *i915,
static int i915_drm_resume(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = to_i915(dev);
int ret;
struct intel_gt *gt;
int ret, i;
disable_rpm_wakeref_asserts(&dev_priv->runtime_pm);
@@ -1332,6 +1338,11 @@ static int i915_drm_resume(struct drm_device *dev)
drm_err(&dev_priv->drm, "failed to re-enable GGTT\n");
i915_ggtt_resume(to_gt(dev_priv)->ggtt);
for_each_gt(gt, dev_priv, i)
if (GRAPHICS_VER(gt->i915) >= 8)
setup_private_pat(gt);
/* Must be called after GGTT is resumed. */
intel_dpt_resume(dev_priv);
@@ -1359,6 +1370,8 @@ static int i915_drm_resume(struct drm_device *dev)
i915_gem_resume(dev_priv);
intel_pxp_resume(dev_priv->pxp);
intel_modeset_init_hw(dev_priv);
intel_init_clock_gating(dev_priv);
intel_hpd_init(dev_priv);
@@ -1495,8 +1508,6 @@ static int i915_pm_suspend(struct device *kdev)
return -ENODEV;
}
i915_ggtt_mark_pte_lost(i915, false);
if (i915->drm.switch_power_state == DRM_SWITCH_POWER_OFF)
return 0;
@@ -1549,14 +1560,6 @@ static int i915_pm_resume(struct device *kdev)
if (i915->drm.switch_power_state == DRM_SWITCH_POWER_OFF)
return 0;
/*
* If IRST is enabled, or if we can't detect whether it's enabled,
* then we must assume we lost the GGTT page table entries, since
* they are not retained if IRST decided to enter S4.
*/
if (!IS_ENABLED(CONFIG_ACPI) || acpi_dev_present(irst_name, NULL, -1))
i915_ggtt_mark_pte_lost(i915, true);
return i915_drm_resume(&i915->drm);
}
@@ -1616,9 +1619,6 @@ static int i915_pm_restore_early(struct device *kdev)
static int i915_pm_restore(struct device *kdev)
{
struct drm_i915_private *i915 = kdev_to_i915(kdev);
i915_ggtt_mark_pte_lost(i915, true);
return i915_pm_resume(kdev);
}
@@ -1642,6 +1642,8 @@ static int intel_runtime_suspend(struct device *kdev)
*/
i915_gem_runtime_suspend(dev_priv);
intel_pxp_runtime_suspend(dev_priv->pxp);
for_each_gt(gt, dev_priv, i)
intel_gt_runtime_suspend(gt);
@@ -1746,6 +1748,8 @@ static int intel_runtime_resume(struct device *kdev)
for_each_gt(gt, dev_priv, i)
intel_gt_runtime_resume(gt);
intel_pxp_runtime_resume(dev_priv->pxp);
/*
* On VLV/CHV display interrupts are part of the display
* power well, so hpd is reinitialized from there. For


@@ -73,6 +73,7 @@ struct intel_encoder;
struct intel_limit;
struct intel_overlay_error_state;
struct vlv_s0ix_state;
struct intel_pxp;
#define I915_GEM_GPU_DOMAINS \
(I915_GEM_DOMAIN_RENDER | \
@@ -365,6 +366,8 @@ struct drm_i915_private {
struct file *mmap_singleton;
} gem;
struct intel_pxp *pxp;
u8 pch_ssc_use;
/* For i915gm/i945gm vblank irq workaround */
@@ -924,10 +927,6 @@ IS_SUBPLATFORM(const struct drm_i915_private *i915,
#define HAS_GLOBAL_MOCS_REGISTERS(dev_priv) (INTEL_INFO(dev_priv)->has_global_mocs)
#define HAS_PXP(dev_priv) ((IS_ENABLED(CONFIG_DRM_I915_PXP) && \
INTEL_INFO(dev_priv)->has_pxp) && \
VDBOX_MASK(to_gt(dev_priv)))
#define HAS_GMCH(dev_priv) (INTEL_INFO(dev_priv)->display.has_gmch)
#define HAS_GMD_ID(i915) (INTEL_INFO(i915)->has_gmd_id)


@@ -229,8 +229,9 @@ i915_gem_shmem_pread(struct drm_i915_gem_object *obj,
struct drm_i915_gem_pread *args)
{
unsigned int needs_clflush;
unsigned int idx, offset;
char __user *user_data;
unsigned long offset;
pgoff_t idx;
u64 remain;
int ret;
@@ -383,13 +384,17 @@ i915_gem_gtt_pread(struct drm_i915_gem_object *obj,
{
struct drm_i915_private *i915 = to_i915(obj->base.dev);
struct i915_ggtt *ggtt = to_gt(i915)->ggtt;
unsigned long remain, offset;
intel_wakeref_t wakeref;
struct drm_mm_node node;
void __user *user_data;
struct i915_vma *vma;
u64 remain, offset;
int ret = 0;
if (overflows_type(args->size, remain) ||
overflows_type(args->offset, offset))
return -EINVAL;
wakeref = intel_runtime_pm_get(&i915->runtime_pm);
vma = i915_gem_gtt_prepare(obj, &node, false);
@@ -540,13 +545,17 @@ i915_gem_gtt_pwrite_fast(struct drm_i915_gem_object *obj,
struct drm_i915_private *i915 = to_i915(obj->base.dev);
struct i915_ggtt *ggtt = to_gt(i915)->ggtt;
struct intel_runtime_pm *rpm = &i915->runtime_pm;
unsigned long remain, offset;
intel_wakeref_t wakeref;
struct drm_mm_node node;
struct i915_vma *vma;
u64 remain, offset;
void __user *user_data;
int ret = 0;
if (overflows_type(args->size, remain) ||
overflows_type(args->offset, offset))
return -EINVAL;
if (i915_gem_object_has_struct_page(obj)) {
/*
* Avoid waking the device up if we can fallback, as
@@ -654,8 +663,9 @@ i915_gem_shmem_pwrite(struct drm_i915_gem_object *obj,
{
unsigned int partial_cacheline_write;
unsigned int needs_clflush;
unsigned int offset, idx;
void __user *user_data;
unsigned long offset;
pgoff_t idx;
u64 remain;
int ret;
@@ -1143,6 +1153,8 @@ int i915_gem_init(struct drm_i915_private *dev_priv)
for_each_gt(gt, dev_priv, i) {
intel_uc_fetch_firmwares(&gt->uc);
intel_wopcm_init(&gt->wopcm);
if (GRAPHICS_VER(dev_priv) >= 8)
setup_private_pat(gt);
}
ret = i915_init_ggtt(dev_priv);


@@ -43,16 +43,25 @@ static bool dying_vma(struct i915_vma *vma)
return !kref_read(&vma->obj->base.refcount);
}
static int ggtt_flush(struct intel_gt *gt)
static int ggtt_flush(struct i915_address_space *vm)
{
/*
* Not everything in the GGTT is tracked via vma (otherwise we
* could evict as required with minimal stalling) so we are forced
* to idle the GPU and explicitly retire outstanding requests in
* the hopes that we can then remove contexts and the like only
* bound by their active reference.
*/
return intel_gt_wait_for_idle(gt, MAX_SCHEDULE_TIMEOUT);
struct i915_ggtt *ggtt = i915_vm_to_ggtt(vm);
struct intel_gt *gt;
int ret = 0;
list_for_each_entry(gt, &ggtt->gt_list, ggtt_link) {
/*
* Not everything in the GGTT is tracked via vma (otherwise we
* could evict as required with minimal stalling) so we are forced
* to idle the GPU and explicitly retire outstanding requests in
* the hopes that we can then remove contexts and the like only
* bound by their active reference.
*/
ret = intel_gt_wait_for_idle(gt, MAX_SCHEDULE_TIMEOUT);
if (ret)
return ret;
}
return ret;
}
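
With the media GT sharing the render GT's GGTT, a flush has to idle every GT hanging off the ggtt, not just vm->gt. A sketch of the assumed registration that populates ggtt->gt_list (hypothetical helper, mirroring what intel_gt_assign_ggtt() does elsewhere in the series):

/* Link each GT onto its (possibly shared) GGTT at init time. */
static void gt_link_ggtt(struct intel_gt *gt, struct i915_ggtt *ggtt)
{
	gt->ggtt = ggtt;
	list_add_tail(&gt->ggtt_link, &ggtt->gt_list);
}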
static bool grab_vma(struct i915_vma *vma, struct i915_gem_ww_ctx *ww)
@@ -149,6 +158,7 @@ i915_gem_evict_something(struct i915_address_space *vm,
struct drm_mm_node *node;
enum drm_mm_insert_mode mode;
struct i915_vma *active;
struct intel_gt *gt;
int ret;
lockdep_assert_held(&vm->mutex);
@@ -174,7 +184,14 @@ i915_gem_evict_something(struct i915_address_space *vm,
min_size, alignment, color,
start, end, mode);
intel_gt_retire_requests(vm->gt);
if (i915_is_ggtt(vm)) {
struct i915_ggtt *ggtt = i915_vm_to_ggtt(vm);
list_for_each_entry(gt, &ggtt->gt_list, ggtt_link)
intel_gt_retire_requests(gt);
} else {
intel_gt_retire_requests(vm->gt);
}
search_again:
active = NULL;
@@ -246,7 +263,7 @@ search_again:
if (I915_SELFTEST_ONLY(igt_evict_ctl.fail_if_busy))
return -EBUSY;
ret = ggtt_flush(vm->gt);
ret = ggtt_flush(vm);
if (ret)
return ret;
@@ -332,7 +349,15 @@ int i915_gem_evict_for_node(struct i915_address_space *vm,
* a stray pin (preventing eviction) that can only be resolved by
* retiring.
*/
intel_gt_retire_requests(vm->gt);
if (i915_is_ggtt(vm)) {
struct i915_ggtt *ggtt = i915_vm_to_ggtt(vm);
struct intel_gt *gt;
list_for_each_entry(gt, &ggtt->gt_list, ggtt_link)
intel_gt_retire_requests(gt);
} else {
intel_gt_retire_requests(vm->gt);
}
if (i915_vm_has_cache_coloring(vm)) {
/* Expand search to cover neighbouring guard pages (or lack!) */
@@ -444,7 +469,7 @@ int i915_gem_evict_vm(struct i915_address_space *vm, struct i915_gem_ww_ctx *ww,
* switch otherwise is ineffective.
*/
if (i915_is_ggtt(vm)) {
ret = ggtt_flush(vm->gt);
ret = ggtt_flush(vm);
if (ret)
return ret;
}


@@ -44,7 +44,8 @@ int i915_gem_gtt_insert(struct i915_address_space *vm,
#define PIN_HIGH BIT_ULL(5)
#define PIN_OFFSET_BIAS BIT_ULL(6)
#define PIN_OFFSET_FIXED BIT_ULL(7)
#define PIN_VALIDATE BIT_ULL(8) /* validate placement only, no need to call unpin() */
#define PIN_OFFSET_GUARD BIT_ULL(8)
#define PIN_VALIDATE BIT_ULL(9) /* validate placement only, no need to call unpin() */
#define PIN_GLOBAL BIT_ULL(10) /* I915_VMA_GLOBAL_BIND */
#define PIN_USER BIT_ULL(11) /* I915_VMA_LOCAL_BIND */


@@ -293,6 +293,10 @@ static const struct hwmon_channel_info *hwm_gt_info[] = {
/* I1 is exposed as power_crit or as curr_crit depending on bit 31 */
static int hwm_pcode_read_i1(struct drm_i915_private *i915, u32 *uval)
{
/* Avoid ILLEGAL_SUBCOMMAND "mailbox access failed" warning in snb_pcode_read */
if (IS_DG1(i915) || IS_DG2(i915))
return -ENXIO;
return snb_pcode_read_p(&i915->uncore, PCODE_POWER_SETUP,
POWER_SETUP_SUBCOMMAND_READ_I1, 0, uval);
}
@@ -355,6 +359,38 @@ hwm_power_is_visible(const struct hwm_drvdata *ddat, u32 attr, int chan)
}
}
/*
* HW allows arbitrary PL1 limits to be set but silently clamps these values to
* "typical but not guaranteed" min/max values in rg.pkg_power_sku. Follow the
* same pattern for sysfs, allow arbitrary PL1 limits to be set but display
* clamped values when read. Write/read I1 also follows the same pattern.
*/
static int
hwm_power_max_read(struct hwm_drvdata *ddat, long *val)
{
struct i915_hwmon *hwmon = ddat->hwmon;
intel_wakeref_t wakeref;
u64 r, min, max;
*val = hwm_field_read_and_scale(ddat,
hwmon->rg.pkg_rapl_limit,
PKG_PWR_LIM_1,
hwmon->scl_shift_power,
SF_POWER);
with_intel_runtime_pm(ddat->uncore->rpm, wakeref)
r = intel_uncore_read64(ddat->uncore, hwmon->rg.pkg_power_sku);
min = REG_FIELD_GET(PKG_MIN_PWR, r);
min = mul_u64_u32_shr(min, SF_POWER, hwmon->scl_shift_power);
max = REG_FIELD_GET(PKG_MAX_PWR, r);
max = mul_u64_u32_shr(max, SF_POWER, hwmon->scl_shift_power);
if (min && max)
*val = clamp_t(u64, *val, min, max);
return 0;
}
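
For reference, the scale step used above in isolation (a sketch; SF_POWER is assumed to be the microwatt scale factor, and the hardware field a fixed-point value with scl_shift_power fractional bits):

static u64 hwm_scale_power(u64 field, int scl_shift_power)
{
	/* (field * SF_POWER) >> scl_shift_power, without 64-bit overflow */
	return mul_u64_u32_shr(field, SF_POWER, scl_shift_power);
}

Out-of-range PL1 writes are still accepted; only the value read back is clamped to [min, max].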
static int
hwm_power_read(struct hwm_drvdata *ddat, u32 attr, int chan, long *val)
{
@@ -364,12 +400,7 @@ hwm_power_read(struct hwm_drvdata *ddat, u32 attr, int chan, long *val)
switch (attr) {
case hwmon_power_max:
*val = hwm_field_read_and_scale(ddat,
hwmon->rg.pkg_rapl_limit,
PKG_PWR_LIM_1,
hwmon->scl_shift_power,
SF_POWER);
return 0;
return hwm_power_max_read(ddat, val);
case hwmon_power_rated_max:
*val = hwm_field_read_and_scale(ddat,
hwmon->rg.pkg_power_sku,


@@ -192,6 +192,9 @@ i915_param_named_unsafe(huc_firmware_path, charp, 0400,
i915_param_named_unsafe(dmc_firmware_path, charp, 0400,
"DMC firmware path to use instead of the default one");
i915_param_named_unsafe(gsc_firmware_path, charp, 0400,
"GSC firmware path to use instead of the default one");
i915_param_named_unsafe(enable_dp_mst, bool, 0400,
"Enable multi-stream transport (MST) for new DisplayPort sinks. (default: true)");


@@ -64,6 +64,7 @@ struct drm_printer;
param(char *, guc_firmware_path, NULL, 0400) \
param(char *, huc_firmware_path, NULL, 0400) \
param(char *, dmc_firmware_path, NULL, 0400) \
param(char *, gsc_firmware_path, NULL, 0400) \
param(bool, memtest, false, 0400) \
param(int, mmio_debug, -IS_ENABLED(CONFIG_DRM_I915_DEBUG_MMIO), 0600) \
param(int, edp_vswing, 0, 0400) \


@@ -423,7 +423,8 @@ static const struct intel_device_info ilk_m_info = {
.has_coherent_ggtt = true, \
.has_llc = 1, \
.has_rc6 = 1, \
.has_rc6p = 1, \
/* snb does support rc6p, but enabling it causes various issues */ \
.has_rc6p = 0, \
.has_rps = true, \
.dma_mask_size = 40, \
.__runtime.ppgtt_type = INTEL_PPGTT_ALIASING, \
@@ -1125,7 +1126,7 @@ static const struct intel_gt_definition xelpmp_extra_gt[] = {
.type = GT_MEDIA,
.name = "Standalone Media GT",
.gsi_offset = MTL_MEDIA_GSI_BASE,
.engine_mask = BIT(VECS0) | BIT(VCS0) | BIT(VCS2),
.engine_mask = BIT(VECS0) | BIT(VCS0) | BIT(VCS2) | BIT(GSC0),
},
{}
};


@@ -1846,8 +1846,7 @@ static u32 *save_restore_register(struct i915_perf_stream *stream, u32 *cs,
for (d = 0; d < dword_count; d++) {
*cs++ = cmd;
*cs++ = i915_mmio_reg_offset(reg) + 4 * d;
*cs++ = intel_gt_scratch_offset(stream->engine->gt,
offset) + 4 * d;
*cs++ = i915_ggtt_offset(stream->noa_wait) + offset + 4 * d;
*cs++ = 0;
}
@@ -1880,7 +1879,13 @@ static int alloc_noa_wait(struct i915_perf_stream *stream)
MI_PREDICATE_RESULT_2_ENGINE(base) :
MI_PREDICATE_RESULT_1(RENDER_RING_BASE);
bo = i915_gem_object_create_internal(i915, 4096);
/*
* gt->scratch was being used to save/restore the GPR registers, but on
* MTL the scratch uses stolen lmem. An MI_SRM to this memory region
* causes an engine hang. Instead allocate an additional page here to
* save/restore GPR registers
*/
bo = i915_gem_object_create_internal(i915, 8192);
if (IS_ERR(bo)) {
drm_err(&i915->drm,
"Failed to allocate NOA wait batchbuffer\n");
@@ -1914,14 +1919,19 @@ retry:
goto err_unpin;
}
stream->noa_wait = vma;
#define GPR_SAVE_OFFSET 4096
#define PREDICATE_SAVE_OFFSET 4160
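/*
 * Resulting layout of the 8K object (offsets assumed from the defines
 * above): page 0 holds the noa_wait batch itself, page 1 is MI_SRM
 * scratch, with CS_GPR(i) saved at GPR_SAVE_OFFSET + 8 * i (two dwords
 * per register) and the predicate result at PREDICATE_SAVE_OFFSET.
 */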
/* Save registers. */
for (i = 0; i < N_CS_GPR; i++)
cs = save_restore_register(
stream, cs, true /* save */, CS_GPR(i),
INTEL_GT_SCRATCH_FIELD_PERF_CS_GPR + 8 * i, 2);
GPR_SAVE_OFFSET + 8 * i, 2);
cs = save_restore_register(
stream, cs, true /* save */, mi_predicate_result,
INTEL_GT_SCRATCH_FIELD_PERF_PREDICATE_RESULT_1, 1);
PREDICATE_SAVE_OFFSET, 1);
/* First timestamp snapshot location. */
ts0 = cs;
@@ -2037,10 +2047,10 @@ retry:
for (i = 0; i < N_CS_GPR; i++)
cs = save_restore_register(
stream, cs, false /* restore */, CS_GPR(i),
INTEL_GT_SCRATCH_FIELD_PERF_CS_GPR + 8 * i, 2);
GPR_SAVE_OFFSET + 8 * i, 2);
cs = save_restore_register(
stream, cs, false /* restore */, mi_predicate_result,
INTEL_GT_SCRATCH_FIELD_PERF_PREDICATE_RESULT_1, 1);
PREDICATE_SAVE_OFFSET, 1);
/* And return to the ring. */
*cs++ = MI_BATCH_BUFFER_END;
@@ -2050,7 +2060,6 @@ retry:
i915_gem_object_flush_map(bo);
__i915_gem_object_release_map(bo);
stream->noa_wait = vma;
goto out_ww;
err_unpin:
@@ -2263,7 +2272,7 @@ retry:
goto err_add_request;
err = rq->engine->emit_bb_start(rq,
vma->node.start, 0,
i915_vma_offset(vma), 0,
I915_DISPATCH_SECURE);
if (err)
goto err_add_request;
@@ -3131,8 +3140,11 @@ get_sseu_config(struct intel_sseu *out_sseu,
*/
u32 i915_perf_oa_timestamp_frequency(struct drm_i915_private *i915)
{
/* Wa_18013179988:dg2 */
if (IS_DG2(i915)) {
/*
* Wa_18013179988:dg2
* Wa_14015846243:mtl
*/
if (IS_DG2(i915) || IS_METEORLAKE(i915)) {
intel_wakeref_t wakeref;
u32 reg, shift;
@@ -4310,6 +4322,17 @@ static const struct i915_range gen12_oa_mux_regs[] = {
{}
};
/*
* Ref: 14010536224:
* 0x20cc is repurposed on MTL, so use a separate array for MTL.
*/
static const struct i915_range mtl_oa_mux_regs[] = {
{ .start = 0x0d00, .end = 0x0d04 }, /* RPM_CONFIG[0-1] */
{ .start = 0x0d0c, .end = 0x0d2c }, /* NOA_CONFIG[0-8] */
{ .start = 0x9840, .end = 0x9840 }, /* GDT_CHICKEN_BITS */
{ .start = 0x9884, .end = 0x9888 }, /* NOA_WRITE */
{}
};
static bool gen7_is_valid_b_counter_addr(struct i915_perf *perf, u32 addr)
{
return reg_in_range_table(addr, gen7_oa_b_counters);
@@ -4353,7 +4376,10 @@ static bool xehp_is_valid_b_counter_addr(struct i915_perf *perf, u32 addr)
static bool gen12_is_valid_mux_addr(struct i915_perf *perf, u32 addr)
{
return reg_in_range_table(addr, gen12_oa_mux_regs);
if (IS_METEORLAKE(perf->i915))
return reg_in_range_table(addr, mtl_oa_mux_regs);
else
return reg_in_range_table(addr, gen12_oa_mux_regs);
}
static u32 mask_reg_value(u32 reg, u32 val)
@@ -4750,6 +4776,7 @@ static void oa_init_supported_formats(struct i915_perf *perf)
break;
case INTEL_DG2:
case INTEL_METEORLAKE:
oa_format_add(perf, I915_OAR_FORMAT_A32u40_A4u32_B8_C8);
oa_format_add(perf, I915_OA_FORMAT_A24u40_A14u32_B8_C8);
break;


@@ -118,6 +118,9 @@
#define GU_CNTL _MMIO(0x101010)
#define LMEM_INIT REG_BIT(7)
#define DRIVERFLR REG_BIT(31)
#define GU_DEBUG _MMIO(0x101018)
#define DRIVERFLR_STATUS REG_BIT(31)
#define GEN6_STOLEN_RESERVED _MMIO(0x1082C0)
#define GEN6_STOLEN_RESERVED_ADDR_MASK (0xFFF << 20)
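
The new GU_DEBUG bit pairs with the existing DRIVERFLR trigger in GU_CNTL; the expected flow is roughly the following (a sketch under assumptions, not this patch's code; the timeout value is illustrative):

/* Trigger a driver-initiated FLR and wait for the status bit. */
intel_uncore_rmw(uncore, GU_CNTL, 0, DRIVERFLR);
if (intel_wait_for_register(uncore, GU_DEBUG,
			    DRIVERFLR_STATUS, DRIVERFLR_STATUS,
			    3000 /* ms, illustrative */))
	drm_err(&i915->drm, "Driver-FLR timed out\n");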


@@ -96,6 +96,11 @@ struct i915_refct_sgt *i915_rsgt_from_mm_node(const struct drm_mm_node *node,
i915_refct_sgt_init(rsgt, node->size << PAGE_SHIFT);
st = &rsgt->table;
/* restricted by sg_alloc_table */
if (WARN_ON(overflows_type(DIV_ROUND_UP_ULL(node->size, segment_pages),
unsigned int)))
return ERR_PTR(-E2BIG);
if (sg_alloc_table(st, DIV_ROUND_UP_ULL(node->size, segment_pages),
GFP_KERNEL)) {
i915_refct_sgt_put(rsgt);
@@ -177,6 +182,10 @@ struct i915_refct_sgt *i915_rsgt_from_buddy_resource(struct ttm_resource *res,
i915_refct_sgt_init(rsgt, size);
st = &rsgt->table;
/* restricted by sg_alloc_table */
if (WARN_ON(overflows_type(PFN_UP(res->size), unsigned int)))
return ERR_PTR(-E2BIG);
if (sg_alloc_table(st, PFN_UP(res->size), GFP_KERNEL)) {
i915_refct_sgt_put(rsgt);
return ERR_PTR(-ENOMEM);


@@ -145,8 +145,6 @@ bool i915_error_injected(void);
#define page_pack_bits(ptr, bits) ptr_pack_bits(ptr, bits, PAGE_SHIFT)
#define page_unpack_bits(ptr, bits) ptr_unpack_bits(ptr, bits, PAGE_SHIFT)
#define struct_member(T, member) (((T *)0)->member)
#define fetch_and_zero(ptr) ({ \
typeof(*ptr) __T = *(ptr); \
*(ptr) = (typeof(*ptr))0; \
@@ -166,7 +164,7 @@ static __always_inline ptrdiff_t ptrdiff(const void *a, const void *b)
*/
#define container_of_user(ptr, type, member) ({ \
void __user *__mptr = (void __user *)(ptr); \
BUILD_BUG_ON_MSG(!__same_type(*(ptr), struct_member(type, member)) && \
BUILD_BUG_ON_MSG(!__same_type(*(ptr), typeof_member(type, member)) && \
!__same_type(*(ptr), void), \
"pointer type mismatch in container_of()"); \
((type __user *)(__mptr - offsetof(type, member))); })
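
The dropped struct_member() duplicated the kernel's generic typeof_member() helper, which (as defined in <linux/kernel.h>) boils down to:

#define typeof_member(T, m)	typeof(((T *)0)->m)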


@@ -418,8 +418,8 @@ i915_vma_resource_init_from_vma(struct i915_vma_resource *vma_res,
i915_vma_resource_init(vma_res, vma->vm, vma->pages, &vma->page_sizes,
obj->mm.rsgt, i915_gem_object_is_readonly(obj),
i915_gem_object_is_lmem(obj), obj->mm.region,
vma->ops, vma->private, vma->node.start,
vma->node.size, vma->size);
vma->ops, vma->private, __i915_vma_offset(vma),
__i915_vma_size(vma), vma->size, vma->guard);
}
/**
@@ -447,7 +447,7 @@ int i915_vma_bind(struct i915_vma *vma,
lockdep_assert_held(&vma->vm->mutex);
GEM_BUG_ON(!drm_mm_node_allocated(&vma->node));
GEM_BUG_ON(vma->size > vma->node.size);
GEM_BUG_ON(vma->size > i915_vma_size(vma));
if (GEM_DEBUG_WARN_ON(range_overflows(vma->node.start,
vma->node.size,
@@ -569,8 +569,8 @@ void __iomem *i915_vma_pin_iomap(struct i915_vma *vma)
vma->obj->base.size);
} else if (i915_vma_is_map_and_fenceable(vma)) {
ptr = io_mapping_map_wc(&i915_vm_to_ggtt(vma->vm)->iomap,
vma->node.start,
vma->node.size);
i915_vma_offset(vma),
i915_vma_size(vma));
} else {
ptr = (void __iomem *)
i915_gem_object_pin_map(vma->obj, I915_MAP_WC);
@@ -659,22 +659,26 @@ bool i915_vma_misplaced(const struct i915_vma *vma,
if (test_bit(I915_VMA_ERROR_BIT, __i915_vma_flags(vma)))
return true;
if (vma->node.size < size)
if (i915_vma_size(vma) < size)
return true;
GEM_BUG_ON(alignment && !is_power_of_2(alignment));
if (alignment && !IS_ALIGNED(vma->node.start, alignment))
if (alignment && !IS_ALIGNED(i915_vma_offset(vma), alignment))
return true;
if (flags & PIN_MAPPABLE && !i915_vma_is_map_and_fenceable(vma))
return true;
if (flags & PIN_OFFSET_BIAS &&
vma->node.start < (flags & PIN_OFFSET_MASK))
i915_vma_offset(vma) < (flags & PIN_OFFSET_MASK))
return true;
if (flags & PIN_OFFSET_FIXED &&
vma->node.start != (flags & PIN_OFFSET_MASK))
i915_vma_offset(vma) != (flags & PIN_OFFSET_MASK))
return true;
if (flags & PIN_OFFSET_GUARD &&
vma->guard < (flags & PIN_OFFSET_MASK))
return true;
return false;
@@ -687,10 +691,11 @@ void __i915_vma_set_map_and_fenceable(struct i915_vma *vma)
GEM_BUG_ON(!i915_vma_is_ggtt(vma));
GEM_BUG_ON(!vma->fence_size);
fenceable = (vma->node.size >= vma->fence_size &&
IS_ALIGNED(vma->node.start, vma->fence_alignment));
fenceable = (i915_vma_size(vma) >= vma->fence_size &&
IS_ALIGNED(i915_vma_offset(vma), vma->fence_alignment));
mappable = vma->node.start + vma->fence_size <= i915_vm_to_ggtt(vma->vm)->mappable_end;
mappable = i915_ggtt_offset(vma) + vma->fence_size <=
i915_vm_to_ggtt(vma->vm)->mappable_end;
if (mappable && fenceable)
set_bit(I915_VMA_CAN_FENCE_BIT, __i915_vma_flags(vma));
@@ -748,15 +753,16 @@ static int
i915_vma_insert(struct i915_vma *vma, struct i915_gem_ww_ctx *ww,
u64 size, u64 alignment, u64 flags)
{
unsigned long color;
unsigned long color, guard;
u64 start, end;
int ret;
GEM_BUG_ON(i915_vma_is_bound(vma, I915_VMA_GLOBAL_BIND | I915_VMA_LOCAL_BIND));
GEM_BUG_ON(drm_mm_node_allocated(&vma->node));
GEM_BUG_ON(hweight64(flags & (PIN_OFFSET_GUARD | PIN_OFFSET_FIXED | PIN_OFFSET_BIAS)) > 1);
size = max(size, vma->size);
alignment = max(alignment, vma->display_alignment);
alignment = max_t(typeof(alignment), alignment, vma->display_alignment);
if (flags & PIN_MAPPABLE) {
size = max_t(typeof(size), size, vma->fence_size);
alignment = max_t(typeof(alignment),
@@ -767,6 +773,18 @@ i915_vma_insert(struct i915_vma *vma, struct i915_gem_ww_ctx *ww,
GEM_BUG_ON(!IS_ALIGNED(alignment, I915_GTT_MIN_ALIGNMENT));
GEM_BUG_ON(!is_power_of_2(alignment));
guard = vma->guard; /* retain guard across rebinds */
if (flags & PIN_OFFSET_GUARD) {
GEM_BUG_ON(overflows_type(flags & PIN_OFFSET_MASK, u32));
guard = max_t(u32, guard, flags & PIN_OFFSET_MASK);
}
/*
* As we align the node upon insertion, but the hardware gets
* node.start + guard, the easiest way to make that work is
* to make the guard a multiple of the alignment size.
*/
guard = ALIGN(guard, alignment);
start = flags & PIN_OFFSET_BIAS ? flags & PIN_OFFSET_MASK : 0;
GEM_BUG_ON(!IS_ALIGNED(start, I915_GTT_PAGE_SIZE));
@@ -779,11 +797,12 @@ i915_vma_insert(struct i915_vma *vma, struct i915_gem_ww_ctx *ww,
alignment = max(alignment, i915_vm_obj_min_alignment(vma->vm, vma->obj));
/* If binding the object/GGTT view requires more space than the entire
/*
* If binding the object/GGTT view requires more space than the entire
* aperture has, reject it early before evicting everything in a vain
* attempt to find space.
*/
if (size > end) {
if (size > end - 2 * guard) {
drm_dbg(&to_i915(vma->obj->base.dev)->drm,
"Attempting to bind an object larger than the aperture: request=%llu > %s aperture=%llu\n",
size, flags & PIN_MAPPABLE ? "mappable" : "total", end);
@@ -800,13 +819,23 @@ i915_vma_insert(struct i915_vma *vma, struct i915_gem_ww_ctx *ww,
if (!IS_ALIGNED(offset, alignment) ||
range_overflows(offset, size, end))
return -EINVAL;
/*
* The caller knows not of the guard added by others and
* requests for the offset of the start of its buffer
* to be fixed, which may not be the same as the position
* of the vma->node due to the guard pages.
*/
if (offset < guard || offset + size > end - guard)
return -ENOSPC;
ret = i915_gem_gtt_reserve(vma->vm, ww, &vma->node,
size, offset, color,
flags);
size + 2 * guard,
offset - guard,
color, flags);
if (ret)
return ret;
} else {
size += 2 * guard;
/*
* We only support huge gtt pages through the 48b PPGTT,
* however we also don't want to force any alignment for
@ -854,6 +883,7 @@ i915_vma_insert(struct i915_vma *vma, struct i915_gem_ww_ctx *ww,
GEM_BUG_ON(!i915_gem_valid_gtt_space(vma, color));
list_move_tail(&vma->vm_link, &vma->vm->bound_list);
vma->guard = guard;
return 0;
}
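
To picture the bookkeeping above (a sketch; the hardware only ever sees the offset past the leading guard):

/*
 *  node.start                              node.start + node.size
 *  |<- guard ->|<-------- size -------->|<- guard ->|
 *              ^
 *              i915_vma_offset(vma) == node.start + guard
 */

which is why fixed-offset pins reserve size + 2 * guard starting at offset - guard.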
@@ -906,7 +936,7 @@ rotate_pages(struct drm_i915_gem_object *obj, unsigned int offset,
struct sg_table *st, struct scatterlist *sg)
{
unsigned int column, row;
unsigned int src_idx;
pgoff_t src_idx;
for (column = 0; column < width; column++) {
unsigned int left;
@@ -1012,7 +1042,7 @@ add_padding_pages(unsigned int count,
static struct scatterlist *
remap_tiled_color_plane_pages(struct drm_i915_gem_object *obj,
unsigned int offset, unsigned int alignment_pad,
unsigned long offset, unsigned int alignment_pad,
unsigned int width, unsigned int height,
unsigned int src_stride, unsigned int dst_stride,
struct sg_table *st, struct scatterlist *sg,
@@ -1071,7 +1101,7 @@ remap_tiled_color_plane_pages(struct drm_i915_gem_object *obj,
static struct scatterlist *
remap_contiguous_pages(struct drm_i915_gem_object *obj,
unsigned int obj_offset,
pgoff_t obj_offset,
unsigned int count,
struct sg_table *st, struct scatterlist *sg)
{
@@ -1104,7 +1134,7 @@ remap_contiguous_pages(struct drm_i915_gem_object *obj,
static struct scatterlist *
remap_linear_color_plane_pages(struct drm_i915_gem_object *obj,
unsigned int obj_offset, unsigned int alignment_pad,
pgoff_t obj_offset, unsigned int alignment_pad,
unsigned int size,
struct sg_table *st, struct scatterlist *sg,
unsigned int *gtt_offset)
@@ -1544,6 +1574,8 @@ static int __i915_ggtt_pin(struct i915_vma *vma, struct i915_gem_ww_ctx *ww,
u32 align, unsigned int flags)
{
struct i915_address_space *vm = vma->vm;
struct intel_gt *gt;
struct i915_ggtt *ggtt = i915_vm_to_ggtt(vm);
int err;
do {
@@ -1559,7 +1591,8 @@ static int __i915_ggtt_pin(struct i915_vma *vma, struct i915_gem_ww_ctx *ww,
}
/* Unlike i915_vma_pin, we don't take no for an answer! */
flush_idle_contexts(vm->gt);
list_for_each_entry(gt, &ggtt->gt_list, ggtt_link)
flush_idle_contexts(gt);
if (mutex_lock_interruptible(&vm->mutex) == 0) {
/*
* We pass NULL ww here, as we don't want to unbind
@@ -2116,7 +2149,7 @@ int i915_vma_unbind_async(struct i915_vma *vma, bool trylock_vm)
if (!obj->mm.rsgt)
return -EBUSY;
err = dma_resv_reserve_fences(obj->base.resv, 1);
err = dma_resv_reserve_fences(obj->base.resv, 2);
if (err)
return -EBUSY;
