Commit Graph

59 Commits

Author SHA1 Message Date
Michel Thierry 24dfd0736c drm/i915: prevent out of range pt in the PDE macros (take 3)
We tried to fix this in commit fdc454c148 ("drm/i915: Prevent out of
range pt in gen6_for_each_pde").

But the static analyzer still complains that, just before we break due
to "iter < I915_PDES", we do "pt = (pd)->page_table[iter]" with an
iter value that is bigger than I915_PDES. Of course, this isn't really
a problem since no one uses pt outside the macro. Still, every single
new usage of the macro will create a new issue for us to mark as a
false positive.

Also, Paulo re-started the discussion a while ago [1], but it didn't end up
being implemented.

In order to "solve" this "problem", this patch takes the ideas from
Chris and Dave, but leaves out the suggested pointer check: that check
would change the desired behavior of the code, because the object (for
example pdp->page_directory[iter]) can be null during init/alloc, C would
treat that null as false, and the for loop would break immediately.

This has already been verified with "static analysis tools".

[1]http://lists.freedesktop.org/archives/intel-gfx/2015-June/068548.html

v2: Make it a single statement, while preventing the common subexpression
elimination (Chris)
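
Below is a minimal, standalone sketch of the idiom described above (I915_PDES
and page_table are taken from the text; the macro body is illustrative, not
the exact i915 one): the "iter < I915_PDES" bound check is evaluated before
the array lookup, and the lookup is folded into the same condition with a
comma expression so it cannot be hoisted as a common subexpression. Note that
the loop must not stop on a NULL pt, which is why the pointer itself is never
tested.

#include <stdbool.h>
#include <stdio.h>

#define I915_PDES 512

struct page_table { int dummy; };

struct page_directory {
        struct page_table *page_table[I915_PDES];
};

/* Sketch: bound check first, then the (never out-of-range) lookup. */
#define for_each_pde(pt, pd, iter)                                      \
        for ((iter) = 0;                                                \
             (iter) < I915_PDES &&                                      \
                ((pt) = (pd)->page_table[(iter)], true);                \
             (iter)++)

int main(void)
{
        static struct page_directory pd;
        struct page_table *pt;
        unsigned int iter, seen = 0;

        for_each_pde(pt, &pd, iter)
                if (!pt)
                        seen++; /* NULL entries (init/alloc time) must not stop the walk */

        printf("walked %u empty PDEs\n", seen);
        return 0;
}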

Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Dave Gordon <david.s.gordon@intel.com>
Signed-off-by: Michel Thierry <michel.thierry@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-10-06 14:15:29 +02:00
Tvrtko Ursulin dedf278ce6 drm/i915: Enable querying offset of UV plane with intel_plane_obj_offset
v2: Rebase.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-09-23 17:31:29 +02:00
Tvrtko Ursulin 89e3e14276 drm/i915: Support NV12 in rotated GGTT mapping
Just adding the rotated UV plane at the end of the rotated Y plane.

v2: Rebase.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-09-23 17:28:40 +02:00
Paulo Zanoni a9da512b3e drm/i915: avoid the last 8mb of stolen on BDW/SKL
The FBC hardware for these platforms doesn't have access to the
bios_reserved range, so it always assumes the maximum (8mb) is used.
So avoid this range while allocating.

This solves a bunch of FIFO underruns that happen if you end up
putting the CFB in that memory range. On my machine, with 32mb of
stolen, I need a 2560x1440 mode for that.

Testcase: igt/kms_frontbuffer_tracking/fbc-* (given the right setup)
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-09-23 14:39:17 +02:00
Michel Thierry 088e0df402 drm/i915/gtt: Allow >= 4GB offsets in X86_32
Similar to commit c44ef60e43 ("drm/i915/gtt:
Allow >= 4GB sizes for vm"), i915_gem_obj_offset and i915_gem_obj_ggtt_offset
return an unsigned long, which is only 4 bytes long in 32-bit kernels.

Change return type (and other related offset variables) to u64.

Since Global GTT is always limited to 4GB, this change would not be required
in i915_gem_obj_ggtt_offset, but this is done for consistency.
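
As a small, self-contained illustration of the truncation being fixed (plain
C, nothing driver-specific): on a 32-bit kernel an unsigned long is 4 bytes,
so any offset at or above 4GB wraps, while a u64 keeps the full value;
uint32_t stands in below for the 32-bit unsigned long.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uint64_t offset = 5ULL << 30;           /* a 5GiB offset in a large ppgtt */
        uint32_t truncated = (uint32_t)offset;  /* what a 32-bit unsigned long keeps */

        printf("u64 offset: 0x%" PRIx64 "\n", offset);    /* 0x140000000 */
        printf("truncated:  0x%" PRIx32 "\n", truncated); /* 0x40000000  */
        return 0;
}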

v2: Remove unnecessary offset variable in do_pin, as we already have
    vma->node.start (Chris).
    Update GGTT offset too (Tvrtko).

Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Signed-off-by: Michel Thierry <michel.thierry@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-08-14 18:16:30 +02:00
Michel Thierry 69ab76fd3d drm/i915/gen8: Initialize PDPs and PML4
Similar to PDs, while setting up a page directory pointer, make all entries
of the pdp point to the scratch pd before mapping (and make all its entries
point to the scratch page); this is to be safe in case of out-of-bound
access or proactive prefetch.

Also add a scratch pdp, which the PML4 entries point to.
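
A rough sketch of the initialization pattern described above, using
illustrative stand-in types rather than the driver's real structures: every
entry of a fresh pdp starts out pointing at the scratch pd, and every entry
of the scratch pd at the scratch page table, so a stray or prefetched lookup
lands on scratch instead of faulting.

#define PDPES_PER_PDP   4       /* 4 in 32b mode; 512 under a PML4 */
#define PDES_PER_PD     512

struct page_table { int dummy; };
struct page_directory { struct page_table *page_table[PDES_PER_PD]; };
struct page_directory_pointer { struct page_directory *page_directory[PDPES_PER_PDP]; };

/* Point every entry of a freshly created PD at the scratch page table. */
void init_pd_to_scratch(struct page_directory *pd, struct page_table *scratch_pt)
{
        for (int i = 0; i < PDES_PER_PD; i++)
                pd->page_table[i] = scratch_pt;
}

/* Point every entry of a freshly created PDP at the scratch page directory. */
void init_pdp_to_scratch(struct page_directory_pointer *pdp,
                         struct page_directory *scratch_pd)
{
        for (int i = 0; i < PDPES_PER_PDP; i++)
                pdp->page_directory[i] = scratch_pd;
}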

v2: Handle scratch_pdp allocation failure correctly, and keep
initialize_px functions together (Akash)
v3: Rebase after Mika's ppgtt cleanup / scratch merge patch series. Rely on
the added macros to initialize the pdps.
v4: Rebase after final merged version of Mika's ppgtt/scratch patches
(and removed commit message part related to v3).
v5: Update commit message to also mention PML4 table initialization and
the new scratch pdp (Akash).

Suggested-by: Akash Goel <akash.goel@intel.com>
Signed-off-by: Michel Thierry <michel.thierry@intel.com>
Reviewed-by: Akash Goel <akash.goel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-08-14 18:16:23 +02:00
Michel Thierry 762d99363d drm/i915/gen8: implement alloc/free for 4lvl
PML4 has no special attributes, and there will always be a PML4.
So simply initialize it at creation, and destroy it at the end.

The code for 4lvl is able to call into the existing 3lvl page table code
to handle all of the lower levels.

v2: Return something at the end of gen8_alloc_va_range_4lvl to keep the
compiler happy. And define ret only in one place.
Updated gen8_ppgtt_unmap_pages and gen8_ppgtt_free to handle 4lvl.
v3: Use i915_dma_unmap_single instead of pci API. Fix a
couple of incorrect checks when unmapping pdp and pd pages (Akash).
v4: Call __pdp_fini also for 32b PPGTT. Clean up alloc_pdp param list.
v5: Prevent (harmless) out of range access in gen8_for_each_pml4e.
v6: Simplify alloc_va_range_4lvl and gen8_ppgtt_init_common error
paths. (Akash)
v7: Rebase, s/gen8_ppgtt_free_*/gen8_ppgtt_cleanup_*/.
v8: Change location of pml4_init/fini. It will make next patches
cleaner.
v9: Rebase after Mika's ppgtt cleanup / scratch merge patch series, while
trying to reuse as much as possible for pdp alloc. pml4_init/fini
replaced by setup/cleanup_px macros.
v10: Rebase after Mika's merged ppgtt cleanup patch series.
v11: Rebase after final merged version of Mika's ppgtt/scratch
patches.
v12: Fix pdpe start value in trace (Akash)
v13: Define all 4lvl functions in this patch directly, instead of
previous patches, add i915_page_directory_pointer_entry_alloc here,
use test_bit to detect when pdp is already allocated (Akash).
v14: Move pdp allocation into a new gen8_ppgtt_alloc_page_dirpointers
function, as we do for pds and pts; move pd and pdp setup functions to
this patch (Akash).
v15: Added kfree(pdp) from previous patch to this (Akash).

Cc: Akash Goel <akash.goel@intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Michel Thierry <michel.thierry@intel.com> (v2+)
Reviewed-by: Akash Goel <akash.goel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-08-14 18:16:21 +02:00
Michel Thierry 81ba8aefd0 drm/i915/gen8: Add PML4 structure
Introduces the Page Map Level 4 (PML4), ie. the new top level structure
of the page tables.

To facilitate testing, 48b mode will be available on Broadwell and
GEN9+, when i915.enable_ppgtt = 3.
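
A standalone sketch of the new top level, assuming the x86-64 convention of
512 PML4 entries (the field names below are illustrative, not the driver's
exact ones), together with the opt-in gate described above:

#include <stdbool.h>

#define PML4ES_PER_PML4 512     /* 512 entries, 512GB of reach each: a 48-bit VA */

struct page_directory_pointer { int dummy; };

/* New top level of the page tables: one pdp pointer per PML4 entry. */
struct pml4 {
        struct page_directory_pointer *pdps[PML4ES_PER_PML4];
        unsigned long used_pml4es[PML4ES_PER_PML4 / (8 * sizeof(unsigned long))];
};

/* 48b mode is opt-in: Broadwell or GEN9+ only, and only with i915.enable_ppgtt=3. */
bool use_48bit_ppgtt(bool is_bdw_or_gen9plus, int enable_ppgtt)
{
        return is_bdw_or_gen9plus && enable_ppgtt == 3;
}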

v2: Remove unnecessary CONFIG_X86_64 checks, ppgtt code is already
32/64-bit safe (Chris).
v3: Add goto free_scratch in temp 48-bit mode init code (Akash).
v4: kfree the pdp until the 4lvl alloc/free patch (Akash).
v5: Postpone 48-bit code in sanitize_enable_ppgtt (Akash).
v6: Keep _insert_pte_entries changes outside this patch (Akash).

Cc: Akash Goel <akash.goel@intel.com>
Signed-off-by: Michel Thierry <michel.thierry@intel.com>
Reviewed-by: Akash Goel <akash.goel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-08-14 18:16:20 +02:00
Michel Thierry 6ac1850220 drm/i915/gen8: Make pdp allocation more dynamic
This transitional patch doesn't do much for the existing code. However,
it should make upcoming patches to use the full 48b address space a bit
easier.

32-bit ppgtt uses just 4 PDPs, while 48-bit ppgtt will have up to 512;
this patch prepares the existing functions to query the right number of pdps
at run-time. This also means that used_pdpes should be allocated during
ppgtt_init, as the bitmap size will depend on the ppgtt address range
selected.
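
A short sketch of that run-time query with hypothetical names
(GEN8_LEGACY_PDPES is the existing define; the rest is illustrative): legacy
32b mode keeps 4 PDPEs, full 48b mode uses 512, so the used_pdpes bitmap has
to be sized when the ppgtt is initialized rather than statically.

#include <stdbool.h>
#include <stdlib.h>

#define GEN8_LEGACY_PDPES       4       /* 32b ppgtt */
#define GEN8_4LVL_PDPES         512     /* 48b ppgtt (hypothetical name) */

/* How many pdp entries does this address space actually have? */
unsigned int pdpes_per_pdp(bool uses_full_48bit)
{
        return uses_full_48bit ? GEN8_4LVL_PDPES : GEN8_LEGACY_PDPES;
}

/* The used_pdpes bitmap is therefore allocated at ppgtt_init time. */
unsigned long *alloc_used_pdpes(bool uses_full_48bit)
{
        unsigned int bits = pdpes_per_pdp(uses_full_48bit);
        size_t longs = (bits + 8 * sizeof(long) - 1) / (8 * sizeof(long));

        return calloc(longs, sizeof(long));
}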

v2: Renamed pdp_free to be similar to pd/pt (unmap_and_free_pdp).
v3: To facilitate testing, 48b mode will be available on Broadwell and
GEN9+, when i915.enable_ppgtt = 3.
v4: Rebase after s/page_tables/page_table/, added extra information
about 4-level page table formats and use IS_ENABLED macro.
v5: Check CONFIG_X86_64 instead of CONFIG_64BIT.
v6: Rebase after Mika's ppgtt cleanup / scratch merge patch series, and
follow his nomenclature in pdp functions (there is no alloc_pdp yet).
v7: Rebase after merged version of Mika's ppgtt cleanup patch series.
v8: Rebase after final merged version of Mika's ppgtt/scratch patches.
v9: Introduce PML4 (and 48-bit checks) until next patch (Akash).
v10: Also use test_bit to detect when pd/pt are already allocated (Akash)

Cc: Akash Goel <akash.goel@intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Michel Thierry <michel.thierry@intel.com> (v2+)
Reviewed-by: Akash Goel <akash.goel@intel.com>
[danvet: Amend commit message as suggested by Michel.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-08-14 18:16:17 +02:00
Michel Thierry 09120d4e88 drm/i915: Remove unnecessary gen8_clamp_pd
gen8_clamp_pd clamps to the next page directory boundary, but the macro
gen8_for_each_pde already has a check to stop at the page directory
boundary.

Furthermore, i915_pte_count also restricts to the next page table
boundary.

v2: Rebase after Mika's ppgtt cleanup / scratch merge patch series.

Suggested-by: Akash Goel <akash.goel@intel.com>
Signed-off-by: Michel Thierry <michel.thierry@intel.com>
Reviewed-by: "Akash Goel" <akash.goel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-08-14 18:16:16 +02:00
Mika Kuoppala 79ab937054 drm/i915/gtt: Move scratch_pd and scratch_pt into vm struct
Scratch page is part of struct i915_address_space. Move other
scratch entities into the same struct. This is a preparatory patch
for having only one instance of each scratch_pt/pd.

v2: make commit msg more readable

Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Michel Thierry <michel.thierry@intel.com> (v1)
[danvet: Bikeshed summary to avoid confusion with vmas.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-06-26 11:06:30 +02:00
Mika Kuoppala c114f76a0a drm/i915/gtt: Make scratch page i915_page_dma compatible
Lay out the scratch page structure in a similar manner to the other
paging structures. This allows us to use the same tools for
setup and teardown.

Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Michel Thierry <michel.thierry@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-06-26 10:54:00 +02:00
Mika Kuoppala 567047be2a drm/i915/gtt: Use macros to access dma mapped pages
Add paging-structure-type-agnostic *_px macros to access the
page dma struct, the backing page and the dma address.

This makes the code less cluttered on internals of
i915_page_dma.
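
A compilable sketch of the accessor idiom (simplified types; the macro names
follow the commit, the struct layouts are illustrative): every paging
structure embeds the same page-dma base, so one set of macros reaches the
backing page and the DMA address regardless of which level it is handed.

#include <stdint.h>

struct i915_page_dma {
        void *page;     /* backing page    */
        uint64_t daddr; /* its DMA address */
};

/* Each paging structure embeds the same base... */
struct i915_page_table     { struct i915_page_dma base; /* ... */ };
struct i915_page_directory { struct i915_page_dma base; /* ... */ };

/* ...so type-agnostic accessors work on any of them. */
#define px_base(px)     (&(px)->base)
#define px_page(px)     (px_base(px)->page)
#define px_dma(px)      (px_base(px)->daddr)

uint64_t example(struct i915_page_table *pt, struct i915_page_directory *pd)
{
        return px_dma(pt) ^ px_dma(pd); /* same macro, different structures */
}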

v2: Superfluous const -> nonconst removed
v3: Rebased

Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Michel Thierry <michel.thierry@intel.com> (v2)
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-06-26 10:53:50 +02:00
Mika Kuoppala 44159ddbea drm/i915/gtt: Introduce struct i915_page_dma
All our paging structures have struct page and dma address
for that page.

Add struct for page/dma address pairs and use it to make
the setup and teardown for different paging structures
identical.

Include the page directory offset also in the struct for legacy
gens. Rename it to clearly point out that it is offset into the
ggtt.
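
A sketch of the pair being introduced, including the legacy-gen detail from
the last paragraph (the types are stand-ins for the kernel's own): the DMA
address and the GGTT offset can share storage, since a given generation only
ever uses one of the two.

#include <stdint.h>

struct page;    /* opaque stand-in for the kernel's struct page */

struct i915_page_dma {
        struct page *page;
        union {
                uint64_t daddr;         /* DMA address of the backing page      */
                uint32_t ggtt_offset;   /* legacy gens: offset into the ggtt
                                         * where this structure's entries live  */
        };
};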

v2: Add comment about ggtt_offset (Michel)

Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Michel Thierry <michel.thierry@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-06-26 10:51:04 +02:00
Mika Kuoppala d852c7bf90 drm/i915/gtt: Introduce i915_page_dir_dma_addr
The legacy mode mm switch and the execlist context assignment
need a dma address for the page directories.

Introduce a function that encapsulates the scratch_pd dma
fallback if no pd is found.
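
A minimal sketch of that fallback helper (the structure names are
illustrative): callers needing a page directory address for the mm switch or
the execlist context get the scratch pd's address whenever the slot holds no
real page directory yet.

#include <stdint.h>

struct i915_page_directory { uint64_t daddr; };

struct i915_hw_ppgtt {
        struct i915_page_directory *page_directory[4]; /* legacy: 4 PDPEs */
        struct i915_page_directory *scratch_pd;
};

/* Fall back to the scratch pd when a slot is still unpopulated. */
uint64_t i915_page_dir_dma_addr(const struct i915_hw_ppgtt *ppgtt, unsigned int n)
{
        const struct i915_page_directory *pd = ppgtt->page_directory[n];

        return pd ? pd->daddr : ppgtt->scratch_pd->daddr;
}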

v2: Rebase, s/ring/req

Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Michel Thierry <michel.thierry@intel.com> (v1)
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-06-26 10:50:52 +02:00
Mika Kuoppala c44ef60e43 drm/i915/gtt: Allow >= 4GB sizes for vm.
We can have an exactly 4GB sized ppgtt on a 32-bit system.
size_t is inadequate for this.

v2: Convert a lot more places (Daniel)

Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Michel Thierry <michel.thierry@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-06-26 10:41:13 +02:00
Tvrtko Ursulin 84fe03f7b2 drm/i915: Move rotated geometry calculations into the fill helper
This way data is available as soon as the view is passed into the call chain.

v2: Store size in bytes instead of pages under the appropriate name. (Chris Wilson)

v3: Use uint64_t instead of size_t. (Daniel Vetter)

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> (v2)
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-06-24 15:11:05 +02:00
John Harrison e85b26dc1c drm/i915: Update switch_mm() to take a request structure
Updated the switch_mm() code paths to take a request instead of a ring. This
includes the myriad *_mm_switch functions themselves and a bunch of PDP related
helper functions.

v2: Rebased to newer tree.

For: VIZ-5115
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Tomas Elf <tomas.elf@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-06-23 14:02:21 +02:00
John Harrison b3dd6b9681 drm/i915: Update ppgtt_init_ring() & context_enable() to take requests
The final step in removing the OLR from i915_gem_init_hw() is to pass the newly
allocated request structure in to each step rather than passing a ring
structure. This patch updates both i915_ppgtt_init_ring() and
i915_gem_context_enable() to take request pointers.

For: VIZ-5115
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Tomas Elf <tomas.elf@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-06-23 14:02:09 +02:00
John Harrison 4ad2fd888b drm/i915: Split i915_ppgtt_init_hw() in half - generic and per ring
The i915_gem_init_hw() function calls a bunch of smaller initialisation
functions, several of which have generic sections and per-ring sections. This
means multiple passes are done over the rings. Each pass writes data to the ring
which floats around in that ring's OLR until some random point in the future
when an add_request() is done by some random other piece of code.

This patch breaks i915_ppgtt_init_hw() in two with the per ring initialisation
now being done in i915_ppgtt_init_ring(). The ring looping is now done at the
top level in i915_gem_init_hw().

v2: Fix dumb loop variable re-use.

For: VIZ-5115
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Tomas Elf <tomas.elf@intel.com> (v1)
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-06-23 14:02:06 +02:00
Joonas Lahtinen 8bd7ef1638 drm/i915: Add a partial GGTT view type
Partial view type allows manipulating parts of huge BOs through the GGTT,
which was not previously possible due to the constraint that the whole object
had to be mapped for any access to it through the GGTT.

v2:
- Retain error value from sg_alloc_table (Tvrtko Ursulin)
- Do not zero already zeroed variable (Tvrtko Ursulin)
- Use more common variable types for page size/offset (Tvrtko Ursulin)
v3:
- Only compare additional view parameters when need to (Tvrtko Ursulin)
v4:
- Do zero out the variable that needs to be (bug introduced in v2).

Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-05-08 13:04:18 +02:00
Joonas Lahtinen 91e6711e30 drm/i915: Do not make assumptions on GGTT VMA sizes
GGTT VMA sizes might be smaller than the whole object size due to
different GGTT views.

v2:
- Separate GGTT view constraint calculations from normal view
  constraint calculations (Chris Wilson)
v3:
- Do not bother with debug wording. (Tvrtko Ursulin)
v4:
- Clearer logic for calculating map_and_fenceable (Tvrtko Ursulin)

Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
[danvet: Drop BUG_ON, it's redundant.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-05-08 13:04:17 +02:00
Daniel Vetter 70b9f6f832 drm/i915: Move i915_get_ggtt_vma_pages into ggtt_bind_vma
We have this neat abstraction between ppgtt and ggtt for (un)bind_vma
and didn't end up using it really. What a shame, so fix this and make
the ->bind_vma hook a bit more useful.

Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
2015-04-23 21:07:53 +02:00
Daniel Vetter f329f5f6eb drm/i915: Move PTE_READ_ONLY to ->pte_encode vfunc
It's only used as a flag there, so unconfuse things a bit.
Also separate the bind_vma flag space from the pte_encode flag
space in the code.

Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-04-20 08:59:14 -07:00
Daniel Vetter 777dc5bb26 drm/i915: Move vma vfuncs to address_space
They change with the address space and not with each vma, so move them
into the right pile of vfuncs. This saves 2 pointers per vma and clarifies
the code.

Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
2015-04-20 08:54:29 -07:00
Michel Thierry 33c8819f1b drm/i915/gen8: begin bitmap tracking
Like with gen6/7, we can enable bitmap tracking with all the
preallocations to make sure things actually don't blow up.

v2: Rebased to match changes from previous patches.
v3: Without teardown logic, rely on used_pdpes and used_pdes when
freeing page tables.
v4: Rebased after s/page_tables/page_table/.
v5: Rebased after page table generalizations.

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Michel Thierry <michel.thierry@intel.com> (v2+)
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-04-10 08:56:12 +02:00
Michel Thierry 09942c656b drm/i915: num_pd_pages/num_pd_entries isn't useful
These values are never quite useful for dynamic allocations of the page
tables. Getting rid of them will help prevent later confusion.

v2: Updated to use unmap_and_free_pd functions.
v3: Updated gen8_ppgtt_free after teardown logic was removed.
v4: Rebase after s/page_tables/page_table/.
v5: Keep allocating all page directories in GEN8+ systems with less
than 4GB of memory. Updated gen6_for_all_pdes.
v6: Prevent (harmless) out of range access in gen6_for_all_pdes.

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Michel Thierry <michel.thierry@intel.com> (v2+)
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-04-10 08:56:11 +02:00
Michel Thierry 7cb6d7ac63 drm/i915/gen8: Update pdp switch and point unused PDPs to scratch page
One important part of this patch is we now write a scratch page
directory into any unused PDP descriptors. This matters for 2 reasons,
first, we're not allowed to just use 0, or an invalid pointer, and second,
we must wipe out any previous contents from the last context.

The latter point only matters with full PPGTT. The former point only
affects platforms with less than 4GB of memory.
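
A sketch of the descriptor-load loop this implies, with a hypothetical
emit_pdp() helper: every PDP slot beyond the ones in use is written with the
scratch page directory's address instead of being left at zero or at whatever
the previous context programmed.

#include <stdint.h>

#define GEN8_LEGACY_PDPES 4

struct i915_page_directory { uint64_t daddr; };

/* Hypothetical helper that emits one PDP descriptor write for the context. */
void emit_pdp(unsigned int idx, uint64_t pd_addr);

void load_pdps(struct i915_page_directory *const pds[GEN8_LEGACY_PDPES],
               const struct i915_page_directory *scratch_pd)
{
        for (unsigned int i = 0; i < GEN8_LEGACY_PDPES; i++) {
                /* Unused slots must point at scratch, never at 0 or stale data. */
                uint64_t addr = pds[i] ? pds[i]->daddr : scratch_pd->daddr;

                emit_pdp(i, addr);
        }
}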

v2: Updated commit message to point that we must set unused PDPs to the
scratch page.

v3: Unmap scratch_pd in gen8_ppgtt_free.

v4: Initialize scratch_pd. (Mika)

Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Michel Thierry <michel.thierry@intel.com> (v2+)
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-04-10 08:56:10 +02:00
Michel Thierry 9271d959dc drm/i915/gen8: Add dynamic allocation macros and helper functions
Similar to gen6, we will use for_each_pde/for_each_pdpe
and pte/pde/pdpe_index to iterate over these new structures.
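
A standalone sketch of the index helpers mentioned, derived from the gen8
layout (4KB pages, 512 entries per page table and per page directory, and 4
PDPEs in legacy 32b mode); the exact driver macros may differ slightly:

#include <stdint.h>

#define GEN8_PTE_SHIFT  12      /* 4KB pages                   */
#define GEN8_PDE_SHIFT  21      /* 512 PTEs per page table     */
#define GEN8_PDPE_SHIFT 30      /* 512 PDEs per page directory */

uint32_t gen8_pte_index(uint64_t addr)
{
        return (addr >> GEN8_PTE_SHIFT) & 0x1ff;
}

uint32_t gen8_pde_index(uint64_t addr)
{
        return (addr >> GEN8_PDE_SHIFT) & 0x1ff;
}

uint32_t gen8_pdpe_index(uint64_t addr)
{
        return (addr >> GEN8_PDPE_SHIFT) & 0x3; /* 4 PDPEs in 32b mode */
}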

v2: Match trace_i915_va_teardown params
v3: Multiple rebases.
v4: Updated to use unmap_and_free_pt.
v5: teardown_va_range logic no longer needed.
v6: Rebase after s/page_tables/page_table/.
v7: Renamed commit to match what it does now (it was "Use dynamic
allocation idioms on free").
v8: Prevent (harmless) out of range access in gen8_for_each_pde and
gen8_for_each_pdpe.

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Michel Thierry <michel.thierry@intel.com> (v2+)
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
[danvet: s/BUG/WARN/]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-04-10 08:56:09 +02:00
Michel Thierry ec565b3c15 drm/i915: Remove _entry from PPGTT page structures
Let's try to keep this consistent:

Page Directory Pointer (PDP).
Page Directory (PD), also known as page directory pointer entries.
Page Table (PT), also known as page directory entries.

s/struct i915_page_table_entry/struct i915_page_table/
s/struct i915_page_directory_entry/struct i915_page_directory/
s/struct i915_page_directory_pointer_entry/struct i915_page_directory_pointer/

Suggested-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Signed-off-by: Michel Thierry <michel.thierry@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-04-10 08:56:07 +02:00
Joonas Lahtinen 9abc464854 drm/i915: Compare GGTT view structs instead of types
To allow for views where the view is not fully defined by the view type alone,
as it is for the stereo or rotated 90 degree views, change the semantics to
require the whole view structure for comparison when we match a GGTT view.

This allows parameters like an offset to be included in the view, which is
useful for e.g. partial views.
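
A plain-C sketch of the comparison change (the enum values and the params
layout here are illustrative): instead of comparing only the type, the whole
view is compared, so two views of the same type but with different parameters
no longer match the same VMA.

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

enum ggtt_view_type {
        GGTT_VIEW_NORMAL = 0,   /* zero, so "no view given" means the normal view */
        GGTT_VIEW_ROTATED,
        GGTT_VIEW_PARTIAL,
};

struct ggtt_view {
        enum ggtt_view_type type;
        struct { uint64_t offset; unsigned int size; } params;
};

bool ggtt_view_equal(const struct ggtt_view *a, const struct ggtt_view *b)
{
        if (a->type != b->type)
                return false;
        if (a->type != GGTT_VIEW_NORMAL)        /* parameters matter for non-normal views */
                return memcmp(&a->params, &b->params, sizeof(a->params)) == 0;
        return true;
}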

v3:
- Rely on ggtt_view type being 0 for non-GGTT vma's, which equals
  I915_GGTT_VIEW_NORMAL. (Daniel Vetter)
- Do not use potentially slower comparison when we only want to know if
  something is or is not a normal view.
- Rebase on top of rotated view patches. Add rotated view singleton.
- If one view is missing in comparison they're equal only if both are missing.

v4:
- Use comparison helper in obj_to_ggtt_view too. (Tvrtko Ursulin)
- Do WARN_ON if one view is NULL. (Tvrtko Ursulin)

Cc: Daniel Vetter <daniel@ffwll.ch>
Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-03-27 15:05:22 +01:00
Michel Thierry 4933d51955 drm/i915: Finish gen6/7 dynamic page table allocation
This patch continues on the idea from "Track GEN6 page table usage".
From here on, in the steady state, PDEs are all pointing to the scratch
page table (as recommended in the spec). When an object is allocated in
the VA range, the code will determine if we need to allocate a page for
the page table. Similarly, when the object is destroyed, we will remove
and free the page table, pointing the PDE back to the scratch page.

Following patches will work to unify the code a bit as we bring in GEN8
support. GEN6 and GEN8 are different enough that I had a hard time
getting to this point with as much common code as I did.

The aliasing PPGTT must pre-allocate all of the page tables. There are a
few reasons for this. Two trivial ones: the aliasing ppgtt goes through the
ggtt paths, so it is hard to maintain; and we currently do not restore the
default context (assuming the previous force reload is indeed
necessary). Most importantly though, the only way (it seems from
empirical evidence) to invalidate the CS TLBs on non-render ring is to
either use ring sync (which requires actually stopping the rings in
order to synchronize when the sync completes vs. where you are in
execution), or to reload DCLV.  Since without full PPGTT we do not ever
reload the DCLV register, there is no good way to achieve this. The
simplest solution is just to not support dynamic page table
creation/destruction in the aliasing PPGTT.

We could always reload DCLV, but this seems like quite a bit of excess
overhead only to save at most 2MB-4k of memory for the aliasing PPGTT
page tables.
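
A condensed sketch of the steady state this describes (alloc_pt() and
write_pde() are stand-ins, and the error unwind is omitted): PDEs rest on the
scratch page table, and a real page table is only allocated, and its PDE
rewritten, when an object lands in that PDE's range.

#include <errno.h>
#include <stdint.h>

#define I915_PDES       512
#define GEN6_PDE_SHIFT  22      /* each gen6 page table covers 4MB of VA */

struct page_table;
struct page_directory { struct page_table *page_table[I915_PDES]; };

/* Stand-ins for the real page table allocator and PDE writer. */
struct page_table *alloc_pt(void);
void write_pde(struct page_directory *pd, unsigned int pde, struct page_table *pt);

int alloc_va_range(struct page_directory *pd, struct page_table *scratch_pt,
                   uint64_t start, uint64_t length)
{
        unsigned int first = start >> GEN6_PDE_SHIFT;
        unsigned int last = (start + length - 1) >> GEN6_PDE_SHIFT;

        for (unsigned int pde = first; pde <= last; pde++) {
                if (pd->page_table[pde] != scratch_pt)
                        continue;       /* already backed by a real page table */

                struct page_table *pt = alloc_pt();
                if (!pt)
                        return -ENOMEM; /* real code unwinds the partial allocation */

                pd->page_table[pde] = pt;
                write_pde(pd, pde, pt);
        }
        return 0;
}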

v2: Make the page table bitmap declared inside the function (Chris)
Simplify the way scratching address space works.
Move the alloc/teardown tracepoints up a level in the call stack so that
both all implementations get the trace.

v3: Updated trace event to spit out a name

v4: Aliasing ppgtt is now initialized differently (in setup global gtt)

v5: Rebase to latest code. Also removed unnecessary aliasing ppgtt check
for trace, as it is no longer possible after the PPGTT cleanup patch series
of a couple of months ago (Daniel).

v6: Implement changes from code review (Daniel):
 - allocate/teardown_va_range calls added.
 - Add a scratch page allocation helper (only need the address).
 - Move trace events to a new patch.
 - Use updated mark_tlbs_dirty.
 - Moved pt preallocation for aliasing ppgtt into gen6_ppgtt_init.

v7: teardown_va_range removed (Daniel).
    In init, gen6_ppgtt_clear_range call is only needed for aliasing ppgtt.

v8: Rebase after s/page_tables/page_table/.

v9: Remove unnecessary scratch flag in page_table struct, future patches
can just compare against ppgtt->scratch_pt, and alloc_pt_scratch becomes
redundant. Initialize scratch_pt and pt. (Mika)

v10: Clean up aliasing ppgtt init error path and prevent leaking the
ppgtt obj when init fails. (Mika)
Updated commit author. (Daniel)

Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Michel Thierry <michel.thierry@intel.com> (v4+)
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-03-27 09:25:26 +01:00
Michel Thierry fdc454c148 drm/i915: Prevent out of range pt in gen6_for_each_pde
Found by a static analysis tool; this was harmless, though, as the pt was
not used out of scope.

Introduced by commit 678d96fbb3
("drm/i915: Track GEN6 page table usage").

Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Michel Thierry <michel.thierry@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-03-27 09:24:55 +01:00
Tvrtko Ursulin 50470bb011 drm/i915/skl: Support secondary (rotated) frame buffer mapping
90/270 rotated scanout needs a rotated GTT view of the framebuffer.

This is put in a separate VMA with a dedicated ggtt view and wired such that
it is created when a framebuffer is pinned to a 90/270 rotated plane.

Rotation is only possible with Yb/Yf buffers and an error is propagated to
user space in case of a mismatch.

Special rotated page view is constructed at the VMA creation time by
borrowing the DMA addresses from obj->pages.

v2:
    * Do not bother with pages for rotated sg list, just populate the DMA
      addresses. (Daniel Vetter)
    * Checkpatch cleanup.

v3:
    * Rebased on top of new plane handling (create rotated mapping when
      setting the rotation property).
    * Unpin rotated VMA on unpinning from display plane.
    * Simplify rotation check using bitwise AND. (Chris Wilson)

v4:
    * Fix unpinning of optional rotated mapping so it is really considered
      to be optional.

v5:
   * Rebased for fb modifier changes.
   * Rebased for atomic commit.
   * Only pin needed view for display. (Ville Syrjälä, Daniel Vetter)

v6:
   * Rebased after preparatory work has been extracted out. (Daniel Vetter)

v7:
   * Slightly simplified tiling geometry calculation.
   * Moved rotated GGTT view implementation into i915_gem_gtt.c (Daniel Vetter)

v8:
   * Do not use i915_gem_obj_size to get object size since that actually
     returns the size of a VMA which may not exist.
   * Rebased for ggtt view changes.

v9:
   * Rebased after code review changes on the preceding patches.
   * Tidy function definitions. (Joonas Lahtinen)

For: VIZ-4726
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Michel Thierry <michel.thierry@intel.com> (v4)
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-03-23 15:06:31 +01:00
Ben Widawsky 563222a745 drm/i915: Track page table reload need
This patch was formerly known as, "Force pd restore when PDEs change,
gen6-7." I had to change the name because it is needed for GEN8 too.

The real issue this is trying to solve is when a new object is mapped
into the current address space. The GPU does not snoop the new mapping
so we must do the gen specific action to reload the page tables.

GEN8 and GEN7 do differ in the way they load page tables for the RCS.
GEN8 does so with the context restore, while GEN7 requires the proper
load commands in the command streamer. Non-render is similar for both.

Caveat for GEN7
The docs say you cannot change the PDEs of a currently running context.
We never map new PDEs of a running context, and expect them to be
present - so I think this is okay. (We can unmap, but this should also
be okay since we only unmap unreferenced objects that the GPU shouldn't
be trying to va->pa xlate.) The MI_SET_CONTEXT command does have a flag
to signal that even if the context is the same, force a reload. It's
unclear exactly what this does, but I have a hunch it's the right thing
to do.

The logic assumes that we always emit a context switch after mapping new
PDEs, and before we submit a batch. This is the case today, and has been
the case since the inception of hardware contexts. A note in the comment
lets the user know.

It's not just for gen8. If the current context has its mappings changed, we
need a context reload to switch.
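
A tiny sketch of the dirty-ring bookkeeping this leads to (the names follow
the revision notes below; the types are stand-ins): mapping new PDEs marks
every ring dirty, and the pre-batch check forces a PD reload only if that
ring's bit is still set, clearing it once done.

#include <stdbool.h>
#include <stdint.h>

struct i915_hw_ppgtt {
        uint32_t pd_dirty_rings;        /* one bit per ring */
};

/* Called after new page tables / PDEs are mapped into the address space. */
void mark_tlbs_dirty(struct i915_hw_ppgtt *ppgtt, uint32_t all_rings_mask)
{
        ppgtt->pd_dirty_rings = all_rings_mask;
}

/* Checked before a batch is submitted on a given ring. */
bool needs_pd_load(struct i915_hw_ppgtt *ppgtt, unsigned int ring_id)
{
        if (!(ppgtt->pd_dirty_rings & (1u << ring_id)))
                return false;

        ppgtt->pd_dirty_rings &= ~(1u << ring_id);      /* reload once per ring */
        return true;
}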

v2: Rebased after ppgtt clean up patches. Split the warning for aliasing
and true ppgtt options. And do not break aliasing ppgtt, where to->ppgtt
is always null.

v3: Invalidate PPGTT TLBs inside alloc_va_range.

v4: Rename ppgtt_invalidate_tlbs to mark_tlbs_dirty and move
pd_dirty_rings from i915_address_space to i915_hw_ppgtt. Fixes when
neither ctx->ppgtt and aliasing_ppgtt exist.

v5: Removed references to teardown_va_range.

v6: Updated needs_pd_load_pre/post.

v7: Fix pd_dirty_rings check in needs_pd_load_post, and update/move
comment about updated PDEs to object_pin/bind (Mika).

Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Michel Thierry <michel.thierry@intel.com> (v2+)
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-03-20 11:48:18 +01:00
Ben Widawsky 678d96fbb3 drm/i915: Track GEN6 page table usage
Instead of implementing the full tracking + dynamic allocation, this
patch does a bit less than half of the work, by tracking and warning on
unexpected conditions. The tracking itself follows which PTEs within a
page table are currently being used for objects. The next patch will
modify this to actually allocate the page tables only when necessary.

With the current patch there isn't much in the way of making a gen
agnostic range allocation function. However, in the next patch we'll add
more specificity which makes having separate functions a bit easier to
manage.

One important change introduced here is that DMA mappings are
created/destroyed at the same time as page directories/tables are
allocated/deallocated.

Notice that aliasing PPGTT is not managed here. The patch which actually
begins dynamic allocation/teardown explains the reasoning for this.
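
A compilable sketch of the per-page-table tracking being added (the bitmap
handling is simplified compared to the kernel helpers): each page table
carries a bitmap of its in-use PTEs, and a VA range is converted into a
(first PTE, PTE count) pair that never crosses the page table's 4MB boundary.

#include <stdint.h>

#define PAGE_SHIFT      12
#define GEN6_PDE_SHIFT  22      /* 4MB of VA per gen6 page table */
#define PTES_PER_PT     (1u << (GEN6_PDE_SHIFT - PAGE_SHIFT))  /* 1024 */

struct i915_page_table {
        unsigned long used_ptes[PTES_PER_PT / (8 * sizeof(unsigned long))];
};

/* Index of the PTE covering addr, within its page table. */
uint32_t pte_index(uint64_t addr)
{
        return (addr >> PAGE_SHIFT) & (PTES_PER_PT - 1);
}

/* PTEs needed for [addr, addr + length), clamped to this page table. */
uint32_t pte_count(uint64_t addr, uint64_t length)
{
        uint32_t first = pte_index(addr);
        uint64_t end = addr + length;           /* length assumed non-zero */

        if ((addr >> GEN6_PDE_SHIFT) != ((end - 1) >> GEN6_PDE_SHIFT))
                return PTES_PER_PT - first;     /* range spills into the next PDE */

        return pte_index(end - 1) - first + 1;
}

void mark_ptes_used(struct i915_page_table *pt, uint32_t first, uint32_t count)
{
        for (uint32_t i = first; i < first + count; i++)
                pt->used_ptes[i / (8 * sizeof(unsigned long))] |=
                        1ul << (i % (8 * sizeof(unsigned long)));
}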

v2: s/pdp.page_directory/pdp.page_directories
Make a scratch page allocation helper

v3: Rebase and expand commit message.

v4: Allocate required pagetables only when it is needed, _bind_to_vm
instead of bind_vma (Daniel).

v5: Rebased to remove the unnecessary noise in the diff, also:
 - PDE mask is GEN agnostic, renamed GEN6_PDE_MASK to I915_PDE_MASK.
 - Removed unnecessary checks in gen6_alloc_va_range.
 - Changed map/unmap_px_single macros to use dma functions directly and
   be part of a static inline function instead.
 - Moved drm_device plumbing through page tables operation to its own
   patch.
 - Moved allocate/teardown_va_range calls until they are fully
   implemented (in subsequent patch).
 - Merged pt and scratch_pt unmap_and_free path.
 - Moved scratch page allocator helper to the patch that will use it.

v6: Reduce complexity by not tearing down pagetables dynamically, the
same can be achieved while freeing empty vms. (Daniel)

v7: s/i915_dma_map_px_single/i915_dma_map_single
s/gen6_write_pdes/gen6_write_pde
Prevent a NULL case when only GGTT is available. (Mika)

v8: Rebased after s/page_tables/page_table/.

v9: Reworked i915_pte_index and i915_pte_count.
Also exercise bitmap allocation here (gen6_alloc_va_range) and fix
incorrect write_page_range in i915_gem_restore_gtt_mappings (Mika).

Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Michel Thierry <michel.thierry@intel.com> (v3+)
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-03-20 11:48:18 +01:00
Michel Thierry 07749ef32c drm/i915: page table generalizations
No functional changes, but it improves code clarity and removes some
duplicated defines.

Signed-off-by: Michel Thierry <michel.thierry@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-03-20 11:48:17 +01:00
Ben Widawsky 06fda602db drm/i915: Create page table allocators
As we move toward dynamic page table allocation, it becomes much easier
to manage our data structures if we do things less coarsely by
breaking up all of our actions into individual tasks. This makes the
code easier to write, read, and verify.

Aside from the dissection of the allocation functions, the patch
statically allocates the page table structures without a page directory.
This remains the same for all platforms.

The patch itself should not have much functional difference. The primary
noticeable difference is the fact that page tables are no longer
allocated, but rather statically declared as part of the page directory.
This has non-zero overhead, but things gain additional complexity as a
result.

This patch exists for a few reasons:
1. Splitting out the functions allows easily combining GEN6 and GEN8
code. Page tables are no different on GEN8. As we'll see in a
future patch when we add the DMA mappings to the allocations, it
requires only one small change to make work, and error handling should
just fall into place.

2. Unless we always want to allocate all page tables under a given PDE,
we'll have to eventually break this up into an array of pointers (or
pointer to pointer).

3. Having the discrete functions is easier to review, and understand.
All allocations and frees now take place in just a couple of locations.
Reviewing, and catching leaks should be easy.

4. Less important: the GFP flags are confined to one location, which
makes playing around with such things trivial.
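
A hedged sketch of the split-out allocator pair described in point 3 above,
approximated in standard C (calloc/free stand in for the kernel's page and
slab allocators): each level gets a small, symmetric alloc/free helper, so
every allocation and free sits in one obvious place.

#include <stdlib.h>

#define PAGE_SIZE 4096

struct i915_page_table {
        void *page;     /* stands in for the backing struct page */
};

/* Allocate one page table plus its backing page; NULL on failure. */
struct i915_page_table *alloc_pt(void)
{
        struct i915_page_table *pt = calloc(1, sizeof(*pt));

        if (!pt)
                return NULL;

        pt->page = calloc(1, PAGE_SIZE);
        if (!pt->page) {
                free(pt);
                return NULL;
        }
        return pt;
}

/* Symmetric teardown; tolerates NULL like the kernel helpers do. */
void unmap_and_free_pt(struct i915_page_table *pt)
{
        if (!pt)
                return;
        free(pt->page);
        free(pt);
}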

v2: Updated commit message to explain why this patch exists

v3: For lrc, s/pdp.page_directory[i].daddr/pdp.page_directory[i]->daddr/

v4: Renamed free_pt/pd_single functions to unmap_and_free_pt/pd (Daniel)

v5: Added additional safety checks in gen8 clear/free/unmap.

v6: Use WARN_ON and return -EINVAL in alloc_pt_range (Mika).

v7: Make err_out loop symmetrical to the way we allocate in
alloc_pt_range. Also s/page_tables/page_table and correct commit
message (Mika)

Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Michel Thierry <michel.thierry@intel.com> (v3+)
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-02-25 16:53:43 +01:00
Ben Widawsky 7324cc0491 drm/i915: Complete page table structures
Move the remaining members over to the new page table structures.

This can be squashed with the previous commit if desired. The reasoning
is the same as for that patch. I simply felt it is easier to review if split.

v2: In lrc: s/ppgtt->pd_dma_addr[i]/ppgtt->pdp.page_directory[i].daddr/
v3: Rebase.
v4: Rebased after s/page_tables/page_table/.

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Michel Thierry <michel.thierry@intel.com> (v2+)
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-02-25 16:53:07 +01:00
Ben Widawsky d7b3de9121 drm/i915: page table abstractions
When we move to dynamic page allocation, keeping page_directory and page tables as
separate structures will help to break actions into simpler tasks.

To help transition the code nicely there is some wasted space in gen6/7.
This will be ameliorated shortly.

Following the x86 pagetable terminology:
PDPE = struct i915_page_directory_pointer_entry.
PDE = struct i915_page_directory_entry [page_directory].
PTE = struct i915_page_table_entry [page_tables].
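
A minimal sketch of the layering just listed, using the gen8 sizes (4 PDPEs,
512 PDEs per directory). The field contents are illustrative; the real
structs also carry DMA bookkeeping for the backing pages.

#define GEN8_LEGACY_PDPES       4
#define GEN8_PDES_PER_PD        512

struct page;    /* opaque stand-in for the kernel's struct page */

struct i915_page_table_entry {
        struct page *page;                      /* one page of PTEs */
};

struct i915_page_directory_entry {
        struct page *page;                      /* one page of PDEs */
        struct i915_page_table_entry page_tables[GEN8_PDES_PER_PD];
};

struct i915_page_directory_pointer_entry {
        struct i915_page_directory_entry page_directory[GEN8_LEGACY_PDPES];
};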

v2: fixed mismatches after clean-up/rebase.

v3: Clarify the names of the multiple levels of page tables (Daniel)

v4: Addressing Mika's review comments.
s/gen8_free_page_directories/gen8_free_page_directory and free the
page tables for the directory there.
In gen8_ppgtt_allocate_page_directories, do not leak previously allocated
pt in case the page_directory alloc fails.
Update error return handling in gen8_ppgtt_alloc.

v5: Do not leak pt on error in gen6_ppgtt_allocate_page_tables. (Mika)

v6: s/page_tables/page_table/. (Mika)

Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Michel Thierry <michel.thierry@intel.com> (v2+)
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-02-25 16:52:34 +01:00
Ben Widawsky 17d5538d54 drm/i915/gen8: Un-hardcode number of page directories
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Michel Thierry <michel.thierry@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-02-13 23:28:12 +01:00
Ben Widawsky 766436004b drm/i915: Rename to GEN8_LEGACY_PDPES
In gen8, 32b PPGTT has always had one "pdp" (it doesn't actually have
one, but it resembles having one). The #define was confusing as is, and
using "PDPE" is a much better description.

sed -i 's/GEN8_LEGACY_PDPS/GEN8_LEGACY_PDPES/' drivers/gpu/drm/i915/*.[ch]

It also matches the x86 pagetable terminology:
PTE  = Page Table Entry - pagetable level 1 page
PDE  = Page Directory Entry - pagetable level 2 page
PDPE = Page Directory Pointer Entry - pagetable level 3 page

And in the near future (for 48b addressing):
PML4E = Page Map Level 4 Entry

v2: Expanded information about Page Directory/Table nomenclature.

Cc: Daniel Vetter <daniel@ffwll.ch>
CC: Dave Gordon <david.s.gordon@intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Michel Thierry <michel.thierry@intel.com> (v2)
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-02-13 23:28:11 +01:00
Tvrtko Ursulin fe14d5f4e5 drm/i915: Infrastructure for supporting different GGTT views per object
Things like reliable GGTT mappings and mirrored 2d-on-3d display will need
to map objects into the same address space multiple times.

Added a GGTT view concept and linked it with the VMA to distinguish between
multiple instances per address space.

New objects and GEM functions which do not take this new view as a parameter
assume the default of zero (I915_GGTT_VIEW_NORMAL) which preserves the
previous behaviour.

This now means that objects can have multiple VMA entries so the code which
assumed there will only be one also had to be modified.

Alternative GGTT views are supposed to borrow DMA addresses from obj->pages
which is DMA mapped on first VMA instantiation and unmapped on the last one
going away.

v2:
    * Removed per view special casing in i915_gem_ggtt_prepare /
      finish_object in favour of creating and destroying DMA mappings
      on first VMA instantiation and last VMA destruction. (Daniel Vetter)
    * Simplified i915_vma_unbind which does not need to count the GGTT views.
      (Daniel Vetter)
    * Also moved obj->map_and_fenceable reset under the same check.
    * Checkpatch cleanups.

v3:
    * Only retire objects once the last VMA is unbound.

v4:
    * Keep scatter-gather table for alternative views persistent for the
      lifetime of the VMA.
    * Propagate binding errors to callers and handle appropriately.

v5:
    * Explicitly look for normal GGTT view in i915_gem_obj_bound to align
      usage in i915_gem_object_ggtt_unpin. (Michel Thierry)
    * Change to single if statement in i915_gem_obj_to_ggtt. (Michel Thierry)
    * Removed stray semi-colon in i915_gem_object_set_cache_level.

For: VIZ-4544
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Michel Thierry <michel.thierry@intel.com>
[danvet: Drop hunk from i915_gem_shrink since it's just prettification
but upsets a __must_check warning.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-12-15 11:25:04 +01:00
John Harrison 97b2a6a10a drm/i915: Replace last_[rwf]_seqno with last_[rwf]_req
The object structure contains the last read, write and fenced seqno values for
use in synchronisation operations. These have now been replaced with their
request structure counterparts.

Note that to ensure that objects do not end up with dangling pointers, the
assignments of last_*_req include reference count updates. Thus a request cannot
be freed if an object is still hanging on to it for any reason.

v2: Corrected 'last_rendering_' to 'last_read_' in a number of comments that did
not get updated when 'last_rendering_seqno' became 'last_read|write_seqno'
several millennia ago.

For: VIZ-4377
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-12-03 09:35:14 +01:00
Daniel Vetter 4feb765943 drm/i915: Remove user pinning code
Now unused.

Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
2014-12-03 09:35:11 +01:00
Daniel Vetter f548c0e9d4 drm/i915: Can i915_gem_init_ioctl
Found one more!

With this we can clear up the ggtt init code a bit, yay!

Acked-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
2014-11-20 13:03:31 +01:00
Tvrtko Ursulin aff437667b drm/i915: Move flags describing VMA mappings into the VMA
If these flags are on the object level it will be more difficult to allow
for multiple VMAs per object.

v2: Simplification and cleanup after code review comments (Chris Wilson).

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-11-04 14:04:51 +01:00
McAulay, Alistair 6689c167ae drm/i915: Rework GPU reset sequence to match driver load & thaw
This patch is to address Daniel's concerns over different code during reset:

http://lists.freedesktop.org/archives/intel-gfx/2014-June/047758.html

"The reason for aiming as hard as possible to use the exact same code for
driver load, gpu reset and runtime pm/system resume is that we've simply
seen too many bugs due to slight variations and unintended omissions."

Tested using igt drv_hangman.

V2: Cleaner way of preventing check_wedge returning -EAGAIN
V3: Clean the last_context during reset, to ensure do_switch() does the MI_SET_CONTEXT. As per review.
Signed-off-by: McAulay, Alistair <alistair.mcaulay@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
[danvet: Rebase over ctx->ppgtt rework and extend the comment in
check_wedge a bit.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-09-03 10:54:09 +02:00
Daniel Vetter 90d0a0e8d0 drm/i915: Extract common global gtt cleanup code
We want to move the aliasing ppgtt cleanup back into the global
gtt cleanup code for symmetry, but first we need to create such
a place.

Reviewed-by: Michel Thierry <michel.thierry@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-08-13 14:23:35 +02:00
Daniel Vetter 82460d9724 drm/i915: Rework ppgtt init to not require an aliasing ppgtt
Currently we abuse the aliasing ppgtt to set up the ppgtt support in
general, which is a bit backwards since with full ppgtt we don't ever
need the aliasing ppgtt.

So untangle this and separate the ppgtt init from the aliasing
ppgtt. While at it drag it out of the context enabling (which just
does a switch to the default context).

Note that we still have the differentiation between synchronous and
asynchronous ppgtt setup, but that will soon vanish. So also correctly
wire up the return value handling to be prepared for when ->switch_mm
drops the synchronous parameter and could start to fail.

Reviewed-by: Michel Thierry <michel.thierry@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-08-13 14:23:31 +02:00