Simply to match the GEN8 style of PPGTT initialization, split up the
allocations and mappings. Unlike GEN8, we skip a separate dma_addr_t
allocation function, as it is much simpler pre-gen8.
With this code it would be easy to make a more general PPGTT
initialization function with per GEN alloc/map/etc. or use a common
helper, similar to the ringbuffer code. I don't see a benefit to doing
this just yet, but who knows...
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This cleanup is similar to the GEN8 cleanup (though less necessary).
Having everything split will make cleaning up the initialization error
paths easier to understand.
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
I keep meaning to do this... by now almost the entire file has been
written by an Intel employee (including Daniel post-2010).
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This reverts commit 3a2ffb65ee.
Now that the code is fixed to use smaller allocations, it should be safe
to let the full GGTT be used on BDW.
The testcase for this is anything which uses more than half of the GTT,
thus eclipsing the old limit.
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The previous allocation mechanism would get 2 contiguous allocations,
one for the page directories, and one for the page tables. As each page
table is 1 page, and there are 512 of these per page directory, this
goes to 2MB. An unfriendly request at best. Worse still, our HW now
supports 4 page directories, and a 2MB allocation is not allowed.
In order to fix this, this patch splits the allocation up so that each
page table gets its own discrete allocation. There is nothing really
fancy about the patch itself; it just has to manage an extra pointer
indirection and a slightly fancier bit of logic to free up the pages.
To accommodate some of the added complexity, two new helpers are
introduced to allocate and free the page table pages.
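As a rough sketch of what such helpers can look like (the names and the
plain struct page ** indirection are illustrative, not the exact driver
code):

  #include <linux/err.h>
  #include <linux/gfp.h>
  #include <linux/mm.h>
  #include <linux/slab.h>

  /* One discrete page per page table instead of one big contiguous
   * allocation; the returned array is the extra pointer indirection. */
  static struct page **alloc_pt_pages(int count)
  {
          struct page **pt_pages;
          int i;

          pt_pages = kcalloc(count, sizeof(struct page *), GFP_KERNEL);
          if (!pt_pages)
                  return ERR_PTR(-ENOMEM);

          for (i = 0; i < count; i++) {
                  pt_pages[i] = alloc_page(GFP_KERNEL | __GFP_ZERO);
                  if (!pt_pages[i])
                          goto unwind;
          }

          return pt_pages;

  unwind:
          while (i--)
                  __free_page(pt_pages[i]);
          kfree(pt_pages);
          return ERR_PTR(-ENOMEM);
  }

  static void free_pt_pages(struct page **pt_pages, int count)
  {
          int i;

          for (i = 0; i < count; i++)
                  __free_page(pt_pages[i]);
          kfree(pt_pages);
  }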
NOTE: I really wanted to split the way we do allocations, and the way in
which we identify the page table/page directory being used. I found
splitting this functionality up to be too unwieldy. I apologize in
advance to the reviewer. I'd recommend looking at the result, rather
than the diff.
v2/NOTE2: This patch predated commit:
6f1cc99351
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date: Tue Dec 31 15:50:31 2013 +0000
drm/i915: Avoid dereference past end of page arr
It fixed the same issue as that patch, but because of the limbo state of
PPGTT, Chris' patch was merged instead. The excess churn is a result of
my using my original patch, which has my preferred naming. Primarily
act_* is changed to which_*, but it's mostly the same otherwise. I've
kept the convention Chris used for the pte wrap (I had something
slightly different, and broken - but fixable)
v3: Rename which_p[..]e to drop which_ (Chris)
Remove BUG_ON in inner loop (Chris)
Redo the pde/pdpe wrap logic (Chris)
v4: s/1MB/2MB in commit message (Imre)
Plug the gen8_pt_pages leak in both the error path and the general
free case (Imre)
v5: Rename leftover "which_" variables (Imre)
Add the pde = 0 wrap that was missed from v3 (Imre)
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
[danvet: Squash in fixup from Ben.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This patch converts insert_entries and clear_range, both functions which
are specific to the VM. These functions tend to encapsulate the gen
specific PTE writes. Passing absolute addresses to the insert_entries,
and clear_range will help make the logic clearer within the functions as
to what's going on. Currently, all callers simply do the appropriate
page shift, which IMO, ends up looking weird with an upcoming change for
the gen8 page table allocations.
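A hedged sketch of the direction (the struct and parameter names are
illustrative, not the exact vfunc table): pass byte-granular
start/length and do the page math inside the callbacks.

  #include <linux/types.h>

  struct i915_address_space;
  struct sg_table;

  /* Illustrative vtable shape: absolute addresses in, page shifting
   * done internally by each implementation. */
  struct vm_ops_sketch {
          void (*clear_range)(struct i915_address_space *vm,
                              uint64_t start, uint64_t length,
                              bool use_scratch);
          void (*insert_entries)(struct i915_address_space *vm,
                                 struct sg_table *pages,
                                 uint64_t start,
                                 unsigned cache_level); /* enum i915_cache_level in the driver */
  };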
Up until now, the PPGTT was a funky 2 level page table. GEN8 changes
this to look more like a 3 level page table, and to that extent we need
a significant amount more memory simply for the page tables. To address
this, the allocations will be split up in finer amounts.
v2: Replace size_t with uint64_t (Chris, Imre)
v3: Fix size in gen8_ppgtt_init (Ben)
Fix size in i915_gem_suspend_gtt_mappings/restore (Imre)
Reviewed-by: Imre Deak <imre.deak@intel.com> (v2)
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Like the cleanup in an earlier patch, the code becomes much more
readable and easier to extend if we extract helper functions for the
various stages of init.
Note that with this patch it becomes really simple, and tempting, to
begin using the 'goto out' idiom with explicit free/fini semantics.
I've kept the error path as similar as possible to the cleanup()
function to make sure cleanup is as robust as possible.
v2: Remove comment "NB:From here on, ppgtt->base.cleanup() should
function properly"
Update commit message to reflect above
v3: Rebased on top of bugfixes found in the previous patch by Imre
Moved number of pd pages assertion to the proper place (Imre)
v4:
Allocate dma address space for num_pd_pages, not num_pd_entries (Ben)
Don't use gen8_pt_dma_addr after free on error path (Imre)
With new fix from v4 of the previous patch.
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Create 3 clear stages in PPGTT init. This will help make upcoming
changes more readable. The 3 stages are allocation, dma mapping, and
writing the P[DT]Es.
One nice benefit of the patch is that it creates 2 very clear error
points, allocation and mapping, and avoids having to do any handling
after writing the PTEs (something which was likely buggy before). I
suspect this simplified error handling will be helpful when we move to
deferred/dynamic page table allocation and mapping.
The patch also attempts to break up some of the steps into more
logical, reviewable chunks, particularly when we free.
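A minimal sketch of the resulting shape, assuming made-up helper names;
the point is that only the first two stages can fail, and nothing needs
unwinding once the entries are written.

  /* Illustrative only; the real init is split along these lines. */
  static int ppgtt_init_sketch(struct i915_hw_ppgtt *ppgtt)
  {
          int ret;

          ret = ppgtt_alloc(ppgtt);               /* 1. allocation */
          if (ret)
                  return ret;

          ret = ppgtt_map(ppgtt);                 /* 2. dma mapping */
          if (ret) {
                  ppgtt_free(ppgtt);
                  return ret;
          }

          ppgtt_write_pdes_ptes(ppgtt);           /* 3. write the P[DT]Es, cannot fail */

          return 0;
  }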
v2: Don't call cleanup on the error path since that takes down the
drm_mm and list entry, which aren't setup at this point.
v3: Fixes addressing Imre's comments from:
<1392821989.19792.13.camel@intelbox>
Don't do dynamic allocation for the page table DMA addresses. I can't
remember why I did it in the first place. This addresses one of Imre's
other issues.
Fix error path leak of page tables.
v4: Fix the fix of the error path leak. Original fix still leaked page
tables. (Imre)
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
GEN8 never freed the PPGTT struct. As GEN8 doesn't use full PPGTT, the
leak is small and only found on a module reload, i.e. I don't think
this needs to go to stable.
v2: The very naive kfree in the gen8 ppgtt cleanup is subject to a
double free on PPGTT initialization failure (spotted by Imre). Instead,
this patch pulls the ppgtt struct freeing out of the cleanup and leaves
it to the allocators/callers, or to whoever does the last kref_put, as
is standard convention.
Reported-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
At one time it was expected to be called in multiple places by
kref_put. At the current time, however, it is all contained within
i915_gem_context.c.
This patch makes an upcoming required addition a bit nicer since it too
doesn't need to be defined in a header file.
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Add a comment next to our WIZ hashing setup to remind people about the
link between WIZ hashing disable bit and PS/WM thread counts.
Suggested-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
BSpec recommends using 8x4 hashing mode when MSAA is used. But in
practice 16x4 seems to have a slight edge in performance (on IVB and
HSW at least). So just use 16x4.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Antti Koskipää <antti.koskipaa@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
BSpec recommends using 8x4 hashing mode when MSAA is used. But in
practice 16x4 seems to have a slight edge in performance (on IVB and
HSW at least). So just use 16x4.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Antti Koskipää <antti.koskipaa@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
BSpec recommends using 8x4 hashing mode when MSAA is used. But in
practice 16x4 seems to have a slight edge in performance (on IVB and
HSW at least). So just use 16x4.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Antti Koskipää <antti.koskipaa@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Setting all of the mask bits for 3D_CHICKEN3 was required only for
pre-production hardware.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Antti Koskipää <antti.koskipaa@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Based on the name, the workaround we implement is
WaStripsFansDisableFastClipPerformanceFix. Unfortunately there's no
description in the w/a database, so this is just a guess.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Antti Koskipää <antti.koskipaa@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
On SNB we set up WaSetupGtModeTdRowDispatch:snb early in
gen6_init_clock_gating(). That sets a bit in the GEN6_GT_MODE register.
However later we go and disable all the bits in the same register. And
then we go on to set some other bit. So apparently we never actually
implemented this workaround since the "disable all bits" part was there
already before the w/a got supposedly implemented.
These are the relevant commits:
commit 6547fbdbff
Author: Daniel Vetter <daniel.vetter@ffwll.ch>
Date: Fri Dec 14 23:38:29 2012 +0100
drm/i915: Implement WaSetupGtModeTdRowDispatch
commit f8f2ac9a76
Author: Ben Widawsky <ben@bwidawsk.net>
Date: Wed Oct 3 19:34:24 2012 -0700
drm/i915: Fix GT_MODE default value
So, let's drop the "disable all bits" part, move both writes into
closer proximity to each other, and name the WIZ hashing bits
appropriately. BSpec is still a bit confused about how the bits should
actually be interpreted, but I took the description for the high bit
since the low bit part only lists values for a single bit.
Also add a comment about our choice of WIZ hashing mode.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Antti Koskipää <antti.koskipaa@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We'll need this when we merge PC8 and Runtime PM: the PC8
enable/disable functions need that lock.
Also, it's good practice to not hold a lock for longer than strictly
needed.
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Rename it to modeset_update_crtc_power_domains, since this function is
responsible for updating all the power domains of all CRTCs after a
modeset. In the future we should also run this function on all
platforms, not just Haswell.
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The i915 driver sets DRIVER_GEM unconditionally, so testing for the
feature will always fail.
Signed-off-by: Thierry Reding <treding@nvidia.com>
[danvet: Fix up conflicts.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Apparently we've missed a few more than what Fengguang's 0-day tester
recently reported in i915_irq.c ... Makes sparse happy again (ignore
some spurious stuff about ksyms of exported functions).
Cc: kbuild test robot <fengguang.wu@intel.com>
Cc: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Spotted while auditing the code for fencing issues.
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Looks like I've missed one of the potential NULL deref bugs in Jesse's
fbdev->fb embedded struct to pointer conversions. Fix it up.
This regression has been introduced in
commit 8bcd45534d
Author: Jesse Barnes <jbarnes@virtuousgeek.org>
Date: Fri Feb 7 12:10:38 2014 -0800
drm/i915: alloc intel_fb in the intel_fbdev struct
Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
One side-effect of the introduction of ppgtt was that we needed to
rebind the object into the appropriate vm (and global gtt in some
peculiar cases). For simplicity this was done twice for every object on
every call to execbuffer. However, that adds a tremendous amount of CPU
overhead (rewriting all the PTEs for all objects into WC memory) per
draw. The fix is to push all the decisions about which vm to bind into,
and when, down into the low-level bind routines through hints, rather
than performing the bind unconditionally in the execbuffer routine.
Note that this is a regression introduced in the full ppgtt feature
branch; before this we only re-bound objects when the relevant
has_(aliasing_ppgtt|global_gtt)_mapping flag was clear. But since
that's per-object and not per-vma, that optimization broke.
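Roughly, the hint idea looks like this sketch (PIN_* are the flag names
used later in this series; the helper and the mappable decision are
invented for illustration):

  #include <drm/i915_drm.h>

  /* Turn per-object execbuffer requirements into pin hints; the
   * low-level pin/bind code then decides whether any PTEs actually
   * need rewriting.  PIN_GLOBAL/PIN_MAPPABLE come from i915_drv.h. */
  static unsigned pin_flags_for_entry(const struct drm_i915_gem_exec_object2 *entry,
                                      bool needs_mappable)
  {
          unsigned flags = 0;

          if (entry->flags & EXEC_OBJECT_NEEDS_GTT)
                  flags |= PIN_GLOBAL;    /* hint: wants a global gtt binding */
          if (needs_mappable)
                  flags |= PIN_MAPPABLE;  /* hint: wants the cpu-visible aperture */

          return flags;
  }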
v2: Split out prep work and unrelated changes.
v3: Bring back functional change around PIN_GLOBAL that I've
accidentally split out.
v4: Remove the temporary hack for the old binding logic to avoid
bisection issues.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=72906
Tested-by: jianx.zhou@intel.com
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> (v1)
Cc: Ben Widawsky <benjamin.widawsky@intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Acked-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This is prep work for reworking the object_pin logic. Atm
it still does a (now redundant) lookup of the vma. The next
patch will fix this.
Split out from Chris vma-bind rework.
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ben Widawsky <benjamin.widawsky@intel.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Split out from Chris vma-bind rework.
Jani wondered why this is safe, and the reason is that i915_vma_unbind
does all these checks, too. So they're redundant.
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ben Widawsky <benjamin.widawsky@intel.com>
Cc: Jani Nikula <jani.nikula@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
There's no need not to, really.
Split out from Chris vma-bind rework.
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ben Widawsky <benjamin.widawsky@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Only the hardware really accesses them, so there's no need to have cpu
gtt access available.
Split out from Chris vma-bind rework.
Note that this is only possible due to the split-up of the mappable
pin flag into PIN_GLOBAL and PIN_MAPPABLE.
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ben Widawsky <benjamin.widawsky@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Split out from Chris vma-bind rework.
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ben Widawsky <benjamin.widawsky@intel.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We access it through the cpu window. No functional difference expected
atm since we default to a bottom-up allocation scheme. But that might
eventually change so that we prefer the unmappable range for buffers
that don't need cpu gtt access.
Split out from Chris vma-bind rework.
Note that this is only possible due to the split-up of the mappable
pin flag into PIN_GLOBAL and PIN_MAPPABLE.
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ben Widawsky <benjamin.widawsky@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Tighter code since legacy gem has only mappable anyway.
Split out from Chris vma-bind rework.
Note that this is only possible due to the split-up of the mappable
pin flag into PIN_GLOBAL and PIN_MAPPABLE.
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ben Widawsky <benjamin.widawsky@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Split out from Chris vma-bind rework.
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ben Widawsky <benjamin.widawsky@intel.com>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
With arbitrary pin flags it makes sense to split out a "please bind
this into global gtt" from the "please allocate in the mappable
range".
Use this unconditionally in our global gtt pin helper since this is
what its callers want. Later patches will drop PIN_MAPPABLE where it's
not strictly needed.
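As a hedged sketch (bit values and the helper body are illustrative):

  /* Separate "bind into the global gtt" from "allocate in the mappable
   * range"; the exact values don't matter, only that they are distinct
   * bits. */
  #define PIN_MAPPABLE    0x1
  #define PIN_NONBLOCK    0x2
  #define PIN_GLOBAL      0x4

  /* The global gtt pin helper always asks for a global binding; callers
   * may additionally pass PIN_MAPPABLE when they really need the
   * aperture. */
  static inline int
  i915_gem_obj_ggtt_pin(struct drm_i915_gem_object *obj,
                        uint32_t alignment, unsigned flags)
  {
          return i915_gem_object_pin(obj, obj_to_ggtt(obj), alignment,
                                     flags | PIN_GLOBAL);
  }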
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Anything more than just one bool parameter is a pain to read; symbolic
constants are much better.
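A quick before/after illustration (the call shape is hypothetical):

  /* before: what do 'true, false' mean at the call site? */
  ret = i915_gem_object_pin(obj, vm, alignment, true, false);

  /* after: the intent is readable at a glance */
  ret = i915_gem_object_pin(obj, vm, alignment, PIN_MAPPABLE);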
Split out from Chris' vma-binding rework patch.
v2: Undo the behaviour change in object_pin that Chris spotted.
v3: Split out misplaced hunk to handle set_cache_level errors,
spotted by Jani.
v4: Keep the current over-zealous binding logic in the execbuffer code
working with a quick hack while the overall binding code gets shuffled
around.
v5: Reorder the PIN_ flags for more natural patch splitup.
v6: Pull out the PIN_GLOBAL split-up again.
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ben Widawsky <benjamin.widawsky@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This is the same as what we do for DP connectors, so it makes things
more consistent.
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Antti Koskipää <antti.koskipaa@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Antti Koskipää <antti.koskipaa@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Atm we set the parent of the dp i2c device to be the corresponding
connector device. During driver cleanup we first remove the connector
device through intel_modeset_cleanup()->drm_sysfs_connector_remove()
and only after that the i2c device through the encoder's destroy
callback. This order is not supported by the device core and we'll get
a warning; see the bugzilla ticket below. The proper order is to
remove any child device first and only then the parent device.
The first part of the fix changes the i2c device's parent to be the drm
device. Its logical owner is not the connector anyway, but the encoder.
Since the encoder doesn't have a device object, the next best choice is
the drm device. This is the same as what we do for the sdvo i2c device
and what the nouveau driver does.
The second part creates a symlink in the connector's sysfs directory
pointing to the i2c device. This is so that we keep the current ABI,
which also makes sense in case someone wants to look up the i2c device
belonging to a specific connector.
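The symlink part boils down to the standard sysfs call; a sketch, with
the kobject plumbing simplified (the exact field paths in the driver
differ):

  #include <linux/i2c.h>
  #include <linux/kobject.h>
  #include <linux/sysfs.h>

  /* connector-dir -> i2c-device symlink, so userspace can still find
   * the i2c device below the connector although its parent is now the
   * drm device. */
  static int link_connector_to_i2c(struct kobject *connector_kobj,
                                   struct i2c_adapter *adapter)
  {
          return sysfs_create_link(connector_kobj, &adapter->dev.kobj,
                                   kobject_name(&adapter->dev.kobj));
  }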
Reference: http://lists.freedesktop.org/archives/intel-gfx/2014-January/038782.html
Reference: http://lists.freedesktop.org/archives/intel-gfx/2014-February/039427.html
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=70523
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Antti Koskipää <antti.koskipaa@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Since
commit d9255d5714
Author: Paulo Zanoni <paulo.r.zanoni@intel.com>
Date: Thu Sep 26 20:05:59 2013 -0300
it became clear that we need to separate the unload sequence into two
parts:
1. remove all interfaces through which new operations on some object
(crtc, encoder, connector) can be started and make sure all pending
operations are completed
2. do the actual tear down of the internal representation of the above
objects
The above commit achieved this separation for connectors by splitting
out the sysfs removal part from the connector's destroy callback and
doing this removal before calling drm_mode_config_cleanup() which does
the actual tear-down of all the drm objects.
Since we'll have to customize the interface removal part for different
types of connectors in the upcoming patches, add a new unregister
callback and move the interface removal part to it.
No functional change.
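Roughly, the new hook has this shape (the struct is simplified for the
sketch):

  #include <drm/drm_crtc.h>

  struct intel_connector_sketch {
          struct drm_connector base;

          /* step 1 of unload: remove userspace-visible interfaces
           * (sysfs etc.); the actual object tear-down (step 2) still
           * happens later via drm_mode_config_cleanup() */
          void (*unregister)(struct intel_connector_sketch *connector);
  };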
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Antti Koskipää <antti.koskipaa@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Coverity points out that, if we end up in the 'failed' label, that's
precisely because we couldn't retrieve a fixed mode (i.e. fixed_mode is
NULL) and then "if (fixed_mode)" is always false.
Remove that dead code.
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
It can be corrected later and may be what was actually desired, but
generally isn't, so if we find nothing is enabled, let the core DRM fb
helper figure something out.
v2: free the array too (Jesse)
Note that this also undoes any changes in case we bail out due to hw
cloning.
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This will make the code more readable, and extensible which is needed
for upcoming feature work. Eventually, we'll do the same for init.
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We have a couple of switch cases to compute the port value for the
VIDEO_DIP_CTL register. Replace them with a simple macro.
We do lose a few BUG() calls, but many people may consider that
an improvement.
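For illustration, the macro-over-switch idea looks roughly like this
(the field position shown is a sketch, not a guaranteed register
layout):

  /* Compute the VIDEO_DIP_CTL port select bits from the port enum
   * instead of a switch with one case (and a BUG()) per port. */
  #define VIDEO_DIP_PORT(port)    ((port) << 29)

  /* before: switch (port) { case PORT_B: val |= VIDEO_DIP_PORT_B; break; ... default: BUG(); }
   * after:  val |= VIDEO_DIP_PORT(port); */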
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
... past the check for DRIVER_MODESET. Avoids races with userspace
opening a master and our sarea setup.
Cc: Stéphane Marchesin <marcheu@chromium.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We assign the sarea_priv pointer only in the dma ioctl, which is
disallowed when kernel modesetting is enabled. So this is dead code.
Cc: Stéphane Marchesin <marcheu@chromium.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The BIOS or boot loader will generally create an initial display
configuration for us that includes some set of active pipes and
displays. This routine tries to figure out which pipes and connectors
are active and stuffs them into the crtcs and modes array given to us by
the drm_fb_helper code.
The overall sequence is:
intel_fbdev_init - from driver load
  intel_fbdev_init_bios - initialize the intel_fbdev using BIOS data
  drm_fb_helper_init - build fb helper structs
  drm_fb_helper_single_add_all_connectors - more fb helper structs
intel_fbdev_initial_config - apply the config
  drm_fb_helper_initial_config - call ->probe then register_framebuffer()
    drm_setup_crtcs - build crtc config for fbdev
      intel_fb_initial_config - find active connectors etc
    drm_fb_helper_single_fb_probe - set up fbdev
      intelfb_create - re-use or alloc fb, build out fbdev structs
v2: use BIOS connector config unconditionally if possible (Daniel)
check for crtc cloning and reject (Daniel)
fix up comments (Daniel)
v3: use command line args and preferred modes first (Ville)
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Tested-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
[danvet: Re-add the WARN_ON for a missing encoder crtc - the state
sanitizer should take care of this. And spell-ocd the comments.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This allows drivers to use them in custom initial_config functions.
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Acked-by: Dave Airlie <airlied@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Just a bit of polish which I hope will help me with massaging some
internal patches to use Imre's reworked pipestat handling:
- Don't check for underrun reporting or enable pipestat interrupts
twice.
- Frob the comments a bit.
- Do the iir PIPE_EVENT to pipe mapping explicitly with a switch. We
only have one place which does this, so better to make it explicit.
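The last point amounts to something like this sketch (the pipe-to-bit
mapping uses the existing i915_reg.h style names, but treat the exact
bits as illustrative):

  /* Explicit mapping from pipe to its display-event bit in IIR; only
   * one place needs this, so a switch keeps it obvious. */
  static u32 pipe_event_iir_bit(enum pipe pipe)
  {
          switch (pipe) {
          case PIPE_A:
                  return I915_DISPLAY_PIPE_A_EVENT_INTERRUPT;
          case PIPE_B:
                  return I915_DISPLAY_PIPE_B_EVENT_INTERRUPT;
          default:
                  return 0;
          }
  }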
v2: Ville noticed that I've broken the logic a bit while trying to
avoid checking whether we're interested in a given pipe twice. Push the
PIPESTAT read down to after we've computed the mask of interesting
bits, to avoid that duplication properly.
v3: Squash in fixups from Imre on irc.
Cc: Imre Deak <imre.deak@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We want to reuse this in the fbdev initial config code independently
from any fastboot hacks. So allow a bit more flexibility.
v2: Forgot to git add ...
v3: make non-static (Jesse)
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>