* 'intel/drm-intel-next' of ../drm-next: (755 commits)
drm/i915: Only wait on a pending flip if we intend to write to the buffer
drm/i915/dp: Sanity check eDP existence
drm/i915: Rebind the buffer if its alignment constraints changes with tiling
drm/i915: Disable GPU semaphores by default
drm/i915: Do not overflow the MMADDR write FIFO
Revert "drm/i915: fix corruptions on i8xx due to relaxed fencing"
drm/i915: Don't save/restore hardware status page address register
drm/i915: don't store the reg value for HWS_PGA
drm/i915: fix memory corruption with GM965 and >4GB RAM
Linux 2.6.38-rc7
Revert "TPM: Long default timeout fix"
drm/i915: Re-enable GPU semaphores for SandyBridge mobile
drm/i915: Replace vblank PM QoS with "Interrupt-Based AGPBUSY#"
Revert "drm/i915: Use PM QoS to prevent C-State starvation of gen3 GPU"
drm/i915: Allow relocation deltas outside of target bo
drm/i915: Silence an innocuous compiler warning for an unused variable
fs/block_dev.c: fix new kernel-doc warning
ACPI: Fix build for CONFIG_NET unset
mm: <asm-generic/pgtable.h> must include <linux/mm_types.h>
x86: Use u32 instead of long to set reset vector back to 0
...
Conflicts:
drivers/gpu/drm/i915/i915_gem.c
Somehow fixes a misrendering + hang at GDM startup on my NVA8...
My first guess would have been stale TLB entries lying around that a new
bo then accidentally inherits. That doesn't make a great deal of sense,
however, as when we mapped the pages for the new bo the TLBs would've
gotten flushed anyway.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
The immediate benefit of doing this is that on NV50 and up, the GPU
virtual address of any buffer is now constant, regardless of which
memtype it's placed in.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Upcoming patches are going to enable full support for buffers that keep
a constant GPU virtual address whenever they're validated for use by
the GPU.
In order for this to work properly while keeping support for large pages,
we need to know if it's ever going to be possible for a buffer to end
up in GART, and if so, disable large pages for the buffer's VMA.
This is a new restriction that's not present in earlier kernels, but it
should not break userspace, as the current code never attempts to validate
buffers into a memtype other than the one they were created with.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
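As an illustration of the restriction described above, the decision could
look roughly like the sketch below; the helper and its name are invented
for the example, only the TTM placement flag is real.

    #include <drm/ttm/ttm_placement.h>

    /* Hypothetical sketch: only allow large pages for a buffer's VMA if
     * the buffer can never be validated into GART (system memory), so
     * its constant GPU virtual address stays valid for every placement. */
    static bool nouveau_bo_may_use_large_pages(u32 valid_placements)
    {
            if (valid_placements & TTM_PL_FLAG_TT)  /* GART is possible */
                    return false;
            return true;
    }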
'mappable' isn't really used at all, nor is it necessary anymore, as the
bo code is capable of moving buffers to mappable vram as required.
'no_vm' isn't necessary anymore either; any place that doesn't want to be
mapped into a GPU address space should allocate the VRAM directly instead.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
In preparation for the addition of a new nv40 backend, we'll need to be
able to distinguish between a paged dma object and the on-chip GART.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
We free the temporary binding before leaving this function, so we also have
to wait for the move to actually complete.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Reported-by: Alex Buell <alex.buell@munted.org.uk>
Signed-off-by: Francisco Jerez <currojerez@riseup.net>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
As of this commit, it's guaranteed that if an object is in VRAM, its
GPU virtual address will be constant.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
This is required on nv50, as we need more precise control over physical
VRAM allocations to avoid buffer corruption when using buffers of mixed
memory types.
This removes some nasty overallocation/alignment that we were previously
using to "control" this problem.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
The point is to share more code between the PFB/PGRAPH tile region
hooks, and to give the hardware-specific functions a chance to allocate
per-region resources.
Signed-off-by: Francisco Jerez <currojerez@riseup.net>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
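As a rough illustration of the direction, the shared code could call
through a per-chipset table of tile-region hooks; the structure and its
member names below are made up for the example.

    /* Illustrative only: hardware-specific tile-region hooks the shared
     * PFB/PGRAPH code could call, letting each chipset allocate and free
     * its own per-region resources. */
    struct nouveau_tile_region_funcs {
            int  (*init_region)(struct drm_device *dev, int i, uint32_t addr,
                                uint32_t size, uint32_t pitch, uint32_t flags);
            void (*set_region)(struct drm_device *dev, int i);
            void (*free_region)(struct drm_device *dev, int i);
    };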
nouveau_bo_move_m2mf() needs to lock the kernel channel, and it may be
called from the pushbuf IOCTL with a user channel already locked. Use
a separate subclass for the kernel channel mutex because this is
legitimate mutex nesting.
Signed-off-by: Francisco Jerez <currojerez@riseup.net>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
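The nesting pattern being described uses the kernel's standard lockdep
subclass mechanism; a minimal sketch, with the subclass name and the
kchan variable invented for the example.

    /* Illustrative subclass for the kernel channel's mutex. */
    #define NOUVEAU_KCHANNEL_MUTEX_SUBCLASS  1

    /* The pushbuf ioctl already holds a user channel's mutex here.
     * Taking the kernel channel's mutex with a distinct lockdep subclass
     * marks this nesting as intentional, so lockdep doesn't flag it. */
    mutex_lock_nested(&kchan->mutex, NOUVEAU_KCHANNEL_MUTEX_SUBCLASS);
    /* ... perform the m2mf copy on the kernel channel ... */
    mutex_unlock(&kchan->mutex);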
nv0x-nv4x should be mostly fine, nv50 doesn't work yet.
Signed-off-by: Francisco Jerez <currojerez@riseup.net>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
nouveau_fence_* functions are not type-safe, which could lead to bugs.
Additionally, every use of nouveau_fence_unref had to cast struct
nouveau_fence to void **.
Fix it by renaming the old functions and creating static inline wrappers
with the new prototypes. We still need the old functions because we pass
function pointers to ttm.
As we are wrapping the functions anyway, drop the unused "void *arg"
parameter where possible.
Signed-off-by: Marcin Slusarz <marcin.slusarz@gmail.com>
Signed-off-by: Francisco Jerez <currojerez@riseup.net>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
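The shape of the change, roughly (the exact nouveau prototypes may
differ; this is only a sketch of the wrapping approach):

    /* Old, untyped entry points are kept (renamed with a prefix) because
     * TTM stores them as generic sync-object callbacks. */
    void __nouveau_fence_unref(void **sync_obj);
    void *__nouveau_fence_ref(void *sync_obj);

    /* Type-safe static inline wrappers used by the rest of the driver,
     * so callers no longer have to cast to void ** themselves. */
    static inline void nouveau_fence_unref(struct nouveau_fence **fence)
    {
            __nouveau_fence_unref((void **)fence);
    }

    static inline struct nouveau_fence *
    nouveau_fence_ref(struct nouveau_fence *fence)
    {
            return __nouveau_fence_ref(fence);
    }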
The pushbuf ioctl syncs after validation, no need for this anymore.
Signed-off-by: Francisco Jerez <currojerez@riseup.net>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
This fixes a race condition between fbcon acceleration and TTM buffer
moves. To reproduce:
- start X
- switch to vt and "while (true); do dmesg; done"
- switch to another vt and "sleep 2 && cat /path/to/debugfs/dri/0/evict_vram"
- switch back to vt running dmesg
We don't make use of this on any other channel yet; they're currently
protected by drm_global_mutex. This will change in the near future.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
This will be needed for Z compression and to take smarter placement
decisions.
Signed-off-by: Francisco Jerez <currojerez@riseup.net>
Acked-by: Ben Skeggs <bskeggs@redhat.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Nouveau will need this on GeForce 8 and up to account for the GPU
reordering physical VRAM for some memory types.
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Acked-by: Thomas Hellström <thellstrom@vmware.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Existing core code/drivers call drm_mm_put_block on ttm_mem_reg.mm_node
directly. Future patches will modify TTM behaviour in such a way that
ttm_mem_reg.mm_node doesn't necessarily belong to drm_mm.
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Acked-by: Thomas Hellström <thellstrom@vmware.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
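To make that concrete, the idea is to route the free through a helper
instead of touching drm_mm directly; a sketch along these lines (the
actual TTM helper introduced later in the series may be named
differently):

    /* Before: core code and drivers assume mm_node is a drm_mm node and
     * free it themselves. */
    drm_mm_put_block(mem->mm_node);
    mem->mm_node = NULL;

    /* After (sketch): hand the node back through TTM, so the memory-type
     * manager that allocated mm_node decides how to free it -- it no
     * longer has to be a drm_mm node at all. */
    ttm_bo_mem_put(bo, mem);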
Hopefully this one will be better able to cope with moving tiled buffers
around without getting them all scrambled as a result.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
When VRAM is running out it's possible that the client's push buffers get
evicted to main memory. When they're validated back in, the GPU may
be used for the copy back to VRAM, but the existing synchronisation code
only deals with inter-channel sync, not sync between PFIFO and PGRAPH on
the same channel. This leads to PFIFO fetching from command buffers that
haven't quite been copied by PGRAPH yet.
This patch marks push buffers as such, and forces any GPU-assisted buffer
moves to be done on a different channel, which triggers the correct
synchronisation before we submit them.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
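A simplified sketch of the channel selection this implies; the flag and
field names are illustrative, not the actual nouveau code.

    /* Sketch: pick the channel that performs a GPU-assisted move.
     * Routing a push buffer's move to a channel other than its owner
     * forces the normal inter-channel synchronisation, so PFIFO won't
     * fetch commands PGRAPH is still busy copying back into VRAM. */
    if (nvbo->is_pushbuf)                   /* hypothetical flag */
            chan = dev_priv->channel;       /* kernel channel, not the owner */
    else
            chan = nvbo->channel;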
Avoids an oops in the fence wait failure path (bug 26521).
Signed-off-by: Francisco Jerez <currojerez@riseup.net>
Tested-by: Marcin Slusarz <marcin.slusarz@gmail.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Remove the drm_resource wrappers and directly use the
actual PCI and/or platform functions in their place.
[airlied: fixup nouveau properly to build]
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>