msm_iommu_new() can fail and this change makes sure that we
detect the failure and free the allocated domain before going
any further.
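A minimal sketch of the shape of the fix (assuming the usual
iommu_domain_alloc()/iommu_domain_free() pairing; whether msm_iommu_new()
reports failure via NULL or an ERR_PTR is not spelled out here, so treat
the error check itself as illustrative):

  struct iommu_domain *iommu;
  struct msm_mmu *mmu;

  iommu = iommu_domain_alloc(&platform_bus_type);
  if (!iommu)
      return -ENOMEM;

  mmu = msm_iommu_new(dev, iommu);
  if (IS_ERR(mmu)) {
      /* don't leak the domain we just allocated */
      iommu_domain_free(iommu);
      return PTR_ERR(mmu);
  }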
Signed-off-by: Stephane Viau <sviau@codeaurora.org>
Signed-off-by: Rob Clark <robdclark@gmail.com>
Track the list of in-flight submits. If the gpu hangs, retire up to and
including the offending submit, and then re-submit the remainder. This
way, for concurrently running piglit tests (for example), one failing
test doesn't cause unrelated tests to fail simply because its submit
was queued up after one that triggered a hang.
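Roughly, recovery walks the in-flight list in order: everything up to and
including the submit that matches the hung fence gets retired, and anything
queued behind it gets fed back to the ringbuffer. A simplified sketch (the
submit_list/node fields, hung_fence, and the resubmit hook are illustrative
names, not the exact driver structures):

  struct msm_gem_submit *submit, *tmp;

  /* retire everything up to and including the offending submit */
  list_for_each_entry_safe(submit, tmp, &gpu->submit_list, node) {
      if (submit->fence > hung_fence)
          break;
      list_del(&submit->node);
      /* ... normal retire/free path for this submit ... */
  }

  /* then replay whatever was queued up behind it */
  list_for_each_entry(submit, &gpu->submit_list, node)
      resubmit(gpu, submit);    /* illustrative: re-writes the cmds into the rb */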
Signed-off-by: Rob Clark <robdclark@gmail.com>
As found in apq8016 (used in DragonBoard 410c) and msm8916.
Note that numerically a306 is actually 307 (since a305c already claimed
306). Nice and confusing.
Signed-off-by: Rob Clark <robdclark@gmail.com>
A few spots in the driver have support for the downstream Android
CONFIG_MSM_BUS_SCALING. This is mainly to simplify backporting the
driver for various devices which do not have sufficient upstream
kernel support. But the intentionally dead code seems to cause
some confusion. Rename the #define to make this more clear.
Signed-off-by: Rob Clark <robdclark@gmail.com>
Somewhere along the way, the firmware loader sprouted another lock
dependency, resulting in a possible deadlock scenario:
&dev->struct_mutex --> &sb->s_type->i_mutex_key#2 --> &mm->mmap_sem
which is problematic vs things like gem mmap.
So introduce a separate mutex to synchronize gpu init.
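The effect is that request_firmware() (and the i_mutex/mmap_sem chain it
pulls in) no longer runs under dev->struct_mutex; a dedicated lock covers
only the one-time gpu init. A rough sketch of the idea, with an
illustrative init-only lock and an approximate load helper name:

  static DEFINE_MUTEX(init_lock);    /* illustrative: guards lazy gpu init only */

  static void load_gpu(struct drm_device *dev)
  {
      struct msm_drm_private *priv = dev->dev_private;

      mutex_lock(&init_lock);
      if (!priv->gpu)
          priv->gpu = adreno_load_gpu(dev);  /* firmware loading happens in here */
      mutex_unlock(&init_lock);
  }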
Signed-off-by: Rob Clark <robdclark@gmail.com>
Downstream kernel IOMMU had a non-standard way of dealing with multiple
devices and multiple ports/contexts. We don't need that on upstream
kernel, so rip out the crazy.
Note that we have to move the pinning of the ringbuffer to after the
IOMMU is attached. No idea how that managed to work properly on the
downstream kernel.
For now, I am leaving the IOMMU port name stuff in place, to simplify
things for folks trying to backport latest drm/msm to device kernels.
Once we no longer have to care about pre-DT kernels, we can drop this
and instead backport upstream IOMMU driver.
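The one ordering constraint worth noting: the ringbuffer bo cannot be
pinned (i.e. given a GPU iova) until after the IOMMU is attached, so the
init path ends up shaped roughly like this (helper names only approximate
the driver's):

  /* attach the mmu first ... */
  ret = mmu->funcs->attach(mmu, iommu_ports, ARRAY_SIZE(iommu_ports));
  if (ret)
      return ret;

  /* ... and only then pin the ringbuffer to get its iova */
  ret = msm_gem_get_iova(gpu->rb->bo, gpu->id, &gpu->rb_iova);
  if (ret)
      return ret;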
Signed-off-by: Rob Clark <robdclark@gmail.com>
To ease debugging, add a debugfs file which can be cat/tail'd to log
submits, along with the fence #. If the GPU hangs, you can look at the
'gpu' debugfs file to find the last completed fence and current register
state, and compare with the logged rd file to narrow down the DRAW_INDX
which
triggered the GPU hang.
Signed-off-by: Rob Clark <robdclark@gmail.com>
Shut down the clks when the gpu has nothing to do. A short inactivity
timer is used to provide a low-pass filter for power transitions.
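Concretely, every submit (and every retire that leaves work pending) just
re-arms a timer; only if it actually expires do we shut the clks down, so
back-to-back submits never bounce the power state. Sketch (the period and
field names here are illustrative):

  /* re-armed on every submit/retire; expiry means "idle long enough" */
  mod_timer(&gpu->inactive_timer,
            round_jiffies_up(jiffies + msecs_to_jiffies(DRV_MSM_INACTIVE_PERIOD)));

Since clk_disable_unprepare() may sleep, the expiry handler would punt the
actual clk gating to a worker rather than doing it in timer context.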
Signed-off-by: Rob Clark <robdclark@gmail.com>
Because we use a list_head in the bo to track its position in a submit,
we need to serialize at a higher layer. Otherwise there are problems
when multiple contexts SUBMIT cmdstreams in parallel that reference a
shared bo.
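The constraint comes from the bo itself: it has a single list_head that
chains it onto the submit currently referencing it, so it can only sit on
one in-flight submit's bo list at a time. Roughly (field names approximate
the driver's):

  struct msm_gem_object {
      struct drm_gem_object base;
      /* ... */
      /* a bo can be on at most one submit's bo_list at a time, so two
       * contexts building submits that share this bo must be serialized
       */
      struct list_head submit_entry;
  };

  /* in the submit path, under the higher-level lock: */
  list_add_tail(&msm_obj->submit_entry, &submit->bo_list);

Hence serializing the whole SUBMIT path rather than individual bo
operations.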
Signed-off-by: Rob Clark <robdclark@gmail.com>
Add a VRAM carveout that is used for systems which do not have an IOMMU.
The VRAM carveout uses CMA. The arch code must setup a CMA pool for the
device (preferably in highmem.. a 256m-512m VRAM pool in lowmem is not
cool). The user can configure the VRAM pool size using msm.vram module
param.
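For illustration, the module parameter plus CMA-backed allocation could
look something like this (a sketch assuming a plain dma_alloc_coherent()
against the device's CMA area; the msm.vram parameter name is from above,
the priv->vram fields and default are illustrative):

  static char *vram = "16m";    /* illustrative default */
  MODULE_PARM_DESC(vram, "VRAM carveout size, only used when no IOMMU is present");
  module_param(vram, charp, 0);

  /* at load time: parse the size and grab the pool from the device's CMA area */
  size = memparse(vram, NULL);
  priv->vram.size = size;
  p = dma_alloc_coherent(dev->dev, size, &priv->vram.paddr, GFP_KERNEL);
  if (!p)
      return -ENOMEM;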
Technically, the abstraction of IOMMU behind msm_mmu is not strictly
needed, but it simplifies the GEM code a bit, and will be useful later
when I add support for a2xx devices with GPUMMU, so I decided to keep
this part.
It appears to be possible to configure the GPU to restrict access to
addresses within the VRAM pool, but this is not done yet. So for now
the GPU will refuse to load if there is no sort of mmu. Once
address-based limits are supported and tested to confirm that we aren't
giving the GPU access to arbitrary memory, this restriction can be lifted.
Signed-off-by: Rob Clark <robdclark@gmail.com>
This got a bit broken with the original patches when re-arranging things
to move the dependencies on mach-msm inside #ifndef OF.
Signed-off-by: Rob Clark <robdclark@gmail.com>
Re-arrange things a bit so that we can get work requested after a bo
fence passes, like pageflip, done before retiring bo's. Without any
sort of bo cache in userspace, some games can trigger hundreds of
transient bo's, which can cause retire to take a long time (5-10ms).
Obviously we want a bo cache.. but this cleanup will make things a
bit easier for atomic as well and makes things a bit cleaner.
Signed-off-by: Rob Clark <robdclark@gmail.com>
Acked-by: David Brown <davidb@codeaurora.org>
Occasionally we seem to miss an IRQ from the ME (microengine). I'm not
entirely sure of the root cause, but for now we can unwedge things by
retiring from the hangcheck timer.
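In other words the hangcheck timer does double duty: it still detects real
lockups, but it also kicks the normal retire path so a submit whose
completion IRQ was lost still gets retired. A sketch in the spirit of the
hangcheck handler (field names approximate; re-arming of the timer itself
is omitted):

  static void hangcheck_handler(unsigned long data)
  {
      struct msm_gpu *gpu = (struct msm_gpu *)data;
      struct msm_drm_private *priv = gpu->dev->dev_private;
      uint32_t fence = gpu->funcs->last_fence(gpu);

      if (fence != gpu->hangcheck_fence) {
          /* some progress has been made.. not hung */
          gpu->hangcheck_fence = fence;
      } else if (fence < gpu->submitted_fence) {
          /* no progress, but work still outstanding: recover */
          queue_work(priv->wq, &gpu->recover_work);
      }

      /* workaround for the missed irq: retire whatever did complete */
      queue_work(priv->wq, &gpu->retire_work);
  }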
Signed-off-by: Rob Clark <robdclark@gmail.com>
If the gpu locks up with the rptr shortly beyond the wrap-around point in
the ringbuffer, then because the rptr is not reset (while the wptr is, by
virtue of resetting rb->cur), we could end up in a scenario where we think
there is not enough space in the ringbuffer for the next cmds. And
since the CP won't reset rptr until after processing an IB, this leaves
things in a sort of deadlock.
So reset rptr too. And a bit more spiffing up of hangcheck to make
things easier to debug.
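For reference, the free-space computation that gets wedged is the usual
circular-buffer one (a sketch, sizes in words):

  /* words free in the ring; one word is kept empty to tell full from empty */
  static uint32_t ring_freewords(uint32_t size, uint32_t wptr, uint32_t rptr)
  {
      return (rptr + (size - 1) - wptr) % size;
  }

With wptr reset to 0 but a stale rptr sitting just past the wrap point
(say rptr == 2), this reports roughly one word free, forever, which is
exactly the wedge described above; resetting rptr along with rb->cur
clears it.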
Signed-off-by: Rob Clark <robdclark@gmail.com>
The userspace API already had everything needed to handle read vs write
synchronization. This patch actually bothers to hook it up properly, so
that we don't need to (for example) stall on userspace read access to a
buffer that the gpu is also still reading.
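Conceptually the bo just tracks its last read fence and last write fence
separately, and the cpu-prep path picks which one to wait on based on the
requested access. A sketch (the MSM_PREP_* flags are the existing uapi;
the wait helper name is illustrative):

  uint32_t fence = 0;

  /* a CPU read only needs to wait for the GPU's last *write* ... */
  if (op & MSM_PREP_READ)
      fence = msm_obj->write_fence;
  /* ... while a CPU write also has to wait for outstanding GPU reads */
  if (op & MSM_PREP_WRITE)
      fence = max(fence, msm_obj->read_fence);

  ret = msm_wait_fence(dev, fence, timeout);    /* illustrative wait helper */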
Signed-off-by: Rob Clark <robdclark@gmail.com>
A basic, no-frills recovery mechanism in case the gpu gets wedged. We
could try to be a bit more fancy and restart the next submit after the
one that got wedged, but for now keep it simple. This is enough to
recover things if, for example, the gpu hangs midway through a piglit
run.
Signed-off-by: Rob Clark <robdclark@gmail.com>
Add initial support for a3xx 3d core.
So far, with the hardware that I've seen, we can have:
+ zero, one, or two z180 2d cores
+ a3xx or a2xx 3d core, which share a common CP (the firmware
for the CP seems to implement some different PM4 packet types
but the basics of cmdstream submission are the same)
Which means that the eventual complete "class" hierarchy, once
support for all past and present hw is in place, becomes:
+ msm_gpu
+ adreno_gpu
+ a3xx_gpu
+ a2xx_gpu
+ z180_gpu
This commit splits out the parts that will eventually be common
between a2xx/a3xx into adreno_gpu, and the parts that are even
common to z180 into msm_gpu.
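In C terms the hierarchy is just struct embedding plus container_of()
downcasts, roughly:

  struct msm_gpu {
      /* common to all cores: ringbuffer, fences, mmu, clks, ... */
      const struct msm_gpu_funcs *funcs;
  };

  struct adreno_gpu {
      struct msm_gpu base;      /* a2xx/a3xx common: CP, PM4 cmdstream, ... */
  };

  struct a3xx_gpu {
      struct adreno_gpu base;   /* a3xx-specific bits */
  };

  #define to_adreno_gpu(x) container_of(x, struct adreno_gpu, base)
  #define to_a3xx_gpu(x)   container_of(x, struct a3xx_gpu, base)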
Note that there is no cmdstream validation required. All memory access
from the GPU is via IOMMU/MMU. So as long as you don't map silly things
to the GPU, there isn't much damage that the GPU can do.
Signed-off-by: Rob Clark <robdclark@gmail.com>