This builds on top of the MMU contexts introduced earlier. Instead of having
one context per GPU core, each GPU client receives its own context.
On MMUv1 this still means a single shared pagetable set is used by all
clients, but on MMUv2 there is now a distinct set of pagetables for each
client. As the command fetch is also translated via the MMU on MMUv2, the
kernel command ringbuffer is mapped into each of the client pagetables.
As an MMU context switch is a fairly heavy operation, due to the required
cache and TLB flushing, this patch implements lazy switching of the MMU
context. The kernel does not have its own MMU context, but reuses the
last client context for all of its operations. This has some visible impact,
as the GPU can now only be started once a client has submitted some work and
its MMU context has been assigned. Also, the MMU context has a different
lifetime than the general client context, as the GPU might still execute the
kernel command buffer in the context of a client even after the client has
completed all GPU work and has been terminated. Only when the GPU is runtime
suspended or switches to another client's MMU context is the old context
freed up.
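A minimal sketch of the lazy switch described above, using helper names
modeled on the etnaviv code; the function name and exact flow here are
illustrative assumptions, not the literal driver code:

    /* illustrative helper, not the literal driver function */
    static void etnaviv_gpu_assign_context(struct etnaviv_gpu *gpu,
                                           struct etnaviv_iommu_context *ctx)
    {
            /* Lazy path: the kernel reuses the last client context, so
             * a submit from the same client needs no switch at all. */
            if (gpu->mmu_context == ctx)
                    return;

            /* The expensive path: drop the old reference and queue a
             * pagetable switch plus TLB flush into the ring. */
            if (gpu->mmu_context)
                    etnaviv_iommu_context_put(gpu->mmu_context);
            gpu->mmu_context = etnaviv_iommu_context_get(ctx);
    }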
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Reviewed-by: Guido Günther <agx@sigxcpu.org>
This reworks the MMU handling to make it possible to have multiple MMU contexts.
A context is basically one instance of GPU page tables. Currently we have one
set of page tables per GPU, which isn't all that clever, as it has the
following two consequences:
1. All GPU clients (aka processes) are sharing the same pagetables, which means
there is no isolation between clients, but only between GPU assigned memory
spaces and the rest of the system. Better than nothing, but also not great.
2. Clients operating on the same set of buffers with different etnaviv GPU
cores, e.g. a workload using both the 2D and 3D GPU, need to map the used
buffers into the pagetable sets of each used GPU.
This patch reworks all the MMU handling to introduce the abstraction of the
MMU context. A context can be shared across different GPU cores, as long as
they have compatible MMU implementations, which is the case for all systems
with Vivante GPUs seen in the wild.
As MMUv1 is not able to change pagetables on the fly without a
"stop the world" operation (stop the GPU, change the pagetables from the
CPU, restart the GPU), the implementation introduces a shared context on
MMUv1, which is returned whenever there is a request for a new context.
This patch assigns an MMU context to each GPU, so on MMUv2 systems there is
still one set of pagetables per GPU, but due to the shared context MMUv1
systems see a change in behavior as now a single pagetable set is used
across all GPU cores.
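A rough sketch of how context creation folds into the shared context on
MMUv1; structure and member names here are illustrative assumptions:

    struct etnaviv_iommu_context *
    etnaviv_iommu_context_init(struct etnaviv_iommu_global *global)
    {
            /* Sketch: member names illustrative. MMUv1 can't switch
             * pagetables without stopping the world, so every request
             * returns a reference to the one shared context instead
             * of a fresh set of pagetables. */
            if (global->version == ETNAVIV_IOMMU_V1)
                    return etnaviv_iommu_context_get(global->shared_context);

            return etnaviv_iommuv2_context_alloc(global);
    }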
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Reviewed-by: Guido Günther <agx@sigxcpu.org>
If an MMU is shared between multiple GPUs, all of them need to flush their
TLBs, so a single marker that gets reset on the first flush won't do.
Replace the flush marker with a sequence number, so that it's possible to
check if the TLB is in sync with the current page table state for each GPU.
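The scheme, sketched as a fragment with field names following the driver:
the context keeps a sequence number that is bumped on every map/unmap, and
each GPU remembers the sequence it last flushed for.

    /* on every map/unmap into the context: */
    mmu_context->flush_seq++;

    /* per GPU, while queuing a job into its ring: */
    u32 new_flush_seq = READ_ONCE(mmu_context->flush_seq);

    if (gpu->flush_seq != new_flush_seq) {
            /* this GPU's TLB lags the pagetables: emit a TLB flush */
            gpu->flush_seq = new_flush_seq;
    }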
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Reviewed-by: Guido Günther <agx@sigxcpu.org>
There is no need for each GPU to have its own cmdbuf suballocation
region. Only allocate a single one for the etnaviv virtual device
and share it across all GPUs.
As the suballoc space is now potentially shared by more hardware jobs
running in parallel, double its size to 512KB to avoid contention.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Reviewed-by: Guido Günther <agx@sigxcpu.org>
This decouples creating the cmdbuf suballocator from mapping the region
into the GPU address space, allowing multiple address spaces to share a
single cmdbuf suballoc.
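The resulting split, roughly; the prototypes are modeled on the reworked
etnaviv code, but treat the exact signatures as assumptions:

    /* one-time creation, backed by a single DMA buffer */
    struct etnaviv_cmdbuf_suballoc *
    etnaviv_cmdbuf_suballoc_new(struct device *dev);

    /* map the region into one address space; called once per AS */
    int etnaviv_cmdbuf_suballoc_map(struct etnaviv_cmdbuf_suballoc *suballoc,
                                    struct etnaviv_iommu_context *context,
                                    struct etnaviv_vram_mapping *mapping,
                                    u32 memory_base);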
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Reviewed-by: Guido Günther <agx@sigxcpu.org>
Remember if the GPU has been successfully initialized. Only in that case
do we need to clean up various structures in the unbind path. If the
GPU hasn't been successfully initialized, all the cleanups should happen
in the failure paths of the init function.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Reviewed-by: Guido Günther <agx@sigxcpu.org>
It is only ever written, and we don't infer any useful information from
it anymore. Remove it.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
The etnaviv_gpu header only needs to know about the pointer types, so
replace the includes with forward declarations and only include the
headers where needed.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
This is the only place in the driver that should have to deal with
the raw hardware fences. To avoid any further confusion, consolidate
the fence handling in this file and remove any traces of this from
the header files.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
There is no need to track the currently active fence. The GPU scheduler
keeps track of all the in-flight jobs.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
The documentation of drm_sched_job_init and drm_sched_entity_push_job has
been clarified. Both functions should be called under a shared lock, to
avoid jobs getting pushed into the scheduler queue in a different order
than their sched_fence seqnos, which will confuse checks that are looking
at the seqnos to infer information about completion order.
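In etnaviv this means both calls sit under one mutex in the submit path; a
sketch against the scheduler API of that era (the signatures have changed
in later kernels):

    mutex_lock(&gpu->fence_lock);

    ret = drm_sched_job_init(&submit->sched_job, sched_entity,
                             submit->ctx);
    if (!ret)
            drm_sched_entity_push_job(&submit->sched_job, sched_entity);

    mutex_unlock(&gpu->fence_lock);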
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
When the hangcheck handler was replaced by the DRM scheduler timeout
handling we dropped the forward progress check, as this might allow
clients to hog the GPU for a long time with a big job.
It turns out that even reasonably well behaved clients like the
Armada Xorg driver occasionally trip over the 500ms timeout. Bring
back the forward progress check to get rid of the userspace regression.
We would still like to fix userspace to submit smaller batches
if possible, but that is for another day.
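The check is a simple heuristic on the front-end DMA address: if it moved
since the last timeout, the job is still making progress and the timeout
is rearmed instead of declaring a hang. A condensed sketch, with field
names following the driver:

    u32 dma_addr = gpu_read(gpu, VIVS_FE_DMA_ADDRESS);

    if (dma_addr != gpu->hangcheck_dma_addr) {
            /* FE is still advancing: remember where it is and give
             * the job more time instead of resetting the GPU */
            gpu->hangcheck_dma_addr = dma_addr;
            return; /* rearm the scheduler timeout */
    }

    /* no forward progress: treat it as a real hang */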
Cc: <stable@vger.kernel.org>
Fixes: 6d7a20c077 ("drm/etnaviv: replace hangcheck with scheduler timeout")
Reported-by: Russell King <linux@armlinux.org.uk>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Eric Anholt <eric@anholt.net>
This replaces the repetitive GPL-2.0 license text in code and header files
with the SPDX tags. Generated hardware headers aren't changed, as any changes
there need to be done in the upstream rnndb repository.
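For reference, each replaced license block shrinks to a single tag at the
top of the file:

    // SPDX-License-Identifier: GPL-2.0     (C sources)
    /* SPDX-License-Identifier: GPL-2.0 */  (headers)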
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
I'm not aware of any case where tracing GPU register manipulation at the
kernel level would have been useful. It only adds more indirections and
adds to the code size.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
With the introduction of GPU security we have 3 different modes of
GPU operation:
- GPU core doesn't have security features -> no handling required
- the security related states are handled by the kernel driver
- the security related states are handled by a TrustZone application
Add an enum to differentiate between the different operation modes.
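The enum, as introduced (names as in the driver):

    enum etnaviv_sec_mode {
            ETNA_SEC_NONE = 0, /* no security features on this core */
            ETNA_SEC_KERNEL,   /* kernel driver handles security states */
            ETNA_SEC_TZ,       /* TrustZone application handles them */
    };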
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
New versions of the Vivante kernel driver don't trust the hardware feature
bits anymore, but use an internal hardware database. This also includes
more feature fields than are available in hardware.
As we can't trust the hardware feature bits to be correct anymore, we need
to replicate the HWDB in etnaviv. For now, only the GC7000L as found on
the i.MX8M is supported.
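A condensed sketch of the lookup: the HWDB is a table of complete chip
identities keyed on model and revision, consulted before the feature
registers are trusted (entry contents abridged):

    static const struct etnaviv_chip_identity etnaviv_chip_identities[] = {
            {
                    .model = 0x7000,    /* GC7000 */
                    .revision = 0x6214, /* as found on the i.MX8M */
                    /* ... full feature/spec fields ... */
            },
    };

    bool etnaviv_fill_identity_from_hwdb(struct etnaviv_gpu *gpu)
    {
            int i;

            for (i = 0; i < ARRAY_SIZE(etnaviv_chip_identities); i++) {
                    const struct etnaviv_chip_identity *ident =
                            &etnaviv_chip_identities[i];

                    if (ident->model == gpu->identity.model &&
                        ident->revision == gpu->identity.revision) {
                            /* replace the hardware-reported identity */
                            gpu->identity = *ident;
                            return true;
                    }
            }

            return false;
    }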
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
The slave interface clock is a clock input found on newer cores to gate
the register interface. For now we simply ungate it when the GPU is in
active state.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
This replaces the etnaviv-internal hangcheck logic with the job timeout
handling provided by the DRM scheduler. This simplifies the driver further
and allows jobs to be replayed after a GPU reset, so only minimal state is
lost.
This introduces a user-visible change in that we don't allow jobs to run
indefinitely as long as they make progress anymore, as this introduces
quality of service issues when multiple processes are using the GPU.
Userspace is now responsible for flushing jobs in a way that they finish
in a reasonable time, where reasonable is currently defined as less than
500ms.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Move the fence dependency handling to the scheduler where it belongs.
Jobs with unsignaled dependencies just get to sit in the scheduler queue
without holding any locks.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
This hooks in the DRM GPU scheduler. No improvement yet, as all the
dependency handling is still done in etnaviv_gem_submit. This just
replaces the actual GPU submit by passing through the scheduler.
This also allows us to get rid of the retire worker, as retirement is now
driven by the scheduler.
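The hookup boils down to one scheduler instance per GPU, wired up with
etnaviv backend ops; a sketch against the drm_sched API of that era (the
init signature has changed in later kernels):

    static const struct drm_sched_backend_ops etnaviv_sched_ops = {
            .run_job = etnaviv_sched_run_job,
            .timedout_job = etnaviv_sched_timedout_job,
            .free_job = etnaviv_sched_free_job,
            /* .dependency is wired up by the follow-on patch that
             * moves dependency handling into the scheduler */
    };

    ret = drm_sched_init(&gpu->sched, &etnaviv_sched_ops,
                         etnaviv_hw_jobs_limit, etnaviv_job_hang_limit,
                         msecs_to_jiffies(500), dev_name(gpu->dev));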
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
This moves away from using the internal seqno as the userspace fence
reference. By moving to a generic ID, we can later replace the internal
fence by something different than the etnaviv seqno fence.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
This means fewer dynamic allocations and slims down the cmdbuf object to
only the required information, as everything else is already available in
the submit object.
This also simplifies buffer and mapping lifetime management, as they are
now exclusively attached to the submit object and not additionally to the
cmdbuf.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
To make them available to the event worker even after the actual
command stream execution has finished.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
While the etnaviv workqueue needs to be ordered, as we rely on work items
being executed in queuing order, this is only true for a single GPU.
Having a shared workqueue for all GPUs in the system limits concurrency
artificially.
Giving each GPU its own ordered workqueue still meets our ordering
expectations and enables retire workers to run concurrently.
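A sketch of the change with the regular workqueue API; the field names are
assumptions for illustration:

    /* per-GPU ordered queue instead of one driver-global queue */
    gpu->wq = alloc_ordered_workqueue(dev_name(gpu->dev), 0);
    if (!gpu->wq)
            return -ENOMEM;

    /* work items for this GPU still run in queuing order, but two
     * GPUs can now retire jobs concurrently */
    queue_work(gpu->wq, &gpu->retire_work);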
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
There is no need to store this in the gpu struct. MMU flushes are triggered
correctly in reaction to MMU maps and unmaps, independent of the current ctx.
Any required pipe switches can be inferred from the current and the
desired GPU exec state.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
With 'sync points' we can sample the requested performance signals
before and/or after the submitted command buffer.
Changes v2 -> v3:
- fixed indentation and init nr_events to 1
Changes v4 -> v5:
- simplify logic around fence handling.
Signed-off-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
In order to support performance counters in a sane way, we need to provide
a method to sync the GPU with the CPU. The GPU can process multiple command
buffers/events per irq. With the help of a 'sync point' we can trigger an
event and stop the GPU/FE immediately. When the CPU is done with its
processing, it simply needs to restart the FE and the GPU will process the
command stream.
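Sketched flow of the worker, modeled on the driver sources with details
elided; treat it as an outline, not the literal code:

    static void sync_point_worker(struct work_struct *work)
    {
            struct etnaviv_gpu *gpu = container_of(work, struct etnaviv_gpu,
                                                   sync_point_work);
            struct etnaviv_event *event =
                    &gpu->event[gpu->sync_point_event];
            u32 addr = gpu_read(gpu, VIVS_FE_DMA_ADDRESS);

            /* FE is stopped at the sync point: do the CPU-side work,
             * e.g. sample the perfmon signals */
            event->sync_point(gpu, event);
            event_free(gpu, gpu->sync_point_event);

            /* restart the FE right behind the sync point */
            etnaviv_gpu_start_fe(gpu, addr + 2, 2);
    }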
Changes from v1 -> v2:
- process sync point with a work item to keep irq as fast as possible
Changes from v4 -> v5:
- renamed pmrs_* to sync_point_*
- call event_free(..) in sync_point_worker(..)
Signed-off-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
This is prep work to be able to allocate multiple events in one go.
Signed-off-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
GPU cores with the DYNAMIC_FREQUENCY_SCALING feature bit set expect the
platform to provide the clock scaling and ignore any requests to use the
internal FSCALE divider. Writes to this register still work, but don't
have any effect on the GPU clock frequency.
Save the initial core and shader clock frequency and ask the platform
to provide a slower clock when cooling is requested.
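Sketch of the resulting clock update; field and bit names follow the
driver sources, but take them as assumptions:

    if (gpu->identity.minor_features2 &
        chipMinorFeatures2_DYNAMIC_FREQUENCY_SCALING) {
            /* FSCALE writes are ignored; scale the platform clocks
             * relative to the initial rates saved at probe time */
            clk_set_rate(gpu->clk_core,
                         gpu->base_rate_core >> gpu->freq_scale);
            clk_set_rate(gpu->clk_shader,
                         gpu->base_rate_shader >> gpu->freq_scale);
    } else {
            /* older cores: program the internal FSCALE divider */
    }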
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Loosely based on commit f0a42bb542 ("drm/msm: submit support for
in-fences"). Unfortunately, struct drm_etnaviv_gem_submit doesn't have
a flags field yet, so we have to extend the structure and trust that
drm_ioctl will clear the flags for us if an older userspace only submits
part of the struct.
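The extension, abridged: drm_ioctl zero-fills the tail when an older
userspace passes the shorter struct, so flags == 0 keeps the old behavior.

    struct drm_etnaviv_gem_submit {
            /* ... existing fields ... */
            __u32 flags;    /* in, ETNA_SUBMIT_FENCE_FD_IN/OUT */
            __s32 fence_fd; /* in/out, sync file fd */
    };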
Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>
Reviewed-by: Gustavo Padovan <gustavo.padovan@collabora.com>
Reviewed-by: Sumit Semwal <sumit.semwal@linaro.org>
Reviewed-by: Lucas Stach <l.stach@pengutronix.de>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Each Vivante GPU contains a clock divider which can divide the GPU clock
by 2^n, which can lower the power dissipation from the GPU. It has been
suggested that the GC600 on Dove is responsible for 20-30% of the power
dissipation from the SoC, so lowering the GPU clock rate provides a way
to throttle the power dissipation and reduce the temperature when the
SoC gets hot.
This patch hooks the Etnaviv driver into the kernel's thermal management
to allow the GPUs to be throttled when necessary, allowing a reduction in
GPU clock rate from /1 to /64 in power-of-2 steps.
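The hookup is a standard thermal cooling device; a condensed sketch, with
registration and the get_cur_state callback omitted:

    static int etnaviv_gpu_cooling_get_max_state(struct thermal_cooling_device *cdev,
                                                 unsigned long *state)
    {
            *state = 6; /* divider range /1 (2^0) to /64 (2^6) */
            return 0;
    }

    static int etnaviv_gpu_cooling_set_cur_state(struct thermal_cooling_device *cdev,
                                                 unsigned long state)
    {
            struct etnaviv_gpu *gpu = cdev->devdata;

            gpu->freq_scale = state; /* clock divided by 2^state */
            /* reprogram the clock control if the GPU is active */
            return 0;
    }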
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Reviewed-by: Lucas Stach <l.stach@pengutronix.de>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
There are 3 big benefits to suballocating a single big DMA buffer
for command submission:
1. Avoid hammering CMA. The old way of allocating and freeing a DMA
buffer for each submission was hitting some of the really slow paths
in CMA, as this allocator was not designed for a concurrent load of
small buffers.
2. Less TLB flushes on IOMMUv2. If a new command buffer is mapped into
the GPU address space the MMU TLBs need to be flushed. By having
one big buffer statically mapped to the GPU, a lot of those flushes
can be avoided.
3. No funky workarounds for GC3000. The FE TLB flush on GC3000 isn't
reliable. To work around that we tried to lay out the cmdbufs in
the GPU address space in a way to avoid this issue. This hasn't
always worked if the address space is crowded. A single statically
mapped buffer avoids the erratum completely.
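To picture the scheme: one 256KB DMA buffer, statically mapped, handed out
in 4KB granules tracked by a bitmap. Constants follow the driver, but the
bookkeeping shown here is simplified:

    #define SUBALLOC_SIZE     SZ_256K
    #define SUBALLOC_GRANULE  SZ_4K
    #define SUBALLOC_GRANULES (SUBALLOC_SIZE / SUBALLOC_GRANULE)

    struct etnaviv_cmdbuf_suballoc {
            void *vaddr;      /* CPU view of the one big buffer */
            dma_addr_t paddr; /* statically mapped GPU view */
            struct mutex lock;
            DECLARE_BITMAP(granule_map, SUBALLOC_GRANULES);
    };

Each cmdbuf allocation then reduces to finding a run of free granules in
the bitmap, with no CMA or TLB activity at all.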
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
This will get more complex with the following changes, so move it
into its own place.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
With MMUv2 all buffers need to be mapped through the MMU once it
is enabled. Align the buffer size to 4K, as the MMU is only able to
map page aligned buffers.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Split out into a new externally visible function, as the IOMMUv2
code needs this functionality, too.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Split out into a new externally visible function, as the IOMMUv2
code needs this functionality, too.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Fence contexts are created on the fly (for example) by the GPU scheduler
used in the amdgpu driver as a result of a userspace request. Because of
this, userspace could in theory force a wraparound of the 32-bit context
number if it doesn't behave well.
Avoid this by increasing the context number to 64 bits. This way, even if
userspace manages to allocate a billion contexts per second, it takes more
than 500 years for the context number to wrap around.
v2: fix printf formats as well.
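At one billion allocations per second, 2^64 contexts last about
1.8 * 10^10 seconds, i.e. roughly 584 years, which is where the number
above comes from. The change itself is small; roughly:

    static atomic64_t fence_context_counter = ATOMIC64_INIT(0);

    /* was: unsigned int, backed by a 32-bit atomic_t */
    u64 fence_context_alloc(unsigned num)
    {
            WARN_ON(!num);
            return atomic64_add_return(num, &fence_context_counter) - num;
    }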
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Gustavo Padovan <gustavo.padovan@collabora.co.uk>
Acked-by: Sumit Semwal <sumit.semwal@linaro.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/1464786612-5010-2-git-send-email-deathsimple@vodafone.de
Currently, we scan the list of mappings each time we want to operate on
the vram_mapping struct. Rather than repeatedly scanning these, look
them up once in the submission path, and then use _reference and
_unreference methods as necessary to manage this object.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Add tracking of the current execution state (iow, active GPU pipe).
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Export further minor feature bitmasks and the varyings count from
the GPU specifications registers to userspace.
Acked-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
This adds the etnaviv DRM driver and hooks it up in Makefiles
and Kconfig.
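The build system hookup, abridged:

    # drivers/gpu/drm/Makefile
    obj-$(CONFIG_DRM_ETNAVIV) += etnaviv/

    # drivers/gpu/drm/etnaviv/Kconfig
    config DRM_ETNAVIV
            tristate "ETNAVIV (DRM support for Vivante GPU IP cores)"
            depends on DRM
            help
              DRM driver for Vivante GPUs.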
Signed-off-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>