As seen in the Vivante kernel driver, most GPUs with the BLT engine have
a broken TS cache flush. The workaround is to temporarily set the BLT
command to CLEAR_IMAGE, without actually executing the clear. Apparently
this state change is enough to trigger the required TS cache flush. As
the BLT engine is completely asynchronous, we also need a few more stall
states to synchronize the flush with the frontend.
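For illustration only, a rough sketch of how such a workaround sequence could look in the kernel ring buffer code; the CMD_* helpers, the VIVS_BLT_* state names and SYNC_RECIPIENT_BLT are assumptions here, not necessarily the actual identifiers:

    /* sketch: dummy CLEAR_IMAGE state change to trigger the TS flush */
    CMD_LOAD_STATE(buffer, VIVS_BLT_ENABLE, 0x1);
    CMD_LOAD_STATE(buffer, VIVS_BLT_SET_COMMAND, 0x1); /* CLEAR_IMAGE */
    CMD_LOAD_STATE(buffer, VIVS_BLT_ENABLE, 0x0);
    /* the BLT engine is asynchronous: stall the FE on it */
    CMD_SEM(buffer, SYNC_RECIPIENT_FE, SYNC_RECIPIENT_BLT);
    CMD_STALL(buffer, SYNC_RECIPIENT_FE, SYNC_RECIPIENT_BLT);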
Root-caused-by: Jonathan Marek <jonathan@marek.ca>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
We haven't done any backmerge for a while due to the merge window, and it
is starting to become an issue for komeda. Let's bring 5.4-rc1 in.
Signed-off-by: Maxime Ripard <mripard@kernel.org>
This builds on top of the MMU contexts introduced earlier. Instead of having
one context per GPU core, each GPU client receives its own context.
On MMUv1 this still means a single shared pagetable set is used by all
clients, but on MMUv2 there is now a distinct set of pagetables for each
client. As the command fetch is also translated via the MMU on MMUv2, the
kernel command ringbuffer is mapped into each of the client pagetables.
As the MMU context switch is a bit of a heavy operation, due to the needed
cache and TLB flushing, this patch implements a lazy way of switching the
MMU context. The kernel does not have its own MMU context, but reuses the
last client context for all of its operations. This has some visible impact,
as the GPU can now only be started once a client has submitted some work and
the client MMU context has been assigned. Also, the MMU context has a different
lifetime than the general client context, as the GPU might still execute the
kernel command buffer in the context of a client even after the client has
completed all GPU work and has been terminated. Only when the GPU is runtime
suspended or switches to another client's MMU context is the old context
freed up.
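For illustration, a minimal sketch of the lazy switch at submit time; the field and helper names (gpu->mmu_context, etnaviv_iommu_context_get/put and the flush helper) are assumptions:

    if (gpu->mmu_context != submit->mmu_context) {
        /* drop the last client context the kernel was reusing */
        if (gpu->mmu_context)
            etnaviv_iommu_context_put(gpu->mmu_context);
        /* keep the new client context alive past client termination */
        gpu->mmu_context = etnaviv_iommu_context_get(submit->mmu_context);
        /* the expensive part: cache flush + TLB flush through the FE */
        etnaviv_buffer_switch_mmu_context(gpu);
    }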
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Reviewed-by: Guido Günther <agx@sigxcpu.org>
This reworks the MMU handling to make it possible to have multiple MMU contexts.
A context is basically one instance of GPU page tables. Currently we have one
set of page tables per GPU, which isn't all that clever, as it has the
following two consequences:
1. All GPU clients (aka processes) share the same pagetables, which means
there is no isolation between clients, only between GPU-assigned memory
spaces and the rest of the system. Better than nothing, but also not great.
2. Clients operating on the same set of buffers with different etnaviv GPU
cores, e.g. a workload using both the 2D and 3D GPU, need to map the used
buffers into the pagetable sets of each used GPU.
This patch reworks all the MMU handling to introduce the abstraction of the
MMU context. A context can be shared across different GPU cores, as long as
they have compatible MMU implementations, which is the case for all systems
with Vivante GPUs seen in the wild.
As MMUv1 is not able to change pagetables on the fly without a
"stop the world" operation (stop the GPU, change the pagetables via CPU
interaction, restart the GPU), the implementation introduces a shared context on
MMUv1, which is returned whenever there is a request for a new context.
This patch assigns an MMU context to each GPU, so on MMUv2 systems there is
still one set of pagetables per GPU, but due to the shared context MMUv1
systems see a change in behavior as now a single pagetable set is used
across all GPU cores.
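A rough sketch of what a context request could look like with this abstraction; the function and type names are assumptions:

    struct etnaviv_iommu_context *
    etnaviv_iommu_context_init(struct etnaviv_iommu_global *global)
    {
        if (global->version == ETNAVIV_IOMMU_V1)
            /* MMUv1: hand out the single shared, refcounted context */
            return etnaviv_iommuv1_context_alloc(global);

        /* MMUv2: a fresh set of pagetables per context */
        return etnaviv_iommuv2_context_alloc(global);
    }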
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Reviewed-by: Guido Günther <agx@sigxcpu.org>
If an MMU is shared between multiple GPUs, all of them need to flush their
TLBs, so a single marker that gets reset on the first flush won't do.
Replace the flush marker with a sequence number, so that it's possible to
check if the TLB is in sync with the current page table state for each GPU.
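A sketch of the per-GPU check, assuming a flush_seq counter in the context and a flushed_seq shadow in each GPU (names illustrative):

    bool need_flush = false;
    /* every map/unmap on the context increments ctx->flush_seq */
    u32 seq = READ_ONCE(ctx->flush_seq);

    if (gpu->flushed_seq != seq) {
        /* this GPU's TLB is stale; emit a TLB flush before the submit */
        need_flush = true;
        gpu->flushed_seq = seq;
    }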
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Reviewed-by: Guido Günther <agx@sigxcpu.org>
This decouples creating the cmdbuf suballocator from mapping the region
into the GPU address space, allowing multiple address spaces (AS) to share
a single cmdbuf suballoc.
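For illustration, the split API could be used along these lines (names assumed):

    /* create once ... */
    suballoc = etnaviv_cmdbuf_suballoc_new(gpu->dev);
    /* ... then map the same region into several address spaces */
    etnaviv_cmdbuf_suballoc_map(suballoc, ctx_a, &mapping_a, memory_base);
    etnaviv_cmdbuf_suballoc_map(suballoc, ctx_b, &mapping_b, memory_base);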
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Reviewed-by: Guido Günther <agx@sigxcpu.org>
Drop use of the deprecated drmP.h header file.
Fix fallout in all .c files.
The etnaviv_drv.h header file was made self-contained,
and missing includes were then added to the .c files that needed them.
In a few cases the list of include files was sorted.
Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Cc: Lucas Stach <l.stach@pengutronix.de>
Cc: Russell King <linux+etnaviv@armlinux.org.uk>
Cc: Christian Gmeiner <christian.gmeiner@gmail.com>
Cc: David Airlie <airlied@linux.ie>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: etnaviv@lists.freedesktop.org
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
It is only written, and we don't infer any useful information from
it anymore. Remove it.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
This replaces the repetitive GPL-2.0 license text in code and header files
with the SPDX tags. Generated hardware headers aren't changed, as any changes
there need to be done in the upstream rnndb repository.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
On GPUs with the security feature the MTLB config is stored in the PTA.
Add a function to trigger the initial PTA load through the FE.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
This results in fewer dynamic allocations and slims down the cmdbuf object
to only the required information, as everything else is already available
in the submit object.
This also simplifies buffer and mapping lifetime management, as they
are now exclusively attached to the submit object and not additionally
to the cmdbuf.
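Roughly, the slimmed-down object only needs to locate the commands inside the suballoc region (field names are an assumption):

    struct etnaviv_cmdbuf {
        /* suballocator this cmdbuf is allocated from */
        struct etnaviv_cmdbuf_suballoc *suballoc;
        /* cmdbuf properties */
        int suballoc_offset;
        void *vaddr;
        u32 size;
        u32 user_size;
    };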
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
We'll need this in some places where only the submit is available. This is
also a first step toward slimming down the cmdbuf object.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
When manipulating the kernel command buffer the GPU mutex must be held, as
otherwise different callers might try to replace the same part of the
buffer, wreaking havoc in the GPU execution.
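A small sketch of how this rule could be documented and enforced at the point where the ring buffer is patched (lock name assumed):

    /* callers must hold the GPU mutex while patching the WAIT/LINK
     * commands in the kernel ring buffer */
    lockdep_assert_held(&gpu->lock);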
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
There is no need to store this in the gpu struct. MMU flushes are triggered
correctly in reaction to MMU maps and unmaps, independent of the current ctx.
Any required pipe switches can be inferred from the current and the desired
GPU exec state.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
In order to support performance counters in a sane way we need to provide
a method to sync the GPU with the CPU. The GPU can process multiple command
buffers/events per irq. With the help of a 'sync point' we can trigger an event
and stop the GPU/FE immediately. When the CPU is done with its processing it
simply needs to restart the FE and the GPU will process the command stream.
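A rough sketch of the split between the irq handler and the work item (field and helper names here are assumptions):

    static void sync_point_worker(struct work_struct *work)
    {
        struct etnaviv_gpu *gpu = container_of(work, struct etnaviv_gpu,
                                               sync_point_work);

        /* CPU-side processing, e.g. sampling the performance counters */
        sync_point_perfmon_sample(gpu, &gpu->event[gpu->sync_point_event]);
        event_free(gpu, gpu->sync_point_event);

        /* we are done, let the FE continue with the command stream */
        etnaviv_gpu_start_fe(gpu, gpu->sync_point_restart_addr, 2);
    }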
Changes from v1 -> v2:
- process sync point with a work item to keep irq as fast as possible
Changes from v4 -> v5:
- renamed pmrs_* to sync_point_*
- call event_free(..) in sync_point_worker(..)
Signed-off-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Don't call the IOMMU directly, but go through the new cmdbuf abstraction.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
This will get more complex with the following changes, so move it
into its own place.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
If the GPU is done with one user command stream the buffers referenced
by this command stream may go away and get unmapped from the MMU. If
the write caches are still dirty at this point later evictions will run
into MMU faults, killing the GPU.
Make sure the write caches are flushed before signaling completion
of the user command stream.
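For illustration, the sequence appended before the completion event could look like this (the CMD_* helpers and VIVS_GL_* state names are assumed):

    /* flush the PE color and depth write caches ... */
    CMD_LOAD_STATE(buffer, VIVS_GL_FLUSH_CACHE,
                   VIVS_GL_FLUSH_CACHE_COLOR | VIVS_GL_FLUSH_CACHE_DEPTH);
    /* ... and wait for the flush to complete ... */
    CMD_SEM(buffer, SYNC_RECIPIENT_FE, SYNC_RECIPIENT_PE);
    CMD_STALL(buffer, SYNC_RECIPIENT_FE, SYNC_RECIPIENT_PE);
    /* ... only then signal completion of the user command stream */
    CMD_LOAD_STATE(buffer, VIVS_GL_EVENT,
                   VIVS_GL_EVENT_EVENT_ID(event) | VIVS_GL_EVENT_FROM_PE);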
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Flushing works differently on MMUv2, in that it's only necessary
to set a single bit in the control register to flush all translation
units. A semaphore stall then makes sure that the flush has propagated
properly.
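A sketch of the MMUv2 flush sequence, with the register and bit names as assumptions:

    /* a single FLUSH bit flushes all translation units */
    CMD_LOAD_STATE(buffer, VIVS_MMUv2_CONFIGURATION,
                   VIVS_MMUv2_CONFIGURATION_MODE_MASK |
                   VIVS_MMUv2_CONFIGURATION_ADDRESS_MASK |
                   VIVS_MMUv2_CONFIGURATION_FLUSH_FLUSH);
    /* semaphore stall to make sure the flush has propagated */
    CMD_SEM(buffer, SYNC_RECIPIENT_FE, SYNC_RECIPIENT_PE);
    CMD_STALL(buffer, SYNC_RECIPIENT_FE, SYNC_RECIPIENT_PE);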
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Both the safe/scratch address and the master TLB address are per pipe,
with the CPU-mapped registers not properly propagating to the
different translation units.
The only way to correctly configure all translation units is to have
a command stream snippet executed by the FE before any other execution
can start.
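For illustration, the setup could then be kicked off roughly like this at GPU init (helper names assumed):

    u16 prefetch;

    /* build a small command stream that loads the master TLB and
     * safe address state into every translation unit ... */
    prefetch = etnaviv_buffer_config_mmuv2(gpu, mtlb_addr, safe_addr);
    /* ... and execute it through the FE before anything else runs */
    etnaviv_gpu_start_fe(gpu, cmdbuf_va, prefetch);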
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
The GPU virtual address for the command buffers differs depending on
the IOMMU version. Move the calculation of the iova into etnaviv
mmu, to enable proper dispatch.
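A sketch of the dispatch, with the structure fields as assumptions:

    u32 etnaviv_iommu_get_cmdbuf_va(struct etnaviv_gpu *gpu,
                                    struct etnaviv_cmdbuf *buf)
    {
        if (gpu->mmu->version == ETNAVIV_IOMMU_V1)
            /* MMUv1: command buffers live in the linear window */
            return buf->paddr - gpu->memory_base;

        /* MMUv2: return the iova of the mapping in the GPU AS */
        return buf->vram_node.start;
    }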
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Improve the readability of the function which inserts command buffers
and other maintenance commands into the GPU's ring buffer. We do this
by splitting the ring buffer reservation in two: one chunk for any
commands that need to be issued prior to the command buffer, and a
separate chunk for commands issued after the buffer.
The result is a much more obvious code flow in this function, and
localisation of the conditional maintenance commands prior to the
command buffer.
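For illustration, the resulting flow looks roughly like this (reservation sizes and helpers assumed):

    /* chunk 1: commands in front of the user command buffer */
    target = etnaviv_buffer_reserve(gpu, buffer, extra_dwords);
    /* ... emit cache flushes / pipe switch, then link to the cmdbuf ... */

    /* chunk 2: commands issued after the user command buffer */
    return_target = etnaviv_buffer_reserve(gpu, buffer, return_dwords);
    /* ... emit the completion event and the new WAIT/LINK here ... */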
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Clean up the GPU command submission path to prepare for the next change.
This makes the next change easier to read and understand.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Use the previous GPU pipe state when deciding which GPU caches should
be flushed prior to switching the current pipe. This avoids inferring
what the previously selected pipe was, and potentially flushing the
wrong caches.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Flush the GPU caches to ensure that any dirty data is pushed out before
stopping the front end.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Add tracking of the current execution state (i.e. the active GPU pipe).
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Extract out the arming of a semaphore from the pipe select code.
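The extracted helper could look roughly like this (state and bitfield names are assumptions):

    static inline void CMD_SEM(struct etnaviv_cmdbuf *buffer, u32 from, u32 to)
    {
        /* arm a semaphore between the two engines */
        CMD_LOAD_STATE(buffer, VIVS_GL_SEMAPHORE_TOKEN,
                       VIVS_GL_SEMAPHORE_TOKEN_FROM(from) |
                       VIVS_GL_SEMAPHORE_TOKEN_TO(to));
    }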
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Extract out the replacement of the WAIT command with some other command.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Provide a helper etnaviv_buffer_reserve() to ensure that we can fit a
set of commands into the ring buffer without wrapping by moving code
out of etnaviv_buffer_queue().
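A sketch of the helper, assuming buffer->user_size tracks the current write position and gpu_va() returns the buffer's GPU address:

    static u32 etnaviv_buffer_reserve(struct etnaviv_gpu *gpu,
                                      struct etnaviv_cmdbuf *buffer,
                                      unsigned int cmd_dwords)
    {
        /* if the commands would not fit before the end of the ring,
         * wrap to the start instead of splitting them */
        if (buffer->user_size + cmd_dwords * sizeof(u64) > buffer->size)
            buffer->user_size = 0;

        /* GPU address of the reserved area */
        return gpu_va(gpu, buffer) + buffer->user_size;
    }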
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
This adds the etnaviv DRM driver and hooks it up in Makefiles
and Kconfig.
Signed-off-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>