MMUv2 supports up to 40 bits of physical address by folding the upper
8 bits into bits [4:11] of the PTE.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
With etnaviv not being tied into the IOMMU framework anymore, the MMU
functions will only be called under sleeping locks. Thus we are able
to allocate the memory for the 2nd level page tables on demand without
having to deal with memory allocation in atomic context.
This speeds up driver initialization on MMUv2 GPU cores, as we don't
need to preallocate all the page table memory and also reduces memory
consumption for most workloads, as most of them won't use the full
GPU virtual address space.
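A rough sketch of the on-demand allocation described above, assuming the
driver's usual naming (etnaviv_iommuv2_domain, MMUv2_PTE_* flags); the
details are illustrative:
  /* Sketch only: struct and field names are assumptions. */
  static int etnaviv_iommuv2_ensure_stlb(struct etnaviv_iommuv2_domain *domain,
                                         int stlb)
  {
          if (domain->stlb_cpu[stlb])
                  return 0;

          /* only called under sleeping locks, so GFP_KERNEL is fine */
          domain->stlb_cpu[stlb] = dma_alloc_wc(domain->base.dev, SZ_4K,
                                                &domain->stlb_dma[stlb],
                                                GFP_KERNEL);
          if (!domain->stlb_cpu[stlb])
                  return -ENOMEM;

          memset32(domain->stlb_cpu[stlb], MMUv2_PTE_EXCEPTION, SZ_4K / 4);

          /* hook the new second level table into the MTLB */
          domain->mtlb_cpu[stlb] = domain->stlb_dma[stlb] | MMUv2_PTE_PRESENT;

          return 0;
  }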
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
We are likely to write multiple page entries at once and already ensure
proper write buffer flushing before GPU submit, so this improves CPU
time usage in the submit path without any downsides.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
I'm not aware of any case where tracing GPU register manipulation at the
kernel level would have been useful. It only adds more indirection and
increases the code size.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
This was useful on MMUv1 GPUs, which don't generate proper faults, back
when the GPU write caches weren't fully understood and not properly
handled by the kernel driver. As this has been fixed for quite some
time, cycling through the MMU address space only needlessly spreads
out the MMU mappings.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
The old code clamped the jiffies conversion and thus caused the timeouts
to become negative after some time. It also didn't work with userspace
that actually fills the upper 32 bits of the 64-bit timestamp value.
On 32-bit architectures, clock_gettime() returns a 32-bit time value.
Using 64-bit timespec math, as this commit does, means that when that
value wraps, the specified timeout lies in the past and we can't request
a timeout in the future. As the Linux implementation of CLOCK_MONOTONIC
is reasonable and starts at 0, the first such wrap will occur after
approx. 68 years of system uptime.
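The 64-bit conversion then looks roughly like this (a sketch; the helper
name and argument type are assumptions):
  /* Sketch only: convert an absolute CLOCK_MONOTONIC timeout to jiffies. */
  static unsigned long etnaviv_timeout_to_jiffies(
          const struct timespec *timeout)
  {
          struct timespec64 ts, to = timespec_to_timespec64(*timeout);

          ktime_get_ts64(&ts);

          /* timeouts before "now" have already expired */
          if (timespec64_compare(&to, &ts) <= 0)
                  return 0;

          ts = timespec64_sub(to, ts);

          return timespec64_to_jiffies(&ts);
  }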
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
The current limit of 2 leads to some GPU idle time, as the usual
IRQ latency means that up to 3 jobs get signaled at once with some
standard workloads.
A larger HW job limit might lead to slightly worse QoS, but we accept
that to not sacrifice GPU throughput in the common case.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
etnaviv_sched_dependency() and etnaviv_sched_run_job() are only
used in this file, so make them static.
This fixes the following sparse warnings:
drivers/gpu/drm/etnaviv/etnaviv_sched.c:30:18: warning: symbol 'etnaviv_sched_dependency' was not declared. Should it be static?
drivers/gpu/drm/etnaviv/etnaviv_sched.c:81:18: warning: symbol 'etnaviv_sched_run_job' was not declared. Should it be static?
Signed-off-by: Fabio Estevam <fabio.estevam@nxp.com>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
The Page Table Array is a new first-level structure above the MTLB,
available on GPUs with the security feature. Use the PTA to set up
the MMU when the security-related states are handled by the kernel driver.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
On GPUs with the security feature, the MTLB config is stored in the PTA.
Add a function to trigger the initial PTA load through the FE.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
GPUs with support for the security features need some additional
setup to get the frontend started.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
With the introduction of GPU security we have 3 different modes of
GPU operation:
- GPU core doesn't have security features -> no handling required
- the security related states are handled by the kernel driver
- the security related states are handled by a TrustZone application
Add an enum to differentiate between the different operation modes.
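For illustration, the result is something along these lines (the
enumerator names are assumptions):
  /* Illustrative names only. */
  enum etnaviv_sec_mode {
          ETNA_SEC_NONE = 0,      /* no security features on this core */
          ETNA_SEC_KERNEL,        /* security states handled by the kernel */
          ETNA_SEC_TZ,            /* security states handled by TrustZone */
  };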
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
New versions of the Vivante kernel driver don't trust the hardware feature
bits anymore, but use an internal hardware database. This also includes
more feature fields than are available in hardware.
As we can't trust the hardware feature bits to be correct anymore, we need
to replicate the HWDB in etnaviv. For now, only the GC7000L as found on
the i.MX8M is supported.
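Conceptually the lookup is just a match on model and revision against the
internal table, roughly (table and function names are illustrative):
  /* Sketch only: names are assumptions. */
  bool etnaviv_fill_identity_from_hwdb(struct etnaviv_gpu *gpu)
  {
          struct etnaviv_chip_identity *ident = &gpu->identity;
          int i;

          /* etnaviv_chip_identities[] is the replicated HWDB table */
          for (i = 0; i < ARRAY_SIZE(etnaviv_chip_identities); i++) {
                  if (etnaviv_chip_identities[i].model == ident->model &&
                      etnaviv_chip_identities[i].revision == ident->revision) {
                          *ident = etnaviv_chip_identities[i];
                          return true;
                  }
          }

          /* fall back to the hardware feature bits */
          return false;
  }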
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Update the state HI and common header from rnndb commit
8478eef32fd9 (rnndb: document secure GPU reset bit).
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
The slave interface clock is a clock input found on newer cores to gate
the register interface. For now we simply ungate it while the GPU is in
the active state.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Split out the fault dumping, as this will get more complex in the future.
Also there is no need to read and dump the fault address from MMUs that
didn't signal a fault.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
The module autoloading can be triggered through the GPU core nodes
and the necessary platform device for the DRM toplevel device will
be instantiated on module init.
Suggested-by: Rob Herring <robh@kernel.org>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Rob Herring <robh@kernel.org>
This replaces the etnaviv internal hangcheck logic with the job timeout
handling provided by the DRM scheduler. This simplifies the driver further
and allows replaying jobs after a GPU reset, so only minimal state is lost.
This introduces a user-visible change: jobs are no longer allowed to run
indefinitely as long as they make progress, as that causes quality of
service issues when multiple processes are using the GPU.
Userspace is now responsible for flushing jobs in a way that they finish
within a reasonable time, where reasonable is currently defined as less
than 500ms.
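As a sketch, the scheduler hookup with the 500ms timeout looks roughly
like this, assuming the drm_sched_init() signature of this kernel
generation and illustrative module parameter names:
  /* Sketch only: parameter names are assumptions. */
  int etnaviv_sched_init(struct etnaviv_gpu *gpu)
  {
          return drm_sched_init(&gpu->sched, &etnaviv_sched_ops,
                                etnaviv_hw_jobs_limit, etnaviv_job_hang_limit,
                                msecs_to_jiffies(500), dev_name(gpu->dev));
  }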
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Populating objects, adding them to the GPU VM and patching/validating
the command stream might take a lot of CPU time. There is no reason to
hold all object reservations during that time.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Move the fence dependency handling to the scheduler where it belongs.
Jobs with unsignaled dependencies just get to sit in the scheduler queue
without holding any locks.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
This hooks in the DRM GPU scheduler. No improvement yet, as all the
dependency handling is still done in etnaviv_gem_submit. This just
replaces the actual GPU submit by passing through the scheduler.
This allows us to get rid of the retire worker, as retirement is now
driven by the scheduler.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
This moves away from using the internal seqno as the userspace fence
reference. By moving to a generic ID, we can later replace the internal
fence with something other than the etnaviv seqno fence.
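A sketch of the ID allocation in the submit path; the lock and field
names are assumptions:
  /* Sketch only: hand out a cyclic ID instead of the raw seqno. */
  mutex_lock(&gpu->fence_idr_lock);
  submit->out_fence_id = idr_alloc_cyclic(&gpu->fence_idr,
                                          submit->out_fence,
                                          0, INT_MAX, GFP_KERNEL);
  mutex_unlock(&gpu->fence_idr_lock);
  if (submit->out_fence_id < 0)
          return -ENOMEM;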
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Some architecture ports like ARC don't provide the PHYS_OFFSET symbol.
Define it to 0 in that case, which is the most conservative default in
the usage context of the etnaviv driver.
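The fallback is a simple preprocessor guard, e.g.:
  #ifndef PHYS_OFFSET
  #define PHYS_OFFSET 0   /* fallback for architectures without PHYS_OFFSET */
  #endif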
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Fixes the following sparse warnings:
drivers/gpu/drm/etnaviv/etnaviv_iommu.c:161:39: warning:
symbol 'etnaviv_iommuv1_ops' was not declared. Should it be static?
drivers/gpu/drm/etnaviv/etnaviv_iommu_v2.c:239:39: warning:
symbol 'etnaviv_iommuv2_ops' was not declared. Should it be static?
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
There is no need to hold the GPU lock while freeing the submit
object. Only move the retired submits from the GPU active list to
a temporary retire list under the GPU lock.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Now that the PMR lifetime issues are solved we can safely re-enable
performance counter profiling support.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
As long as there is an active submit, we want the GPU to stay awake. This
is slightly complicated by the fact that we really want to wake the GPU
at the last possible moment to achieve maximum power savings.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
The active count is used to check if the BO is idle, where idle is defined
as not active on the GPU and all VM mappings and reference counts dropped
to the initial state. As the idling of the mappings and references now only
happens in the submit cleanup, the active state handling must be moved to
the same location in order to keep the userspace semantics.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
This means fewer dynamic allocations and slims down the cmdbuf object to
only the required information, as everything else is already available
in the submit object.
This also simplifies buffer and mapping lifetime management, as they
are now exclusively attached to the submit object and not additionally
to the cmdbuf.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
The GPU exec state may have changed by the time the perfmon sampling
is done, as it reflects the state of the last submission, not the current
GPU execution state.
So for proper sampling we must use the submit exec_state.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
We'll need this in some places where only the submit is available. Also
this is a first step towards slimming down the cmdbuf object.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
To make them available to the event worker even after the actual
command stream execution has finished.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
The submit object lifetime will get extended to the actual GPU
execution. As multiple users will depend on this, add a kref to
properly control destruction of the object.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
The acquire_ctx is special in that it needs to be released from the same
thread that initialized it. This collides with the intention to
extend the submit lifetime beyond the gem_submit function with potentially
other threads doing the final cleanup.
Move the ww_acquire_ctx to the function local stack as suggested in the
documentation.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
This is safe to call in all paths, as the BO_PINNED flag tells us if the BO
needs unpinning.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Simplifies the cleanup path and moves fence waiting to a central location.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
The object fencing has nothing to do with the actual GPU buffer submit,
so move it to the gem submit path to have a cleaner split.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Use kzalloc so other code doesn't need to worry about uninitialized members.
Drop the non-standard GFP flags, as we really don't want to fail the submit
when under slight memory pressure. Remove one level of indentation by using
an early return if the allocation failed. Also remove the unused drm device
member.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
When manipulating the kernel command buffer, the GPU mutex must be held,
as otherwise different callers might try to replace the same part of the
buffer, wreaking havoc in the GPU execution.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Inserting the END command when suspending the GPU changes the command
buffer state, which requires the GPU lock to be held.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
While the etnaviv workqueue needs to be ordered, as we rely on work items
being executed in queuing order, this is only true for a single GPU.
Having a shared workqueue for all GPUs in the system limits concurrency
artificially.
Giving each GPU its own ordered workqueue still meets our ordering
expectations and enables retire workers to run concurrently.
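A sketch of the per-GPU queue creation; the field name is an assumption:
  /* Sketch only: one ordered workqueue per GPU instead of a shared one. */
  gpu->wq = alloc_ordered_workqueue(dev_name(gpu->dev), 0);
  if (!gpu->wq)
          return -ENOMEM;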
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
There is no need to store this in the gpu struct. MMU flushes are triggered
correctly in reaction to MMU maps and unmaps, independent of the current ctx.
Any required pipe switches can be inferred from the current and the desired
GPU exec state.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
There is no need to synchronize with outstanding retire jobs if the object
has gone idle. Retire jobs only ever change the object state from active to
idle, not the other way around.
The IOVA put race is not critical, as the GEM_WAIT ioctl itself is holding
a reference to the GEM object, so the retire worker will not pull the
object into the CPU domain, which is the thing we are trying to guard
against with etnaviv_gpu_wait_obj_inactive. The ordering of the various
counts and waits may change a bit, but the userspace-visible behavior at
the bounds of the syscall is unchanged.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Flush and prefetch are properly handled in the buffer code, and data
endianness would need much wider changes than adding something to this
single function.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Now that the userptr BO handling doesn't rely on userspace restarting
the submit after object population, there is no need to special case the
-EAGAIN return value anymore.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>