In such a case we have to use secure aliases of some non-secure
registers.
This handling is enabled by the DT property
"calxeda,smmu-secure-config-access" on an SMMU node.
Signed-off-by: Andreas Herrmann <andreas.herrmann@calxeda.com>
[will: merged with driver option handling patch]
Signed-off-by: Will Deacon <will.deacon@arm.com>
The DT parsing code that determines stream IDs uses
of_parse_phandle_with_args and thus MAX_MASTER_STREAMIDS
is always bound by MAX_PHANDLE_ARGS.
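For illustration only, a minimal sketch of such parsing, assuming the usual
"mmu-masters"/"#stream-id-cells" binding; the smmu_master_ids structure and
function name are hypothetical:

#include <linux/of.h>

/* Hypothetical container for one master's stream IDs; the array can be
 * sized to MAX_PHANDLE_ARGS because that is the parser's hard limit. */
struct smmu_master_ids {
        int num_streamids;
        u16 streamids[MAX_PHANDLE_ARGS];
};

static int read_master_streamids(struct device_node *smmu_np, int index,
                                 struct smmu_master_ids *master)
{
        struct of_phandle_args args;
        int i, err;

        err = of_parse_phandle_with_args(smmu_np, "mmu-masters",
                                         "#stream-id-cells", index, &args);
        if (err)
                return err;

        /* args.args_count can never exceed MAX_PHANDLE_ARGS. */
        master->num_streamids = args.args_count;
        for (i = 0; i < args.args_count; i++)
                master->streamids[i] = args.args[i];

        of_node_put(args.np);   /* drop the reference taken by the parser */
        return 0;
}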
Signed-off-by: Andreas Herrmann <andreas.herrmann@calxeda.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The lock might be taken through the DMA-API, which may be called from
interrupt context, so it needs to be irq-safe.
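As a generic illustration (not the driver's actual data structures), a lock
that can be reached from the DMA-API has to be taken with the irqsave
variants:

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(domain_lock);    /* illustrative lock */

static void update_page_tables(void)
{
        unsigned long flags;

        /*
         * This path may be reached via dma_map_*() callers running in
         * interrupt context, so local interrupts must be disabled while
         * the lock is held.
         */
        spin_lock_irqsave(&domain_lock, flags);
        /* ... walk and update the page tables ... */
        spin_unlock_irqrestore(&domain_lock, flags);
}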
Signed-off-by: Joerg Roedel <joro@8bytes.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Whilst trying to bring up an SMMUv2 implementation with the table
walker plumbed into a coherent interconnect, I noticed that the memory
transactions targeting the CPU caches from the SMMU were marked as
outer-shareable instead of inner-shareable.
After a bunch of digging, it seems that we actually need to program
CBARn.BPSHCFG for s1-s2-bypass contexts to act as non-shareable in order
for the shareability configured in the corresponding TTBCR not to be
overridden with an outer-shareable attribute.
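A sketch of the kind of change this implies when writing CBARn for an
s1-s2-bypass context; the macro names and values below are illustrative
placeholders rather than the driver's definitions:

#include <linux/io.h>

#define CBAR_TYPE_S1_TRANS_S2_BYPASS    (1 << 16)       /* placeholder */
#define CBAR_S1_BPSHCFG_SHIFT           8               /* placeholder */
#define CBAR_S1_BPSHCFG_NSH             3               /* non-shareable */

static void write_cbar_stage1(void __iomem *gr1_base, int cbndx)
{
        u32 reg = CBAR_TYPE_S1_TRANS_S2_BYPASS;

        /*
         * Program BPSHCFG as non-shareable so the shareability chosen in
         * the TTBCR is not overridden with outer-shareable.
         */
        reg |= CBAR_S1_BPSHCFG_NSH << CBAR_S1_BPSHCFG_SHIFT;

        writel_relaxed(reg, gr1_base + 4 * cbndx);      /* CBARn stride assumed */
}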
Cc: <stable@vger.kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Now that we populate page tables as we traverse them ("iommu/arm-smmu:
fix pud/pmd entry fill sequence"), we need to ensure that we flush out
our zeroed tables after initial allocation, to prevent speculative TLB
fills using bogus data.
This patch adds additional calls to arm_smmu_flush_pgtable during
initial table allocation, and moves the dsb required by coherent table
walkers into the helper.
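A rough sketch of the shape of such a helper, assuming arm/arm64 barrier
macros and a hypothetical coherent-walk flag; the real driver's helper
differs in detail:

#include <linux/dma-mapping.h>
#include <linux/mm.h>
#include <asm/barrier.h>

static void flush_pgtable(struct device *dev, bool coherent_walk,
                          void *addr, size_t size)
{
        if (coherent_walk) {
                /*
                 * Coherent table walker: a barrier is enough to make the
                 * freshly zeroed (or updated) entries visible before any
                 * descriptor pointing at them is installed.
                 */
                dsb(ishst);
        } else {
                /*
                 * Non-coherent walker: clean the CPU cache lines out to the
                 * point the SMMU fetches from (error handling of the
                 * mapping is omitted in this sketch).
                 */
                dma_map_page(dev, virt_to_page(addr), offset_in_page(addr),
                             size, DMA_TO_DEVICE);
        }
}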
Cc: <stable@vger.kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Commit a44a9791e7 ("iommu/arm-smmu: use mutex instead of spinlock for
locking page tables") replaced the page table spinlock with a mutex, to
allow blocking allocations to satisfy lazy mapping requests.
Unfortunately, it turns out that IOMMU mappings are created from atomic
context (e.g. spinlock held during a dma_map), so this change doesn't
really help us in practice.
This patch is a partial revert of the offending commit, bringing back
the original spinlock but replacing our page table allocations for any
levels below the pgd (which is allocated during domain init) with
GFP_ATOMIC instead of GFP_KERNEL.
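Illustratively, the allocation pattern this describes (names hypothetical):
any table below the pgd is allocated with GFP_ATOMIC so the allocation can
happen under the spinlock, possibly from atomic context:

#include <linux/gfp.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(pgtbl_lock);     /* illustrative domain lock */

static int install_table(u64 **slot)
{
        unsigned long flags;
        int ret = 0;

        spin_lock_irqsave(&pgtbl_lock, flags);
        if (!*slot) {
                /* GFP_KERNEL may sleep; GFP_ATOMIC is required under the
                 * spinlock and when called from atomic context. */
                u64 *table = (u64 *)get_zeroed_page(GFP_ATOMIC);

                if (!table)
                        ret = -ENOMEM;
                else
                        *slot = table;
        }
        spin_unlock_irqrestore(&pgtbl_lock, flags);
        return ret;
}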
Cc: <stable@vger.kernel.org>
Reported-by: Andreas Herrmann <andreas.herrmann@calxeda.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The ARM SMMU driver's population of puds and pmds is broken, since we
iterate over the next level of table, repeatedly setting the current
level descriptor to point at the pmd being initialised. This is clearly
wrong when dealing with multiple pmds/puds.
This patch fixes the problem by moving the pud/pmd population out of the
loop and instead performing it when we allocate the next level (like we
correctly do for ptes already). The starting address for the next level
is then calculated prior to entering the loop.
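A condensed sketch of the corrected shape, using the generic kernel
page-table helpers of that era and an atomic allocation as in the related
locking fix; helper names and details are illustrative:

#include <linux/gfp.h>
#include <linux/mm.h>
#include <asm/pgalloc.h>

static int alloc_init_pmd(pud_t *pud, unsigned long addr, unsigned long end)
{
        pmd_t *pmd;
        unsigned long next;

        if (pud_none(*pud)) {
                pmd_t *table = (pmd_t *)get_zeroed_page(GFP_ATOMIC);

                if (!table)
                        return -ENOMEM;
                /* Populate the pud once, when the table is allocated. */
                pud_populate(NULL, pud, table);         /* mm unused here */
        }

        /* Starting entry for this range, computed before the loop. */
        pmd = pmd_offset(pud, addr);

        do {
                next = pmd_addr_end(addr, end);
                /* ... alloc_init_pte(pmd, addr, next, ...) ... */
        } while (pmd++, addr = next, addr < end);

        return 0;
}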
Cc: <stable@vger.kernel.org>
Signed-off-by: Yifan Zhang <zhangyf@marvell.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Previously, all of our mappings were marked as executable, which isn't
usually required. Now that we have the IOMMU_EXEC flag, use that to
determine whether or not a mapping should be marked as executable.
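Sketch of what that means for the pte encoding; PTE_XN below is an
illustrative placeholder for the descriptor's execute-never bit:

#include <linux/iommu.h>

#define PTE_XN  (1ULL << 54)    /* illustrative execute-never bit */

static u64 apply_exec_prot(u64 pteval, int prot)
{
        /* Only leave the mapping executable when IOMMU_EXEC was requested. */
        if (!(prot & IOMMU_EXEC))
                pteval |= PTE_XN;

        return pteval;
}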
Signed-off-by: Will Deacon <will.deacon@arm.com>
With the introduction of the VA_BITS definition for arm64, make use of
it in the driver, allowing up to 42-bits of VA space when configured
with 64k pages.
Signed-off-by: Will Deacon <will.deacon@arm.com>
IOMMU groups are expected by certain users of the IOMMU API,
e.g. VFIO. Add new devices found by the SMMU driver to an IOMMU
group to satisfy those users.
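Using the IOMMU core's group API, the add_device path can look roughly like
this (callback name illustrative), giving each newly discovered device its
own group:

#include <linux/iommu.h>
#include <linux/device.h>
#include <linux/err.h>

static int smmu_add_device(struct device *dev)
{
        struct iommu_group *group;
        int ret;

        group = iommu_group_alloc();
        if (IS_ERR(group)) {
                dev_err(dev, "failed to allocate IOMMU group\n");
                return PTR_ERR(group);
        }

        ret = iommu_group_add_device(group, dev);
        iommu_group_put(group);         /* the device keeps its own reference */

        return ret;
}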
Acked-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Antonios Motakis <a.motakis@virtualopensystems.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Fix to return -ENODEV instead of 0 when the context interrupt number
does not match in arm_smmu_device_dt_probe().
Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Signed-off-by: Will Deacon <will.deacon@arm.com>
When handling mapping requests, we dereference the SMMU domain before
checking whether it is NULL. This patch fixes the issue by removing the check
altogether, since we don't actually use the leaf_smmu when creating
mappings.
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
When creating IO mappings, we lazily allocate our page tables using the
standard, non-atomic allocator functions. This presents us with a
problem, since our page tables are protected with a spinlock.
This patch reworks the smmu_domain lock to use a mutex instead of a
spinlock. iova_to_phys is then reworked so that it only reads the page
tables, and can run in a lockless fashion, leaving the mutex to guard
against concurrent mapping threads.
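For illustration, a lockless read-only walk in the style this enables,
written against the pre-p4d page-table helpers of that era; it is safe as
long as map/unmap only ever modify the tables under the domain mutex:

#include <linux/mm.h>

static phys_addr_t lockless_iova_to_phys(pgd_t *pgd_base, unsigned long iova)
{
        pgd_t *pgd = pgd_base + pgd_index(iova);
        pud_t *pud;
        pmd_t *pmd;
        pte_t *pte;

        if (pgd_none(*pgd))
                return 0;
        pud = pud_offset(pgd, iova);
        if (pud_none(*pud))
                return 0;
        pmd = pmd_offset(pud, iova);
        if (pmd_none(*pmd))
                return 0;
        pte = pte_offset_kernel(pmd, iova);
        if (pte_none(*pte))
                return 0;

        return ((phys_addr_t)pte_pfn(*pte) << PAGE_SHIFT) |
               (iova & ~PAGE_MASK);
}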
Cc: <stable@vger.kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Merge tag 'iommu-updates-v3.13' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu
Pull IOMMU updates from Joerg Roedel:
"This time the updates contain:
- Tracepoints for certain IOMMU-API functions to make their use
easier to debug
- A tracepoint for IOMMU page faults to make it easier to get them in
user space
- Updates and fixes for the new ARM SMMU driver after the first
hardware showed up
- Various other fixes and cleanups in other IOMMU drivers"
* tag 'iommu-updates-v3.13' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu: (26 commits)
iommu/shmobile: Enable the driver on all ARM platforms
iommu/tegra-smmu: Staticize tegra_smmu_pm_ops
iommu/tegra-gart: Staticize tegra_gart_pm_ops
iommu/vt-d: Use list_for_each_entry_safe() for dmar_domain->devices traversal
iommu/vt-d: Use for_each_drhd_unit() instead of list_for_each_entry()
iommu/vt-d: Fixed interaction of VFIO_IOMMU_MAP_DMA with IOMMU address limits
iommu/arm-smmu: Clear global and context bank fault status registers
iommu/arm-smmu: Print context fault information
iommu/arm-smmu: Check for num_context_irqs > 0 to avoid divide by zero exception
iommu/arm-smmu: Refine check for proper size of mapped region
iommu/arm-smmu: Switch to subsys_initcall for driver registration
iommu/arm-smmu: use relaxed accessors where possible
iommu/arm-smmu: replace devm_request_and_ioremap by devm_ioremap_resource
iommu: Remove stack trace from broken irq remapping warning
iommu: Change iommu driver to call io_page_fault trace event
iommu: Add iommu_error class event to iommu trace
iommu/tegra: gart: cleanup devm_* functions usage
iommu/tegra: Print phys_addr_t using %pa
iommu: No need to pass '0x' when '%pa' is used
iommu: Change iommu driver to call unmap trace event
...
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Grant Likely <grant.likely@linaro.org>
Cc: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
After reset, the fault status registers have unknown values.
This might cause problems when evaluating SMMU_GFSR and/or SMMU_CB_FSR
in handlers for combined interrupts.
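The clearing itself is the usual write-one-to-clear pattern; the register
offsets below are illustrative constants, not authoritative values:

#include <linux/io.h>

#define SMMU_GR0_sGFSR  0x48    /* illustrative offset */
#define SMMU_CB_FSR     0x58    /* illustrative offset */

static void clear_fault_status(void __iomem *gr0_base, void __iomem *cb_base)
{
        u32 fsr;

        /* The fault status bits are write-one-to-clear: read whatever
         * (possibly random-after-reset) value is there and write it back. */
        fsr = readl_relaxed(gr0_base + SMMU_GR0_sGFSR);
        writel_relaxed(fsr, gr0_base + SMMU_GR0_sGFSR);

        fsr = readl_relaxed(cb_base + SMMU_CB_FSR);
        writel_relaxed(fsr, cb_base + SMMU_CB_FSR);
}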
Signed-off-by: Andreas Herrmann <andreas.herrmann@calxeda.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Print context fault information when the fault was not handled by
report_iommu_fault.
Signed-off-by: Andreas Herrmann <andreas.herrmann@calxeda.com>
[will: fixed string formatting]
Signed-off-by: Will Deacon <will.deacon@arm.com>
With the right (or wrong;-) definition of v1 SMMU node in DTB it is
possible to trigger a division by zero in arm_smmu_init_domain_context
(if number of context irqs is 0):
        if (smmu->version == 1) {
                root_cfg->irptndx = atomic_inc_return(&smmu->irptndx);
=>              root_cfg->irptndx %= smmu->num_context_irqs;
        } else {
Avoid this by checking for num_context_irqs > 0 when probing
for SMMU devices.
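A minimal sketch of the probe-time check this describes (function name
hypothetical):

#include <linux/device.h>
#include <linux/errno.h>

static int check_context_irqs(struct device *dev, unsigned int num_context_irqs)
{
        /* Refuse to probe rather than divide by zero later when a context
         * bank tries to pick an interrupt with irptndx %= num_context_irqs. */
        if (!num_context_irqs) {
                dev_err(dev, "no context interrupts specified in DT\n");
                return -ENODEV;
        }

        return 0;
}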
Signed-off-by: Andreas Herrmann <andreas.herrmann@calxeda.com>
[will: changed to dev_err on probe failure path]
Signed-off-by: Will Deacon <will.deacon@arm.com>
There is already a check to print a warning if the size of SMMU
address space (calculated from SMMU register values) is greater than
the size of the mapped memory region (e.g. passed via DT to the
driver).
Adapt this check to also print a warning in case the mapped region is
larger than the SMMU address space.
Such a mismatch could be intentional (to fix wrong register values).
If it's not intentional (e.g. due to wrong DT information) this will
very likely cause a malfunction of the driver as SMMU_CB_BASE is
derived from the size of the mapped region. The warning helps to
identify the root cause in this case.
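Sketch of the symmetric check (names and message wording illustrative):

#include <linux/device.h>

static void check_mapped_size(struct device *dev, unsigned long hw_size,
                              unsigned long mapped_size)
{
        /* Warn about a mismatch in either direction between the address
         * space computed from the ID registers and the region from DT. */
        if (hw_size > mapped_size)
                dev_warn(dev, "SMMU address space (0x%lx) is larger than mapped region (0x%lx)\n",
                         hw_size, mapped_size);
        else if (mapped_size > hw_size)
                dev_warn(dev, "mapped region (0x%lx) is larger than SMMU address space (0x%lx)\n",
                         mapped_size, hw_size);
}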
Signed-off-by: Andreas Herrmann <andreas.herrmann@calxeda.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
This should ensure that arm-smmu is initialized before other drivers
start handling devices that probably need smmu support.
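The standard idiom for this looks as follows; the driver structure is
assumed to be defined elsewhere:

#include <linux/init.h>
#include <linux/platform_device.h>

static struct platform_driver arm_smmu_driver;  /* fully defined in the driver */

static int __init arm_smmu_init(void)
{
        return platform_driver_register(&arm_smmu_driver);
}

/*
 * subsys_initcall runs before device/module initcalls, so the SMMU driver
 * is registered before the drivers of its master devices start probing.
 */
subsys_initcall(arm_smmu_init);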
Signed-off-by: Andreas Herrmann <andreas.herrmann@calxeda.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Apart from fault handling and page table manipulation, we don't care
about memory ordering between SMMU control registers and normal,
cacheable memory, so use the _relaxed I/O accessors wherever possible.
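For example (offset and bit purely illustrative), a control-register update
needs no ordering against DMA buffers, so the relaxed accessors suffice:

#include <linux/io.h>

static void poke_control_register(void __iomem *gr0_base)
{
        /* readl()/writel() add ordering against normal memory that is only
         * needed around fault handling and page-table updates; plain
         * register pokes can use the cheaper _relaxed variants. */
        u32 cr0 = readl_relaxed(gr0_base);      /* sCR0 assumed at offset 0 */

        cr0 &= ~1u;                             /* clear an illustrative bit */
        writel_relaxed(cr0, gr0_base);
}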
Signed-off-by: Will Deacon <will.deacon@arm.com>
Use devm_ioremap_resource instead of devm_request_and_ioremap.
This was partly done using the semantic patch
scripts/coccinelle/api/devm_ioremap_resource.cocci
The error-handling code on the call to platform_get_resource was removed
manually, and the initialization of smmu->size was manually moved lower, to
take advantage of the NULL test on res performed by devm_ioremap_resource.
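The resulting probe sequence looks roughly like this (wrapper name
hypothetical): devm_ioremap_resource() handles a NULL resource itself, and
smmu->size can simply be read from res afterwards:

#include <linux/platform_device.h>
#include <linux/io.h>
#include <linux/err.h>

static int smmu_map_registers(struct platform_device *pdev,
                              void __iomem **base, resource_size_t *size)
{
        struct resource *res = platform_get_resource(pdev, IORESOURCE_MEM, 0);

        /* No explicit NULL check on res: devm_ioremap_resource() does it. */
        *base = devm_ioremap_resource(&pdev->dev, res);
        if (IS_ERR(*base))
                return PTR_ERR(*base);

        *size = resource_size(res);
        return 0;
}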
Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Signed-off-by: Will Deacon <will.deacon@arm.com>
We currently reset and enable the SMMU before the device has finished
being probed, so if we fail later on (for example, because we couldn't
request a global irq successfully) then we will leave the device in an
active state.
This patch delays the reset and enabling of the SMMU hardware until
probing has completed.
Cc: <stable@vger.kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The extra semi-colon on the end breaks the test.
Cc: <stable@vger.kernel.org>
Tested-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
An unsigned char is never equal to -1.
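In other words (sentinel name illustrative): after integer promotion a u8
holds 0..255, so comparing it against -1 is always false and an in-range
sentinel is needed instead:

#include <linux/types.h>

#define INVALID_IRPT    0xff    /* illustrative sentinel that fits in a u8 */

struct ctx_cfg {
        u8 irptndx;
};

static bool ctx_has_irq(const struct ctx_cfg *cfg)
{
        /* cfg->irptndx promotes to an int in 0..255, which never equals -1,
         * so the check must use an explicit in-range sentinel. */
        return cfg->irptndx != INVALID_IRPT;
}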
Cc: <stable@vger.kernel.org>
Tested-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
We only use ASIDs and VMIDs to identify individual stage-1 and stage-2
context-banks respectively, so rather than allocate these separately
from the context-banks, just calculate them based on the context bank
index.
Note that VMIDs are offset by 1, since VMID 0 is reserved for stage-1.
This doesn't cause us any issues with the numberspaces, since the
maximum number of context banks is half the minimum number of VMIDs.
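Sketch of the scheme (structure and macro names illustrative): both IDs fall
straight out of the context bank index, with VMIDs shifted up by one:

#include <linux/types.h>

struct context_cfg {
        u8 cbndx;                               /* context bank index */
};

#define CTX_ASID(cfg)   ((u16)((cfg)->cbndx))           /* stage-1 */
#define CTX_VMID(cfg)   ((u16)((cfg)->cbndx) + 1)       /* stage-2, VMID 0 reserved */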
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Joerg Roedel <joro@8bytes.org>
Although permitted by the architecture, using VMIDs for stage-1
translations causes a complete nightmare for hypervisors, who end up
having to virtualise the VMID space across VMs, which may be using
multiple VMIDs each.
To make life easier for hypervisors (which might just decide not to
support this VMID virtualisation), this patch reworks the stage-1
context-bank TLB invalidation so that:
- Stage-1 mappings are marked non-global in the ptes
- Each Stage-1 context-bank is assigned an ASID in TTBR0
- VMID 0 is reserved for Stage-1 context-banks
This allows the hypervisor to overwrite the Stage-1 VMID in the CBAR
when trapping the write from the guest.
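A sketch of the resulting TLB invalidation split (register offsets
illustrative): stage-1 banks invalidate by ASID within the context bank,
stage-2 banks by VMID in the global register space:

#include <linux/io.h>

#define CB_S1_TLBIASID  0x610   /* illustrative offset */
#define GR0_TLBIVMID    0x64    /* illustrative offset */

static void tlb_inv_context(void __iomem *gr0_base, void __iomem *cb_base,
                            bool stage1, u16 asid, u16 vmid)
{
        if (stage1)
                writel_relaxed(asid, cb_base + CB_S1_TLBIASID);
        else
                writel_relaxed(vmid, gr0_base + GR0_TLBIVMID);
}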
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Joerg Roedel <joro@8bytes.org>
On systems which use a single, combined irq line for the SMMU, context
faults may result in us spuriously reporting global faults with zero
status registers.
This patch fixes up the fsr checks in both the context and global fault
interrupt handlers, so that we only report the fault if the fsr
indicates something did indeed go awry.
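Sketch of the early-out this implies in a context fault handler (offsets and
mask illustrative):

#include <linux/io.h>
#include <linux/interrupt.h>

#define CB_FSR          0x58            /* illustrative offset */
#define FSR_FAULT_MASK  0x800003ffu     /* illustrative "any fault" mask */

static irqreturn_t context_fault_handler(int irq, void *cookie)
{
        void __iomem *cb_base = cookie;
        u32 fsr = readl_relaxed(cb_base + CB_FSR);

        /* On a combined irq line this bank may simply not be the source. */
        if (!(fsr & FSR_FAULT_MASK))
                return IRQ_NONE;

        /* ... report the fault, then write-one-to-clear the FSR ... */
        return IRQ_HANDLED;
}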
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Joerg Roedel <joro@8bytes.org>
The bottom word of the pgd should always be written to the low half of
the TTBR, so we don't need to swap anything for big-endian.
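Equivalently (offsets illustrative), the low and high words can be written
unconditionally from the 64-bit pgd address:

#include <linux/io.h>
#include <linux/kernel.h>

#define CB_TTBR0_LO     0x20    /* illustrative offset */
#define CB_TTBR0_HI     0x24    /* illustrative offset */

static void write_ttbr0(void __iomem *cb_base, u64 pgd_phys)
{
        /* The bottom word always goes to the low half of the TTBR,
         * independent of CPU endianness, so no swapping is needed. */
        writel_relaxed(lower_32_bits(pgd_phys), cb_base + CB_TTBR0_LO);
        writel_relaxed(upper_32_bits(pgd_phys), cb_base + CB_TTBR0_HI);
}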
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Joerg Roedel <joro@8bytes.org>
This patch adds support for SMMUs implementing the ARM System MMU
architecture versions 1 or 2. Both arm and arm64 are supported, although
the v7s descriptor format is not used.
Cc: Rob Herring <robherring2@gmail.com>
Cc: Andreas Herrmann <andreas.herrmann@calxeda.com>
Cc: Olav Haugan <ohaugan@codeaurora.org>
Cc: Joerg Roedel <joro@8bytes.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Acked-by: Andreas Herrmann <andreas.herrmann@calxeda.com>
Signed-off-by: Joerg Roedel <joro@8bytes.org>