Commit Graph

103 Commits

Nicolin Chen 21d3c0402a iommu/tegra-smmu: Allow to group clients in same swgroup
There can be clients using the same swgroup in DT, for example i2c0
and i2c1. The current driver will add them to separate IOMMU groups,
even though it implements the device_group() callback, which is meant
to group devices using different swgroups, such as DC and DCB.

All clients having the same swgroup should also be added to the same
IOMMU group so as to share an asid. Otherwise, the asid register may
get overwritten every time a new device is attached.

Signed-off-by: Nicolin Chen <nicoleotsuka@gmail.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Link: https://lore.kernel.org/r/20200911071643.17212-4-nicoleotsuka@gmail.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-09-24 12:32:32 +02:00
Nicolin Chen 4fba98859b iommu/tegra-smmu: Fix iova->phys translation
An IOVA might not always be 4KB aligned, so the tegra_smmu_iova_to_phys()
function needs to add the lower 12-bit offset of the input IOVA.

Signed-off-by: Nicolin Chen <nicoleotsuka@gmail.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Link: https://lore.kernel.org/r/20200911071643.17212-3-nicoleotsuka@gmail.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-09-24 12:32:31 +02:00
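
For illustration, a minimal sketch of the corrected translation, assuming a
20-bit PFN field; the function name and constants are illustrative, not the
driver's exact code:

    /* The PTE only encodes the page frame number; add back the low
     * 12 bits of the IOVA instead of assuming 4KB alignment. */
    static phys_addr_t sketch_iova_to_phys(u32 pte, dma_addr_t iova)
    {
            phys_addr_t phys = (phys_addr_t)(pte & 0xfffff) << 12;

            return phys + (iova & 0xfff);
    }
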
Nicolin Chen 82fa58e81d iommu/tegra-smmu: Do not use PAGE_SHIFT and PAGE_MASK
PAGE_SHIFT and PAGE_MASK are defined to match the page size used for
CPU virtual addresses, which means PAGE_SHIFT could be a number other
than 12, whereas tegra-smmu uses fixed 4KB IOVA pages and a fixed
[21:12] bit range for PTEs.

So this patch replaces all PAGE_SHIFT/PAGE_MASK references with the
macros defined with SMMU_PTE_SHIFT.

Signed-off-by: Nicolin Chen <nicoleotsuka@gmail.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Link: https://lore.kernel.org/r/20200911071643.17212-2-nicoleotsuka@gmail.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-09-24 12:32:31 +02:00
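
A rough sketch of the kind of driver-local macros this describes; the names
below are assumptions, not necessarily the ones in the patch:

    /* The SMMU's IOVA page size is fixed at 4KB regardless of the CPU's
     * PAGE_SIZE, so derive the helpers from a driver-local shift. */
    #define SMMU_PTE_SHIFT          12
    #define SMMU_PAGE_SIZE          (1UL << SMMU_PTE_SHIFT)
    #define SMMU_OFFSET_IN_PAGE(x)  ((unsigned long)(x) & (SMMU_PAGE_SIZE - 1))
    #define SMMU_PFN_PHYS(x)        ((phys_addr_t)(x) << SMMU_PTE_SHIFT)
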
Nicolin Chen d5c152c340 iommu/tegra-smmu: Fix tlb_mask
The "num_tlb_lines" might not be a power-of-2 value, being 48 on
Tegra210 for example. So the current way of calculating tlb_mask
using the num_tlb_lines is not correct: tlb_mask=0x5f in case of
num_tlb_lines=48, which will trim a setting of 0x30 (48) to 0x10.

Signed-off-by: Nicolin Chen <nicoleotsuka@gmail.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Link: https://lore.kernel.org/r/20200917113155.13438-2-nicoleotsuka@gmail.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-09-18 11:07:06 +02:00
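
A sketch of the arithmetic, using the kernel's fls() helper; the exact
expressions in the driver may differ:

    unsigned int num_tlb_lines = 48;                    /* Tegra210 */

    /* Old mask: (48 << 1) - 1 = 0x5f. Bit 5 (0x20) is clear, so an
     * active-lines value of 0x30 is trimmed to 0x10. */
    unsigned int bad_mask  = (num_tlb_lines << 1) - 1;

    /* Rounding up to the next power of two keeps every bit of 0x30:
     * fls(48) = 6, so the mask becomes (1 << 6) - 1 = 0x3f. */
    unsigned int good_mask = (1 << fls(num_tlb_lines)) - 1;
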
Dmitry Osipenko 404d0b308e iommu/tegra-smmu: Add locking around mapping operations
The mapping operations of the Tegra SMMU driver are subject to a race
condition because the SMMU Address Space isn't allocated and freed
atomically, while it should be. This patch makes the mapping operations
atomic; it fixes an accidentally released Host1x Address Space problem
which happens while running multiple graphics tests in parallel on
Tegra30, i.e. with multiple threads racing with each other in the
Host1x submission and completion code paths, performing IOVA mappings
and unmappings in parallel.

Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Tested-by: Thierry Reding <treding@nvidia.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Link: https://lore.kernel.org/r/20200901203730.27865-1-digetx@gmail.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-09-04 14:27:18 +02:00
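
A simplified sketch of the general approach; the structure layout, lock and
the __as_map_locked() helper are assumptions, and the actual patch is more
involved:

    struct tegra_smmu_as {
            spinlock_t lock;        /* protects page-table setup/teardown */
            /* ... page directory, use counters, etc. ... */
    };

    static int as_map(struct tegra_smmu_as *as, unsigned long iova,
                      phys_addr_t paddr, size_t size, int prot)
    {
            unsigned long flags;
            int err;

            spin_lock_irqsave(&as->lock, flags);
            err = __as_map_locked(as, iova, paddr, size, prot); /* hypothetical */
            spin_unlock_irqrestore(&as->lock, flags);

            return err;
    }
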
Thierry Reding 1ea5440e36 iommu/tegra-smmu: Prune IOMMU group when it is released
In order to share groups between multiple devices we keep track of them
in a per-SMMU list. When an IOMMU group is released, a dangling pointer
to it stays around in that list. Fix this by implementing an IOMMU data
release callback for groups where the dangling pointer can be removed.

Signed-off-by: Thierry Reding <treding@nvidia.com>
Link: https://lore.kernel.org/r/20200806155404.3936074-4-thierry.reding@gmail.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-09-04 11:00:14 +02:00
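
A sketch of the idea using iommu_group_set_iommudata(); the tegra_smmu_group
layout and locking shown here are assumptions:

    static void tegra_smmu_group_release(void *iommu_data)
    {
            struct tegra_smmu_group *group = iommu_data;

            /* Drop the group from the per-SMMU list before it goes away. */
            mutex_lock(&group->smmu->lock);
            list_del(&group->list);
            mutex_unlock(&group->smmu->lock);

            kfree(group);
    }

    /* ...when the group is created... */
    iommu_group_set_iommudata(grp, group, tegra_smmu_group_release);
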
Thierry Reding 5b30fbfa2a iommu/tegra-smmu: Balance IOMMU group reference count
For groups that are shared between multiple devices, care must be taken
to acquire a reference for each device, otherwise the IOMMU core ends up
dropping the last reference too early, which will cause the group to be
released while consumers may still be thinking that they're holding a
reference to it.

Signed-off-by: Thierry Reding <treding@nvidia.com>
Link: https://lore.kernel.org/r/20200806155404.3936074-3-thierry.reding@gmail.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-09-04 11:00:14 +02:00
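
A sketch of the pattern: when a device joins a group that already exists,
take an extra reference rather than handing back the bare pointer (field
names are illustrative):

    /* Reuse an existing shared group, but give the caller its own
     * reference so the IOMMU core's final put doesn't free it early. */
    if (group->group)
            return iommu_group_ref_get(group->group);
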
Thierry Reding 002957020e iommu/tegra-smmu: Set IOMMU group name
Set the name of static IOMMU groups to help with debugging.

Signed-off-by: Thierry Reding <treding@nvidia.com>
Link: https://lore.kernel.org/r/20200806155404.3936074-2-thierry.reding@gmail.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-09-04 11:00:14 +02:00
Joerg Roedel a5616e2460 iommu/tegra: Use dev_iommu_priv_get/set()
Remove the use of dev->archdata.iommu and use the private per-device
pointer provided by IOMMU core code instead.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Link: https://lore.kernel.org/r/20200625130836.1916-7-joro@8bytes.org
2020-06-30 11:59:48 +02:00
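
The replacement pattern, roughly:

    /* Before: dev->archdata.iommu = smmu;
     * After: */
    dev_iommu_priv_set(dev, smmu);

    /* Before: struct tegra_smmu *smmu = dev->archdata.iommu;
     * After: */
    struct tegra_smmu *smmu = dev_iommu_priv_get(dev);
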
Joerg Roedel b287ba7378 iommu/tegra: Convert to probe/release_device() call-backs
Convert the Tegra IOMMU drivers to use the probe_device() and
release_device() call-backs of iommu_ops, so that the iommu core code
does the group and sysfs setup.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
Link: https://lore.kernel.org/r/20200429133712.31431-27-joro@8bytes.org
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-05-05 14:36:14 +02:00
Joerg Roedel 9b3a713fee Merge branches 'iommu/fixes', 'arm/qcom', 'arm/renesas', 'arm/rockchip', 'arm/mediatek', 'arm/tegra', 'arm/smmu', 'x86/amd', 'x86/vt-d', 'virtio' and 'core' into next 2019-11-12 17:11:25 +01:00
Thierry Reding 96d3ab802e iommu/tegra-smmu: Fix page tables in > 4 GiB memory
Page tables that reside in physical memory beyond the 4 GiB boundary are
currently not working properly. The reason is that when the physical
address for page directory entries is read, it gets truncated at 32 bits
and can cause crashes when passing that address to the DMA API.

Fix this by first casting the PDE value to a dma_addr_t and then using
the page frame number mask for the SMMU instance to mask out the invalid
bits, which are typically used for mapping attributes, etc.

Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-10-18 11:46:11 +02:00
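
A sketch of the fix described above; the helper name and the use of pfn_mask
follow the commit text, but treat the details as illustrative:

    /* Widen the 32-bit PDE value before shifting so page tables located
     * above 4 GiB are not truncated, and mask off attribute bits. */
    static dma_addr_t smmu_pde_to_dma(struct tegra_smmu *smmu, u32 pde)
    {
            return (dma_addr_t)(pde & smmu->pfn_mask) << 12;
    }
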
Navneet Kumar e31e592954 iommu/tegra-smmu: Fix client enablement order
Enable clients' translation only after setting up the swgroups.

Signed-off-by: Navneet Kumar <navneetk@nvidia.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-10-18 11:46:11 +02:00
Navneet Kumar 446152d5b6 iommu/tegra-smmu: Use non-secure register for flushing
Use PTB_ASID instead of SMMU_CONFIG to flush the SMMU.
PTB_ASID can be accessed from non-secure mode, while SMMU_CONFIG cannot.
Using SMMU_CONFIG could pose a problem when the kernel doesn't have
secure-mode access enabled from boot.

Signed-off-by: Navneet Kumar <navneetk@nvidia.com>
Reviewed-by: Dmitry Osipenko <digetx@gmail.com>
Tested-by: Dmitry Osipenko <digetx@gmail.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-10-18 11:46:11 +02:00
Tom Murphy 781ca2de89 iommu: Add gfp parameter to iommu_ops::map
Add a gfp_t parameter to the iommu_ops::map function.
Remove the needless locking in the AMD iommu driver.

The iommu_ops::map function (or the iommu_map function which calls it)
was always supposed to be sleepable (according to Joerg's comment in
this thread: https://lore.kernel.org/patchwork/patch/977520/ ) and so
should probably have had a "might_sleep()" since it was written. However
currently the dma-iommu api can call iommu_map in an atomic context,
which it shouldn't do. This doesn't cause any problems because any iommu
driver which uses the dma-iommu api uses GFP_ATOMIC in its
iommu_ops::map function. But doing this wastes the memory allocator's
atomic pools.

Signed-off-by: Tom Murphy <murphyt7@tcd.ie>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-10-15 11:31:04 +02:00
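
After this change the callback prototype looks roughly like this, letting
atomic callers pass GFP_ATOMIC and sleepable callers GFP_KERNEL:

    struct iommu_ops {
            /* ... */
            int (*map)(struct iommu_domain *domain, unsigned long iova,
                       phys_addr_t paddr, size_t size, int prot, gfp_t gfp);
            /* ... */
    };
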
Will Deacon 56f8af5e9d iommu: Pass struct iommu_iotlb_gather to ->unmap() and ->iotlb_sync()
To allow IOMMU drivers to batch up TLB flushing operations and postpone
them until ->iotlb_sync() is called, extend the prototypes for the
->unmap() and ->iotlb_sync() IOMMU ops callbacks to take a pointer to
the current iommu_iotlb_gather structure.

All affected IOMMU drivers are updated, but there should be no
functional change since the extra parameter is ignored for now.

Signed-off-by: Will Deacon <will@kernel.org>
2019-07-29 17:22:52 +01:00
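
The extended prototypes look roughly like this:

    struct iommu_ops {
            /* ... */
            size_t (*unmap)(struct iommu_domain *domain, unsigned long iova,
                            size_t size, struct iommu_iotlb_gather *gather);
            void (*iotlb_sync)(struct iommu_domain *domain,
                               struct iommu_iotlb_gather *gather);
            /* ... */
    };
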
Thomas Gleixner d2912cb15b treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 500
Based on 2 normalized pattern(s):

  this program is free software you can redistribute it and or modify
  it under the terms of the gnu general public license version 2 as
  published by the free software foundation

  this program is free software you can redistribute it and or modify
  it under the terms of the gnu general public license version 2 as
  published by the free software foundation #

extracted by the scancode license scanner the SPDX license identifier

  GPL-2.0-only

has been chosen to replace the boilerplate/reference in 4122 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Enrico Weigelt <info@metux.net>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190604081206.933168790@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-06-19 17:09:55 +02:00
Dmitry Osipenko 43d957b133 iommu/tegra-smmu: Respect IOMMU API read-write protections
Set the PTE read/write attributes according to the protections requested
through the IOMMU API.

Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-04-11 14:51:37 +02:00
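
A sketch of mapping the IOMMU API protection flags to PTE attribute bits; the
SMMU_PTE_* names are assumptions:

    static u32 pte_attrs_for_prot(int prot)
    {
            u32 attrs = SMMU_PTE_NONSECURE;

            if (prot & IOMMU_READ)
                    attrs |= SMMU_PTE_READABLE;
            if (prot & IOMMU_WRITE)
                    attrs |= SMMU_PTE_WRITABLE;

            return attrs;
    }
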
Dmitry Osipenko 4f97031ff8 iommu/tegra-smmu: Properly release domain resources
Release all memory allocations associated with a released domain and emit
a warning if the domain is in use at the time of destruction.

Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-04-11 14:51:37 +02:00
Dmitry Osipenko 43a0541e31 iommu/tegra-smmu: Fix invalid ASID bits on Tegra30/114
Both Tegra30 and Tegra114 have 4 ASIDs, and the corresponding bitfield of
the TLB_FLUSH register differs from later Tegra generations that have 128
ASIDs.

As a result, the PTEs are now flushed correctly from the TLB, and this fixes
problems with graphics (randomly failing tests) on Tegra30.

Cc: stable <stable@vger.kernel.org>
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-04-11 14:51:37 +02:00
Dmitry Osipenko 568ece5bab memory: tegra: Do not try to probe SMMU on Tegra20
Tegra20 doesn't have an SMMU. Move the check for SMMU presence out of
the SMMU driver and into the Memory Controller driver. This change makes the
code consistent with regard to how GART/SMMU presence checking is performed.

Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-01-16 13:54:13 +01:00
Joerg Roedel 03ebe48e23 Merge branches 'iommu/fixes', 'arm/renesas', 'arm/mediatek', 'arm/tegra', 'arm/omap', 'arm/smmu', 'x86/vt-d', 'x86/amd' and 'core' into next 2018-12-20 10:05:20 +01:00
Joerg Roedel db5d6a7004 iommu/tegra: Use helper functions to access dev->iommu_fwspec
Use the new helpers dev_iommu_fwspec_get()/set() to access
the dev->iommu_fwspec pointer. This makes it easier to move
that pointer later into another struct.

Cc: Thierry Reding <thierry.reding@gmail.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2018-12-17 10:38:32 +01:00
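
The helper-based access looks roughly like:

    /* Instead of dereferencing dev->iommu_fwspec directly: */
    struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
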
Yangtao Li 062e52a5af iommu/tegra: Change to use DEFINE_SHOW_ATTRIBUTE macro
Use DEFINE_SHOW_ATTRIBUTE macro to simplify the code.

Signed-off-by: Yangtao Li <tiny.windzz@gmail.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2018-11-22 17:10:59 +01:00
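
The macro generates the open function and file_operations from a matching
*_show() routine; a sketch for one of the driver's debugfs files (the
"swgroups" name follows the debugfs commit further down this log):

    static int swgroups_show(struct seq_file *s, void *data)
    {
            /* ... print which ASID each swgroup uses ... */
            return 0;
    }
    DEFINE_SHOW_ATTRIBUTE(swgroups); /* provides swgroups_open and swgroups_fops */

    /* debugfs_create_file("swgroups", 0444, root, smmu, &swgroups_fops); */
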
Christoph Hellwig d88e61faad iommu: Remove the ->map_sg indirection
All iommu drivers use the default_iommu_map_sg implementation, and there
is no good reason to ever override it.  Just expose it as iommu_map_sg
directly and remove the indirection, especially in our post-Spectre world
where indirect calls are horribly expensive.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2018-08-08 11:06:20 +02:00
Wei Yongjun 83476bfaf6 iommu/tegra-smmu: Fix return value check in tegra_smmu_group_get()
In case of error, the function iommu_group_alloc() returns ERR_PTR() and
never returns NULL. The NULL test in the return value check should be
replaced with IS_ERR().

Fixes: 7f4c9176f7 ("iommu/tegra: Allow devices to be grouped")
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Acked-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
2017-12-20 18:32:08 +01:00
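
The corrected check, roughly:

    struct iommu_group *grp;

    grp = iommu_group_alloc();
    if (IS_ERR(grp))        /* not: if (!grp) -- NULL is never returned */
            return NULL;
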
Thierry Reding 7f4c9176f7 iommu/tegra: Allow devices to be grouped
Implement the ->device_group() and ->of_xlate() callbacks which are used
in order to group devices. Each group can then share a single domain.

This is implemented primarily in order to achieve the same semantics on
Tegra210 and earlier as on Tegra186 where the Tegra SMMU was replaced by
an ARM SMMU. Users of the IOMMU API can now use the same code to share
domains between devices, whereas previously they used to attach each
device individually.

Acked-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
2017-12-15 10:12:32 +01:00
Joerg Roedel 96302d89a0 arm/tegra: Call bus_set_iommu() after iommu_device_register()
The bus_set_iommu() function will call the add_device() call-back,
which needs the IOMMU to be registered.

Reported-by: Jon Hunter <jonathanh@nvidia.com>
Fixes: 0b480e4470 ('iommu/tegra: Add support for struct iommu_device')
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2017-08-30 17:28:32 +02:00
Joerg Roedel 0b480e4470 iommu/tegra: Add support for struct iommu_device
Add a struct iommu_device to each tegra-smmu and register it
with the iommu-core. Also link devices added to the driver
to their respective hardware iommus.

Acked-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2017-08-17 16:31:34 +02:00
Robin Murphy d92e1f8498 iommu/tegra-smmu: Add iommu_group support
As the last step to making groups mandatory, clean up the remaining
drivers by adding basic support. Whilst it may not perfectly reflect
the isolation capabilities of the hardware (tegra_smmu_swgroup sounds
suspiciously like something that might warrant representing at the
iommu_group level), using generic_device_group() should at least
maintain existing behaviour with respect to the API.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Tested-by: Mikko Perttunen <mperttunen@nvidia.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2017-08-10 00:03:50 +02:00
Joerg Roedel 461a6946b1 iommu: Remove pci.h include from trace/events/iommu.h
The include file does not need any PCI specifics, so remove
that include. Also fix the places that relied on it.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2017-04-29 00:20:49 +02:00
Thierry Reding 11cec15bf3 iommu/tegra-smmu: Parameterize number of TLB lines
The number of TLB lines was increased from 16 on Tegra30 to 32 on
Tegra114 and later. Parameterize the value so that the initial default
can be set accordingly.

On Tegra30, initializing the value to 32 would effectively disable the
TLB and hence cause massive latencies for memory accesses translated
through the SMMU. This is especially noticeable for isochronous clients
such as display, whose FIFOs would continuously underrun.

Fixes: 8918465163 ("memory: Add NVIDIA Tegra memory controller support")
Signed-off-by: Thierry Reding <treding@nvidia.com>
2015-08-13 17:05:28 +02:00
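
The per-SoC parameter might look like this; the exact field placement is an
assumption:

    struct tegra_smmu_soc {
            /* ... */
            unsigned int num_tlb_lines;     /* 16 on Tegra30, 32 on Tegra114+ */
    };
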
Russell King 4080e99b83 iommu/tegra-smmu: Factor out tegra_smmu_set_pde()
This code is used both when creating a new page directory entry and when
tearing it down, with only the PDE value changing between both cases.

Factor the code out so that it can be reused.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
[treding@nvidia.com: make commit message more accurate]
Signed-off-by: Thierry Reding <treding@nvidia.com>
2015-08-13 16:06:44 +02:00
Russell King 7ffc6f066e iommu/tegra-smmu: Extract tegra_smmu_pte_get_use()
Extract the use count reference accounting into a separate function and
separate it from allocating the PTE.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
[treding@nvidia.com: extract and write commit message]
Signed-off-by: Thierry Reding <treding@nvidia.com>
2015-08-13 16:06:43 +02:00
Russell King 707917cbc6 iommu/tegra-smmu: Use __GFP_ZERO to allocate zeroed pages
Rather than explicitly zeroing pages allocated via alloc_page(), add
__GFP_ZERO to the gfp mask to ask the allocator for zeroed pages.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Thierry Reding <treding@nvidia.com>
2015-08-13 16:06:43 +02:00
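
The allocation pattern, roughly:

    struct page *page;

    /* Ask the allocator for an already-zeroed page instead of clearing
     * it by hand after alloc_page(GFP_KERNEL). */
    page = alloc_page(GFP_KERNEL | __GFP_ZERO);
    if (!page)
            return NULL;
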
Russell King 05a65f06f6 iommu/tegra-smmu: Remove PageReserved manipulation
Remove the unnecessary manipulation of the PageReserved flags in the
Tegra SMMU driver.  None of this is required as the page(s) remain
private to the SMMU driver.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Thierry Reding <treding@nvidia.com>
2015-08-13 16:06:42 +02:00
Russell King e3c971960f iommu/tegra-smmu: Convert to use DMA API
Use the DMA API instead of calling architecture internal functions in
the Tegra SMMU driver.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Thierry Reding <treding@nvidia.com>
2015-08-13 16:06:42 +02:00
Russell King d62c7a886c iommu/tegra-smmu: smmu_flush_ptc() wants device addresses
Pass smmu_flush_ptc() the device address rather than struct page
pointer.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Thierry Reding <treding@nvidia.com>
2015-08-13 16:06:41 +02:00
Russell King b8fe03827b iommu/tegra-smmu: Split smmu_flush_ptc()
smmu_flush_ptc() is used in two modes: one is to flush an individual
entry, the other is to flush all entries.  We know at the call site
which we require.  Split the function into smmu_flush_ptc_all() and
smmu_flush_ptc().

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Thierry Reding <treding@nvidia.com>
2015-08-13 16:06:41 +02:00
Russell King 4b3c7d1076 iommu/tegra-smmu: Move flush_dcache to tegra-smmu.c
Drivers should not be using __cpuc_* functions nor outer_cache_flush()
directly.  This change partly cleans up tegra-smmu.c.

The only difference between cache handling of the tegra variants is
Denver, which omits the call to outer_cache_flush().  This is due to
Denver being an ARM64 CPU, and the ARM64 architecture does not provide
this function.  (This, in itself, is a good reason why these should not
be used.)

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
[treding@nvidia.com: fix build failure on 64-bit ARM]
Signed-off-by: Thierry Reding <treding@nvidia.com>
2015-08-13 16:06:40 +02:00
Russell King 32924c76b0 iommu/tegra-smmu: Use kcalloc() to allocate counter array
Use kcalloc() to allocate the use-counter array for the page directory
entries/page tables.  Using kcalloc() allows us to be provided with
zero-initialised memory from the allocators, rather than initialising
it ourselves.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Thierry Reding <treding@nvidia.com>
2015-08-13 16:06:40 +02:00
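
The allocation pattern, roughly (the element count of 1024 page directory
entries is an assumption):

    u32 *count;

    /* kcalloc() overflow-checks n * size and returns zeroed memory, so
     * no explicit memset() or init loop is needed. */
    count = kcalloc(1024, sizeof(*count), GFP_KERNEL);
    if (!count)
            return NULL;
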
Russell King 853520fa96 iommu/tegra-smmu: Store struct page pointer for page tables
Store the struct page pointer for the second level page tables, rather
than working back from the page directory entry.  This is necessary as
we want to eliminate the use of physical addresses used with
arch-private functions, switching instead to use the streaming DMA API.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Thierry Reding <treding@nvidia.com>
2015-08-13 16:06:39 +02:00
Russell King 0b42c7c113 iommu/tegra-smmu: Fix page table lookup in unmap/iova_to_phys methods
Fix the page table lookup in the unmap and iova_to_phys methods.
Neither of these methods should allocate a page table; a missing page
table should be treated the same as no mapping present.

More importantly, using as_get_pte() for an IOVA corresponding with a
non-present page table entry increments the use-count for the page
table, on the assumption that the caller of as_get_pte() is going to
setup a mapping.  This is an incorrect assumption.

Fix both of these bugs by providing a separate helper which only looks
up the page table, but never allocates it.  This is akin to pte_offset()
for CPU page tables.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Thierry Reding <treding@nvidia.com>
2015-08-13 16:06:39 +02:00
Russell King 34d35f8cbe iommu/tegra-smmu: Add iova_pd_index() and iova_pt_index() helpers
Add a pair of helpers to get the page directory and page table indexes.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Thierry Reding <treding@nvidia.com>
2015-08-13 16:06:38 +02:00
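
A sketch of the two helpers, assuming the usual 4KB pages and 4MB
page-directory granules (1024 entries each):

    static unsigned int iova_pd_index(unsigned long iova)
    {
            return (iova >> 22) & (1024 - 1);   /* which page table */
    }

    static unsigned int iova_pt_index(unsigned long iova)
    {
            return (iova >> 12) & (1024 - 1);   /* which PTE within it */
    }
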
Russell King 8482ee5ea1 iommu/tegra-smmu: Factor out common PTE setting
Factor out the common PTE setting code into a separate function.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Thierry Reding <treding@nvidia.com>
2015-08-13 16:06:38 +02:00
Russell King b98e34f0c6 iommu/tegra-smmu: Fix unmap() method
The Tegra SMMU unmap path has several problems:
1. as_pte_put() can perform a write-after-free
2. tegra_smmu_unmap() can perform cache maintenance on a page we have
   just freed.
3. when a page table is unmapped, there is no CPU cache maintenance of
   the write clearing the page directory entry, nor is there any
   maintenance of the IOMMU to ensure that it sees the page table has
   gone.

Fix this by getting rid of as_pte_put(), and instead coding the PTE
unmap separately from the PDE unmap, placing the PDE unmap after the
PTE unmap has been completed.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Thierry Reding <treding@nvidia.com>
2015-08-13 16:06:37 +02:00
Russell King 9113785c3e iommu/tegra-smmu: Fix iova_to_phys() method
iova_to_phys() has several problems:
(a) iova_to_phys() is supposed to return 0 if there is no entry present
    for the iova.
(b) if as_get_pte() fails, we oops the kernel by dereferencing a NULL
    pointer.  Really, we should not even be trying to allocate a page
    table at all, but should only be returning the presence of the 2nd
    level page table.  This will be fixed in a subsequent patch.

Treat both of these conditions as "no mapping" conditions.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Thierry Reding <treding@nvidia.com>
2015-08-13 16:06:36 +02:00
Thierry Reding d1313e7896 iommu/tegra-smmu: Add debugfs support
Provide clients and swgroups files in debugfs. These files show for
which clients IOMMU translation is enabled and which ASID is associated
with each SWGROUP.

Cc: Hiroshi Doyu <hdoyu@nvidia.com>
Acked-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Thierry Reding <treding@nvidia.com>
2015-05-04 12:54:23 +02:00
Joerg Roedel 7f65ef01e1 Merge branches 'iommu/fixes', 'x86/vt-d', 'x86/amd', 'arm/smmu', 'arm/tegra' and 'core' into next
Conflicts:
	drivers/iommu/amd_iommu.c
	drivers/iommu/tegra-gart.c
	drivers/iommu/tegra-smmu.c
2015-04-02 13:33:19 +02:00
Thierry Reding 804cb54cbb iommu/tegra: smmu: Compute PFN mask at runtime
The SMMU on Tegra30 and Tegra114 supports addressing up to 4 GiB of
physical memory. On Tegra124 the addressable physical memory was
extended to 16 GiB. The page frame number stored in PTEs therefore
requires 20 or 22 bits, depending on SoC generation.

In order to cope with this, compute the proper value at runtime.

Reported-by: Joseph Lo <josephl@nvidia.com>
Cc: Hiroshi Doyu <hdoyu@nvidia.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-03-31 16:35:35 +02:00
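
The arithmetic: 4 GiB of addressable memory means 32 address bits, i.e.
32 - 12 = 20 PFN bits (mask 0xfffff); 16 GiB means 34 bits, i.e. 22 PFN bits
(mask 0x3fffff). A one-line sketch, with the per-SoC field name as an
assumption:

    smmu->pfn_mask = (1UL << (soc->num_address_bits - 12)) - 1;
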