IOMMU updates for 5.11
- IOVA allocation optimisations and removal of unused code
- Introduction of DOMAIN_ATTR_IO_PGTABLE_CFG for parameterising the
page-table of an IOMMU domain
- Support for changing the default domain type in sysfs
- Optimisation to the way in which identity-mapped regions are created
- Driver updates:
* Arm SMMU updates, including continued work on Shared Virtual Memory
* Tegra SMMU updates, including support for PCI devices
* Intel VT-D updates, including conversion to the IOMMU-DMA API
- Cleanup, kerneldoc and minor refactoring
-----BEGIN PGP SIGNATURE-----
iQFEBAABCgAuFiEEPxTL6PPUbjXGY88ct6xw3ITBYzQFAl/XWy8QHHdpbGxAa2Vy
bmVsLm9yZwAKCRC3rHDchMFjNPejB/46QsXATkWt7hbDPIxlUvzUG8VP/FBNJ6A3
/4Z+4KBXR3zhvZJOEqTarnm6Uc22tWkYpNS3QAOuRW0EfVeD8H+og4SOA2iri5tR
x3GZUCng93APWpHdDtJP7kP/xuU47JsBblY/Ip9aJKYoXi9c9svtssAqKr008wxr
knv/xv/awQ0O7CNc3gAoz7mUagQxG/no+HMXMT3Fz9KWRzzvTi6s+7ZDm2faI0hO
GEJygsKbXxe1qbfeGqKTP/67EJVqjTGsLCF2zMogbnnD7DxadJ2hP0oNg5tvldT/
oDj9YWG6oLMfIVCwDVQXuWNfKxd7RGORMbYwKNAaRSvmkli6625h
=KFOO
-----END PGP SIGNATURE-----
Merge tag 'iommu-updates-v5.11' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull IOMMU updates from Will Deacon:
"There's a good mixture of improvements to the core code and driver
changes across the board.
One thing worth pointing out is that this includes a quirk to work
around behaviour in the i915 driver (see 65f746e828 ("iommu: Add
quirk for Intel graphic devices in map_sg")), which otherwise
interacts badly with the conversion of the Intel IOMMU driver over
to the DMA-IOMMU API but has been fixed properly in the DRM tree.
We'll revert the quirk later this cycle once we've confirmed that
things don't fall apart without it.
Summary:
- IOVA allocation optimisations and removal of unused code
- Introduction of DOMAIN_ATTR_IO_PGTABLE_CFG for parameterising the
page-table of an IOMMU domain
- Support for changing the default domain type in sysfs
- Optimisation to the way in which identity-mapped regions are
created
- Driver updates:
* Arm SMMU updates, including continued work on Shared Virtual
Memory
* Tegra SMMU updates, including support for PCI devices
* Intel VT-D updates, including conversion to the IOMMU-DMA API
- Cleanup, kerneldoc and minor refactoring"
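As a rough illustration of the new DOMAIN_ATTR_IO_PGTABLE_CFG attribute
mentioned in the summary (a minimal sketch, not code from this series; the
helper name and the particular quirk chosen are assumptions):

#include <linux/iommu.h>
#include <linux/io-pgtable.h>

/* Request io-pgtable quirks for a domain before any device is attached */
static int example_set_pgtable_cfg(struct iommu_domain *domain)
{
	struct io_pgtable_domain_attr pgtbl_cfg = {
		.quirks = IO_PGTABLE_QUIRK_ARM_TTBR1,	/* example quirk */
	};

	return iommu_domain_set_attr(domain, DOMAIN_ATTR_IO_PGTABLE_CFG,
				     &pgtbl_cfg);
}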
* tag 'iommu-updates-v5.11' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (50 commits)
iommu/amd: Add sanity check for interrupt remapping table length macros
dma-iommu: remove __iommu_dma_mmap
iommu/io-pgtable: Remove tlb_flush_leaf
iommu: Stop exporting free_iova_mem()
iommu: Stop exporting alloc_iova_mem()
iommu: Delete split_and_remove_iova()
iommu/io-pgtable-arm: Remove unused 'level' parameter from iopte_type() macro
iommu: Defer the early return in arm_(v7s/lpae)_map
iommu: Improve the performance for direct_mapping
iommu: avoid taking iova_rbtree_lock twice
iommu/vt-d: Avoid GFP_ATOMIC where it is not needed
iommu/vt-d: Remove set but not used variable
iommu: return error code when it can't get group
iommu: Fix htmldocs warnings in sysfs-kernel-iommu_groups
iommu: arm-smmu-impl: Add a space before open parenthesis
iommu: arm-smmu-impl: Use table to list QCOM implementations
iommu/arm-smmu: Move non-strict mode to use io_pgtable_domain_attr
iommu/arm-smmu: Add support for pagetable config domain attribute
iommu: Document usage of "/sys/kernel/iommu_groups/<grp_id>/type" file
iommu: Take lock before reading iommu group default domain type
...
commit 19778dd504
@@ -33,3 +33,33 @@ Description: In case an RMRR is used only by graphics or USB devices
it is now exposed as "direct-relaxable" instead of "direct".
In device assignment use case, for instance, those RMRR
are considered to be relaxable and safe.

What: /sys/kernel/iommu_groups/<grp_id>/type
Date: November 2020
KernelVersion: v5.11
Contact: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
Description: /sys/kernel/iommu_groups/<grp_id>/type shows the type of default
domain in use by iommu for this group. See include/linux/iommu.h
for possible read values. A privileged user could request kernel to
change the group type by writing to this file. Valid write values:

======== ======================================================
DMA      All the DMA transactions from the device in this group
         are translated by the iommu.
identity All the DMA transactions from the device in this group
         are not translated by the iommu.
auto     Change to the type the device was booted with.
======== ======================================================

The default domain type of a group may be modified only when

- The group has only one device.
- The device in the group is not bound to any device driver.
So, the users must unbind the appropriate driver before
changing the default domain type.

Unbinding a device driver will take away the driver's control
over the device and if done on devices that host root file
system could lead to catastrophic effects (the users might
need to reboot the machine to get it to normal state). So, it's
expected that the users understand what they're doing.
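As a rough userspace illustration of the procedure documented above (this
sketch is not part of the series; the PCI address, driver name and group
number are made-up examples):

#include <stdio.h>

int main(void)
{
	FILE *f;

	/* 1. Unbind the device from its driver (example device and driver) */
	f = fopen("/sys/bus/pci/drivers/e1000e/unbind", "w");
	if (!f)
		return 1;
	fprintf(f, "0000:00:1f.6");
	fclose(f);

	/* 2. Change the group's default domain type (example group 7);
	 *    valid values are "DMA", "identity" and "auto".
	 */
	f = fopen("/sys/kernel/iommu_groups/7/type", "w");
	if (!f)
		return 1;
	fprintf(f, "identity");
	fclose(f);

	return 0;
}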
@@ -1883,11 +1883,6 @@
Note that using this option lowers the security
provided by tboot because it makes the system
vulnerable to DMA attacks.
nobounce [Default off]
Disable bounce buffer for untrusted devices such as
the Thunderbolt devices. This will treat the untrusted
devices as the trusted ones, hence might expose security
risks of DMA attacks.

intel_idle.max_cstate= [KNL,HW,ACPI,X86]
0 disables intel_idle and fall back on acpi_idle.
@@ -28,8 +28,6 @@ properties:
- enum:
- qcom,msm8996-smmu-v2
- qcom,msm8998-smmu-v2
- qcom,sc7180-smmu-v2
- qcom,sdm845-smmu-v2
- const: qcom,smmu-v2

- description: Qcom SoCs implementing "arm,mmu-500"

@@ -40,6 +38,13 @@ properties:
- qcom,sm8150-smmu-500
- qcom,sm8250-smmu-500
- const: arm,mmu-500
- description: Qcom Adreno GPUs implementing "arm,smmu-v2"
items:
- enum:
- qcom,sc7180-smmu-v2
- qcom,sdm845-smmu-v2
- const: qcom,adreno-smmu
- const: qcom,smmu-v2
- description: Marvell SoCs implementing "arm,mmu-500"
items:
- const: marvell,ap806-smmu-500
@@ -139,7 +139,6 @@ static void msm_iommu_tlb_add_page(struct iommu_iotlb_gather *gather,
static const struct iommu_flush_ops null_tlb_ops = {
.tlb_flush_all = msm_iommu_tlb_flush_all,
.tlb_flush_walk = msm_iommu_tlb_flush_walk,
.tlb_flush_leaf = msm_iommu_tlb_flush_walk,
.tlb_add_page = msm_iommu_tlb_add_page,
};
@@ -347,16 +347,9 @@ static void mmu_tlb_flush_walk(unsigned long iova, size_t size, size_t granule,
mmu_tlb_sync_context(cookie);
}

static void mmu_tlb_flush_leaf(unsigned long iova, size_t size, size_t granule,
void *cookie)
{
mmu_tlb_sync_context(cookie);
}

static const struct iommu_flush_ops mmu_tlb_ops = {
.tlb_flush_all = mmu_tlb_inv_context_s1,
.tlb_flush_walk = mmu_tlb_flush_walk,
.tlb_flush_leaf = mmu_tlb_flush_leaf,
};

int panfrost_mmu_pgtable_alloc(struct panfrost_file_priv *priv)
@@ -103,6 +103,11 @@ config IOMMU_DMA
select IRQ_MSI_IOMMU
select NEED_SG_DMA_LENGTH

# Shared Virtual Addressing library
config IOMMU_SVA_LIB
bool
select IOASID

config FSL_PAMU
bool "Freescale IOMMU support"
depends on PCI

@@ -311,6 +316,8 @@ config ARM_SMMU_V3
config ARM_SMMU_V3_SVA
bool "Shared Virtual Addressing support for the ARM SMMUv3"
depends on ARM_SMMU_V3
select IOMMU_SVA_LIB
select MMU_NOTIFIER
help
Support for sharing process address spaces with devices using the
SMMUv3.

@@ -27,3 +27,4 @@ obj-$(CONFIG_FSL_PAMU) += fsl_pamu.o fsl_pamu_domain.o
obj-$(CONFIG_S390_IOMMU) += s390-iommu.o
obj-$(CONFIG_HYPERV_IOMMU) += hyperv-iommu.o
obj-$(CONFIG_VIRTIO_IOMMU) += virtio-iommu.o
obj-$(CONFIG_IOMMU_SVA_LIB) += iommu-sva-lib.o
@@ -255,11 +255,19 @@
/* Bit value definition for dte irq remapping fields*/
#define DTE_IRQ_PHYS_ADDR_MASK (((1ULL << 45)-1) << 6)
#define DTE_IRQ_REMAP_INTCTL_MASK (0x3ULL << 60)
#define DTE_IRQ_TABLE_LEN_MASK (0xfULL << 1)
#define DTE_IRQ_REMAP_INTCTL (2ULL << 60)
#define DTE_IRQ_TABLE_LEN (9ULL << 1)
#define DTE_IRQ_REMAP_ENABLE 1ULL

/*
* AMD IOMMU hardware only support 512 IRTEs despite
* the architectural limitation of 2048 entries.
*/
#define DTE_INTTAB_ALIGNMENT 128
#define DTE_INTTABLEN_VALUE 9ULL
#define DTE_INTTABLEN (DTE_INTTABLEN_VALUE << 1)
#define DTE_INTTABLEN_MASK (0xfULL << 1)
#define MAX_IRQS_PER_TABLE (1 << DTE_INTTABLEN_VALUE)

#define PAGE_MODE_NONE 0x00
#define PAGE_MODE_1_LEVEL 0x01
#define PAGE_MODE_2_LEVEL 0x02

@@ -409,13 +417,6 @@ extern bool amd_iommu_np_cache;
/* Only true if all IOMMUs support device IOTLBs */
extern bool amd_iommu_iotlb_sup;

/*
* AMD IOMMU hardware only support 512 IRTEs despite
* the architectural limitation of 2048 entries.
*/
#define MAX_IRQS_PER_TABLE 512
#define IRQ_TABLE_ALIGNMENT 128

struct irq_remap_table {
raw_spinlock_t lock;
unsigned min_index;
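A quick illustrative check of the new macros above (not part of the patch):
the DTE IntTabLen field encodes the remapping-table size as a power of two,
so the encoding 9 corresponds to the 512-entry hardware limit that
MAX_IRQS_PER_TABLE now derives from it.

#include <linux/build_bug.h>

/* Illustrative only: encoding 9 => 1 << 9 == 512 IRTEs */
static_assert((1 << DTE_INTTABLEN_VALUE) == 512);
static_assert(MAX_IRQS_PER_TABLE == 512);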
@@ -989,10 +989,10 @@ static bool copy_device_table(void)

irq_v = old_devtb[devid].data[2] & DTE_IRQ_REMAP_ENABLE;
int_ctl = old_devtb[devid].data[2] & DTE_IRQ_REMAP_INTCTL_MASK;
int_tab_len = old_devtb[devid].data[2] & DTE_IRQ_TABLE_LEN_MASK;
int_tab_len = old_devtb[devid].data[2] & DTE_INTTABLEN_MASK;
if (irq_v && (int_ctl || int_tab_len)) {
if ((int_ctl != DTE_IRQ_REMAP_INTCTL) ||
(int_tab_len != DTE_IRQ_TABLE_LEN)) {
(int_tab_len != DTE_INTTABLEN)) {
pr_err("Wrong old irq remapping flag: %#x\n", devid);
return false;
}

@@ -2757,7 +2757,7 @@ static int __init early_amd_iommu_init(void)
remap_cache_sz = MAX_IRQS_PER_TABLE * (sizeof(u64) * 2);
amd_iommu_irq_cache = kmem_cache_create("irq_remap_cache",
remap_cache_sz,
IRQ_TABLE_ALIGNMENT,
DTE_INTTAB_ALIGNMENT,
0, NULL);
if (!amd_iommu_irq_cache)
goto out;

@@ -3190,7 +3190,7 @@ static void set_dte_irq_entry(u16 devid, struct irq_remap_table *table)
dte &= ~DTE_IRQ_PHYS_ADDR_MASK;
dte |= iommu_virt_to_phys(table->table);
dte |= DTE_IRQ_REMAP_INTCTL;
dte |= DTE_IRQ_TABLE_LEN;
dte |= DTE_INTTABLEN;
dte |= DTE_IRQ_REMAP_ENABLE;

amd_iommu_dev_table[devid].data[2] = dte;
@@ -5,11 +5,35 @@

#include <linux/mm.h>
#include <linux/mmu_context.h>
#include <linux/mmu_notifier.h>
#include <linux/slab.h>

#include "arm-smmu-v3.h"
#include "../../iommu-sva-lib.h"
#include "../../io-pgtable-arm.h"

struct arm_smmu_mmu_notifier {
struct mmu_notifier mn;
struct arm_smmu_ctx_desc *cd;
bool cleared;
refcount_t refs;
struct list_head list;
struct arm_smmu_domain *domain;
};

#define mn_to_smmu(mn) container_of(mn, struct arm_smmu_mmu_notifier, mn)

struct arm_smmu_bond {
struct iommu_sva sva;
struct mm_struct *mm;
struct arm_smmu_mmu_notifier *smmu_mn;
struct list_head list;
refcount_t refs;
};

#define sva_to_bond(handle) \
container_of(handle, struct arm_smmu_bond, sva)

static DEFINE_MUTEX(sva_lock);

/*

@@ -64,7 +88,6 @@ arm_smmu_share_asid(struct mm_struct *mm, u16 asid)
return NULL;
}

__maybe_unused
static struct arm_smmu_ctx_desc *arm_smmu_alloc_shared_cd(struct mm_struct *mm)
{
u16 asid;

@@ -145,7 +168,6 @@ out_put_context:
return err < 0 ? ERR_PTR(err) : ret;
}

__maybe_unused
static void arm_smmu_free_shared_cd(struct arm_smmu_ctx_desc *cd)
{
if (arm_smmu_free_asid(cd)) {

@@ -155,6 +177,215 @@ static void arm_smmu_free_shared_cd(struct arm_smmu_ctx_desc *cd)
}
}

static void arm_smmu_mm_invalidate_range(struct mmu_notifier *mn,
struct mm_struct *mm,
unsigned long start, unsigned long end)
{
struct arm_smmu_mmu_notifier *smmu_mn = mn_to_smmu(mn);

arm_smmu_atc_inv_domain(smmu_mn->domain, mm->pasid, start,
end - start + 1);
}

static void arm_smmu_mm_release(struct mmu_notifier *mn, struct mm_struct *mm)
{
struct arm_smmu_mmu_notifier *smmu_mn = mn_to_smmu(mn);
struct arm_smmu_domain *smmu_domain = smmu_mn->domain;

mutex_lock(&sva_lock);
if (smmu_mn->cleared) {
mutex_unlock(&sva_lock);
return;
}

/*
* DMA may still be running. Keep the cd valid to avoid C_BAD_CD events,
* but disable translation.
*/
arm_smmu_write_ctx_desc(smmu_domain, mm->pasid, &quiet_cd);

arm_smmu_tlb_inv_asid(smmu_domain->smmu, smmu_mn->cd->asid);
arm_smmu_atc_inv_domain(smmu_domain, mm->pasid, 0, 0);

smmu_mn->cleared = true;
mutex_unlock(&sva_lock);
}

static void arm_smmu_mmu_notifier_free(struct mmu_notifier *mn)
{
kfree(mn_to_smmu(mn));
}

static struct mmu_notifier_ops arm_smmu_mmu_notifier_ops = {
.invalidate_range = arm_smmu_mm_invalidate_range,
.release = arm_smmu_mm_release,
.free_notifier = arm_smmu_mmu_notifier_free,
};

/* Allocate or get existing MMU notifier for this {domain, mm} pair */
static struct arm_smmu_mmu_notifier *
arm_smmu_mmu_notifier_get(struct arm_smmu_domain *smmu_domain,
struct mm_struct *mm)
{
int ret;
struct arm_smmu_ctx_desc *cd;
struct arm_smmu_mmu_notifier *smmu_mn;

list_for_each_entry(smmu_mn, &smmu_domain->mmu_notifiers, list) {
if (smmu_mn->mn.mm == mm) {
refcount_inc(&smmu_mn->refs);
return smmu_mn;
}
}

cd = arm_smmu_alloc_shared_cd(mm);
if (IS_ERR(cd))
return ERR_CAST(cd);

smmu_mn = kzalloc(sizeof(*smmu_mn), GFP_KERNEL);
if (!smmu_mn) {
ret = -ENOMEM;
goto err_free_cd;
}

refcount_set(&smmu_mn->refs, 1);
smmu_mn->cd = cd;
smmu_mn->domain = smmu_domain;
smmu_mn->mn.ops = &arm_smmu_mmu_notifier_ops;

ret = mmu_notifier_register(&smmu_mn->mn, mm);
if (ret) {
kfree(smmu_mn);
goto err_free_cd;
}

ret = arm_smmu_write_ctx_desc(smmu_domain, mm->pasid, cd);
if (ret)
goto err_put_notifier;

list_add(&smmu_mn->list, &smmu_domain->mmu_notifiers);
return smmu_mn;

err_put_notifier:
/* Frees smmu_mn */
mmu_notifier_put(&smmu_mn->mn);
err_free_cd:
arm_smmu_free_shared_cd(cd);
return ERR_PTR(ret);
}

static void arm_smmu_mmu_notifier_put(struct arm_smmu_mmu_notifier *smmu_mn)
{
struct mm_struct *mm = smmu_mn->mn.mm;
struct arm_smmu_ctx_desc *cd = smmu_mn->cd;
struct arm_smmu_domain *smmu_domain = smmu_mn->domain;

if (!refcount_dec_and_test(&smmu_mn->refs))
return;

list_del(&smmu_mn->list);
arm_smmu_write_ctx_desc(smmu_domain, mm->pasid, NULL);

/*
* If we went through clear(), we've already invalidated, and no
* new TLB entry can have been formed.
*/
if (!smmu_mn->cleared) {
arm_smmu_tlb_inv_asid(smmu_domain->smmu, cd->asid);
arm_smmu_atc_inv_domain(smmu_domain, mm->pasid, 0, 0);
}

/* Frees smmu_mn */
mmu_notifier_put(&smmu_mn->mn);
arm_smmu_free_shared_cd(cd);
}

static struct iommu_sva *
__arm_smmu_sva_bind(struct device *dev, struct mm_struct *mm)
{
int ret;
struct arm_smmu_bond *bond;
struct arm_smmu_master *master = dev_iommu_priv_get(dev);
struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);

if (!master || !master->sva_enabled)
return ERR_PTR(-ENODEV);

/* If bind() was already called for this {dev, mm} pair, reuse it. */
list_for_each_entry(bond, &master->bonds, list) {
if (bond->mm == mm) {
refcount_inc(&bond->refs);
return &bond->sva;
}
}

bond = kzalloc(sizeof(*bond), GFP_KERNEL);
if (!bond)
return ERR_PTR(-ENOMEM);

/* Allocate a PASID for this mm if necessary */
ret = iommu_sva_alloc_pasid(mm, 1, (1U << master->ssid_bits) - 1);
if (ret)
goto err_free_bond;

bond->mm = mm;
bond->sva.dev = dev;
refcount_set(&bond->refs, 1);

bond->smmu_mn = arm_smmu_mmu_notifier_get(smmu_domain, mm);
if (IS_ERR(bond->smmu_mn)) {
ret = PTR_ERR(bond->smmu_mn);
goto err_free_pasid;
}

list_add(&bond->list, &master->bonds);
return &bond->sva;

err_free_pasid:
iommu_sva_free_pasid(mm);
err_free_bond:
kfree(bond);
return ERR_PTR(ret);
}

struct iommu_sva *
arm_smmu_sva_bind(struct device *dev, struct mm_struct *mm, void *drvdata)
{
struct iommu_sva *handle;
struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);

if (smmu_domain->stage != ARM_SMMU_DOMAIN_S1)
return ERR_PTR(-EINVAL);

mutex_lock(&sva_lock);
handle = __arm_smmu_sva_bind(dev, mm);
mutex_unlock(&sva_lock);
return handle;
}

void arm_smmu_sva_unbind(struct iommu_sva *handle)
{
struct arm_smmu_bond *bond = sva_to_bond(handle);

mutex_lock(&sva_lock);
if (refcount_dec_and_test(&bond->refs)) {
list_del(&bond->list);
arm_smmu_mmu_notifier_put(bond->smmu_mn);
iommu_sva_free_pasid(bond->mm);
kfree(bond);
}
mutex_unlock(&sva_lock);
}

u32 arm_smmu_sva_get_pasid(struct iommu_sva *handle)
{
struct arm_smmu_bond *bond = sva_to_bond(handle);

return bond->mm->pasid;
}

bool arm_smmu_sva_supported(struct arm_smmu_device *smmu)
{
unsigned long reg, fld;

@@ -246,3 +477,12 @@ int arm_smmu_master_disable_sva(struct arm_smmu_master *master)

return 0;
}

void arm_smmu_sva_notifier_synchronize(void)
{
/*
* Some MMU notifiers may still be waiting to be freed, using
* arm_smmu_mmu_notifier_free(). Wait for them.
*/
mmu_notifier_synchronize();
}
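For context, a minimal sketch of how a device driver would consume the SVA
bind/unbind hooks added above, via the generic IOMMU API (the driver is
assumed to have enabled IOMMU_DEV_FEAT_SVA beforehand; names other than the
iommu_sva_* calls are illustrative):

static int example_bind_current_mm(struct device *dev)
{
	struct iommu_sva *handle;
	u32 pasid;

	handle = iommu_sva_bind_device(dev, current->mm, NULL);
	if (IS_ERR(handle))
		return PTR_ERR(handle);

	pasid = iommu_sva_get_pasid(handle);
	if (pasid == IOMMU_PASID_INVALID) {
		iommu_sva_unbind_device(handle);
		return -ENODEV;
	}

	/* ... program the device to emit DMA tagged with this PASID ... */

	iommu_sva_unbind_device(handle);
	return 0;
}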
@@ -33,7 +33,7 @@

#include "arm-smmu-v3.h"

static bool disable_bypass = 1;
static bool disable_bypass = true;
module_param(disable_bypass, bool, 0444);
MODULE_PARM_DESC(disable_bypass,
"Disable bypass streams such that incoming transactions from devices that are not attached to an iommu domain will report an abort back to the device and will not be allowed to pass through the SMMU.");

@@ -76,6 +76,12 @@ struct arm_smmu_option_prop {
DEFINE_XARRAY_ALLOC1(arm_smmu_asid_xa);
DEFINE_MUTEX(arm_smmu_asid_lock);

/*
* Special value used by SVA when a process dies, to quiesce a CD without
* disabling it.
*/
struct arm_smmu_ctx_desc quiet_cd = { 0 };

static struct arm_smmu_option_prop arm_smmu_options[] = {
{ ARM_SMMU_OPT_SKIP_PREFETCH, "hisilicon,broken-prefetch-cmd" },
{ ARM_SMMU_OPT_PAGE0_REGS_ONLY, "cavium,cn9900-broken-page1-regspace"},

@@ -91,11 +97,6 @@ static inline void __iomem *arm_smmu_page1_fixup(unsigned long offset,
return smmu->base + offset;
}

static struct arm_smmu_domain *to_smmu_domain(struct iommu_domain *dom)
{
return container_of(dom, struct arm_smmu_domain, domain);
}

static void parse_driver_options(struct arm_smmu_device *smmu)
{
int i = 0;

@@ -983,7 +984,9 @@ int arm_smmu_write_ctx_desc(struct arm_smmu_domain *smmu_domain, int ssid,
* (2) Install a secondary CD, for SID+SSID traffic.
* (3) Update ASID of a CD. Atomically write the first 64 bits of the
* CD, then invalidate the old entry and mappings.
* (4) Remove a secondary CD.
* (4) Quiesce the context without clearing the valid bit. Disable
* translation, and ignore any translation fault.
* (5) Remove a secondary CD.
*/
u64 val;
bool cd_live;

@@ -1000,8 +1003,10 @@ int arm_smmu_write_ctx_desc(struct arm_smmu_domain *smmu_domain, int ssid,
val = le64_to_cpu(cdptr[0]);
cd_live = !!(val & CTXDESC_CD_0_V);

if (!cd) { /* (4) */
if (!cd) { /* (5) */
val = 0;
} else if (cd == &quiet_cd) { /* (4) */
val |= CTXDESC_CD_0_TCR_EPD0;
} else if (cd_live) { /* (3) */
val &= ~CTXDESC_CD_0_ASID;
val |= FIELD_PREP(CTXDESC_CD_0_ASID, cd->asid);

@@ -1519,6 +1524,20 @@ arm_smmu_atc_inv_to_cmd(int ssid, unsigned long iova, size_t size,
size_t inval_grain_shift = 12;
unsigned long page_start, page_end;

/*
* ATS and PASID:
*
* If substream_valid is clear, the PCIe TLP is sent without a PASID
* prefix. In that case all ATC entries within the address range are
* invalidated, including those that were requested with a PASID! There
* is no way to invalidate only entries without PASID.
*
* When using STRTAB_STE_1_S1DSS_SSID0 (reserving CD 0 for non-PASID
* traffic), translation requests without PASID create ATC entries
* without PASID, which must be invalidated with substream_valid clear.
* This has the unpleasant side-effect of invalidating all PASID-tagged
* ATC entries within the address range.
*/
*cmd = (struct arm_smmu_cmdq_ent) {
.opcode = CMDQ_OP_ATC_INV,
.substream_valid = !!ssid,

@@ -1577,8 +1596,8 @@ static int arm_smmu_atc_inv_master(struct arm_smmu_master *master)
return arm_smmu_cmdq_issue_sync(master->smmu);
}

static int arm_smmu_atc_inv_domain(struct arm_smmu_domain *smmu_domain,
int ssid, unsigned long iova, size_t size)
int arm_smmu_atc_inv_domain(struct arm_smmu_domain *smmu_domain, int ssid,
unsigned long iova, size_t size)
{
int i;
unsigned long flags;

@@ -1741,16 +1760,9 @@ static void arm_smmu_tlb_inv_walk(unsigned long iova, size_t size,
arm_smmu_tlb_inv_range(iova, size, granule, false, cookie);
}

static void arm_smmu_tlb_inv_leaf(unsigned long iova, size_t size,
size_t granule, void *cookie)
{
arm_smmu_tlb_inv_range(iova, size, granule, true, cookie);
}

static const struct iommu_flush_ops arm_smmu_flush_ops = {
.tlb_flush_all = arm_smmu_tlb_inv_context,
.tlb_flush_walk = arm_smmu_tlb_inv_walk,
.tlb_flush_leaf = arm_smmu_tlb_inv_leaf,
.tlb_add_page = arm_smmu_tlb_inv_page_nosync,
};

@@ -1794,6 +1806,7 @@ static struct iommu_domain *arm_smmu_domain_alloc(unsigned type)
mutex_init(&smmu_domain->init_mutex);
INIT_LIST_HEAD(&smmu_domain->devices);
spin_lock_init(&smmu_domain->devices_lock);
INIT_LIST_HEAD(&smmu_domain->mmu_notifiers);

return &smmu_domain->domain;
}

@@ -2589,6 +2602,9 @@ static struct iommu_ops arm_smmu_ops = {
.dev_feat_enabled = arm_smmu_dev_feature_enabled,
.dev_enable_feat = arm_smmu_dev_enable_feature,
.dev_disable_feat = arm_smmu_dev_disable_feature,
.sva_bind = arm_smmu_sva_bind,
.sva_unbind = arm_smmu_sva_unbind,
.sva_get_pasid = arm_smmu_sva_get_pasid,
.pgsize_bitmap = -1UL, /* Restricted during device attach */
};

@@ -3611,6 +3627,12 @@ static const struct of_device_id arm_smmu_of_match[] = {
};
MODULE_DEVICE_TABLE(of, arm_smmu_of_match);

static void arm_smmu_driver_unregister(struct platform_driver *drv)
{
arm_smmu_sva_notifier_synchronize();
platform_driver_unregister(drv);
}

static struct platform_driver arm_smmu_driver = {
.driver = {
.name = "arm-smmu-v3",

@@ -3621,7 +3643,8 @@ static struct platform_driver arm_smmu_driver = {
.remove = arm_smmu_device_remove,
.shutdown = arm_smmu_device_shutdown,
};
module_platform_driver(arm_smmu_driver);
module_driver(arm_smmu_driver, platform_driver_register,
arm_smmu_driver_unregister);

MODULE_DESCRIPTION("IOMMU API for ARM architected SMMUv3 implementations");
MODULE_AUTHOR("Will Deacon <will@kernel.org>");
@@ -678,15 +678,25 @@ struct arm_smmu_domain {

struct list_head devices;
spinlock_t devices_lock;

struct list_head mmu_notifiers;
};

static inline struct arm_smmu_domain *to_smmu_domain(struct iommu_domain *dom)
{
return container_of(dom, struct arm_smmu_domain, domain);
}

extern struct xarray arm_smmu_asid_xa;
extern struct mutex arm_smmu_asid_lock;
extern struct arm_smmu_ctx_desc quiet_cd;

int arm_smmu_write_ctx_desc(struct arm_smmu_domain *smmu_domain, int ssid,
struct arm_smmu_ctx_desc *cd);
void arm_smmu_tlb_inv_asid(struct arm_smmu_device *smmu, u16 asid);
bool arm_smmu_free_asid(struct arm_smmu_ctx_desc *cd);
int arm_smmu_atc_inv_domain(struct arm_smmu_domain *smmu_domain, int ssid,
unsigned long iova, size_t size);

#ifdef CONFIG_ARM_SMMU_V3_SVA
bool arm_smmu_sva_supported(struct arm_smmu_device *smmu);

@@ -694,6 +704,11 @@ bool arm_smmu_master_sva_supported(struct arm_smmu_master *master);
bool arm_smmu_master_sva_enabled(struct arm_smmu_master *master);
int arm_smmu_master_enable_sva(struct arm_smmu_master *master);
int arm_smmu_master_disable_sva(struct arm_smmu_master *master);
struct iommu_sva *arm_smmu_sva_bind(struct device *dev, struct mm_struct *mm,
void *drvdata);
void arm_smmu_sva_unbind(struct iommu_sva *handle);
u32 arm_smmu_sva_get_pasid(struct iommu_sva *handle);
void arm_smmu_sva_notifier_synchronize(void);
#else /* CONFIG_ARM_SMMU_V3_SVA */
static inline bool arm_smmu_sva_supported(struct arm_smmu_device *smmu)
{

@@ -719,5 +734,20 @@ static inline int arm_smmu_master_disable_sva(struct arm_smmu_master *master)
{
return -ENODEV;
}

static inline struct iommu_sva *
arm_smmu_sva_bind(struct device *dev, struct mm_struct *mm, void *drvdata)
{
return ERR_PTR(-ENODEV);
}

static inline void arm_smmu_sva_unbind(struct iommu_sva *handle) {}

static inline u32 arm_smmu_sva_get_pasid(struct iommu_sva *handle)
{
return IOMMU_PASID_INVALID;
}

static inline void arm_smmu_sva_notifier_synchronize(void) {}
#endif /* CONFIG_ARM_SMMU_V3_SVA */
#endif /* _ARM_SMMU_V3_H */
@@ -12,7 +12,7 @@

static int arm_smmu_gr0_ns(int offset)
{
switch(offset) {
switch (offset) {
case ARM_SMMU_GR0_sCR0:
case ARM_SMMU_GR0_sACR:
case ARM_SMMU_GR0_sGFSR:

@@ -91,15 +91,12 @@ static struct arm_smmu_device *cavium_smmu_impl_init(struct arm_smmu_device *smm
{
struct cavium_smmu *cs;

cs = devm_kzalloc(smmu->dev, sizeof(*cs), GFP_KERNEL);
cs = devm_krealloc(smmu->dev, smmu, sizeof(*cs), GFP_KERNEL);
if (!cs)
return ERR_PTR(-ENOMEM);

cs->smmu = *smmu;
cs->smmu.impl = &cavium_impl;

devm_kfree(smmu->dev, smmu);

return &cs->smmu;
}

@@ -217,11 +214,7 @@ struct arm_smmu_device *arm_smmu_impl_init(struct arm_smmu_device *smmu)
if (of_device_is_compatible(np, "nvidia,tegra194-smmu"))
return nvidia_smmu_impl_init(smmu);

if (of_device_is_compatible(np, "qcom,sdm845-smmu-500") ||
of_device_is_compatible(np, "qcom,sc7180-smmu-500") ||
of_device_is_compatible(np, "qcom,sm8150-smmu-500") ||
of_device_is_compatible(np, "qcom,sm8250-smmu-500"))
return qcom_smmu_impl_init(smmu);
smmu = qcom_smmu_impl_init(smmu);

if (of_device_is_compatible(np, "marvell,ap806-smmu-500"))
smmu->impl = &mrvl_mmu500_impl;

@@ -242,18 +242,10 @@ struct arm_smmu_device *nvidia_smmu_impl_init(struct arm_smmu_device *smmu)
struct nvidia_smmu *nvidia_smmu;
struct platform_device *pdev = to_platform_device(dev);

nvidia_smmu = devm_kzalloc(dev, sizeof(*nvidia_smmu), GFP_KERNEL);
nvidia_smmu = devm_krealloc(dev, smmu, sizeof(*nvidia_smmu), GFP_KERNEL);
if (!nvidia_smmu)
return ERR_PTR(-ENOMEM);

/*
* Copy the data from struct arm_smmu_device *smmu allocated in
* arm-smmu.c. The smmu from struct nvidia_smmu replaces the smmu
* pointer used in arm-smmu.c once this function returns.
* This is necessary to derive nvidia_smmu from smmu pointer passed
* through arm_smmu_impl function calls subsequently.
*/
nvidia_smmu->smmu = *smmu;
/* Instance 0 is ioremapped by arm-smmu.c. */
nvidia_smmu->bases[0] = smmu->base;

@@ -267,12 +259,5 @@ struct arm_smmu_device *nvidia_smmu_impl_init(struct arm_smmu_device *smmu)

nvidia_smmu->smmu.impl = &nvidia_smmu_impl;

/*
* Free the struct arm_smmu_device *smmu allocated in arm-smmu.c.
* Once this function returns, arm-smmu.c would use arm_smmu_device
* allocated as part of struct nvidia_smmu.
*/
devm_kfree(dev, smmu);

return &nvidia_smmu->smmu;
}
@@ -3,6 +3,7 @@
* Copyright (c) 2019, The Linux Foundation. All rights reserved.
*/

#include <linux/adreno-smmu-priv.h>
#include <linux/of_device.h>
#include <linux/qcom_scm.h>

@@ -10,8 +11,155 @@

struct qcom_smmu {
struct arm_smmu_device smmu;
bool bypass_quirk;
u8 bypass_cbndx;
};

static struct qcom_smmu *to_qcom_smmu(struct arm_smmu_device *smmu)
{
return container_of(smmu, struct qcom_smmu, smmu);
}

static void qcom_adreno_smmu_write_sctlr(struct arm_smmu_device *smmu, int idx,
u32 reg)
{
/*
* On the GPU device we want to process subsequent transactions after a
* fault to keep the GPU from hanging
*/
reg |= ARM_SMMU_SCTLR_HUPCF;

arm_smmu_cb_write(smmu, idx, ARM_SMMU_CB_SCTLR, reg);
}

#define QCOM_ADRENO_SMMU_GPU_SID 0

static bool qcom_adreno_smmu_is_gpu_device(struct device *dev)
{
struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
int i;

/*
* The GPU will always use SID 0 so that is a handy way to uniquely
* identify it and configure it for per-instance pagetables
*/
for (i = 0; i < fwspec->num_ids; i++) {
u16 sid = FIELD_GET(ARM_SMMU_SMR_ID, fwspec->ids[i]);

if (sid == QCOM_ADRENO_SMMU_GPU_SID)
return true;
}

return false;
}

static const struct io_pgtable_cfg *qcom_adreno_smmu_get_ttbr1_cfg(
const void *cookie)
{
struct arm_smmu_domain *smmu_domain = (void *)cookie;
struct io_pgtable *pgtable =
io_pgtable_ops_to_pgtable(smmu_domain->pgtbl_ops);
return &pgtable->cfg;
}

/*
* Local implementation to configure TTBR0 with the specified pagetable config.
* The GPU driver will call this to enable TTBR0 when per-instance pagetables
* are active
*/

static int qcom_adreno_smmu_set_ttbr0_cfg(const void *cookie,
const struct io_pgtable_cfg *pgtbl_cfg)
{
struct arm_smmu_domain *smmu_domain = (void *)cookie;
struct io_pgtable *pgtable = io_pgtable_ops_to_pgtable(smmu_domain->pgtbl_ops);
struct arm_smmu_cfg *cfg = &smmu_domain->cfg;
struct arm_smmu_cb *cb = &smmu_domain->smmu->cbs[cfg->cbndx];

/* The domain must have split pagetables already enabled */
if (cb->tcr[0] & ARM_SMMU_TCR_EPD1)
return -EINVAL;

/* If the pagetable config is NULL, disable TTBR0 */
if (!pgtbl_cfg) {
/* Do nothing if it is already disabled */
if ((cb->tcr[0] & ARM_SMMU_TCR_EPD0))
return -EINVAL;

/* Set TCR to the original configuration */
cb->tcr[0] = arm_smmu_lpae_tcr(&pgtable->cfg);
cb->ttbr[0] = FIELD_PREP(ARM_SMMU_TTBRn_ASID, cb->cfg->asid);
} else {
u32 tcr = cb->tcr[0];

/* Don't call this again if TTBR0 is already enabled */
if (!(cb->tcr[0] & ARM_SMMU_TCR_EPD0))
return -EINVAL;

tcr |= arm_smmu_lpae_tcr(pgtbl_cfg);
tcr &= ~(ARM_SMMU_TCR_EPD0 | ARM_SMMU_TCR_EPD1);

cb->tcr[0] = tcr;
cb->ttbr[0] = pgtbl_cfg->arm_lpae_s1_cfg.ttbr;
cb->ttbr[0] |= FIELD_PREP(ARM_SMMU_TTBRn_ASID, cb->cfg->asid);
}

arm_smmu_write_context_bank(smmu_domain->smmu, cb->cfg->cbndx);

return 0;
}

static int qcom_adreno_smmu_alloc_context_bank(struct arm_smmu_domain *smmu_domain,
struct arm_smmu_device *smmu,
struct device *dev, int start)
{
int count;

/*
* Assign context bank 0 to the GPU device so the GPU hardware can
* switch pagetables
*/
if (qcom_adreno_smmu_is_gpu_device(dev)) {
start = 0;
count = 1;
} else {
start = 1;
count = smmu->num_context_banks;
}

return __arm_smmu_alloc_bitmap(smmu->context_map, start, count);
}

static int qcom_adreno_smmu_init_context(struct arm_smmu_domain *smmu_domain,
struct io_pgtable_cfg *pgtbl_cfg, struct device *dev)
{
struct adreno_smmu_priv *priv;

/* Only enable split pagetables for the GPU device (SID 0) */
if (!qcom_adreno_smmu_is_gpu_device(dev))
return 0;

/*
* All targets that use the qcom,adreno-smmu compatible string *should*
* be AARCH64 stage 1 but double check because the arm-smmu code assumes
* that is the case when the TTBR1 quirk is enabled
*/
if ((smmu_domain->stage == ARM_SMMU_DOMAIN_S1) &&
(smmu_domain->cfg.fmt == ARM_SMMU_CTX_FMT_AARCH64))
pgtbl_cfg->quirks |= IO_PGTABLE_QUIRK_ARM_TTBR1;

/*
* Initialize private interface with GPU:
*/

priv = dev_get_drvdata(dev);
priv->cookie = smmu_domain;
priv->get_ttbr1_cfg = qcom_adreno_smmu_get_ttbr1_cfg;
priv->set_ttbr0_cfg = qcom_adreno_smmu_set_ttbr0_cfg;

return 0;
}

static const struct of_device_id qcom_smmu_client_of_match[] __maybe_unused = {
{ .compatible = "qcom,adreno" },
{ .compatible = "qcom,mdp4" },

@@ -23,6 +171,87 @@ static const struct of_device_id qcom_smmu_client_of_match[] __maybe_unused = {
{ }
};

static int qcom_smmu_cfg_probe(struct arm_smmu_device *smmu)
{
unsigned int last_s2cr = ARM_SMMU_GR0_S2CR(smmu->num_mapping_groups - 1);
struct qcom_smmu *qsmmu = to_qcom_smmu(smmu);
u32 reg;
u32 smr;
int i;

/*
* With some firmware versions writes to S2CR of type FAULT are
* ignored, and writing BYPASS will end up written as FAULT in the
* register. Perform a write to S2CR to detect if this is the case and
* if so reserve a context bank to emulate bypass streams.
*/
reg = FIELD_PREP(ARM_SMMU_S2CR_TYPE, S2CR_TYPE_BYPASS) |
FIELD_PREP(ARM_SMMU_S2CR_CBNDX, 0xff) |
FIELD_PREP(ARM_SMMU_S2CR_PRIVCFG, S2CR_PRIVCFG_DEFAULT);
arm_smmu_gr0_write(smmu, last_s2cr, reg);
reg = arm_smmu_gr0_read(smmu, last_s2cr);
if (FIELD_GET(ARM_SMMU_S2CR_TYPE, reg) != S2CR_TYPE_BYPASS) {
qsmmu->bypass_quirk = true;
qsmmu->bypass_cbndx = smmu->num_context_banks - 1;

set_bit(qsmmu->bypass_cbndx, smmu->context_map);

reg = FIELD_PREP(ARM_SMMU_CBAR_TYPE, CBAR_TYPE_S1_TRANS_S2_BYPASS);
arm_smmu_gr1_write(smmu, ARM_SMMU_GR1_CBAR(qsmmu->bypass_cbndx), reg);
}

for (i = 0; i < smmu->num_mapping_groups; i++) {
smr = arm_smmu_gr0_read(smmu, ARM_SMMU_GR0_SMR(i));

if (FIELD_GET(ARM_SMMU_SMR_VALID, smr)) {
smmu->smrs[i].id = FIELD_GET(ARM_SMMU_SMR_ID, smr);
smmu->smrs[i].mask = FIELD_GET(ARM_SMMU_SMR_MASK, smr);
smmu->smrs[i].valid = true;

smmu->s2crs[i].type = S2CR_TYPE_BYPASS;
smmu->s2crs[i].privcfg = S2CR_PRIVCFG_DEFAULT;
smmu->s2crs[i].cbndx = 0xff;
}
}

return 0;
}

static void qcom_smmu_write_s2cr(struct arm_smmu_device *smmu, int idx)
{
struct arm_smmu_s2cr *s2cr = smmu->s2crs + idx;
struct qcom_smmu *qsmmu = to_qcom_smmu(smmu);
u32 cbndx = s2cr->cbndx;
u32 type = s2cr->type;
u32 reg;

if (qsmmu->bypass_quirk) {
if (type == S2CR_TYPE_BYPASS) {
/*
* Firmware with quirky S2CR handling will substitute
* BYPASS writes with FAULT, so point the stream to the
* reserved context bank and ask for translation on the
* stream
*/
type = S2CR_TYPE_TRANS;
cbndx = qsmmu->bypass_cbndx;
} else if (type == S2CR_TYPE_FAULT) {
/*
* Firmware with quirky S2CR handling will ignore FAULT
* writes, so trick it to write FAULT by asking for a
* BYPASS.
*/
type = S2CR_TYPE_BYPASS;
cbndx = 0xff;
}
}

reg = FIELD_PREP(ARM_SMMU_S2CR_TYPE, type) |
FIELD_PREP(ARM_SMMU_S2CR_CBNDX, cbndx) |
FIELD_PREP(ARM_SMMU_S2CR_PRIVCFG, s2cr->privcfg);
arm_smmu_gr0_write(smmu, ARM_SMMU_GR0_S2CR(idx), reg);
}

static int qcom_smmu_def_domain_type(struct device *dev)
{
const struct of_device_id *match =

@@ -61,11 +290,22 @@ static int qcom_smmu500_reset(struct arm_smmu_device *smmu)
}

static const struct arm_smmu_impl qcom_smmu_impl = {
.cfg_probe = qcom_smmu_cfg_probe,
.def_domain_type = qcom_smmu_def_domain_type,
.reset = qcom_smmu500_reset,
.write_s2cr = qcom_smmu_write_s2cr,
};

struct arm_smmu_device *qcom_smmu_impl_init(struct arm_smmu_device *smmu)
static const struct arm_smmu_impl qcom_adreno_smmu_impl = {
.init_context = qcom_adreno_smmu_init_context,
.def_domain_type = qcom_smmu_def_domain_type,
.reset = qcom_smmu500_reset,
.alloc_context_bank = qcom_adreno_smmu_alloc_context_bank,
.write_sctlr = qcom_adreno_smmu_write_sctlr,
};

static struct arm_smmu_device *qcom_smmu_create(struct arm_smmu_device *smmu,
const struct arm_smmu_impl *impl)
{
struct qcom_smmu *qsmmu;

@@ -73,14 +313,32 @@ struct arm_smmu_device *qcom_smmu_impl_init(struct arm_smmu_device *smmu)
if (!qcom_scm_is_available())
return ERR_PTR(-EPROBE_DEFER);

qsmmu = devm_kzalloc(smmu->dev, sizeof(*qsmmu), GFP_KERNEL);
qsmmu = devm_krealloc(smmu->dev, smmu, sizeof(*qsmmu), GFP_KERNEL);
if (!qsmmu)
return ERR_PTR(-ENOMEM);

qsmmu->smmu = *smmu;

qsmmu->smmu.impl = &qcom_smmu_impl;
devm_kfree(smmu->dev, smmu);
qsmmu->smmu.impl = impl;

return &qsmmu->smmu;
}

static const struct of_device_id __maybe_unused qcom_smmu_impl_of_match[] = {
{ .compatible = "qcom,sc7180-smmu-500" },
{ .compatible = "qcom,sdm845-smmu-500" },
{ .compatible = "qcom,sm8150-smmu-500" },
{ .compatible = "qcom,sm8250-smmu-500" },
{ }
};

struct arm_smmu_device *qcom_smmu_impl_init(struct arm_smmu_device *smmu)
{
const struct device_node *np = smmu->dev->of_node;

if (of_match_node(qcom_smmu_impl_of_match, np))
return qcom_smmu_create(smmu, &qcom_smmu_impl);

if (of_device_is_compatible(np, "qcom,adreno-smmu"))
return qcom_smmu_create(smmu, &qcom_adreno_smmu_impl);

return smmu;
}
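A hypothetical GPU-side sketch of the private Adreno interface that
qcom_adreno_smmu_init_context() populates above (the helper and its caller
are assumptions; a real driver would build the TTBR0 config from a freshly
allocated io-pgtable):

#include <linux/adreno-smmu-priv.h>

static int example_enable_per_instance_pgtables(struct adreno_smmu_priv *adreno_smmu,
						const struct io_pgtable_cfg *ttbr0_cfg)
{
	const struct io_pgtable_cfg *ttbr1_cfg;

	/* Inspect the SMMU's TTBR1 (kernel-managed) pagetable configuration */
	ttbr1_cfg = adreno_smmu->get_ttbr1_cfg(adreno_smmu->cookie);
	if (!ttbr1_cfg)
		return -ENODEV;

	/* Install the caller-prepared TTBR0 config; passing NULL instead
	 * would disable TTBR0 again.
	 */
	return adreno_smmu->set_ttbr0_cfg(adreno_smmu->cookie, ttbr0_cfg);
}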
@@ -333,14 +333,6 @@ static void arm_smmu_tlb_inv_walk_s1(unsigned long iova, size_t size,
arm_smmu_tlb_sync_context(cookie);
}

static void arm_smmu_tlb_inv_leaf_s1(unsigned long iova, size_t size,
size_t granule, void *cookie)
{
arm_smmu_tlb_inv_range_s1(iova, size, granule, cookie,
ARM_SMMU_CB_S1_TLBIVAL);
arm_smmu_tlb_sync_context(cookie);
}

static void arm_smmu_tlb_add_page_s1(struct iommu_iotlb_gather *gather,
unsigned long iova, size_t granule,
void *cookie)

@@ -357,14 +349,6 @@ static void arm_smmu_tlb_inv_walk_s2(unsigned long iova, size_t size,
arm_smmu_tlb_sync_context(cookie);
}

static void arm_smmu_tlb_inv_leaf_s2(unsigned long iova, size_t size,
size_t granule, void *cookie)
{
arm_smmu_tlb_inv_range_s2(iova, size, granule, cookie,
ARM_SMMU_CB_S2_TLBIIPAS2L);
arm_smmu_tlb_sync_context(cookie);
}

static void arm_smmu_tlb_add_page_s2(struct iommu_iotlb_gather *gather,
unsigned long iova, size_t granule,
void *cookie)

@@ -373,8 +357,8 @@ static void arm_smmu_tlb_add_page_s2(struct iommu_iotlb_gather *gather,
ARM_SMMU_CB_S2_TLBIIPAS2L);
}

static void arm_smmu_tlb_inv_any_s2_v1(unsigned long iova, size_t size,
size_t granule, void *cookie)
static void arm_smmu_tlb_inv_walk_s2_v1(unsigned long iova, size_t size,
size_t granule, void *cookie)
{
arm_smmu_tlb_inv_context_s2(cookie);
}

@@ -401,21 +385,18 @@ static void arm_smmu_tlb_add_page_s2_v1(struct iommu_iotlb_gather *gather,
static const struct iommu_flush_ops arm_smmu_s1_tlb_ops = {
.tlb_flush_all = arm_smmu_tlb_inv_context_s1,
.tlb_flush_walk = arm_smmu_tlb_inv_walk_s1,
.tlb_flush_leaf = arm_smmu_tlb_inv_leaf_s1,
.tlb_add_page = arm_smmu_tlb_add_page_s1,
};

static const struct iommu_flush_ops arm_smmu_s2_tlb_ops_v2 = {
.tlb_flush_all = arm_smmu_tlb_inv_context_s2,
.tlb_flush_walk = arm_smmu_tlb_inv_walk_s2,
.tlb_flush_leaf = arm_smmu_tlb_inv_leaf_s2,
.tlb_add_page = arm_smmu_tlb_add_page_s2,
};

static const struct iommu_flush_ops arm_smmu_s2_tlb_ops_v1 = {
.tlb_flush_all = arm_smmu_tlb_inv_context_s2,
.tlb_flush_walk = arm_smmu_tlb_inv_any_s2_v1,
.tlb_flush_leaf = arm_smmu_tlb_inv_any_s2_v1,
.tlb_flush_walk = arm_smmu_tlb_inv_walk_s2_v1,
.tlb_add_page = arm_smmu_tlb_add_page_s2_v1,
};

@@ -617,7 +598,10 @@ void arm_smmu_write_context_bank(struct arm_smmu_device *smmu, int idx)
if (IS_ENABLED(CONFIG_CPU_BIG_ENDIAN))
reg |= ARM_SMMU_SCTLR_E;

arm_smmu_cb_write(smmu, idx, ARM_SMMU_CB_SCTLR, reg);
if (smmu->impl && smmu->impl->write_sctlr)
smmu->impl->write_sctlr(smmu, idx, reg);
else
arm_smmu_cb_write(smmu, idx, ARM_SMMU_CB_SCTLR, reg);
}

static int arm_smmu_alloc_context_bank(struct arm_smmu_domain *smmu_domain,

@@ -783,8 +767,8 @@ static int arm_smmu_init_domain_context(struct iommu_domain *domain,
goto out_clear_smmu;
}

if (smmu_domain->non_strict)
pgtbl_cfg.quirks |= IO_PGTABLE_QUIRK_NON_STRICT;
if (smmu_domain->pgtbl_cfg.quirks)
pgtbl_cfg.quirks |= smmu_domain->pgtbl_cfg.quirks;

pgtbl_ops = alloc_io_pgtable_ops(fmt, &pgtbl_cfg, smmu_domain);
if (!pgtbl_ops) {

@@ -929,9 +913,16 @@ static void arm_smmu_write_smr(struct arm_smmu_device *smmu, int idx)
static void arm_smmu_write_s2cr(struct arm_smmu_device *smmu, int idx)
{
struct arm_smmu_s2cr *s2cr = smmu->s2crs + idx;
u32 reg = FIELD_PREP(ARM_SMMU_S2CR_TYPE, s2cr->type) |
FIELD_PREP(ARM_SMMU_S2CR_CBNDX, s2cr->cbndx) |
FIELD_PREP(ARM_SMMU_S2CR_PRIVCFG, s2cr->privcfg);
u32 reg;

if (smmu->impl && smmu->impl->write_s2cr) {
smmu->impl->write_s2cr(smmu, idx);
return;
}

reg = FIELD_PREP(ARM_SMMU_S2CR_TYPE, s2cr->type) |
FIELD_PREP(ARM_SMMU_S2CR_CBNDX, s2cr->cbndx) |
FIELD_PREP(ARM_SMMU_S2CR_PRIVCFG, s2cr->privcfg);

if (smmu->features & ARM_SMMU_FEAT_EXIDS && smmu->smrs &&
smmu->smrs[idx].valid)

@@ -1501,15 +1492,24 @@ static int arm_smmu_domain_get_attr(struct iommu_domain *domain,
case DOMAIN_ATTR_NESTING:
*(int *)data = (smmu_domain->stage == ARM_SMMU_DOMAIN_NESTED);
return 0;
case DOMAIN_ATTR_IO_PGTABLE_CFG: {
struct io_pgtable_domain_attr *pgtbl_cfg = data;
*pgtbl_cfg = smmu_domain->pgtbl_cfg;

return 0;
}
default:
return -ENODEV;
}
break;
case IOMMU_DOMAIN_DMA:
switch (attr) {
case DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE:
*(int *)data = smmu_domain->non_strict;
case DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE: {
bool non_strict = smmu_domain->pgtbl_cfg.quirks &
IO_PGTABLE_QUIRK_NON_STRICT;
*(int *)data = non_strict;
return 0;
}
default:
return -ENODEV;
}

@@ -1541,6 +1541,17 @@ static int arm_smmu_domain_set_attr(struct iommu_domain *domain,
else
smmu_domain->stage = ARM_SMMU_DOMAIN_S1;
break;
case DOMAIN_ATTR_IO_PGTABLE_CFG: {
struct io_pgtable_domain_attr *pgtbl_cfg = data;

if (smmu_domain->smmu) {
ret = -EPERM;
goto out_unlock;
}

smmu_domain->pgtbl_cfg = *pgtbl_cfg;
break;
}
default:
ret = -ENODEV;
}

@@ -1548,7 +1559,10 @@ static int arm_smmu_domain_set_attr(struct iommu_domain *domain,
case IOMMU_DOMAIN_DMA:
switch (attr) {
case DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE:
smmu_domain->non_strict = *(int *)data;
if (*(int *)data)
smmu_domain->pgtbl_cfg.quirks |= IO_PGTABLE_QUIRK_NON_STRICT;
else
smmu_domain->pgtbl_cfg.quirks &= ~IO_PGTABLE_QUIRK_NON_STRICT;
break;
default:
ret = -ENODEV;

@@ -144,6 +144,7 @@ enum arm_smmu_cbar_type {
#define ARM_SMMU_CB_SCTLR 0x0
#define ARM_SMMU_SCTLR_S1_ASIDPNE BIT(12)
#define ARM_SMMU_SCTLR_CFCFG BIT(7)
#define ARM_SMMU_SCTLR_HUPCF BIT(8)
#define ARM_SMMU_SCTLR_CFIE BIT(6)
#define ARM_SMMU_SCTLR_CFRE BIT(5)
#define ARM_SMMU_SCTLR_E BIT(4)

@@ -363,10 +364,10 @@ enum arm_smmu_domain_stage {
struct arm_smmu_domain {
struct arm_smmu_device *smmu;
struct io_pgtable_ops *pgtbl_ops;
struct io_pgtable_domain_attr pgtbl_cfg;
const struct iommu_flush_ops *flush_ops;
struct arm_smmu_cfg cfg;
enum arm_smmu_domain_stage stage;
bool non_strict;
struct mutex init_mutex; /* Protects smmu pointer */
spinlock_t cb_lock; /* Serialises ATS1* ops and TLB syncs */
struct iommu_domain domain;

@@ -436,6 +437,8 @@ struct arm_smmu_impl {
int (*alloc_context_bank)(struct arm_smmu_domain *smmu_domain,
struct arm_smmu_device *smmu,
struct device *dev, int start);
void (*write_s2cr)(struct arm_smmu_device *smmu, int idx);
void (*write_sctlr)(struct arm_smmu_device *smmu, int idx, u32 reg);
};

#define INVALID_SMENDX -1
@@ -185,13 +185,6 @@ static void qcom_iommu_tlb_flush_walk(unsigned long iova, size_t size,
qcom_iommu_tlb_sync(cookie);
}

static void qcom_iommu_tlb_flush_leaf(unsigned long iova, size_t size,
size_t granule, void *cookie)
{
qcom_iommu_tlb_inv_range_nosync(iova, size, granule, true, cookie);
qcom_iommu_tlb_sync(cookie);
}

static void qcom_iommu_tlb_add_page(struct iommu_iotlb_gather *gather,
unsigned long iova, size_t granule,
void *cookie)

@@ -202,7 +195,6 @@ static void qcom_iommu_tlb_add_page(struct iommu_iotlb_gather *gather,
static const struct iommu_flush_ops qcom_flush_ops = {
.tlb_flush_all = qcom_iommu_tlb_inv_context,
.tlb_flush_walk = qcom_iommu_tlb_flush_walk,
.tlb_flush_leaf = qcom_iommu_tlb_flush_leaf,
.tlb_add_page = qcom_iommu_tlb_add_page,
};
@ -20,9 +20,11 @@
|
|||
#include <linux/mm.h>
|
||||
#include <linux/mutex.h>
|
||||
#include <linux/pci.h>
|
||||
#include <linux/swiotlb.h>
|
||||
#include <linux/scatterlist.h>
|
||||
#include <linux/vmalloc.h>
|
||||
#include <linux/crash_dump.h>
|
||||
#include <linux/dma-direct.h>
|
||||
|
||||
struct iommu_dma_msi_page {
|
||||
struct list_head list;
|
||||
|
@ -49,6 +51,27 @@ struct iommu_dma_cookie {
|
|||
struct iommu_domain *fq_domain;
|
||||
};
|
||||
|
||||
void iommu_dma_free_cpu_cached_iovas(unsigned int cpu,
|
||||
struct iommu_domain *domain)
|
||||
{
|
||||
struct iommu_dma_cookie *cookie = domain->iova_cookie;
|
||||
struct iova_domain *iovad = &cookie->iovad;
|
||||
|
||||
free_cpu_cached_iovas(cpu, iovad);
|
||||
}
|
||||
|
||||
static void iommu_dma_entry_dtor(unsigned long data)
|
||||
{
|
||||
struct page *freelist = (struct page *)data;
|
||||
|
||||
while (freelist) {
|
||||
unsigned long p = (unsigned long)page_address(freelist);
|
||||
|
||||
freelist = freelist->freelist;
|
||||
free_page(p);
|
||||
}
|
||||
}
|
||||
|
||||
static inline size_t cookie_msi_granule(struct iommu_dma_cookie *cookie)
|
||||
{
|
||||
if (cookie->type == IOMMU_DMA_IOVA_COOKIE)
|
||||
|
@ -343,7 +366,7 @@ static int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
|
|||
if (!cookie->fq_domain && !iommu_domain_get_attr(domain,
|
||||
DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE, &attr) && attr) {
|
||||
if (init_iova_flush_queue(iovad, iommu_dma_flush_iotlb_all,
|
||||
NULL))
|
||||
iommu_dma_entry_dtor))
|
||||
pr_warn("iova flush queue initialization failed\n");
|
||||
else
|
||||
cookie->fq_domain = domain;
|
||||
|
@ -440,7 +463,7 @@ static dma_addr_t iommu_dma_alloc_iova(struct iommu_domain *domain,
|
|||
}
|
||||
|
||||
static void iommu_dma_free_iova(struct iommu_dma_cookie *cookie,
|
||||
dma_addr_t iova, size_t size)
|
||||
dma_addr_t iova, size_t size, struct page *freelist)
|
||||
{
|
||||
struct iova_domain *iovad = &cookie->iovad;
|
||||
|
||||
|
@ -449,7 +472,8 @@ static void iommu_dma_free_iova(struct iommu_dma_cookie *cookie,
|
|||
cookie->msi_iova -= size;
|
||||
else if (cookie->fq_domain) /* non-strict mode */
|
||||
queue_iova(iovad, iova_pfn(iovad, iova),
|
||||
size >> iova_shift(iovad), 0);
|
||||
size >> iova_shift(iovad),
|
||||
(unsigned long)freelist);
|
||||
else
|
||||
free_iova_fast(iovad, iova_pfn(iovad, iova),
|
||||
size >> iova_shift(iovad));
|
||||
|
@ -474,7 +498,32 @@ static void __iommu_dma_unmap(struct device *dev, dma_addr_t dma_addr,
|
|||
|
||||
if (!cookie->fq_domain)
|
||||
iommu_iotlb_sync(domain, &iotlb_gather);
|
||||
iommu_dma_free_iova(cookie, dma_addr, size);
|
||||
iommu_dma_free_iova(cookie, dma_addr, size, iotlb_gather.freelist);
|
||||
}
|
||||
|
||||
static void __iommu_dma_unmap_swiotlb(struct device *dev, dma_addr_t dma_addr,
|
||||
size_t size, enum dma_data_direction dir,
|
||||
unsigned long attrs)
|
||||
{
|
||||
struct iommu_domain *domain = iommu_get_dma_domain(dev);
|
||||
struct iommu_dma_cookie *cookie = domain->iova_cookie;
|
||||
struct iova_domain *iovad = &cookie->iovad;
|
||||
phys_addr_t phys;
|
||||
|
||||
phys = iommu_iova_to_phys(domain, dma_addr);
|
||||
if (WARN_ON(!phys))
|
||||
return;
|
||||
|
||||
__iommu_dma_unmap(dev, dma_addr, size);
|
||||
|
||||
if (unlikely(is_swiotlb_buffer(phys)))
|
||||
swiotlb_tbl_unmap_single(dev, phys, size,
|
||||
iova_align(iovad, size), dir, attrs);
|
||||
}
|
||||
|
||||
static bool dev_is_untrusted(struct device *dev)
|
||||
{
|
||||
return dev_is_pci(dev) && to_pci_dev(dev)->untrusted;
|
||||
}
|
||||
|
||||
static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
|
||||
|
@ -496,12 +545,60 @@ static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
|
|||
return DMA_MAPPING_ERROR;
|
||||
|
||||
if (iommu_map_atomic(domain, iova, phys - iova_off, size, prot)) {
|
||||
iommu_dma_free_iova(cookie, iova, size);
|
||||
iommu_dma_free_iova(cookie, iova, size, NULL);
|
||||
return DMA_MAPPING_ERROR;
|
||||
}
|
||||
return iova + iova_off;
|
||||
}
|
||||
|
||||
static dma_addr_t __iommu_dma_map_swiotlb(struct device *dev, phys_addr_t phys,
|
||||
size_t org_size, dma_addr_t dma_mask, bool coherent,
|
||||
enum dma_data_direction dir, unsigned long attrs)
|
||||
{
|
||||
int prot = dma_info_to_prot(dir, coherent, attrs);
|
||||
struct iommu_domain *domain = iommu_get_dma_domain(dev);
|
||||
struct iommu_dma_cookie *cookie = domain->iova_cookie;
|
||||
struct iova_domain *iovad = &cookie->iovad;
|
||||
size_t aligned_size = org_size;
|
||||
void *padding_start;
|
||||
size_t padding_size;
|
||||
dma_addr_t iova;
|
||||
|
||||
/*
|
||||
* If both the physical buffer start address and size are
|
||||
* page aligned, we don't need to use a bounce page.
|
||||
*/
|
||||
if (IS_ENABLED(CONFIG_SWIOTLB) && dev_is_untrusted(dev) &&
|
||||
iova_offset(iovad, phys | org_size)) {
|
||||
aligned_size = iova_align(iovad, org_size);
|
||||
phys = swiotlb_tbl_map_single(dev, phys, org_size,
|
||||
aligned_size, dir, attrs);
|
||||
|
||||
if (phys == DMA_MAPPING_ERROR)
|
||||
return DMA_MAPPING_ERROR;
|
||||
|
||||
/* Cleanup the padding area. */
|
||||
padding_start = phys_to_virt(phys);
|
||||
padding_size = aligned_size;
|
||||
|
||||
if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
|
||||
(dir == DMA_TO_DEVICE ||
|
||||
dir == DMA_BIDIRECTIONAL)) {
|
||||
padding_start += org_size;
|
||||
padding_size -= org_size;
|
||||
}
|
||||
|
||||
memset(padding_start, 0, padding_size);
|
||||
}
|
||||
|
||||
iova = __iommu_dma_map(dev, phys, aligned_size, prot, dma_mask);
|
||||
if ((iova == DMA_MAPPING_ERROR) && is_swiotlb_buffer(phys))
|
||||
swiotlb_tbl_unmap_single(dev, phys, org_size,
|
||||
aligned_size, dir, attrs);
|
||||
|
||||
return iova;
|
||||
}

static void __iommu_dma_free_pages(struct page **pages, int count)
{
	while (count--)

@@ -649,37 +746,26 @@ out_unmap:
out_free_sg:
	sg_free_table(&sgt);
out_free_iova:
-	iommu_dma_free_iova(cookie, iova, size);
+	iommu_dma_free_iova(cookie, iova, size, NULL);
out_free_pages:
	__iommu_dma_free_pages(pages, count);
	return NULL;
}

-/**
- * __iommu_dma_mmap - Map a buffer into provided user VMA
- * @pages: Array representing buffer from __iommu_dma_alloc()
- * @size: Size of buffer in bytes
- * @vma: VMA describing requested userspace mapping
- *
- * Maps the pages of the buffer in @pages into @vma. The caller is responsible
- * for verifying the correct size and protection of @vma beforehand.
- */
-static int __iommu_dma_mmap(struct page **pages, size_t size,
-		struct vm_area_struct *vma)
-{
-	return vm_map_pages(vma, pages, PAGE_ALIGN(size) >> PAGE_SHIFT);
-}

static void iommu_dma_sync_single_for_cpu(struct device *dev,
		dma_addr_t dma_handle, size_t size, enum dma_data_direction dir)
{
	phys_addr_t phys;

-	if (dev_is_dma_coherent(dev))
+	if (dev_is_dma_coherent(dev) && !dev_is_untrusted(dev))
		return;

	phys = iommu_iova_to_phys(iommu_get_dma_domain(dev), dma_handle);
-	arch_sync_dma_for_cpu(phys, size, dir);
+	if (!dev_is_dma_coherent(dev))
+		arch_sync_dma_for_cpu(phys, size, dir);
+
+	if (is_swiotlb_buffer(phys))
+		swiotlb_tbl_sync_single(dev, phys, size, dir, SYNC_FOR_CPU);
}

static void iommu_dma_sync_single_for_device(struct device *dev,
@@ -687,11 +773,15 @@ static void iommu_dma_sync_single_for_device(struct device *dev,
{
	phys_addr_t phys;

-	if (dev_is_dma_coherent(dev))
+	if (dev_is_dma_coherent(dev) && !dev_is_untrusted(dev))
		return;

	phys = iommu_iova_to_phys(iommu_get_dma_domain(dev), dma_handle);
-	arch_sync_dma_for_device(phys, size, dir);
+	if (is_swiotlb_buffer(phys))
+		swiotlb_tbl_sync_single(dev, phys, size, dir, SYNC_FOR_DEVICE);
+
+	if (!dev_is_dma_coherent(dev))
+		arch_sync_dma_for_device(phys, size, dir);
}

static void iommu_dma_sync_sg_for_cpu(struct device *dev,
@@ -701,11 +791,17 @@ static void iommu_dma_sync_sg_for_cpu(struct device *dev,
	struct scatterlist *sg;
	int i;

-	if (dev_is_dma_coherent(dev))
+	if (dev_is_dma_coherent(dev) && !dev_is_untrusted(dev))
		return;

-	for_each_sg(sgl, sg, nelems, i)
-		arch_sync_dma_for_cpu(sg_phys(sg), sg->length, dir);
+	for_each_sg(sgl, sg, nelems, i) {
+		if (!dev_is_dma_coherent(dev))
+			arch_sync_dma_for_cpu(sg_phys(sg), sg->length, dir);
+
+		if (is_swiotlb_buffer(sg_phys(sg)))
+			swiotlb_tbl_sync_single(dev, sg_phys(sg), sg->length,
+						dir, SYNC_FOR_CPU);
+	}
}

static void iommu_dma_sync_sg_for_device(struct device *dev,
@@ -715,11 +811,17 @@ static void iommu_dma_sync_sg_for_device(struct device *dev,
	struct scatterlist *sg;
	int i;

-	if (dev_is_dma_coherent(dev))
+	if (dev_is_dma_coherent(dev) && !dev_is_untrusted(dev))
		return;

-	for_each_sg(sgl, sg, nelems, i)
-		arch_sync_dma_for_device(sg_phys(sg), sg->length, dir);
+	for_each_sg(sgl, sg, nelems, i) {
+		if (is_swiotlb_buffer(sg_phys(sg)))
+			swiotlb_tbl_sync_single(dev, sg_phys(sg), sg->length,
+						dir, SYNC_FOR_DEVICE);
+
+		if (!dev_is_dma_coherent(dev))
+			arch_sync_dma_for_device(sg_phys(sg), sg->length, dir);
+	}
}

static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
@@ -728,10 +830,10 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
{
	phys_addr_t phys = page_to_phys(page) + offset;
	bool coherent = dev_is_dma_coherent(dev);
-	int prot = dma_info_to_prot(dir, coherent, attrs);
	dma_addr_t dma_handle;

-	dma_handle = __iommu_dma_map(dev, phys, size, prot, dma_get_mask(dev));
+	dma_handle = __iommu_dma_map_swiotlb(dev, phys, size, dma_get_mask(dev),
+			coherent, dir, attrs);
	if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
	    dma_handle != DMA_MAPPING_ERROR)
		arch_sync_dma_for_device(phys, size, dir);

@@ -743,7 +845,7 @@ static void iommu_dma_unmap_page(struct device *dev, dma_addr_t dma_handle,
{
	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
		iommu_dma_sync_single_for_cpu(dev, dma_handle, size, dir);
-	__iommu_dma_unmap(dev, dma_handle, size);
+	__iommu_dma_unmap_swiotlb(dev, dma_handle, size, dir, attrs);
}

/*
@@ -761,6 +863,33 @@ static int __finalise_sg(struct device *dev, struct scatterlist *sg, int nents,
	unsigned int cur_len = 0, max_len = dma_get_max_seg_size(dev);
	int i, count = 0;

+	/*
+	 * The Intel graphic driver is used to assume that the returned
+	 * sg list is not combound. This blocks the efforts of converting
+	 * Intel IOMMU driver to dma-iommu api's. Add this quirk to make the
+	 * device driver work and should be removed once it's fixed in i915
+	 * driver.
+	 */
+	if (IS_ENABLED(CONFIG_DRM_I915) && dev_is_pci(dev) &&
+	    to_pci_dev(dev)->vendor == PCI_VENDOR_ID_INTEL &&
+	    (to_pci_dev(dev)->class >> 16) == PCI_BASE_CLASS_DISPLAY) {
+		for_each_sg(sg, s, nents, i) {
+			unsigned int s_iova_off = sg_dma_address(s);
+			unsigned int s_length = sg_dma_len(s);
+			unsigned int s_iova_len = s->length;
+
+			s->offset += s_iova_off;
+			s->length = s_length;
+			sg_dma_address(s) = dma_addr + s_iova_off;
+			sg_dma_len(s) = s_length;
+			dma_addr += s_iova_len;
+
+			pr_info_once("sg combining disabled due to i915 driver\n");
+		}
+
+		return nents;
+	}
+
	for_each_sg(sg, s, nents, i) {
		/* Restore this segment's original unaligned fields first */
		unsigned int s_iova_off = sg_dma_address(s);
@@ -821,6 +950,39 @@ static void __invalidate_sg(struct scatterlist *sg, int nents)
	}
}

+static void iommu_dma_unmap_sg_swiotlb(struct device *dev, struct scatterlist *sg,
+		int nents, enum dma_data_direction dir, unsigned long attrs)
+{
+	struct scatterlist *s;
+	int i;
+
+	for_each_sg(sg, s, nents, i)
+		__iommu_dma_unmap_swiotlb(dev, sg_dma_address(s),
+				sg_dma_len(s), dir, attrs);
+}
+
+static int iommu_dma_map_sg_swiotlb(struct device *dev, struct scatterlist *sg,
+		int nents, enum dma_data_direction dir, unsigned long attrs)
+{
+	struct scatterlist *s;
+	int i;
+
+	for_each_sg(sg, s, nents, i) {
+		sg_dma_address(s) = __iommu_dma_map_swiotlb(dev, sg_phys(s),
+				s->length, dma_get_mask(dev),
+				dev_is_dma_coherent(dev), dir, attrs);
+		if (sg_dma_address(s) == DMA_MAPPING_ERROR)
+			goto out_unmap;
+		sg_dma_len(s) = s->length;
+	}
+
+	return nents;
+
+out_unmap:
+	iommu_dma_unmap_sg_swiotlb(dev, sg, i, dir, attrs | DMA_ATTR_SKIP_CPU_SYNC);
+	return 0;
+}
+
/*
 * The DMA API client is passing in a scatterlist which could describe
 * any old buffer layout, but the IOMMU API requires everything to be
@@ -847,6 +1009,9 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
		iommu_dma_sync_sg_for_device(dev, sg, nents, dir);

+	if (dev_is_untrusted(dev))
+		return iommu_dma_map_sg_swiotlb(dev, sg, nents, dir, attrs);
+
	/*
	 * Work out how much IOVA space we need, and align the segments to
	 * IOVA granules for the IOMMU driver to handle. With some clever

@@ -900,7 +1065,7 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
	return __finalise_sg(dev, sg, nents, iova);

out_free_iova:
-	iommu_dma_free_iova(cookie, iova, iova_len);
+	iommu_dma_free_iova(cookie, iova, iova_len, NULL);
out_restore_sg:
	__invalidate_sg(sg, nents);
	return 0;

@@ -916,6 +1081,11 @@ static void iommu_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
		iommu_dma_sync_sg_for_cpu(dev, sg, nents, dir);

+	if (dev_is_untrusted(dev)) {
+		iommu_dma_unmap_sg_swiotlb(dev, sg, nents, dir, attrs);
+		return;
+	}
+
	/*
	 * The scatterlist segments are mapped into a single
	 * contiguous IOVA allocation, so this is incredibly easy.

@@ -1102,7 +1272,7 @@ static int iommu_dma_mmap(struct device *dev, struct vm_area_struct *vma,
		struct page **pages = dma_common_find_pages(cpu_addr);

		if (pages)
-			return __iommu_dma_mmap(pages, size, vma);
+			return vm_map_pages(vma, pages, nr_pages);
		pfn = vmalloc_to_pfn(cpu_addr);
	} else {
		pfn = page_to_pfn(virt_to_page(cpu_addr));

@@ -1228,7 +1398,7 @@ static struct iommu_dma_msi_page *iommu_dma_get_msi_page(struct device *dev,
	return msi_page;

out_free_iova:
-	iommu_dma_free_iova(cookie, iova, size);
+	iommu_dma_free_iova(cookie, iova, size, NULL);
out_free_page:
	kfree(msi_page);
	return NULL;
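For context, none of this changes what a device driver writes: drivers keep using the streaming DMA API, and when their device is attached to an IOMMU DMA domain these iommu_dma_* callbacks run underneath it, transparently bouncing untrusted PCI devices through swiotlb. A minimal, hypothetical caller is sketched below; the function name, buffer handling and hardware step are illustrative assumptions, not code from this series:

#include <linux/dma-mapping.h>

static int example_send_buffer(struct device *dev, void *buf, size_t len)
{
	dma_addr_t dma;

	/* Ends up in iommu_dma_map_page() when dev uses an IOMMU DMA domain. */
	dma = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, dma))
		return -ENOMEM;

	/* ... hand 'dma' to the hardware and wait for completion ... */

	dma_unmap_single(dev, dma, len, DMA_TO_DEVICE);
	return 0;
}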

@@ -13,6 +13,7 @@ config INTEL_IOMMU
	select DMAR_TABLE
	select SWIOTLB
	select IOASID
+	select IOMMU_DMA
	help
	  DMA remapping (DMAR) devices support enables independent address
	  translations for Direct Memory Access (DMA) from devices.

File diff suppressed because it is too large
|
@@ -598,7 +598,7 @@ intel_svm_bind_mm(struct device *dev, unsigned int flags,
	if (mm) {
		ret = mmu_notifier_register(&svm->notifier, mm);
		if (ret) {
-			ioasid_free(svm->pasid);
+			ioasid_put(svm->pasid);
			kfree(svm);
			kfree(sdev);
			goto out;

@@ -616,7 +616,7 @@ intel_svm_bind_mm(struct device *dev, unsigned int flags,
	if (ret) {
		if (mm)
			mmu_notifier_unregister(&svm->notifier, mm);
-		ioasid_free(svm->pasid);
+		ioasid_put(svm->pasid);
		kfree(svm);
		kfree(sdev);
		goto out;

@@ -689,7 +689,7 @@ static int intel_svm_unbind_mm(struct device *dev, u32 pasid)
		kfree_rcu(sdev, rcu);

		if (list_empty(&svm->devs)) {
-			ioasid_free(svm->pasid);
+			ioasid_put(svm->pasid);
			if (svm->mm) {
				mmu_notifier_unregister(&svm->notifier, svm->mm);
				/* Clear mm's pasid. */
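The ioasid_free() to ioasid_put() conversion above reflects the new reference counting in the IOASID core (see the ioasid.c and linux/ioasid.h hunks later in this diff): the ID is only returned to the allocator when the last reference is dropped. The sketch below walks through that lifecycle; the set name, range and surrounding function are illustrative assumptions, not taken from the patch:

#include <linux/ioasid.h>
#include <linux/printk.h>

static DECLARE_IOASID_SET(example_set);

static void example_pasid_lifecycle(void *priv)
{
	/* ioasid_alloc() hands back the ID with one reference held. */
	ioasid_t pasid = ioasid_alloc(&example_set, 1, (1 << 20) - 1, priv);

	if (pasid == INVALID_IOASID)
		return;

	ioasid_get(pasid);		/* a second user pins the ID */

	ioasid_put(pasid);		/* returns false, one reference left */
	if (ioasid_put(pasid))		/* returns true, ID actually freed */
		pr_debug("pasid %u released\n", pasid);
}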
|
|
@ -522,14 +522,14 @@ static int arm_v7s_map(struct io_pgtable_ops *ops, unsigned long iova,
|
|||
struct io_pgtable *iop = &data->iop;
|
||||
int ret;
|
||||
|
||||
/* If no access, then nothing to do */
|
||||
if (!(prot & (IOMMU_READ | IOMMU_WRITE)))
|
||||
return 0;
|
||||
|
||||
if (WARN_ON(iova >= (1ULL << data->iop.cfg.ias) ||
|
||||
paddr >= (1ULL << data->iop.cfg.oas)))
|
||||
return -ERANGE;
|
||||
|
||||
/* If no access, then nothing to do */
|
||||
if (!(prot & (IOMMU_READ | IOMMU_WRITE)))
|
||||
return 0;
|
||||
|
||||
ret = __arm_v7s_map(data, iova, paddr, size, prot, 1, data->pgd, gfp);
|
||||
/*
|
||||
* Synchronise all PTE updates for the new mapping before there's
|
||||
|
@ -584,7 +584,7 @@ static arm_v7s_iopte arm_v7s_split_cont(struct arm_v7s_io_pgtable *data,
|
|||
__arm_v7s_pte_sync(ptep, ARM_V7S_CONT_PAGES, &iop->cfg);
|
||||
|
||||
size *= ARM_V7S_CONT_PAGES;
|
||||
io_pgtable_tlb_flush_leaf(iop, iova, size, size);
|
||||
io_pgtable_tlb_flush_walk(iop, iova, size, size);
|
||||
return pte;
|
||||
}
|
||||
|
||||
|
@ -866,7 +866,6 @@ static void __init dummy_tlb_add_page(struct iommu_iotlb_gather *gather,
|
|||
static const struct iommu_flush_ops dummy_tlb_ops __initconst = {
|
||||
.tlb_flush_all = dummy_tlb_flush_all,
|
||||
.tlb_flush_walk = dummy_tlb_flush,
|
||||
.tlb_flush_leaf = dummy_tlb_flush,
|
||||
.tlb_add_page = dummy_tlb_add_page,
|
||||
};
|
||||
|
||||
|
|
|
@ -130,7 +130,7 @@
|
|||
/* IOPTE accessors */
|
||||
#define iopte_deref(pte,d) __va(iopte_to_paddr(pte, d))
|
||||
|
||||
#define iopte_type(pte,l) \
|
||||
#define iopte_type(pte) \
|
||||
(((pte) >> ARM_LPAE_PTE_TYPE_SHIFT) & ARM_LPAE_PTE_TYPE_MASK)
|
||||
|
||||
#define iopte_prot(pte) ((pte) & ARM_LPAE_PTE_ATTR_MASK)
|
||||
|
@ -151,9 +151,9 @@ static inline bool iopte_leaf(arm_lpae_iopte pte, int lvl,
|
|||
enum io_pgtable_fmt fmt)
|
||||
{
|
||||
if (lvl == (ARM_LPAE_MAX_LEVELS - 1) && fmt != ARM_MALI_LPAE)
|
||||
return iopte_type(pte, lvl) == ARM_LPAE_PTE_TYPE_PAGE;
|
||||
return iopte_type(pte) == ARM_LPAE_PTE_TYPE_PAGE;
|
||||
|
||||
return iopte_type(pte, lvl) == ARM_LPAE_PTE_TYPE_BLOCK;
|
||||
return iopte_type(pte) == ARM_LPAE_PTE_TYPE_BLOCK;
|
||||
}
|
||||
|
||||
static arm_lpae_iopte paddr_to_iopte(phys_addr_t paddr,
|
||||
|
@ -280,7 +280,7 @@ static int arm_lpae_init_pte(struct arm_lpae_io_pgtable *data,
|
|||
/* We require an unmap first */
|
||||
WARN_ON(!selftest_running);
|
||||
return -EEXIST;
|
||||
} else if (iopte_type(pte, lvl) == ARM_LPAE_PTE_TYPE_TABLE) {
|
||||
} else if (iopte_type(pte) == ARM_LPAE_PTE_TYPE_TABLE) {
|
||||
/*
|
||||
* We need to unmap and free the old table before
|
||||
* overwriting it with a block entry.
|
||||
|
@ -450,10 +450,6 @@ static int arm_lpae_map(struct io_pgtable_ops *ops, unsigned long iova,
|
|||
arm_lpae_iopte prot;
|
||||
long iaext = (s64)iova >> cfg->ias;
|
||||
|
||||
/* If no access, then nothing to do */
|
||||
if (!(iommu_prot & (IOMMU_READ | IOMMU_WRITE)))
|
||||
return 0;
|
||||
|
||||
if (WARN_ON(!size || (size & cfg->pgsize_bitmap) != size))
|
||||
return -EINVAL;
|
||||
|
||||
|
@ -462,6 +458,10 @@ static int arm_lpae_map(struct io_pgtable_ops *ops, unsigned long iova,
|
|||
if (WARN_ON(iaext || paddr >> cfg->oas))
|
||||
return -ERANGE;
|
||||
|
||||
/* If no access, then nothing to do */
|
||||
if (!(iommu_prot & (IOMMU_READ | IOMMU_WRITE)))
|
||||
return 0;
|
||||
|
||||
prot = arm_lpae_prot_to_pte(data, iommu_prot);
|
||||
ret = __arm_lpae_map(data, iova, paddr, size, prot, lvl, ptep, gfp);
|
||||
/*
|
||||
|
@ -554,7 +554,7 @@ static size_t arm_lpae_split_blk_unmap(struct arm_lpae_io_pgtable *data,
|
|||
* block, but anything else is invalid. We can't misinterpret
|
||||
* a page entry here since we're never at the last level.
|
||||
*/
|
||||
if (iopte_type(pte, lvl - 1) != ARM_LPAE_PTE_TYPE_TABLE)
|
||||
if (iopte_type(pte) != ARM_LPAE_PTE_TYPE_TABLE)
|
||||
return 0;
|
||||
|
||||
tablep = iopte_deref(pte, data);
|
||||
|
@ -1094,7 +1094,6 @@ static void __init dummy_tlb_add_page(struct iommu_iotlb_gather *gather,
|
|||
static const struct iommu_flush_ops dummy_tlb_ops __initconst = {
|
||||
.tlb_flush_all = dummy_tlb_flush_all,
|
||||
.tlb_flush_walk = dummy_tlb_flush,
|
||||
.tlb_flush_leaf = dummy_tlb_flush,
|
||||
.tlb_add_page = dummy_tlb_add_page,
|
||||
};
|
||||
|
||||
|
|
|
@ -2,7 +2,7 @@
|
|||
/*
|
||||
* I/O Address Space ID allocator. There is one global IOASID space, split into
|
||||
* subsets. Users create a subset with DECLARE_IOASID_SET, then allocate and
|
||||
* free IOASIDs with ioasid_alloc and ioasid_free.
|
||||
* free IOASIDs with ioasid_alloc and ioasid_put.
|
||||
*/
|
||||
#include <linux/ioasid.h>
|
||||
#include <linux/module.h>
|
||||
|
@ -15,6 +15,7 @@ struct ioasid_data {
|
|||
struct ioasid_set *set;
|
||||
void *private;
|
||||
struct rcu_head rcu;
|
||||
refcount_t refs;
|
||||
};
|
||||
|
||||
/*
|
||||
|
@ -314,6 +315,7 @@ ioasid_t ioasid_alloc(struct ioasid_set *set, ioasid_t min, ioasid_t max,
|
|||
|
||||
data->set = set;
|
||||
data->private = private;
|
||||
refcount_set(&data->refs, 1);
|
||||
|
||||
/*
|
||||
* Custom allocator needs allocator data to perform platform specific
|
||||
|
@ -346,13 +348,36 @@ exit_free:
|
|||
EXPORT_SYMBOL_GPL(ioasid_alloc);
|
||||
|
||||
/**
|
||||
* ioasid_free - Free an IOASID
|
||||
* @ioasid: the ID to remove
|
||||
* ioasid_get - obtain a reference to the IOASID
|
||||
*/
|
||||
void ioasid_free(ioasid_t ioasid)
|
||||
void ioasid_get(ioasid_t ioasid)
|
||||
{
|
||||
struct ioasid_data *ioasid_data;
|
||||
|
||||
spin_lock(&ioasid_allocator_lock);
|
||||
ioasid_data = xa_load(&active_allocator->xa, ioasid);
|
||||
if (ioasid_data)
|
||||
refcount_inc(&ioasid_data->refs);
|
||||
else
|
||||
WARN_ON(1);
|
||||
spin_unlock(&ioasid_allocator_lock);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(ioasid_get);
|
||||
|
||||
/**
|
||||
* ioasid_put - Release a reference to an ioasid
|
||||
* @ioasid: the ID to remove
|
||||
*
|
||||
* Put a reference to the IOASID, free it when the number of references drops to
|
||||
* zero.
|
||||
*
|
||||
* Return: %true if the IOASID was freed, %false otherwise.
|
||||
*/
|
||||
bool ioasid_put(ioasid_t ioasid)
|
||||
{
|
||||
bool free = false;
|
||||
struct ioasid_data *ioasid_data;
|
||||
|
||||
spin_lock(&ioasid_allocator_lock);
|
||||
ioasid_data = xa_load(&active_allocator->xa, ioasid);
|
||||
if (!ioasid_data) {
|
||||
|
@ -360,6 +385,10 @@ void ioasid_free(ioasid_t ioasid)
|
|||
goto exit_unlock;
|
||||
}
|
||||
|
||||
free = refcount_dec_and_test(&ioasid_data->refs);
|
||||
if (!free)
|
||||
goto exit_unlock;
|
||||
|
||||
active_allocator->ops->free(ioasid, active_allocator->ops->pdata);
|
||||
/* Custom allocator needs additional steps to free the xa element */
|
||||
if (active_allocator->flags & IOASID_ALLOCATOR_CUSTOM) {
|
||||
|
@ -369,8 +398,9 @@ void ioasid_free(ioasid_t ioasid)
|
|||
|
||||
exit_unlock:
|
||||
spin_unlock(&ioasid_allocator_lock);
|
||||
return free;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(ioasid_free);
|
||||
EXPORT_SYMBOL_GPL(ioasid_put);
|
||||
|
||||
/**
|
||||
* ioasid_find - Find IOASID data
|
||||
|
|
|
@@ -0,0 +1,86 @@
// SPDX-License-Identifier: GPL-2.0
/*
 * Helpers for IOMMU drivers implementing SVA
 */
#include <linux/mutex.h>
#include <linux/sched/mm.h>

#include "iommu-sva-lib.h"

static DEFINE_MUTEX(iommu_sva_lock);
static DECLARE_IOASID_SET(iommu_sva_pasid);

/**
 * iommu_sva_alloc_pasid - Allocate a PASID for the mm
 * @mm: the mm
 * @min: minimum PASID value (inclusive)
 * @max: maximum PASID value (inclusive)
 *
 * Try to allocate a PASID for this mm, or take a reference to the existing one
 * provided it fits within the [@min, @max] range. On success the PASID is
 * available in mm->pasid, and must be released with iommu_sva_free_pasid().
 * @min must be greater than 0, because 0 indicates an unused mm->pasid.
 *
 * Returns 0 on success and < 0 on error.
 */
int iommu_sva_alloc_pasid(struct mm_struct *mm, ioasid_t min, ioasid_t max)
{
	int ret = 0;
	ioasid_t pasid;

	if (min == INVALID_IOASID || max == INVALID_IOASID ||
	    min == 0 || max < min)
		return -EINVAL;

	mutex_lock(&iommu_sva_lock);
	if (mm->pasid) {
		if (mm->pasid >= min && mm->pasid <= max)
			ioasid_get(mm->pasid);
		else
			ret = -EOVERFLOW;
	} else {
		pasid = ioasid_alloc(&iommu_sva_pasid, min, max, mm);
		if (pasid == INVALID_IOASID)
			ret = -ENOMEM;
		else
			mm->pasid = pasid;
	}
	mutex_unlock(&iommu_sva_lock);
	return ret;
}
EXPORT_SYMBOL_GPL(iommu_sva_alloc_pasid);

/**
 * iommu_sva_free_pasid - Release the mm's PASID
 * @mm: the mm
 *
 * Drop one reference to a PASID allocated with iommu_sva_alloc_pasid()
 */
void iommu_sva_free_pasid(struct mm_struct *mm)
{
	mutex_lock(&iommu_sva_lock);
	if (ioasid_put(mm->pasid))
		mm->pasid = 0;
	mutex_unlock(&iommu_sva_lock);
}
EXPORT_SYMBOL_GPL(iommu_sva_free_pasid);

/* ioasid_find getter() requires a void * argument */
static bool __mmget_not_zero(void *mm)
{
	return mmget_not_zero(mm);
}

/**
 * iommu_sva_find() - Find mm associated to the given PASID
 * @pasid: Process Address Space ID assigned to the mm
 *
 * On success a reference to the mm is taken, and must be released with mmput().
 *
 * Returns the mm corresponding to this PASID, or an error if not found.
 */
struct mm_struct *iommu_sva_find(ioasid_t pasid)
{
	return ioasid_find(&iommu_sva_pasid, pasid, __mmget_not_zero);
}
EXPORT_SYMBOL_GPL(iommu_sva_find);
@@ -0,0 +1,15 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
 * SVA library for IOMMU drivers
 */
#ifndef _IOMMU_SVA_LIB_H
#define _IOMMU_SVA_LIB_H

#include <linux/ioasid.h>
#include <linux/mm_types.h>

int iommu_sva_alloc_pasid(struct mm_struct *mm, ioasid_t min, ioasid_t max);
void iommu_sva_free_pasid(struct mm_struct *mm);
struct mm_struct *iommu_sva_find(ioasid_t pasid);

#endif /* _IOMMU_SVA_LIB_H */
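A sketch of how an SVA-capable IOMMU driver might consume these helpers. The example_* functions, the fault-handling flow and the PASID range are illustrative assumptions, not code from this series:

#include <linux/err.h>
#include <linux/sched/mm.h>

#include "iommu-sva-lib.h"

/* Hypothetical bind step: reserve a PASID for the process address space. */
static int example_sva_bind_mm(struct mm_struct *mm)
{
	int ret;

	/* 1..(2^20 - 1) matches the PCIe 20-bit PASID space. */
	ret = iommu_sva_alloc_pasid(mm, 1, (1 << 20) - 1);
	if (ret)
		return ret;

	/*
	 * A real driver would now install mm->pasid in its PASID table and
	 * register an mmu_notifier; both steps are hardware specific and
	 * omitted here.
	 */
	return 0;
}

static void example_sva_unbind_mm(struct mm_struct *mm)
{
	iommu_sva_free_pasid(mm);
}

/* Hypothetical I/O page fault path: translate a PASID back to its mm. */
static void example_handle_iopf(ioasid_t pasid)
{
	struct mm_struct *mm = iommu_sva_find(pasid);

	if (IS_ERR_OR_NULL(mm))
		return;

	/* ... fix up the fault against mm ... */
	mmput(mm);	/* iommu_sva_find() took a reference */
}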
@ -93,6 +93,8 @@ static void __iommu_detach_group(struct iommu_domain *domain,
|
|||
static int iommu_create_device_direct_mappings(struct iommu_group *group,
|
||||
struct device *dev);
|
||||
static struct iommu_group *iommu_group_get_for_dev(struct device *dev);
|
||||
static ssize_t iommu_group_store_type(struct iommu_group *group,
|
||||
const char *buf, size_t count);
|
||||
|
||||
#define IOMMU_GROUP_ATTR(_name, _mode, _show, _store) \
|
||||
struct iommu_group_attribute iommu_group_attr_##_name = \
|
||||
|
@ -253,8 +255,10 @@ int iommu_probe_device(struct device *dev)
|
|||
goto err_out;
|
||||
|
||||
group = iommu_group_get(dev);
|
||||
if (!group)
|
||||
if (!group) {
|
||||
ret = -ENODEV;
|
||||
goto err_release;
|
||||
}
|
||||
|
||||
/*
|
||||
* Try to allocate a default domain - needs support from the
|
||||
|
@ -501,6 +505,7 @@ static ssize_t iommu_group_show_type(struct iommu_group *group,
|
|||
{
|
||||
char *type = "unknown\n";
|
||||
|
||||
mutex_lock(&group->mutex);
|
||||
if (group->default_domain) {
|
||||
switch (group->default_domain->type) {
|
||||
case IOMMU_DOMAIN_BLOCKED:
|
||||
|
@ -517,6 +522,7 @@ static ssize_t iommu_group_show_type(struct iommu_group *group,
|
|||
break;
|
||||
}
|
||||
}
|
||||
mutex_unlock(&group->mutex);
|
||||
strcpy(buf, type);
|
||||
|
||||
return strlen(type);
|
||||
|
@ -527,7 +533,8 @@ static IOMMU_GROUP_ATTR(name, S_IRUGO, iommu_group_show_name, NULL);
|
|||
static IOMMU_GROUP_ATTR(reserved_regions, 0444,
|
||||
iommu_group_show_resv_regions, NULL);
|
||||
|
||||
static IOMMU_GROUP_ATTR(type, 0444, iommu_group_show_type, NULL);
|
||||
static IOMMU_GROUP_ATTR(type, 0644, iommu_group_show_type,
|
||||
iommu_group_store_type);
|
||||
|
||||
static void iommu_group_release(struct kobject *kobj)
|
||||
{
|
||||
|
@ -739,6 +746,7 @@ static int iommu_create_device_direct_mappings(struct iommu_group *group,
|
|||
/* We need to consider overlapping regions for different devices */
|
||||
list_for_each_entry(entry, &mappings, list) {
|
||||
dma_addr_t start, end, addr;
|
||||
size_t map_size = 0;
|
||||
|
||||
if (domain->ops->apply_resv_region)
|
||||
domain->ops->apply_resv_region(dev, domain, entry);
|
||||
|
@ -750,16 +758,27 @@ static int iommu_create_device_direct_mappings(struct iommu_group *group,
|
|||
entry->type != IOMMU_RESV_DIRECT_RELAXABLE)
|
||||
continue;
|
||||
|
||||
for (addr = start; addr < end; addr += pg_size) {
|
||||
for (addr = start; addr <= end; addr += pg_size) {
|
||||
phys_addr_t phys_addr;
|
||||
|
||||
phys_addr = iommu_iova_to_phys(domain, addr);
|
||||
if (phys_addr)
|
||||
continue;
|
||||
if (addr == end)
|
||||
goto map_end;
|
||||
|
||||
ret = iommu_map(domain, addr, addr, pg_size, entry->prot);
|
||||
if (ret)
|
||||
goto out;
|
||||
phys_addr = iommu_iova_to_phys(domain, addr);
|
||||
if (!phys_addr) {
|
||||
map_size += pg_size;
|
||||
continue;
|
||||
}
|
||||
|
||||
map_end:
|
||||
if (map_size) {
|
||||
ret = iommu_map(domain, addr - map_size,
|
||||
addr - map_size, map_size,
|
||||
entry->prot);
|
||||
if (ret)
|
||||
goto out;
|
||||
map_size = 0;
|
||||
}
|
||||
}
|
||||
|
||||
}
|
||||
|
@ -1462,12 +1481,14 @@ EXPORT_SYMBOL_GPL(fsl_mc_device_group);
|
|||
static int iommu_get_def_domain_type(struct device *dev)
|
||||
{
|
||||
const struct iommu_ops *ops = dev->bus->iommu_ops;
|
||||
unsigned int type = 0;
|
||||
|
||||
if (dev_is_pci(dev) && to_pci_dev(dev)->untrusted)
|
||||
return IOMMU_DOMAIN_DMA;
|
||||
|
||||
if (ops->def_domain_type)
|
||||
type = ops->def_domain_type(dev);
|
||||
return ops->def_domain_type(dev);
|
||||
|
||||
return (type == 0) ? iommu_def_domain_type : type;
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int iommu_group_alloc_default_domain(struct bus_type *bus,
|
||||
|
@ -1509,7 +1530,7 @@ static int iommu_alloc_default_domain(struct iommu_group *group,
|
|||
if (group->default_domain)
|
||||
return 0;
|
||||
|
||||
type = iommu_get_def_domain_type(dev);
|
||||
type = iommu_get_def_domain_type(dev) ? : iommu_def_domain_type;
|
||||
|
||||
return iommu_group_alloc_default_domain(dev->bus, group, type);
|
||||
}
|
||||
|
@ -1647,12 +1668,8 @@ struct __group_domain_type {
|
|||
|
||||
static int probe_get_default_domain_type(struct device *dev, void *data)
|
||||
{
|
||||
const struct iommu_ops *ops = dev->bus->iommu_ops;
|
||||
struct __group_domain_type *gtype = data;
|
||||
unsigned int type = 0;
|
||||
|
||||
if (ops->def_domain_type)
|
||||
type = ops->def_domain_type(dev);
|
||||
unsigned int type = iommu_get_def_domain_type(dev);
|
||||
|
||||
if (type) {
|
||||
if (gtype->type && gtype->type != type) {
|
||||
|
@ -2997,8 +3014,6 @@ EXPORT_SYMBOL_GPL(iommu_sva_bind_device);
|
|||
* Put reference to a bond between device and address space. The device should
|
||||
* not be issuing any more transaction for this PASID. All outstanding page
|
||||
* requests for this PASID must have been flushed to the IOMMU.
|
||||
*
|
||||
* Returns 0 on success, or an error value
|
||||
*/
|
||||
void iommu_sva_unbind_device(struct iommu_sva *handle)
|
||||
{
|
||||
|
@ -3031,3 +3046,228 @@ u32 iommu_sva_get_pasid(struct iommu_sva *handle)
|
|||
return ops->sva_get_pasid(handle);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(iommu_sva_get_pasid);
|
||||
|
||||
/*
|
||||
* Changes the default domain of an iommu group that has *only* one device
|
||||
*
|
||||
* @group: The group for which the default domain should be changed
|
||||
* @prev_dev: The device in the group (this is used to make sure that the device
|
||||
* hasn't changed after the caller has called this function)
|
||||
* @type: The type of the new default domain that gets associated with the group
|
||||
*
|
||||
* Returns 0 on success and error code on failure
|
||||
*
|
||||
* Note:
|
||||
* 1. Presently, this function is called only when user requests to change the
|
||||
* group's default domain type through /sys/kernel/iommu_groups/<grp_id>/type
|
||||
* Please take a closer look if intended to use for other purposes.
|
||||
*/
|
||||
static int iommu_change_dev_def_domain(struct iommu_group *group,
|
||||
struct device *prev_dev, int type)
|
||||
{
|
||||
struct iommu_domain *prev_dom;
|
||||
struct group_device *grp_dev;
|
||||
int ret, dev_def_dom;
|
||||
struct device *dev;
|
||||
|
||||
if (!group)
|
||||
return -EINVAL;
|
||||
|
||||
mutex_lock(&group->mutex);
|
||||
|
||||
if (group->default_domain != group->domain) {
|
||||
dev_err_ratelimited(prev_dev, "Group not assigned to default domain\n");
|
||||
ret = -EBUSY;
|
||||
goto out;
|
||||
}
|
||||
|
||||
/*
|
||||
* iommu group wasn't locked while acquiring device lock in
|
||||
* iommu_group_store_type(). So, make sure that the device count hasn't
|
||||
* changed while acquiring device lock.
|
||||
*
|
||||
* Changing default domain of an iommu group with two or more devices
|
||||
* isn't supported because there could be a potential deadlock. Consider
|
||||
* the following scenario. T1 is trying to acquire device locks of all
|
||||
* the devices in the group and before it could acquire all of them,
|
||||
* there could be another thread T2 (from different sub-system and use
|
||||
* case) that has already acquired some of the device locks and might be
|
||||
* waiting for T1 to release other device locks.
|
||||
*/
|
||||
if (iommu_group_device_count(group) != 1) {
|
||||
dev_err_ratelimited(prev_dev, "Cannot change default domain: Group has more than one device\n");
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
|
||||
/* Since group has only one device */
|
||||
grp_dev = list_first_entry(&group->devices, struct group_device, list);
|
||||
dev = grp_dev->dev;
|
||||
|
||||
if (prev_dev != dev) {
|
||||
dev_err_ratelimited(prev_dev, "Cannot change default domain: Device has been changed\n");
|
||||
ret = -EBUSY;
|
||||
goto out;
|
||||
}
|
||||
|
||||
prev_dom = group->default_domain;
|
||||
if (!prev_dom) {
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
|
||||
dev_def_dom = iommu_get_def_domain_type(dev);
|
||||
if (!type) {
|
||||
/*
|
||||
* If the user hasn't requested any specific type of domain and
|
||||
* if the device supports both the domains, then default to the
|
||||
* domain the device was booted with
|
||||
*/
|
||||
type = dev_def_dom ? : iommu_def_domain_type;
|
||||
} else if (dev_def_dom && type != dev_def_dom) {
|
||||
dev_err_ratelimited(prev_dev, "Device cannot be in %s domain\n",
|
||||
iommu_domain_type_str(type));
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
|
||||
/*
|
||||
* Switch to a new domain only if the requested domain type is different
|
||||
* from the existing default domain type
|
||||
*/
|
||||
if (prev_dom->type == type) {
|
||||
ret = 0;
|
||||
goto out;
|
||||
}
|
||||
|
||||
/* Sets group->default_domain to the newly allocated domain */
|
||||
ret = iommu_group_alloc_default_domain(dev->bus, group, type);
|
||||
if (ret)
|
||||
goto out;
|
||||
|
||||
ret = iommu_create_device_direct_mappings(group, dev);
|
||||
if (ret)
|
||||
goto free_new_domain;
|
||||
|
||||
ret = __iommu_attach_device(group->default_domain, dev);
|
||||
if (ret)
|
||||
goto free_new_domain;
|
||||
|
||||
group->domain = group->default_domain;
|
||||
|
||||
/*
|
||||
* Release the mutex here because ops->probe_finalize() call-back of
|
||||
* some vendor IOMMU drivers calls arm_iommu_attach_device() which
|
||||
* in-turn might call back into IOMMU core code, where it tries to take
|
||||
* group->mutex, resulting in a deadlock.
|
||||
*/
|
||||
mutex_unlock(&group->mutex);
|
||||
|
||||
/* Make sure dma_ops is appropriatley set */
|
||||
iommu_group_do_probe_finalize(dev, group->default_domain);
|
||||
iommu_domain_free(prev_dom);
|
||||
return 0;
|
||||
|
||||
free_new_domain:
|
||||
iommu_domain_free(group->default_domain);
|
||||
group->default_domain = prev_dom;
|
||||
group->domain = prev_dom;
|
||||
|
||||
out:
|
||||
mutex_unlock(&group->mutex);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
/*
|
||||
* Changing the default domain through sysfs requires the users to ubind the
|
||||
* drivers from the devices in the iommu group. Return failure if this doesn't
|
||||
* meet.
|
||||
*
|
||||
* We need to consider the race between this and the device release path.
|
||||
* device_lock(dev) is used here to guarantee that the device release path
|
||||
* will not be entered at the same time.
|
||||
*/
|
||||
static ssize_t iommu_group_store_type(struct iommu_group *group,
|
||||
const char *buf, size_t count)
|
||||
{
|
||||
struct group_device *grp_dev;
|
||||
struct device *dev;
|
||||
int ret, req_type;
|
||||
|
||||
if (!capable(CAP_SYS_ADMIN) || !capable(CAP_SYS_RAWIO))
|
||||
return -EACCES;
|
||||
|
||||
if (WARN_ON(!group))
|
||||
return -EINVAL;
|
||||
|
||||
if (sysfs_streq(buf, "identity"))
|
||||
req_type = IOMMU_DOMAIN_IDENTITY;
|
||||
else if (sysfs_streq(buf, "DMA"))
|
||||
req_type = IOMMU_DOMAIN_DMA;
|
||||
else if (sysfs_streq(buf, "auto"))
|
||||
req_type = 0;
|
||||
else
|
||||
return -EINVAL;
|
||||
|
||||
/*
|
||||
* Lock/Unlock the group mutex here before device lock to
|
||||
* 1. Make sure that the iommu group has only one device (this is a
|
||||
* prerequisite for step 2)
|
||||
* 2. Get struct *dev which is needed to lock device
|
||||
*/
|
||||
mutex_lock(&group->mutex);
|
||||
if (iommu_group_device_count(group) != 1) {
|
||||
mutex_unlock(&group->mutex);
|
||||
pr_err_ratelimited("Cannot change default domain: Group has more than one device\n");
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
/* Since group has only one device */
|
||||
grp_dev = list_first_entry(&group->devices, struct group_device, list);
|
||||
dev = grp_dev->dev;
|
||||
get_device(dev);
|
||||
|
||||
/*
|
||||
* Don't hold the group mutex because taking group mutex first and then
|
||||
* the device lock could potentially cause a deadlock as below. Assume
|
||||
* two threads T1 and T2. T1 is trying to change default domain of an
|
||||
* iommu group and T2 is trying to hot unplug a device or release [1] VF
|
||||
* of a PCIe device which is in the same iommu group. T1 takes group
|
||||
* mutex and before it could take device lock assume T2 has taken device
|
||||
* lock and is yet to take group mutex. Now, both the threads will be
|
||||
* waiting for the other thread to release lock. Below, lock order was
|
||||
* suggested.
|
||||
* device_lock(dev);
|
||||
* mutex_lock(&group->mutex);
|
||||
* iommu_change_dev_def_domain();
|
||||
* mutex_unlock(&group->mutex);
|
||||
* device_unlock(dev);
|
||||
*
|
||||
* [1] Typical device release path
|
||||
* device_lock() from device/driver core code
|
||||
* -> bus_notifier()
|
||||
* -> iommu_bus_notifier()
|
||||
* -> iommu_release_device()
|
||||
* -> ops->release_device() vendor driver calls back iommu core code
|
||||
* -> mutex_lock() from iommu core code
|
||||
*/
|
||||
mutex_unlock(&group->mutex);
|
||||
|
||||
/* Check if the device in the group still has a driver bound to it */
|
||||
device_lock(dev);
|
||||
if (device_is_bound(dev)) {
|
||||
pr_err_ratelimited("Device is still bound to driver\n");
|
||||
ret = -EBUSY;
|
||||
goto out;
|
||||
}
|
||||
|
||||
ret = iommu_change_dev_def_domain(group, dev, req_type);
|
||||
ret = ret ?: count;
|
||||
|
||||
out:
|
||||
device_unlock(dev);
|
||||
put_device(dev);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
|
|
@ -25,6 +25,7 @@ static void init_iova_rcaches(struct iova_domain *iovad);
|
|||
static void free_iova_rcaches(struct iova_domain *iovad);
|
||||
static void fq_destroy_all_entries(struct iova_domain *iovad);
|
||||
static void fq_flush_timeout(struct timer_list *t);
|
||||
static void free_global_cached_iovas(struct iova_domain *iovad);
|
||||
|
||||
void
|
||||
init_iova_domain(struct iova_domain *iovad, unsigned long granule,
|
||||
|
@ -184,8 +185,9 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
|
|||
struct rb_node *curr, *prev;
|
||||
struct iova *curr_iova;
|
||||
unsigned long flags;
|
||||
unsigned long new_pfn;
|
||||
unsigned long new_pfn, retry_pfn;
|
||||
unsigned long align_mask = ~0UL;
|
||||
unsigned long high_pfn = limit_pfn, low_pfn = iovad->start_pfn;
|
||||
|
||||
if (size_aligned)
|
||||
align_mask <<= fls_long(size - 1);
|
||||
|
@ -198,15 +200,25 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
|
|||
|
||||
curr = __get_cached_rbnode(iovad, limit_pfn);
|
||||
curr_iova = rb_entry(curr, struct iova, node);
|
||||
retry_pfn = curr_iova->pfn_hi + 1;
|
||||
|
||||
retry:
|
||||
do {
|
||||
limit_pfn = min(limit_pfn, curr_iova->pfn_lo);
|
||||
new_pfn = (limit_pfn - size) & align_mask;
|
||||
high_pfn = min(high_pfn, curr_iova->pfn_lo);
|
||||
new_pfn = (high_pfn - size) & align_mask;
|
||||
prev = curr;
|
||||
curr = rb_prev(curr);
|
||||
curr_iova = rb_entry(curr, struct iova, node);
|
||||
} while (curr && new_pfn <= curr_iova->pfn_hi);
|
||||
} while (curr && new_pfn <= curr_iova->pfn_hi && new_pfn >= low_pfn);
|
||||
|
||||
if (limit_pfn < size || new_pfn < iovad->start_pfn) {
|
||||
if (high_pfn < size || new_pfn < low_pfn) {
|
||||
if (low_pfn == iovad->start_pfn && retry_pfn < limit_pfn) {
|
||||
high_pfn = limit_pfn;
|
||||
low_pfn = retry_pfn;
|
||||
curr = &iovad->anchor.node;
|
||||
curr_iova = rb_entry(curr, struct iova, node);
|
||||
goto retry;
|
||||
}
|
||||
iovad->max32_alloc_size = size;
|
||||
goto iova32_full;
|
||||
}
|
||||
|
@ -231,18 +243,16 @@ static struct kmem_cache *iova_cache;
|
|||
static unsigned int iova_cache_users;
|
||||
static DEFINE_MUTEX(iova_cache_mutex);
|
||||
|
||||
struct iova *alloc_iova_mem(void)
|
||||
static struct iova *alloc_iova_mem(void)
|
||||
{
|
||||
return kmem_cache_zalloc(iova_cache, GFP_ATOMIC | __GFP_NOWARN);
|
||||
}
|
||||
EXPORT_SYMBOL(alloc_iova_mem);
|
||||
|
||||
void free_iova_mem(struct iova *iova)
|
||||
static void free_iova_mem(struct iova *iova)
|
||||
{
|
||||
if (iova->pfn_lo != IOVA_ANCHOR)
|
||||
kmem_cache_free(iova_cache, iova);
|
||||
}
|
||||
EXPORT_SYMBOL(free_iova_mem);
|
||||
|
||||
int iova_cache_get(void)
|
||||
{
|
||||
|
@ -390,10 +400,14 @@ EXPORT_SYMBOL_GPL(__free_iova);
|
|||
void
|
||||
free_iova(struct iova_domain *iovad, unsigned long pfn)
|
||||
{
|
||||
struct iova *iova = find_iova(iovad, pfn);
|
||||
unsigned long flags;
|
||||
struct iova *iova;
|
||||
|
||||
spin_lock_irqsave(&iovad->iova_rbtree_lock, flags);
|
||||
iova = private_find_iova(iovad, pfn);
|
||||
if (iova)
|
||||
__free_iova(iovad, iova);
|
||||
private_free_iova(iovad, iova);
|
||||
spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
|
||||
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(free_iova);
|
||||
|
@ -431,6 +445,7 @@ retry:
|
|||
flush_rcache = false;
|
||||
for_each_online_cpu(cpu)
|
||||
free_cpu_cached_iovas(cpu, iovad);
|
||||
free_global_cached_iovas(iovad);
|
||||
goto retry;
|
||||
}
|
||||
|
||||
|
@ -725,47 +740,6 @@ copy_reserved_iova(struct iova_domain *from, struct iova_domain *to)
|
|||
}
|
||||
EXPORT_SYMBOL_GPL(copy_reserved_iova);
|
||||
|
||||
struct iova *
|
||||
split_and_remove_iova(struct iova_domain *iovad, struct iova *iova,
|
||||
unsigned long pfn_lo, unsigned long pfn_hi)
|
||||
{
|
||||
unsigned long flags;
|
||||
struct iova *prev = NULL, *next = NULL;
|
||||
|
||||
spin_lock_irqsave(&iovad->iova_rbtree_lock, flags);
|
||||
if (iova->pfn_lo < pfn_lo) {
|
||||
prev = alloc_and_init_iova(iova->pfn_lo, pfn_lo - 1);
|
||||
if (prev == NULL)
|
||||
goto error;
|
||||
}
|
||||
if (iova->pfn_hi > pfn_hi) {
|
||||
next = alloc_and_init_iova(pfn_hi + 1, iova->pfn_hi);
|
||||
if (next == NULL)
|
||||
goto error;
|
||||
}
|
||||
|
||||
__cached_rbnode_delete_update(iovad, iova);
|
||||
rb_erase(&iova->node, &iovad->rbroot);
|
||||
|
||||
if (prev) {
|
||||
iova_insert_rbtree(&iovad->rbroot, prev, NULL);
|
||||
iova->pfn_lo = pfn_lo;
|
||||
}
|
||||
if (next) {
|
||||
iova_insert_rbtree(&iovad->rbroot, next, NULL);
|
||||
iova->pfn_hi = pfn_hi;
|
||||
}
|
||||
spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
|
||||
|
||||
return iova;
|
||||
|
||||
error:
|
||||
spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
|
||||
if (prev)
|
||||
free_iova_mem(prev);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
/*
|
||||
* Magazine caches for IOVA ranges. For an introduction to magazines,
|
||||
* see the USENIX 2001 paper "Magazines and Vmem: Extending the Slab
|
||||
|
@ -1046,5 +1020,25 @@ void free_cpu_cached_iovas(unsigned int cpu, struct iova_domain *iovad)
|
|||
}
|
||||
}
|
||||
|
||||
/*
|
||||
* free all the IOVA ranges of global cache
|
||||
*/
|
||||
static void free_global_cached_iovas(struct iova_domain *iovad)
|
||||
{
|
||||
struct iova_rcache *rcache;
|
||||
unsigned long flags;
|
||||
int i, j;
|
||||
|
||||
for (i = 0; i < IOVA_RANGE_CACHE_MAX_SIZE; ++i) {
|
||||
rcache = &iovad->rcaches[i];
|
||||
spin_lock_irqsave(&rcache->lock, flags);
|
||||
for (j = 0; j < rcache->depot_size; ++j) {
|
||||
iova_magazine_free_pfns(rcache->depot[j], iovad);
|
||||
iova_magazine_free(rcache->depot[j]);
|
||||
}
|
||||
rcache->depot_size = 0;
|
||||
spin_unlock_irqrestore(&rcache->lock, flags);
|
||||
}
|
||||
}
|
||||
MODULE_AUTHOR("Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>");
|
||||
MODULE_LICENSE("GPL");
|
||||
|
|
|
@ -325,7 +325,6 @@ static void ipmmu_tlb_flush(unsigned long iova, size_t size,
|
|||
static const struct iommu_flush_ops ipmmu_flush_ops = {
|
||||
.tlb_flush_all = ipmmu_tlb_flush_all,
|
||||
.tlb_flush_walk = ipmmu_tlb_flush,
|
||||
.tlb_flush_leaf = ipmmu_tlb_flush,
|
||||
};
|
||||
|
||||
/* -----------------------------------------------------------------------------
|
||||
|
|
|
@ -174,12 +174,6 @@ static void __flush_iotlb_walk(unsigned long iova, size_t size,
|
|||
__flush_iotlb_range(iova, size, granule, false, cookie);
|
||||
}
|
||||
|
||||
static void __flush_iotlb_leaf(unsigned long iova, size_t size,
|
||||
size_t granule, void *cookie)
|
||||
{
|
||||
__flush_iotlb_range(iova, size, granule, true, cookie);
|
||||
}
|
||||
|
||||
static void __flush_iotlb_page(struct iommu_iotlb_gather *gather,
|
||||
unsigned long iova, size_t granule, void *cookie)
|
||||
{
|
||||
|
@ -189,7 +183,6 @@ static void __flush_iotlb_page(struct iommu_iotlb_gather *gather,
|
|||
static const struct iommu_flush_ops msm_iommu_flush_ops = {
|
||||
.tlb_flush_all = __flush_iotlb,
|
||||
.tlb_flush_walk = __flush_iotlb_walk,
|
||||
.tlb_flush_leaf = __flush_iotlb_leaf,
|
||||
.tlb_add_page = __flush_iotlb_page,
|
||||
};
|
||||
|
||||
|
|
|
@ -240,7 +240,6 @@ static void mtk_iommu_tlb_flush_page_nosync(struct iommu_iotlb_gather *gather,
|
|||
static const struct iommu_flush_ops mtk_iommu_flush_ops = {
|
||||
.tlb_flush_all = mtk_iommu_tlb_flush_all,
|
||||
.tlb_flush_walk = mtk_iommu_tlb_flush_range_sync,
|
||||
.tlb_flush_leaf = mtk_iommu_tlb_flush_range_sync,
|
||||
.tlb_add_page = mtk_iommu_tlb_flush_page_nosync,
|
||||
};
|
||||
|
||||
|
|
|
@ -10,6 +10,7 @@
|
|||
#include <linux/kernel.h>
|
||||
#include <linux/of.h>
|
||||
#include <linux/of_device.h>
|
||||
#include <linux/pci.h>
|
||||
#include <linux/platform_device.h>
|
||||
#include <linux/slab.h>
|
||||
#include <linux/spinlock.h>
|
||||
|
@ -256,26 +257,19 @@ static int tegra_smmu_alloc_asid(struct tegra_smmu *smmu, unsigned int *idp)
|
|||
{
|
||||
unsigned long id;
|
||||
|
||||
mutex_lock(&smmu->lock);
|
||||
|
||||
id = find_first_zero_bit(smmu->asids, smmu->soc->num_asids);
|
||||
if (id >= smmu->soc->num_asids) {
|
||||
mutex_unlock(&smmu->lock);
|
||||
if (id >= smmu->soc->num_asids)
|
||||
return -ENOSPC;
|
||||
}
|
||||
|
||||
set_bit(id, smmu->asids);
|
||||
*idp = id;
|
||||
|
||||
mutex_unlock(&smmu->lock);
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void tegra_smmu_free_asid(struct tegra_smmu *smmu, unsigned int id)
|
||||
{
|
||||
mutex_lock(&smmu->lock);
|
||||
clear_bit(id, smmu->asids);
|
||||
mutex_unlock(&smmu->lock);
|
||||
}
|
||||
|
||||
static bool tegra_smmu_capable(enum iommu_cap cap)
|
||||
|
@ -420,17 +414,21 @@ static int tegra_smmu_as_prepare(struct tegra_smmu *smmu,
|
|||
struct tegra_smmu_as *as)
|
||||
{
|
||||
u32 value;
|
||||
int err;
|
||||
int err = 0;
|
||||
|
||||
mutex_lock(&smmu->lock);
|
||||
|
||||
if (as->use_count > 0) {
|
||||
as->use_count++;
|
||||
return 0;
|
||||
goto unlock;
|
||||
}
|
||||
|
||||
as->pd_dma = dma_map_page(smmu->dev, as->pd, 0, SMMU_SIZE_PD,
|
||||
DMA_TO_DEVICE);
|
||||
if (dma_mapping_error(smmu->dev, as->pd_dma))
|
||||
return -ENOMEM;
|
||||
if (dma_mapping_error(smmu->dev, as->pd_dma)) {
|
||||
err = -ENOMEM;
|
||||
goto unlock;
|
||||
}
|
||||
|
||||
/* We can't handle 64-bit DMA addresses */
|
||||
if (!smmu_dma_addr_valid(smmu, as->pd_dma)) {
|
||||
|
@ -453,83 +451,84 @@ static int tegra_smmu_as_prepare(struct tegra_smmu *smmu,
|
|||
as->smmu = smmu;
|
||||
as->use_count++;
|
||||
|
||||
mutex_unlock(&smmu->lock);
|
||||
|
||||
return 0;
|
||||
|
||||
err_unmap:
|
||||
dma_unmap_page(smmu->dev, as->pd_dma, SMMU_SIZE_PD, DMA_TO_DEVICE);
|
||||
unlock:
|
||||
mutex_unlock(&smmu->lock);
|
||||
|
||||
return err;
|
||||
}
|
||||
|
||||
static void tegra_smmu_as_unprepare(struct tegra_smmu *smmu,
|
||||
struct tegra_smmu_as *as)
|
||||
{
|
||||
if (--as->use_count > 0)
|
||||
mutex_lock(&smmu->lock);
|
||||
|
||||
if (--as->use_count > 0) {
|
||||
mutex_unlock(&smmu->lock);
|
||||
return;
|
||||
}
|
||||
|
||||
tegra_smmu_free_asid(smmu, as->id);
|
||||
|
||||
dma_unmap_page(smmu->dev, as->pd_dma, SMMU_SIZE_PD, DMA_TO_DEVICE);
|
||||
|
||||
as->smmu = NULL;
|
||||
|
||||
mutex_unlock(&smmu->lock);
|
||||
}
|
||||
|
||||
static int tegra_smmu_attach_dev(struct iommu_domain *domain,
|
||||
struct device *dev)
|
||||
{
|
||||
struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
|
||||
struct tegra_smmu *smmu = dev_iommu_priv_get(dev);
|
||||
struct tegra_smmu_as *as = to_smmu_as(domain);
|
||||
struct device_node *np = dev->of_node;
|
||||
struct of_phandle_args args;
|
||||
unsigned int index = 0;
|
||||
int err = 0;
|
||||
unsigned int index;
|
||||
int err;
|
||||
|
||||
while (!of_parse_phandle_with_args(np, "iommus", "#iommu-cells", index,
|
||||
&args)) {
|
||||
unsigned int swgroup = args.args[0];
|
||||
|
||||
if (args.np != smmu->dev->of_node) {
|
||||
of_node_put(args.np);
|
||||
continue;
|
||||
}
|
||||
|
||||
of_node_put(args.np);
|
||||
if (!fwspec)
|
||||
return -ENOENT;
|
||||
|
||||
for (index = 0; index < fwspec->num_ids; index++) {
|
||||
err = tegra_smmu_as_prepare(smmu, as);
|
||||
if (err < 0)
|
||||
return err;
|
||||
if (err)
|
||||
goto disable;
|
||||
|
||||
tegra_smmu_enable(smmu, swgroup, as->id);
|
||||
index++;
|
||||
tegra_smmu_enable(smmu, fwspec->ids[index], as->id);
|
||||
}
|
||||
|
||||
if (index == 0)
|
||||
return -ENODEV;
|
||||
|
||||
return 0;
|
||||
|
||||
disable:
|
||||
while (index--) {
|
||||
tegra_smmu_disable(smmu, fwspec->ids[index], as->id);
|
||||
tegra_smmu_as_unprepare(smmu, as);
|
||||
}
|
||||
|
||||
return err;
|
||||
}
|
||||
|
||||
static void tegra_smmu_detach_dev(struct iommu_domain *domain, struct device *dev)
|
||||
{
|
||||
struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
|
||||
struct tegra_smmu_as *as = to_smmu_as(domain);
|
||||
struct device_node *np = dev->of_node;
|
||||
struct tegra_smmu *smmu = as->smmu;
|
||||
struct of_phandle_args args;
|
||||
unsigned int index = 0;
|
||||
unsigned int index;
|
||||
|
||||
while (!of_parse_phandle_with_args(np, "iommus", "#iommu-cells", index,
|
||||
&args)) {
|
||||
unsigned int swgroup = args.args[0];
|
||||
if (!fwspec)
|
||||
return;
|
||||
|
||||
if (args.np != smmu->dev->of_node) {
|
||||
of_node_put(args.np);
|
||||
continue;
|
||||
}
|
||||
|
||||
of_node_put(args.np);
|
||||
|
||||
tegra_smmu_disable(smmu, swgroup, as->id);
|
||||
for (index = 0; index < fwspec->num_ids; index++) {
|
||||
tegra_smmu_disable(smmu, fwspec->ids[index], as->id);
|
||||
tegra_smmu_as_unprepare(smmu, as);
|
||||
index++;
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -799,75 +798,9 @@ static phys_addr_t tegra_smmu_iova_to_phys(struct iommu_domain *domain,
|
|||
return SMMU_PFN_PHYS(pfn) + SMMU_OFFSET_IN_PAGE(iova);
|
||||
}
|
||||
|
||||
static struct tegra_smmu *tegra_smmu_find(struct device_node *np)
|
||||
{
|
||||
struct platform_device *pdev;
|
||||
struct tegra_mc *mc;
|
||||
|
||||
pdev = of_find_device_by_node(np);
|
||||
if (!pdev)
|
||||
return NULL;
|
||||
|
||||
mc = platform_get_drvdata(pdev);
|
||||
if (!mc)
|
||||
return NULL;
|
||||
|
||||
return mc->smmu;
|
||||
}
|
||||
|
||||
static int tegra_smmu_configure(struct tegra_smmu *smmu, struct device *dev,
|
||||
struct of_phandle_args *args)
|
||||
{
|
||||
const struct iommu_ops *ops = smmu->iommu.ops;
|
||||
int err;
|
||||
|
||||
err = iommu_fwspec_init(dev, &dev->of_node->fwnode, ops);
|
||||
if (err < 0) {
|
||||
dev_err(dev, "failed to initialize fwspec: %d\n", err);
|
||||
return err;
|
||||
}
|
||||
|
||||
err = ops->of_xlate(dev, args);
|
||||
if (err < 0) {
|
||||
dev_err(dev, "failed to parse SW group ID: %d\n", err);
|
||||
iommu_fwspec_free(dev);
|
||||
return err;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static struct iommu_device *tegra_smmu_probe_device(struct device *dev)
|
||||
{
|
||||
struct device_node *np = dev->of_node;
|
||||
struct tegra_smmu *smmu = NULL;
|
||||
struct of_phandle_args args;
|
||||
unsigned int index = 0;
|
||||
int err;
|
||||
|
||||
while (of_parse_phandle_with_args(np, "iommus", "#iommu-cells", index,
|
||||
&args) == 0) {
|
||||
smmu = tegra_smmu_find(args.np);
|
||||
if (smmu) {
|
||||
err = tegra_smmu_configure(smmu, dev, &args);
|
||||
of_node_put(args.np);
|
||||
|
||||
if (err < 0)
|
||||
return ERR_PTR(err);
|
||||
|
||||
/*
|
||||
* Only a single IOMMU master interface is currently
|
||||
* supported by the Linux kernel, so abort after the
|
||||
* first match.
|
||||
*/
|
||||
dev_iommu_priv_set(dev, smmu);
|
||||
|
||||
break;
|
||||
}
|
||||
|
||||
of_node_put(args.np);
|
||||
index++;
|
||||
}
|
||||
struct tegra_smmu *smmu = dev_iommu_priv_get(dev);
|
||||
|
||||
if (!smmu)
|
||||
return ERR_PTR(-ENODEV);
|
||||
|
@ -875,10 +808,7 @@ static struct iommu_device *tegra_smmu_probe_device(struct device *dev)
|
|||
return &smmu->iommu;
|
||||
}
|
||||
|
||||
static void tegra_smmu_release_device(struct device *dev)
|
||||
{
|
||||
dev_iommu_priv_set(dev, NULL);
|
||||
}
|
||||
static void tegra_smmu_release_device(struct device *dev) {}
|
||||
|
||||
static const struct tegra_smmu_group_soc *
|
||||
tegra_smmu_find_group(struct tegra_smmu *smmu, unsigned int swgroup)
|
||||
|
@ -903,10 +833,12 @@ static void tegra_smmu_group_release(void *iommu_data)
|
|||
mutex_unlock(&smmu->lock);
|
||||
}
|
||||
|
||||
static struct iommu_group *tegra_smmu_group_get(struct tegra_smmu *smmu,
|
||||
unsigned int swgroup)
|
||||
static struct iommu_group *tegra_smmu_device_group(struct device *dev)
|
||||
{
|
||||
struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
|
||||
struct tegra_smmu *smmu = dev_iommu_priv_get(dev);
|
||||
const struct tegra_smmu_group_soc *soc;
|
||||
unsigned int swgroup = fwspec->ids[0];
|
||||
struct tegra_smmu_group *group;
|
||||
struct iommu_group *grp;
|
||||
|
||||
|
@ -934,7 +866,11 @@ static struct iommu_group *tegra_smmu_group_get(struct tegra_smmu *smmu,
|
|||
group->smmu = smmu;
|
||||
group->soc = soc;
|
||||
|
||||
group->group = iommu_group_alloc();
|
||||
if (dev_is_pci(dev))
|
||||
group->group = pci_device_group(dev);
|
||||
else
|
||||
group->group = generic_device_group(dev);
|
||||
|
||||
if (IS_ERR(group->group)) {
|
||||
devm_kfree(smmu->dev, group);
|
||||
mutex_unlock(&smmu->lock);
|
||||
|
@ -950,24 +886,24 @@ static struct iommu_group *tegra_smmu_group_get(struct tegra_smmu *smmu,
|
|||
return group->group;
|
||||
}
|
||||
|
||||
static struct iommu_group *tegra_smmu_device_group(struct device *dev)
|
||||
{
|
||||
struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
|
||||
struct tegra_smmu *smmu = dev_iommu_priv_get(dev);
|
||||
struct iommu_group *group;
|
||||
|
||||
group = tegra_smmu_group_get(smmu, fwspec->ids[0]);
|
||||
if (!group)
|
||||
group = generic_device_group(dev);
|
||||
|
||||
return group;
|
||||
}
|
||||
|
||||
static int tegra_smmu_of_xlate(struct device *dev,
|
||||
struct of_phandle_args *args)
|
||||
{
|
||||
struct platform_device *iommu_pdev = of_find_device_by_node(args->np);
|
||||
struct tegra_mc *mc = platform_get_drvdata(iommu_pdev);
|
||||
u32 id = args->args[0];
|
||||
|
||||
/*
|
||||
* Note: we are here releasing the reference of &iommu_pdev->dev, which
|
||||
* is mc->dev. Although some functions in tegra_smmu_ops may keep using
|
||||
* its private data beyond this point, it's still safe to do so because
|
||||
* the SMMU parent device is the same as the MC, so the reference count
|
||||
* isn't strictly necessary.
|
||||
*/
|
||||
put_device(&iommu_pdev->dev);
|
||||
|
||||
dev_iommu_priv_set(dev, mc->smmu);
|
||||
|
||||
return iommu_fwspec_add_ids(dev, &id, 1);
|
||||
}
|
||||
|
||||
|
@ -1092,16 +1028,6 @@ struct tegra_smmu *tegra_smmu_probe(struct device *dev,
|
|||
if (!smmu)
|
||||
return ERR_PTR(-ENOMEM);
|
||||
|
||||
/*
|
||||
* This is a bit of a hack. Ideally we'd want to simply return this
|
||||
* value. However the IOMMU registration process will attempt to add
|
||||
* all devices to the IOMMU when bus_set_iommu() is called. In order
|
||||
* not to rely on global variables to track the IOMMU instance, we
|
||||
* set it here so that it can be looked up from the .probe_device()
|
||||
* callback via the IOMMU device's .drvdata field.
|
||||
*/
|
||||
mc->smmu = smmu;
|
||||
|
||||
size = BITS_TO_LONGS(soc->num_asids) * sizeof(long);
|
||||
|
||||
smmu->asids = devm_kzalloc(dev, size, GFP_KERNEL);
|
||||
|
@ -1154,22 +1080,32 @@ struct tegra_smmu *tegra_smmu_probe(struct device *dev,
|
|||
iommu_device_set_fwnode(&smmu->iommu, dev->fwnode);
|
||||
|
||||
err = iommu_device_register(&smmu->iommu);
|
||||
if (err) {
|
||||
iommu_device_sysfs_remove(&smmu->iommu);
|
||||
return ERR_PTR(err);
|
||||
}
|
||||
if (err)
|
||||
goto remove_sysfs;
|
||||
|
||||
err = bus_set_iommu(&platform_bus_type, &tegra_smmu_ops);
|
||||
if (err < 0) {
|
||||
iommu_device_unregister(&smmu->iommu);
|
||||
iommu_device_sysfs_remove(&smmu->iommu);
|
||||
return ERR_PTR(err);
|
||||
}
|
||||
if (err < 0)
|
||||
goto unregister;
|
||||
|
||||
#ifdef CONFIG_PCI
|
||||
err = bus_set_iommu(&pci_bus_type, &tegra_smmu_ops);
|
||||
if (err < 0)
|
||||
goto unset_platform_bus;
|
||||
#endif
|
||||
|
||||
if (IS_ENABLED(CONFIG_DEBUG_FS))
|
||||
tegra_smmu_debugfs_init(smmu);
|
||||
|
||||
return smmu;
|
||||
|
||||
unset_platform_bus: __maybe_unused;
|
||||
bus_set_iommu(&platform_bus_type, NULL);
|
||||
unregister:
|
||||
iommu_device_unregister(&smmu->iommu);
|
||||
remove_sysfs:
|
||||
iommu_device_sysfs_remove(&smmu->iommu);
|
||||
|
||||
return ERR_PTR(err);
|
||||
}
|
||||
|
||||
void tegra_smmu_remove(struct tegra_smmu *smmu)
|
||||
|
|
|
@ -37,6 +37,9 @@ void iommu_dma_compose_msi_msg(struct msi_desc *desc,
|
|||
|
||||
void iommu_dma_get_resv_regions(struct device *dev, struct list_head *list);
|
||||
|
||||
void iommu_dma_free_cpu_cached_iovas(unsigned int cpu,
|
||||
struct iommu_domain *domain);
|
||||
|
||||
#else /* CONFIG_IOMMU_DMA */
|
||||
|
||||
struct iommu_domain;
|
||||
|
@ -78,5 +81,10 @@ static inline void iommu_dma_get_resv_regions(struct device *dev, struct list_he
|
|||
{
|
||||
}
|
||||
|
||||
static inline void iommu_dma_free_cpu_cached_iovas(unsigned int cpu,
|
||||
struct iommu_domain *domain)
|
||||
{
|
||||
}
|
||||
|
||||
#endif /* CONFIG_IOMMU_DMA */
|
||||
#endif /* __DMA_IOMMU_H */
|
||||
|
|
|
@ -25,8 +25,6 @@ enum io_pgtable_fmt {
|
|||
* @tlb_flush_walk: Synchronously invalidate all intermediate TLB state
|
||||
* (sometimes referred to as the "walk cache") for a virtual
|
||||
* address range.
|
||||
* @tlb_flush_leaf: Synchronously invalidate all leaf TLB state for a virtual
|
||||
* address range.
|
||||
* @tlb_add_page: Optional callback to queue up leaf TLB invalidation for a
|
||||
* single page. IOMMUs that cannot batch TLB invalidation
|
||||
* operations efficiently will typically issue them here, but
|
||||
|
@ -40,8 +38,6 @@ struct iommu_flush_ops {
|
|||
void (*tlb_flush_all)(void *cookie);
|
||||
void (*tlb_flush_walk)(unsigned long iova, size_t size, size_t granule,
|
||||
void *cookie);
|
||||
void (*tlb_flush_leaf)(unsigned long iova, size_t size, size_t granule,
|
||||
void *cookie);
|
||||
void (*tlb_add_page)(struct iommu_iotlb_gather *gather,
|
||||
unsigned long iova, size_t granule, void *cookie);
|
||||
};
|
||||
|
@ -228,13 +224,6 @@ io_pgtable_tlb_flush_walk(struct io_pgtable *iop, unsigned long iova,
|
|||
iop->cfg.tlb->tlb_flush_walk(iova, size, granule, iop->cookie);
|
||||
}
|
||||
|
||||
static inline void
|
||||
io_pgtable_tlb_flush_leaf(struct io_pgtable *iop, unsigned long iova,
|
||||
size_t size, size_t granule)
|
||||
{
|
||||
iop->cfg.tlb->tlb_flush_leaf(iova, size, granule, iop->cookie);
|
||||
}
|
||||
|
||||
static inline void
|
||||
io_pgtable_tlb_add_page(struct io_pgtable *iop,
|
||||
struct iommu_iotlb_gather * gather, unsigned long iova,
|
||||
|
|
|
@ -34,7 +34,8 @@ struct ioasid_allocator_ops {
|
|||
#if IS_ENABLED(CONFIG_IOASID)
|
||||
ioasid_t ioasid_alloc(struct ioasid_set *set, ioasid_t min, ioasid_t max,
|
||||
void *private);
|
||||
void ioasid_free(ioasid_t ioasid);
|
||||
void ioasid_get(ioasid_t ioasid);
|
||||
bool ioasid_put(ioasid_t ioasid);
|
||||
void *ioasid_find(struct ioasid_set *set, ioasid_t ioasid,
|
||||
bool (*getter)(void *));
|
||||
int ioasid_register_allocator(struct ioasid_allocator_ops *allocator);
|
||||
|
@ -48,10 +49,15 @@ static inline ioasid_t ioasid_alloc(struct ioasid_set *set, ioasid_t min,
|
|||
return INVALID_IOASID;
|
||||
}
|
||||
|
||||
static inline void ioasid_free(ioasid_t ioasid)
|
||||
static inline void ioasid_get(ioasid_t ioasid)
|
||||
{
|
||||
}
|
||||
|
||||
static inline bool ioasid_put(ioasid_t ioasid)
|
||||
{
|
||||
return false;
|
||||
}
|
||||
|
||||
static inline void *ioasid_find(struct ioasid_set *set, ioasid_t ioasid,
|
||||
bool (*getter)(void *))
|
||||
{
|
||||
|
|
|
@ -181,6 +181,7 @@ struct iommu_iotlb_gather {
|
|||
unsigned long start;
|
||||
unsigned long end;
|
||||
size_t pgsize;
|
||||
struct page *freelist;
|
||||
};
|
||||
|
||||
/**
|
||||
|
|
|
@ -136,8 +136,6 @@ static inline unsigned long iova_pfn(struct iova_domain *iovad, dma_addr_t iova)
|
|||
int iova_cache_get(void);
|
||||
void iova_cache_put(void);
|
||||
|
||||
struct iova *alloc_iova_mem(void);
|
||||
void free_iova_mem(struct iova *iova);
|
||||
void free_iova(struct iova_domain *iovad, unsigned long pfn);
|
||||
void __free_iova(struct iova_domain *iovad, struct iova *iova);
|
||||
struct iova *alloc_iova(struct iova_domain *iovad, unsigned long size,
|
||||
|
@ -160,8 +158,6 @@ int init_iova_flush_queue(struct iova_domain *iovad,
|
|||
iova_flush_cb flush_cb, iova_entry_dtor entry_dtor);
|
||||
struct iova *find_iova(struct iova_domain *iovad, unsigned long pfn);
|
||||
void put_iova_domain(struct iova_domain *iovad);
|
||||
struct iova *split_and_remove_iova(struct iova_domain *iovad,
|
||||
struct iova *iova, unsigned long pfn_lo, unsigned long pfn_hi);
|
||||
void free_cpu_cached_iovas(unsigned int cpu, struct iova_domain *iovad);
|
||||
#else
|
||||
static inline int iova_cache_get(void)
|
||||
|
@ -173,15 +169,6 @@ static inline void iova_cache_put(void)
|
|||
{
|
||||
}
|
||||
|
||||
static inline struct iova *alloc_iova_mem(void)
|
||||
{
|
||||
return NULL;
|
||||
}
|
||||
|
||||
static inline void free_iova_mem(struct iova *iova)
|
||||
{
|
||||
}
|
||||
|
||||
static inline void free_iova(struct iova_domain *iovad, unsigned long pfn)
|
||||
{
|
||||
}
|
||||
|
@ -258,14 +245,6 @@ static inline void put_iova_domain(struct iova_domain *iovad)
|
|||
{
|
||||
}
|
||||
|
||||
static inline struct iova *split_and_remove_iova(struct iova_domain *iovad,
|
||||
struct iova *iova,
|
||||
unsigned long pfn_lo,
|
||||
unsigned long pfn_hi)
|
||||
{
|
||||
return NULL;
|
||||
}
|
||||
|
||||
static inline void free_cpu_cached_iovas(unsigned int cpu,
|
||||
struct iova_domain *iovad)
|
||||
{
|
||||
|
|