IOMMU Updates for Linux v4.8

These updates include:
 
 	* Big-endian support and preparation for deferred probing for the
 	  Exynos IOMMU driver
 
 	* Simplifications in iommu-group id handling
 
 	* Support for Mediatek generation one IOMMU hardware
 
 	* Conversion of the AMD IOMMU driver to use the generic IOVA
 	  allocator. This driver now also benefits from the recent
 	  scalability improvements in the IOVA code.
 
 	* Preparations to use generic DMA mapping code in the Rockchip
 	  IOMMU driver
 
 	* Device tree adaption and conversion to use generic page-table
 	  code for the MSM IOMMU driver
 
 	* An iova_to_phys optimization in the ARM-SMMU driver to greatly
 	  improve page-table teardown performance with VFIO
 
 	* Various other small fixes and conversions
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v2
 
 iQIcBAABAgAGBQJXl3e+AAoJECvwRC2XARrjMIgP/1Mm9qIfcaAxKY4ByqbVfrH8
 313PO6rpwUhhywUmnf/1F/x+JbuLv8MmRXfSc106mdB1rq9NXpkORYKrqVxs0cSq
 6u6TzZWbF6WN1ipqXxDITNFBSy7u97K1VuFaKyYFfLbg8xrkcdkMZJ7BqM2xIEdk
 rnRKcfHo6wsmCXJ6InsUPmKAqU6AfMewZTGjO+v77Gce0rZEbsJ8n7BRKC9vO2bc
 akvN2W+zzEUSyhbuyYQBG+agpmC5GJvz4u+6QvAP5sxTWfAsnwAoPpP4xxR+/KjT
 eicHlja4v0YK6Hr4AJaMxoKfKIrCdqpWm0D2tg/edyWZCeg98AW/w7/s0I8OD3ao
 Otj6IqC8nPk0pYciOeEPQ7aqPbvKAqU2FYWt7lWamrdr98u2R3p2nXGl0KthoAj6
 JqzrCZXvBS7sj1IPLlGpj939yvbKbjpE0p7y1qhI1VEBXoBWFNvlKydkYx76BTGK
 F6paGVqn2Zwy00AqAsylTEkvIK063zwShZ6nPqz4bMdVlgzjrjCzdDecjfbHr8Ic
 6D2oCwyF+RJ8qw+Ecm9EmWFik80sgb+iUTeeYEXNf+YzLYt5McIj7fi3N+sUPel3
 YJ4S4x0sIpgUZZ1i+rOo8ZPAFHRU6SRPYV+ewaeYKrMt+Un5dTn9SddpqrJdbiUu
 YrF36BaQjc123IRGKrSd
 =xiS2
 -----END PGP SIGNATURE-----

Merge tag 'iommu-updates-v4.8' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu

Pull IOMMU updates from Joerg Roedel:

 - big-endian support and preparation for deferred probing for the Exynos
   IOMMU driver

 - simplifications in iommu-group id handling

 - support for Mediatek generation one IOMMU hardware

 - conversion of the AMD IOMMU driver to use the generic IOVA allocator.
   This driver now also benefits from the recent scalability improvements
   in the IOVA code (a sketch of the allocator API follows this list)

 - preparations to use generic DMA mapping code in the Rockchip IOMMU
   driver

 - device tree adaption and conversion to use generic page-table code
   for the MSM IOMMU driver

 - an iova_to_phys optimization in the ARM-SMMU driver to greatly
   improve page-table teardown performance with VFIO

 - various other small fixes and conversions
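
A minimal, hedged sketch of the generic IOVA allocator API the AMD driver now shares (as declared in include/linux/iova.h around v4.8); the domain name and PFN limits below are illustrative assumptions, not values from the AMD driver:

    #include <linux/dma-mapping.h>
    #include <linux/iova.h>

    static struct iova_domain demo_iovad;	/* hypothetical domain */

    static void demo_iova_usage(void)
    {
    	struct iova *iova;

    	/* PAGE_SIZE granule; PFN 1 is the first allocatable frame, and
    	 * the last argument is the 32-bit DMA boundary used as a caching
    	 * optimization (illustrative values). */
    	init_iova_domain(&demo_iovad, PAGE_SIZE, 1,
    			 DMA_BIT_MASK(32) >> PAGE_SHIFT);

    	/* Allocate 16 size-aligned pages below the 32-bit boundary;
    	 * iova->pfn_lo is the first page-frame number of the range. */
    	iova = alloc_iova(&demo_iovad, 16, DMA_BIT_MASK(32) >> PAGE_SHIFT,
    			  true);
    	if (iova)
    		__free_iova(&demo_iovad, iova);
    }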

* tag 'iommu-updates-v4.8' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu: (59 commits)
  iommu/amd: Initialize dma-ops domains with 3-level page-table
  iommu/amd: Update Alias-DTE in update_device_table()
  iommu/vt-d: Return error code in domain_context_mapping_one()
  iommu/amd: Use container_of to get dma_ops_domain
  iommu/amd: Flush iova queue before releasing dma_ops_domain
  iommu/amd: Handle IOMMU_DOMAIN_DMA in ops->domain_free call-back
  iommu/amd: Use dev_data->domain in get_domain()
  iommu/amd: Optimize map_sg and unmap_sg
  iommu/amd: Introduce dir2prot() helper
  iommu/amd: Implement timeout to flush unmap queues
  iommu/amd: Implement flush queue
  iommu/amd: Allow NULL pointer parameter for domain_flush_complete()
  iommu/amd: Set up data structures for flush queue
  iommu/amd: Remove align-parameter from __map_single()
  iommu/amd: Remove other remains of old address allocator
  iommu/amd: Make use of the generic IOVA allocator
  iommu/amd: Remove special mapping code for dma_ops path
  iommu/amd: Pass gfp-flags to iommu_map_page()
  iommu/amd: Implement apply_dm_region call-back
  iommu/amd: Create a list of reserved iova addresses
  ...
commit dd9671172a
Author: Linus Torvalds
Date:   2016-08-01 07:25:10 -04:00

28 changed files with 2330 additions and 1673 deletions


@@ -1,6 +1,6 @@
* ARM SMMUv3 Architecture Implementation
The SMMUv3 architecture is a significant deparature from previous
The SMMUv3 architecture is a significant departure from previous
revisions, replacing the MMIO register interface with in-memory command
and event queues and adding support for the ATS and PRI components of
the PCIe specification.


@@ -1,7 +1,9 @@
* Mediatek IOMMU Architecture Implementation
Some Mediatek SOCs contain a Multimedia Memory Management Unit (M4U) which
uses the ARM Short-Descriptor translation table format for address translation.
Some Mediatek SOCs contain a Multimedia Memory Management Unit (M4U), and
this M4U has two generations of HW architecture. Generation one uses a flat
pagetable and only supports 4K page mappings. Generation two uses the
ARM Short-Descriptor translation table format for address translation.
About the M4U Hardware Block Diagram, please check below:
@@ -36,7 +38,9 @@ in each larb. Take an example, There are many ports like MC, PP, VLD in the
video decode local arbiter, all these ports are according to the video HW.
Required properties:
- compatible : must be "mediatek,mt8173-m4u".
- compatible : must be one of the following strings:
"mediatek,mt2701-m4u" for mt2701 which uses generation one m4u HW.
"mediatek,mt8173-m4u" for mt8173 which uses generation two m4u HW.
- reg : m4u register base and size.
- interrupts : the interrupt of m4u.
- clocks : must contain one entry for each clock-names.
@@ -46,7 +50,8 @@ Required properties:
according to the local arbiter index, like larb0, larb1, larb2...
- iommu-cells : must be 1. This is the mtk_m4u_id according to the HW.
Specifies the mtk_m4u_id as defined in
dt-binding/memory/mt8173-larb-port.h.
dt-binding/memory/mt2701-larb-port.h for mt2701 and
dt-binding/memory/mt8173-larb-port.h for mt8173
Example:
iommu: iommu@10205000 {


@@ -0,0 +1,64 @@
* QCOM IOMMU
The MSM IOMMU is an implementation compatible with the ARM VMSA short
descriptor page tables. It provides address translation for bus masters outside
of the CPU, each connected to the IOMMU through a port called micro-TLB.
Required Properties:
- compatible: Must contain "qcom,apq8064-iommu".
- reg: Base address and size of the IOMMU registers.
- interrupts: Specifiers for the MMU fault interrupts. For instances that
support secure mode two interrupts must be specified, for non-secure and
secure mode, in that order. For instances that don't support secure mode a
single interrupt must be specified.
- #iommu-cells: The number of cells needed to specify the stream id. This
is always 1.
- qcom,ncb: The total number of context banks in the IOMMU.
- clocks : List of clocks to be used during SMMU register access. See
Documentation/devicetree/bindings/clock/clock-bindings.txt
for information about the format. For each clock specified
here, there must be a corresponding entry in clock-names
(see below).
- clock-names : List of clock names corresponding to the clocks specified in
the "clocks" property (above).
Should be "smmu_pclk" for specifying the interface clock
required for iommu's register accesses.
Should be "smmu_clk" for specifying the functional clock
required by iommu for bus accesses.
Each bus master connected to an IOMMU must reference the IOMMU in its device
node with the following property:
- iommus: A reference to the IOMMU in multiple cells. The first cell is a
phandle to the IOMMU and the second cell is the stream id.
A single master device can be connected to more than one iommu
and multiple contexts in each of the iommu. So multiple entries
are required to list all the iommus and the stream ids that the
master is connected to.
Example: mdp iommu and its bus master
mdp_port0: iommu@7500000 {
compatible = "qcom,apq8064-iommu";
#iommu-cells = <1>;
clock-names =
"smmu_pclk",
"smmu_clk";
clocks =
<&mmcc SMMU_AHB_CLK>,
<&mmcc MDP_AXI_CLK>;
reg = <0x07500000 0x100000>;
interrupts =
<GIC_SPI 63 0>,
<GIC_SPI 64 0>;
qcom,ncb = <2>;
};
mdp: qcom,mdp@5100000 {
compatible = "qcom,mdp";
...
iommus = <&mdp_port0 0
&mdp_port0 2>;
};


@@ -2,16 +2,31 @@ SMI (Smart Multimedia Interface) Common
For the hardware block diagram, please check bindings/iommu/mediatek,iommu.txt
Mediatek SMI have two generations of HW architecture, mt8173 uses the second
generation of SMI HW while mt2701 uses the first generation HW of SMI.
There are slight differences between the two SMI generations: for generation 2,
the register that controls the iommu port is at each larb's register base, but
for generation 1 it is at the smi ao base (the smi always-on register base).
Besides that, the smi async clock must be prepared and enabled for SMI
generation 1 to transform the smi clock into the emi clock domain; that is
not needed for SMI generation 2.
Required properties:
- compatible : must be "mediatek,mt8173-smi-common"
- compatible : must be one of :
"mediatek,mt2701-smi-common"
"mediatek,mt8173-smi-common"
- reg : the register and size of the SMI block.
- power-domains : a phandle to the power domain of this local arbiter.
- clocks : Must contain an entry for each entry in clock-names.
- clock-names : must contain 2 entries, as follows:
- clock-names : must contain 3 entries for generation 1 smi HW and 2 entries
for generation 2 smi HW as follows:
- "apb" : Advanced Peripheral Bus clock, It's the clock for setting
the register.
- "smi" : It's the clock for transfer data and command.
They may be the same if both source clocks are the same.
They may be the same if both source clocks are the same.
- "async" : asynchronous clock, it help transform the smi clock into the emi
clock domain, this clock is only needed by generation 1 smi HW.
Example:
smi_common: smi@14022000 {


@@ -3,7 +3,9 @@ SMI (Smart Multimedia Interface) Local Arbiter
For the hardware block diagram, please check bindings/iommu/mediatek,iommu.txt
Required properties:
- compatible : must be "mediatek,mt8173-smi-larb"
- compatible : must be one of :
"mediatek,mt8173-smi-larb"
"mediatek,mt2701-smi-larb"
- reg : the register and size of this local arbiter.
- mediatek,smi : a phandle to the smi_common node.
- power-domains : a phandle to the power domain of this local arbiter.


@@ -6223,6 +6223,7 @@ M: Joerg Roedel <joro@8bytes.org>
L: iommu@lists.linux-foundation.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu.git
S: Maintained
F: Documentation/devicetree/bindings/iommu/
F: drivers/iommu/
IP MASQUERADING


@@ -89,8 +89,8 @@ config MSM_IOMMU
bool "MSM IOMMU Support"
depends on ARM
depends on ARCH_MSM8X60 || ARCH_MSM8960 || COMPILE_TEST
depends on BROKEN
select IOMMU_API
select IOMMU_IO_PGTABLE_ARMV7S
help
Support for the IOMMUs found on certain Qualcomm SOCs.
These IOMMUs allow virtualization of the address space used by most
@@ -111,6 +111,7 @@ config AMD_IOMMU
select PCI_PRI
select PCI_PASID
select IOMMU_API
select IOMMU_IOVA
depends on X86_64 && PCI && ACPI
---help---
With this option you can enable support for AMD IOMMU hardware in
@@ -343,4 +344,22 @@ config MTK_IOMMU
If unsure, say N here.
config MTK_IOMMU_V1
bool "MTK IOMMU Version 1 (M4U gen1) Support"
depends on ARM
depends on ARCH_MEDIATEK || COMPILE_TEST
select ARM_DMA_USE_IOMMU
select IOMMU_API
select MEMORY
select MTK_SMI
select COMMON_CLK_MT2701_MMSYS
select COMMON_CLK_MT2701_IMGSYS
select COMMON_CLK_MT2701_VDECSYS
help
Support for the M4U on certain Mediatek SoCs. M4U generation 1 HW is
Multimedia Memory Management Unit. This option enables remapping of
DMA memory accesses for the multimedia subsystem.
If unsure, say N here.
endif # IOMMU_SUPPORT


@@ -7,7 +7,7 @@ obj-$(CONFIG_IOMMU_IO_PGTABLE_ARMV7S) += io-pgtable-arm-v7s.o
obj-$(CONFIG_IOMMU_IO_PGTABLE_LPAE) += io-pgtable-arm.o
obj-$(CONFIG_IOMMU_IOVA) += iova.o
obj-$(CONFIG_OF_IOMMU) += of_iommu.o
obj-$(CONFIG_MSM_IOMMU) += msm_iommu.o msm_iommu_dev.o
obj-$(CONFIG_MSM_IOMMU) += msm_iommu.o
obj-$(CONFIG_AMD_IOMMU) += amd_iommu.o amd_iommu_init.o
obj-$(CONFIG_AMD_IOMMU_V2) += amd_iommu_v2.o
obj-$(CONFIG_ARM_SMMU) += arm-smmu.o
@@ -18,6 +18,7 @@ obj-$(CONFIG_INTEL_IOMMU_SVM) += intel-svm.o
obj-$(CONFIG_IPMMU_VMSA) += ipmmu-vmsa.o
obj-$(CONFIG_IRQ_REMAP) += intel_irq_remapping.o irq_remapping.o
obj-$(CONFIG_MTK_IOMMU) += mtk_iommu.o
obj-$(CONFIG_MTK_IOMMU_V1) += mtk_iommu_v1.o
obj-$(CONFIG_OMAP_IOMMU) += omap-iommu.o
obj-$(CONFIG_OMAP_IOMMU_DEBUG) += omap-iommu-debug.o
obj-$(CONFIG_ROCKCHIP_IOMMU) += rockchip-iommu.o

File diff suppressed because it is too large.


@@ -421,7 +421,6 @@ struct protection_domain {
bool updated; /* complete domain flush required */
unsigned dev_cnt; /* devices assigned to this domain */
unsigned dev_iommu[MAX_IOMMUS]; /* per-IOMMU reference count */
void *priv; /* private data */
};
/*


@@ -960,7 +960,7 @@ static int __init amd_iommu_v2_init(void)
spin_lock_init(&state_lock);
ret = -ENOMEM;
iommu_wq = create_workqueue("amd_iommu_v2");
iommu_wq = alloc_workqueue("amd_iommu_v2", WQ_MEM_RECLAIM, 0);
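	/* Note on the call above: WQ_MEM_RECLAIM gives the workqueue a
	 * rescuer thread so queued work can make progress under memory
	 * pressure, and max_active of 0 selects the default concurrency. */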
if (iommu_wq == NULL)
goto out;


@@ -2687,6 +2687,8 @@ static int __init arm_smmu_init(void)
if (ret)
return ret;
pci_request_acs();
return bus_set_iommu(&pci_bus_type, &arm_smmu_ops);
}


@@ -987,8 +987,8 @@ static int arm_smmu_init_domain_context(struct iommu_domain *domain,
* handler seeing a half-initialised domain state.
*/
irq = smmu->irqs[smmu->num_global_irqs + cfg->irptndx];
ret = request_irq(irq, arm_smmu_context_fault, IRQF_SHARED,
"arm-smmu-context-fault", domain);
ret = devm_request_irq(smmu->dev, irq, arm_smmu_context_fault,
IRQF_SHARED, "arm-smmu-context-fault", domain);
if (ret < 0) {
dev_err(smmu->dev, "failed to request context IRQ %d (%u)\n",
cfg->irptndx, irq);
@@ -1028,7 +1028,7 @@ static void arm_smmu_destroy_domain_context(struct iommu_domain *domain)
if (cfg->irptndx != INVALID_IRPTNDX) {
irq = smmu->irqs[smmu->num_global_irqs + cfg->irptndx];
free_irq(irq, domain);
devm_free_irq(smmu->dev, irq, domain);
}
free_io_pgtable_ops(smmu_domain->pgtbl_ops);
@@ -1986,15 +1986,15 @@ static int arm_smmu_device_dt_probe(struct platform_device *pdev)
}
for (i = 0; i < smmu->num_global_irqs; ++i) {
err = request_irq(smmu->irqs[i],
arm_smmu_global_fault,
IRQF_SHARED,
"arm-smmu global fault",
smmu);
err = devm_request_irq(smmu->dev, smmu->irqs[i],
arm_smmu_global_fault,
IRQF_SHARED,
"arm-smmu global fault",
smmu);
if (err) {
dev_err(dev, "failed to request global IRQ %d (%u)\n",
i, smmu->irqs[i]);
goto out_free_irqs;
goto out_put_masters;
}
}
@@ -2006,10 +2006,6 @@ static int arm_smmu_device_dt_probe(struct platform_device *pdev)
arm_smmu_device_reset(smmu);
return 0;
out_free_irqs:
while (i--)
free_irq(smmu->irqs[i], smmu);
out_put_masters:
for (node = rb_first(&smmu->masters); node; node = rb_next(node)) {
struct arm_smmu_master *master
@@ -2050,7 +2046,7 @@ static int arm_smmu_device_remove(struct platform_device *pdev)
dev_err(dev, "removing device with active domains!\n");
for (i = 0; i < smmu->num_global_irqs; ++i)
free_irq(smmu->irqs[i], smmu);
devm_free_irq(smmu->dev, smmu->irqs[i], smmu);
/* Turn the thing off */
writel(sCR0_CLIENTPD, ARM_SMMU_GR0_NS(smmu) + ARM_SMMU_GR0_sCR0);
@@ -2096,8 +2092,10 @@ static int __init arm_smmu_init(void)
#endif
#ifdef CONFIG_PCI
if (!iommu_present(&pci_bus_type))
if (!iommu_present(&pci_bus_type)) {
pci_request_acs();
bus_set_iommu(&pci_bus_type, &arm_smmu_ops);
}
#endif
return 0;
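
The devm_request_irq()/devm_free_irq() conversion above ties IRQ lifetimes to the SMMU device, which is why the out_free_irqs unwinding loop could be deleted. A hedged sketch of the managed pattern, with hypothetical names:

    #include <linux/interrupt.h>
    #include <linux/platform_device.h>

    static irqreturn_t demo_handler(int irq, void *data)
    {
    	return IRQ_HANDLED;	/* hypothetical handler */
    }

    static int demo_probe(struct platform_device *pdev)
    {
    	int irq = platform_get_irq(pdev, 0);

    	if (irq < 0)
    		return irq;

    	/* Freed automatically on probe failure or device removal, so
    	 * no explicit free_irq() or error-unwinding loop is needed. */
    	return devm_request_irq(&pdev->dev, irq, demo_handler,
    				IRQF_SHARED, dev_name(&pdev->dev), pdev);
    }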


@@ -241,8 +241,20 @@ int dmar_insert_dev_scope(struct dmar_pci_notify_info *info,
if (!dmar_match_pci_path(info, scope->bus, path, level))
continue;
if ((scope->entry_type == ACPI_DMAR_SCOPE_TYPE_ENDPOINT) ^
(info->dev->hdr_type == PCI_HEADER_TYPE_NORMAL)) {
/*
* We expect devices with endpoint scope to have normal PCI
* headers, and devices with bridge scope to have bridge PCI
* headers. However PCI NTB devices may be listed in the
* DMAR table with bridge scope, even though they have a
* normal PCI header. NTB devices are identified by class
* "BRIDGE_OTHER" (0680h) - we don't declare a socpe mismatch
* for this special case.
*/
if ((scope->entry_type == ACPI_DMAR_SCOPE_TYPE_ENDPOINT &&
info->dev->hdr_type != PCI_HEADER_TYPE_NORMAL) ||
(scope->entry_type == ACPI_DMAR_SCOPE_TYPE_BRIDGE &&
(info->dev->hdr_type == PCI_HEADER_TYPE_NORMAL &&
info->dev->class >> 8 != PCI_CLASS_BRIDGE_OTHER))) {
pr_warn("Device scope type does not match for %s\n",
pci_name(info->dev));
return -EINVAL;
@@ -1155,8 +1167,6 @@ static int qi_check_fault(struct intel_iommu *iommu, int index)
(unsigned long long)qi->desc[index].high);
memcpy(&qi->desc[index], &qi->desc[wait_index],
sizeof(struct qi_desc));
__iommu_flush_cache(iommu, &qi->desc[index],
sizeof(struct qi_desc));
writel(DMA_FSTS_IQE, iommu->reg + DMAR_FSTS_REG);
return -EINVAL;
}
@@ -1231,9 +1241,6 @@ restart:
hw[wait_index] = wait_desc;
__iommu_flush_cache(iommu, &hw[index], sizeof(struct qi_desc));
__iommu_flush_cache(iommu, &hw[wait_index], sizeof(struct qi_desc));
qi->free_head = (qi->free_head + 2) % QI_LENGTH;
qi->free_cnt -= 2;


@@ -54,6 +54,10 @@ typedef u32 sysmmu_pte_t;
#define lv2ent_small(pent) ((*(pent) & 2) == 2)
#define lv2ent_large(pent) ((*(pent) & 3) == 1)
#ifdef CONFIG_BIG_ENDIAN
#warning "revisit driver if we can enable big-endian ptes"
#endif
/*
* v1.x - v3.x SYSMMU supports 32bit physical and 32bit virtual address spaces
* v5.0 introduced support for 36bit physical address space by shifting
@@ -322,14 +326,27 @@ static void __sysmmu_set_ptbase(struct sysmmu_drvdata *data, phys_addr_t pgd)
__sysmmu_tlb_invalidate(data);
}
static void __sysmmu_enable_clocks(struct sysmmu_drvdata *data)
{
BUG_ON(clk_prepare_enable(data->clk_master));
BUG_ON(clk_prepare_enable(data->clk));
BUG_ON(clk_prepare_enable(data->pclk));
BUG_ON(clk_prepare_enable(data->aclk));
}
static void __sysmmu_disable_clocks(struct sysmmu_drvdata *data)
{
clk_disable_unprepare(data->aclk);
clk_disable_unprepare(data->pclk);
clk_disable_unprepare(data->clk);
clk_disable_unprepare(data->clk_master);
}
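/*
 * Note on the helpers above: the clk API treats a NULL clk as a no-op
 * that returns 0, so optional clocks (set to NULL when devm_clk_get()
 * returns -ENOENT in probe) need no per-clock checks here.
 */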
static void __sysmmu_get_version(struct sysmmu_drvdata *data)
{
u32 ver;
clk_enable(data->clk_master);
clk_enable(data->clk);
clk_enable(data->pclk);
clk_enable(data->aclk);
__sysmmu_enable_clocks(data);
ver = readl(data->sfrbase + REG_MMU_VERSION);
@@ -342,10 +359,7 @@ static void __sysmmu_get_version(struct sysmmu_drvdata *data)
dev_dbg(data->sysmmu, "hardware version: %d.%d\n",
MMU_MAJ_VER(data->version), MMU_MIN_VER(data->version));
clk_disable(data->aclk);
clk_disable(data->pclk);
clk_disable(data->clk);
clk_disable(data->clk_master);
__sysmmu_disable_clocks(data);
}
static void show_fault_information(struct sysmmu_drvdata *data,
@@ -427,10 +441,7 @@ static void __sysmmu_disable_nocount(struct sysmmu_drvdata *data)
writel(CTRL_DISABLE, data->sfrbase + REG_MMU_CTRL);
writel(0, data->sfrbase + REG_MMU_CFG);
clk_disable(data->aclk);
clk_disable(data->pclk);
clk_disable(data->clk);
clk_disable(data->clk_master);
__sysmmu_disable_clocks(data);
}
static bool __sysmmu_disable(struct sysmmu_drvdata *data)
@@ -475,10 +486,7 @@ static void __sysmmu_init_config(struct sysmmu_drvdata *data)
static void __sysmmu_enable_nocount(struct sysmmu_drvdata *data)
{
clk_enable(data->clk_master);
clk_enable(data->clk);
clk_enable(data->pclk);
clk_enable(data->aclk);
__sysmmu_enable_clocks(data);
writel(CTRL_BLOCK, data->sfrbase + REG_MMU_CTRL);
@@ -488,6 +496,12 @@ static void __sysmmu_enable_nocount(struct sysmmu_drvdata *data)
writel(CTRL_ENABLE, data->sfrbase + REG_MMU_CTRL);
/*
* SYSMMU driver keeps master's clock enabled only for the short
* time, while accessing the registers. For performing address
* translation during DMA transaction it relies on the client
* driver to enable it.
*/
clk_disable(data->clk_master);
}
@@ -524,16 +538,15 @@ static void sysmmu_tlb_invalidate_flpdcache(struct sysmmu_drvdata *data,
{
unsigned long flags;
clk_enable(data->clk_master);
spin_lock_irqsave(&data->lock, flags);
if (is_sysmmu_active(data)) {
if (data->version >= MAKE_MMU_VER(3, 3))
__sysmmu_tlb_invalidate_entry(data, iova, 1);
if (is_sysmmu_active(data) && data->version >= MAKE_MMU_VER(3, 3)) {
clk_enable(data->clk_master);
__sysmmu_tlb_invalidate_entry(data, iova, 1);
clk_disable(data->clk_master);
}
spin_unlock_irqrestore(&data->lock, flags);
clk_disable(data->clk_master);
}
static void sysmmu_tlb_invalidate_entry(struct sysmmu_drvdata *data,
@@ -572,6 +585,8 @@ static void sysmmu_tlb_invalidate_entry(struct sysmmu_drvdata *data,
spin_unlock_irqrestore(&data->lock, flags);
}
static struct iommu_ops exynos_iommu_ops;
static int __init exynos_sysmmu_probe(struct platform_device *pdev)
{
int irq, ret;
@@ -602,37 +617,22 @@ static int __init exynos_sysmmu_probe(struct platform_device *pdev)
}
data->clk = devm_clk_get(dev, "sysmmu");
if (!IS_ERR(data->clk)) {
ret = clk_prepare(data->clk);
if (ret) {
dev_err(dev, "Failed to prepare clk\n");
return ret;
}
} else {
if (PTR_ERR(data->clk) == -ENOENT)
data->clk = NULL;
}
else if (IS_ERR(data->clk))
return PTR_ERR(data->clk);
data->aclk = devm_clk_get(dev, "aclk");
if (!IS_ERR(data->aclk)) {
ret = clk_prepare(data->aclk);
if (ret) {
dev_err(dev, "Failed to prepare aclk\n");
return ret;
}
} else {
if (PTR_ERR(data->aclk) == -ENOENT)
data->aclk = NULL;
}
else if (IS_ERR(data->aclk))
return PTR_ERR(data->aclk);
data->pclk = devm_clk_get(dev, "pclk");
if (!IS_ERR(data->pclk)) {
ret = clk_prepare(data->pclk);
if (ret) {
dev_err(dev, "Failed to prepare pclk\n");
return ret;
}
} else {
if (PTR_ERR(data->pclk) == -ENOENT)
data->pclk = NULL;
}
else if (IS_ERR(data->pclk))
return PTR_ERR(data->pclk);
if (!data->clk && (!data->aclk || !data->pclk)) {
dev_err(dev, "Failed to get device clock(s)!\n");
@@ -640,15 +640,10 @@ static int __init exynos_sysmmu_probe(struct platform_device *pdev)
}
data->clk_master = devm_clk_get(dev, "master");
if (!IS_ERR(data->clk_master)) {
ret = clk_prepare(data->clk_master);
if (ret) {
dev_err(dev, "Failed to prepare master's clk\n");
return ret;
}
} else {
if (PTR_ERR(data->clk_master) == -ENOENT)
data->clk_master = NULL;
}
else if (IS_ERR(data->clk_master))
return PTR_ERR(data->clk_master);
data->sysmmu = dev;
spin_lock_init(&data->lock);
@@ -665,6 +660,8 @@ static int __init exynos_sysmmu_probe(struct platform_device *pdev)
pm_runtime_enable(dev);
of_iommu_set_ops(dev->of_node, &exynos_iommu_ops);
return 0;
}
@@ -709,6 +706,7 @@ static struct platform_driver exynos_sysmmu_driver __refdata = {
.name = "exynos-sysmmu",
.of_match_table = sysmmu_of_match,
.pm = &sysmmu_pm_ops,
.suppress_bind_attrs = true,
}
};
@@ -716,7 +714,7 @@ static inline void update_pte(sysmmu_pte_t *ent, sysmmu_pte_t val)
{
dma_sync_single_for_cpu(dma_dev, virt_to_phys(ent), sizeof(*ent),
DMA_TO_DEVICE);
*ent = val;
*ent = cpu_to_le32(val);
dma_sync_single_for_device(dma_dev, virt_to_phys(ent), sizeof(*ent),
DMA_TO_DEVICE);
}
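/*
 * Note: the SYSMMU page-table entries stay little-endian in memory (see
 * the CONFIG_BIG_ENDIAN warning above), so the cpu_to_le32() here keeps
 * the store correct on a big-endian kernel and compiles to a plain store
 * on a little-endian one.
 */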
@@ -1357,7 +1355,6 @@ static int __init exynos_iommu_of_setup(struct device_node *np)
if (!dma_dev)
dma_dev = &pdev->dev;
of_iommu_set_ops(np, &exynos_iommu_ops);
return 0;
}


@@ -1672,7 +1672,7 @@ static int iommu_init_domains(struct intel_iommu *iommu)
return -ENOMEM;
}
size = ((ndomains >> 8) + 1) * sizeof(struct dmar_domain **);
size = (ALIGN(ndomains, 256) >> 8) * sizeof(struct dmar_domain **);
iommu->domains = kzalloc(size, GFP_KERNEL);
if (iommu->domains) {
@@ -1737,7 +1737,7 @@ static void disable_dmar_iommu(struct intel_iommu *iommu)
static void free_dmar_iommu(struct intel_iommu *iommu)
{
if ((iommu->domains) && (iommu->domain_ids)) {
int elems = (cap_ndoms(iommu->cap) >> 8) + 1;
int elems = ALIGN(cap_ndoms(iommu->cap), 256) >> 8;
int i;
for (i = 0; i < elems; i++)
@@ -2076,7 +2076,7 @@ out_unlock:
spin_unlock(&iommu->lock);
spin_unlock_irqrestore(&device_domain_lock, flags);
return 0;
return ret;
}
struct domain_context_mapping_data {
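
As a hedged arithmetic check of the intel-iommu sizing change above (each first-level entry covers 256 domain ids; values are illustrative):

    /*
     * ndomains = 250: (250 >> 8) + 1 = 1,  ALIGN(250, 256) >> 8 = 1
     * ndomains = 256: (256 >> 8) + 1 = 2,  ALIGN(256, 256) >> 8 = 1
     * ALIGN() yields exactly one entry per 256 ids, with no spare element
     * when ndomains is already a multiple of 256, and it keeps the
     * allocation and free paths using the same element count.
     */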


@@ -576,7 +576,7 @@ static phys_addr_t arm_lpae_iova_to_phys(struct io_pgtable_ops *ops,
return 0;
found_translation:
iova &= (ARM_LPAE_GRANULE(data) - 1);
iova &= (ARM_LPAE_BLOCK_SIZE(lvl, data) - 1);
return ((phys_addr_t)iopte_to_pfn(pte,data) << data->pg_shift) | iova;
}
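
A hedged worked example of the masking fix above, assuming a 4 KiB granule where a level-2 block spans 2 MiB:

    /*
     * iova = 0x12345678 inside a 2 MiB block:
     *   iova & (SZ_2M - 1) = 0x145678   <- offset that must be preserved
     *   iova & (SZ_4K - 1) = 0x678      <- what the old granule mask kept
     * so iova_to_phys() previously returned a wrong physical address for
     * block mappings.
     */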


@@ -34,8 +34,7 @@
#include <trace/events/iommu.h>
static struct kset *iommu_group_kset;
static struct ida iommu_group_ida;
static struct mutex iommu_group_mutex;
static DEFINE_IDA(iommu_group_ida);
struct iommu_callback_data {
const struct iommu_ops *ops;
@@ -144,9 +143,7 @@ static void iommu_group_release(struct kobject *kobj)
if (group->iommu_data_release)
group->iommu_data_release(group->iommu_data);
mutex_lock(&iommu_group_mutex);
ida_remove(&iommu_group_ida, group->id);
mutex_unlock(&iommu_group_mutex);
ida_simple_remove(&iommu_group_ida, group->id);
if (group->default_domain)
iommu_domain_free(group->default_domain);
@@ -186,26 +183,17 @@ struct iommu_group *iommu_group_alloc(void)
INIT_LIST_HEAD(&group->devices);
BLOCKING_INIT_NOTIFIER_HEAD(&group->notifier);
mutex_lock(&iommu_group_mutex);
again:
if (unlikely(0 == ida_pre_get(&iommu_group_ida, GFP_KERNEL))) {
ret = ida_simple_get(&iommu_group_ida, 0, 0, GFP_KERNEL);
if (ret < 0) {
kfree(group);
mutex_unlock(&iommu_group_mutex);
return ERR_PTR(-ENOMEM);
return ERR_PTR(ret);
}
if (-EAGAIN == ida_get_new(&iommu_group_ida, &group->id))
goto again;
mutex_unlock(&iommu_group_mutex);
group->id = ret;
ret = kobject_init_and_add(&group->kobj, &iommu_group_ktype,
NULL, "%d", group->id);
if (ret) {
mutex_lock(&iommu_group_mutex);
ida_remove(&iommu_group_ida, group->id);
mutex_unlock(&iommu_group_mutex);
ida_simple_remove(&iommu_group_ida, group->id);
kfree(group);
return ERR_PTR(ret);
}
@@ -348,6 +336,9 @@ static int iommu_group_create_direct_mappings(struct iommu_group *group,
list_for_each_entry(entry, &mappings, list) {
dma_addr_t start, end, addr;
if (domain->ops->apply_dm_region)
domain->ops->apply_dm_region(dev, domain, entry);
start = ALIGN(entry->start, pg_size);
end = ALIGN(entry->start + entry->length, pg_size);
@@ -1483,9 +1474,6 @@ static int __init iommu_init(void)
{
iommu_group_kset = kset_create_and_add("iommu_groups",
NULL, kernel_kobj);
ida_init(&iommu_group_ida);
mutex_init(&iommu_group_mutex);
BUG_ON(!iommu_group_kset);
return 0;
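
The ida_simple_*() conversion above drops the driver-private mutex and the ida_pre_get()/ida_get_new() retry loop. A minimal hedged sketch of the replacement pattern, with hypothetical names:

    #include <linux/idr.h>

    static DEFINE_IDA(demo_ida);

    static int demo_alloc_id(void)
    {
    	/* Lowest free id from 0 upward (end = 0 means no upper bound);
    	 * locking and allocation retries are handled internally. */
    	return ida_simple_get(&demo_ida, 0, 0, GFP_KERNEL);
    }

    static void demo_free_id(int id)
    {
    	ida_simple_remove(&demo_ida, id);
    }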

File diff suppressed because it is too large.


@@ -42,74 +42,53 @@
*/
#define MAX_NUM_MIDS 32
/* Maximum number of context banks that can be present in IOMMU */
#define IOMMU_MAX_CBS 128
/**
* struct msm_iommu_dev - a single IOMMU hardware instance
* name Human-readable name given to this IOMMU HW instance
* ncb Number of context banks present on this IOMMU HW instance
* dev: IOMMU device
* irq: Interrupt number
* clk: The bus clock for this IOMMU hardware instance
* pclk: The clock for the IOMMU bus interconnect
* dev_node: list head in qcom_iommu_device_list
* dom_node: list head for domain
* ctx_list: list of 'struct msm_iommu_ctx_dev'
* context_map: Bitmap to track allocated context banks
*/
struct msm_iommu_dev {
const char *name;
void __iomem *base;
int ncb;
struct device *dev;
int irq;
struct clk *clk;
struct clk *pclk;
struct list_head dev_node;
struct list_head dom_node;
struct list_head ctx_list;
DECLARE_BITMAP(context_map, IOMMU_MAX_CBS);
};
/**
* struct msm_iommu_ctx_dev - an IOMMU context bank instance
* name Human-readable name given to this context bank
* of_node node ptr of client device
* num Index of this context bank within the hardware
* mids List of Machine IDs that are to be mapped into this context
* bank, terminated by -1. The MID is a set of signals on the
* AXI bus that identifies the function associated with a specific
* memory request. (See ARM spec).
* num_mids Total number of mids
* node list head in ctx_list
*/
struct msm_iommu_ctx_dev {
const char *name;
struct device_node *of_node;
int num;
int mids[MAX_NUM_MIDS];
int num_mids;
struct list_head list;
};
/**
* struct msm_iommu_drvdata - A single IOMMU hardware instance
* @base: IOMMU config port base address (VA)
* @ncb The number of contexts on this IOMMU
* @irq: Interrupt number
* @clk: The bus clock for this IOMMU hardware instance
* @pclk: The clock for the IOMMU bus interconnect
*
* A msm_iommu_drvdata holds the global driver data about a single piece
* of an IOMMU hardware instance.
*/
struct msm_iommu_drvdata {
void __iomem *base;
int irq;
int ncb;
struct clk *clk;
struct clk *pclk;
};
/**
* struct msm_iommu_ctx_drvdata - an IOMMU context bank instance
* @num: Hardware context number of this context
* @pdev: Platform device associated wit this HW instance
* @attached_elm: List element for domains to track which devices are
* attached to them
*
* A msm_iommu_ctx_drvdata holds the driver data for a single context bank
* within each IOMMU hardware instance
*/
struct msm_iommu_ctx_drvdata {
int num;
struct platform_device *pdev;
struct list_head attached_elm;
};
/*
* Look up an IOMMU context device by its context name. NULL if none found.
* Useful for testing and drivers that do not yet fully have IOMMU stuff in
* their platform devices.
*/
struct device *msm_iommu_get_ctx(const char *ctx_name);
/*
* Interrupt handler for the IOMMU context fault interrupt. Hooking the
* interrupt is not supported in the API yet, but this will print an error


@@ -1,381 +0,0 @@
/* Copyright (c) 2010-2011, Code Aurora Forum. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
* 02110-1301, USA.
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/io.h>
#include <linux/clk.h>
#include <linux/iommu.h>
#include <linux/interrupt.h>
#include <linux/err.h>
#include <linux/slab.h>
#include "msm_iommu_hw-8xxx.h"
#include "msm_iommu.h"
struct iommu_ctx_iter_data {
/* input */
const char *name;
/* output */
struct device *dev;
};
static struct platform_device *msm_iommu_root_dev;
static int each_iommu_ctx(struct device *dev, void *data)
{
struct iommu_ctx_iter_data *res = data;
struct msm_iommu_ctx_dev *c = dev->platform_data;
if (!res || !c || !c->name || !res->name)
return -EINVAL;
if (!strcmp(res->name, c->name)) {
res->dev = dev;
return 1;
}
return 0;
}
static int each_iommu(struct device *dev, void *data)
{
return device_for_each_child(dev, data, each_iommu_ctx);
}
struct device *msm_iommu_get_ctx(const char *ctx_name)
{
struct iommu_ctx_iter_data r;
int found;
if (!msm_iommu_root_dev) {
pr_err("No root IOMMU device.\n");
goto fail;
}
r.name = ctx_name;
found = device_for_each_child(&msm_iommu_root_dev->dev, &r, each_iommu);
if (!found) {
pr_err("Could not find context <%s>\n", ctx_name);
goto fail;
}
return r.dev;
fail:
return NULL;
}
EXPORT_SYMBOL(msm_iommu_get_ctx);
static void msm_iommu_reset(void __iomem *base, int ncb)
{
int ctx;
SET_RPUE(base, 0);
SET_RPUEIE(base, 0);
SET_ESRRESTORE(base, 0);
SET_TBE(base, 0);
SET_CR(base, 0);
SET_SPDMBE(base, 0);
SET_TESTBUSCR(base, 0);
SET_TLBRSW(base, 0);
SET_GLOBAL_TLBIALL(base, 0);
SET_RPU_ACR(base, 0);
SET_TLBLKCRWE(base, 1);
for (ctx = 0; ctx < ncb; ctx++) {
SET_BPRCOSH(base, ctx, 0);
SET_BPRCISH(base, ctx, 0);
SET_BPRCNSH(base, ctx, 0);
SET_BPSHCFG(base, ctx, 0);
SET_BPMTCFG(base, ctx, 0);
SET_ACTLR(base, ctx, 0);
SET_SCTLR(base, ctx, 0);
SET_FSRRESTORE(base, ctx, 0);
SET_TTBR0(base, ctx, 0);
SET_TTBR1(base, ctx, 0);
SET_TTBCR(base, ctx, 0);
SET_BFBCR(base, ctx, 0);
SET_PAR(base, ctx, 0);
SET_FAR(base, ctx, 0);
SET_CTX_TLBIALL(base, ctx, 0);
SET_TLBFLPTER(base, ctx, 0);
SET_TLBSLPTER(base, ctx, 0);
SET_TLBLKCR(base, ctx, 0);
SET_PRRR(base, ctx, 0);
SET_NMRR(base, ctx, 0);
SET_CONTEXTIDR(base, ctx, 0);
}
}
static int msm_iommu_probe(struct platform_device *pdev)
{
struct resource *r;
struct clk *iommu_clk;
struct clk *iommu_pclk;
struct msm_iommu_drvdata *drvdata;
struct msm_iommu_dev *iommu_dev = dev_get_platdata(&pdev->dev);
void __iomem *regs_base;
int ret, irq, par;
if (pdev->id == -1) {
msm_iommu_root_dev = pdev;
return 0;
}
drvdata = kzalloc(sizeof(*drvdata), GFP_KERNEL);
if (!drvdata) {
ret = -ENOMEM;
goto fail;
}
if (!iommu_dev) {
ret = -ENODEV;
goto fail;
}
iommu_pclk = clk_get(NULL, "smmu_pclk");
if (IS_ERR(iommu_pclk)) {
ret = -ENODEV;
goto fail;
}
ret = clk_prepare_enable(iommu_pclk);
if (ret)
goto fail_enable;
iommu_clk = clk_get(&pdev->dev, "iommu_clk");
if (!IS_ERR(iommu_clk)) {
if (clk_get_rate(iommu_clk) == 0)
clk_set_rate(iommu_clk, 1);
ret = clk_prepare_enable(iommu_clk);
if (ret) {
clk_put(iommu_clk);
goto fail_pclk;
}
} else
iommu_clk = NULL;
r = platform_get_resource_byname(pdev, IORESOURCE_MEM, "physbase");
regs_base = devm_ioremap_resource(&pdev->dev, r);
if (IS_ERR(regs_base)) {
ret = PTR_ERR(regs_base);
goto fail_clk;
}
irq = platform_get_irq_byname(pdev, "secure_irq");
if (irq < 0) {
ret = -ENODEV;
goto fail_clk;
}
msm_iommu_reset(regs_base, iommu_dev->ncb);
SET_M(regs_base, 0, 1);
SET_PAR(regs_base, 0, 0);
SET_V2PCFG(regs_base, 0, 1);
SET_V2PPR(regs_base, 0, 0);
par = GET_PAR(regs_base, 0);
SET_V2PCFG(regs_base, 0, 0);
SET_M(regs_base, 0, 0);
if (!par) {
pr_err("%s: Invalid PAR value detected\n", iommu_dev->name);
ret = -ENODEV;
goto fail_clk;
}
ret = request_irq(irq, msm_iommu_fault_handler, 0,
"msm_iommu_secure_irpt_handler", drvdata);
if (ret) {
pr_err("Request IRQ %d failed with ret=%d\n", irq, ret);
goto fail_clk;
}
drvdata->pclk = iommu_pclk;
drvdata->clk = iommu_clk;
drvdata->base = regs_base;
drvdata->irq = irq;
drvdata->ncb = iommu_dev->ncb;
pr_info("device %s mapped at %p, irq %d with %d ctx banks\n",
iommu_dev->name, regs_base, irq, iommu_dev->ncb);
platform_set_drvdata(pdev, drvdata);
clk_disable(iommu_clk);
clk_disable(iommu_pclk);
return 0;
fail_clk:
if (iommu_clk) {
clk_disable(iommu_clk);
clk_put(iommu_clk);
}
fail_pclk:
clk_disable_unprepare(iommu_pclk);
fail_enable:
clk_put(iommu_pclk);
fail:
kfree(drvdata);
return ret;
}
static int msm_iommu_remove(struct platform_device *pdev)
{
struct msm_iommu_drvdata *drv = NULL;
drv = platform_get_drvdata(pdev);
if (drv) {
if (drv->clk) {
clk_unprepare(drv->clk);
clk_put(drv->clk);
}
clk_unprepare(drv->pclk);
clk_put(drv->pclk);
memset(drv, 0, sizeof(*drv));
kfree(drv);
}
return 0;
}
static int msm_iommu_ctx_probe(struct platform_device *pdev)
{
struct msm_iommu_ctx_dev *c = dev_get_platdata(&pdev->dev);
struct msm_iommu_drvdata *drvdata;
struct msm_iommu_ctx_drvdata *ctx_drvdata;
int i, ret;
if (!c || !pdev->dev.parent)
return -EINVAL;
drvdata = dev_get_drvdata(pdev->dev.parent);
if (!drvdata)
return -ENODEV;
ctx_drvdata = kzalloc(sizeof(*ctx_drvdata), GFP_KERNEL);
if (!ctx_drvdata)
return -ENOMEM;
ctx_drvdata->num = c->num;
ctx_drvdata->pdev = pdev;
INIT_LIST_HEAD(&ctx_drvdata->attached_elm);
platform_set_drvdata(pdev, ctx_drvdata);
ret = clk_prepare_enable(drvdata->pclk);
if (ret)
goto fail;
if (drvdata->clk) {
ret = clk_prepare_enable(drvdata->clk);
if (ret) {
clk_disable_unprepare(drvdata->pclk);
goto fail;
}
}
/* Program the M2V tables for this context */
for (i = 0; i < MAX_NUM_MIDS; i++) {
int mid = c->mids[i];
if (mid == -1)
break;
SET_M2VCBR_N(drvdata->base, mid, 0);
SET_CBACR_N(drvdata->base, c->num, 0);
/* Set VMID = 0 */
SET_VMID(drvdata->base, mid, 0);
/* Set the context number for that MID to this context */
SET_CBNDX(drvdata->base, mid, c->num);
/* Set MID associated with this context bank to 0*/
SET_CBVMID(drvdata->base, c->num, 0);
/* Set the ASID for TLB tagging for this context */
SET_CONTEXTIDR_ASID(drvdata->base, c->num, c->num);
/* Set security bit override to be Non-secure */
SET_NSCFG(drvdata->base, mid, 3);
}
clk_disable(drvdata->clk);
clk_disable(drvdata->pclk);
dev_info(&pdev->dev, "context %s using bank %d\n", c->name, c->num);
return 0;
fail:
kfree(ctx_drvdata);
return ret;
}
static int msm_iommu_ctx_remove(struct platform_device *pdev)
{
struct msm_iommu_ctx_drvdata *drv = NULL;
drv = platform_get_drvdata(pdev);
if (drv) {
memset(drv, 0, sizeof(struct msm_iommu_ctx_drvdata));
kfree(drv);
}
return 0;
}
static struct platform_driver msm_iommu_driver = {
.driver = {
.name = "msm_iommu",
},
.probe = msm_iommu_probe,
.remove = msm_iommu_remove,
};
static struct platform_driver msm_iommu_ctx_driver = {
.driver = {
.name = "msm_iommu_ctx",
},
.probe = msm_iommu_ctx_probe,
.remove = msm_iommu_ctx_remove,
};
static struct platform_driver * const drivers[] = {
&msm_iommu_driver,
&msm_iommu_ctx_driver,
};
static int __init msm_iommu_driver_init(void)
{
return platform_register_drivers(drivers, ARRAY_SIZE(drivers));
}
static void __exit msm_iommu_driver_exit(void)
{
platform_unregister_drivers(drivers, ARRAY_SIZE(drivers));
}
subsys_initcall(msm_iommu_driver_init);
module_exit(msm_iommu_driver_exit);
MODULE_LICENSE("GPL v2");
MODULE_AUTHOR("Stepan Moskovchenko <stepanm@codeaurora.org>");


@@ -34,7 +34,7 @@
#include <dt-bindings/memory/mt8173-larb-port.h>
#include <soc/mediatek/smi.h>
#include "io-pgtable.h"
#include "mtk_iommu.h"
#define REG_MMU_PT_BASE_ADDR 0x000
@@ -93,20 +93,6 @@
#define MTK_PROTECT_PA_ALIGN 128
struct mtk_iommu_suspend_reg {
u32 standard_axi_mode;
u32 dcm_dis;
u32 ctrl_reg;
u32 int_control0;
u32 int_main_control;
};
struct mtk_iommu_client_priv {
struct list_head client;
unsigned int mtk_m4u_id;
struct device *m4udev;
};
struct mtk_iommu_domain {
spinlock_t pgtlock; /* lock for page table */
@@ -116,19 +102,6 @@ struct mtk_iommu_domain {
struct iommu_domain domain;
};
struct mtk_iommu_data {
void __iomem *base;
int irq;
struct device *dev;
struct clk *bclk;
phys_addr_t protect_base; /* protect memory base */
struct mtk_iommu_suspend_reg reg;
struct mtk_iommu_domain *m4u_dom;
struct iommu_group *m4u_group;
struct mtk_smi_iommu smi_imu; /* SMI larb iommu info */
bool enable_4GB;
};
static struct iommu_ops mtk_iommu_ops;
static struct mtk_iommu_domain *to_mtk_domain(struct iommu_domain *dom)
@@ -455,7 +428,6 @@ static int mtk_iommu_of_xlate(struct device *dev, struct of_phandle_args *args)
if (!dev->archdata.iommu) {
/* Get the m4u device */
m4updev = of_find_device_by_node(args->np);
of_node_put(args->np);
if (WARN_ON(!m4updev))
return -EINVAL;
@@ -552,25 +524,6 @@ static int mtk_iommu_hw_init(const struct mtk_iommu_data *data)
return 0;
}
static int compare_of(struct device *dev, void *data)
{
return dev->of_node == data;
}
static int mtk_iommu_bind(struct device *dev)
{
struct mtk_iommu_data *data = dev_get_drvdata(dev);
return component_bind_all(dev, &data->smi_imu);
}
static void mtk_iommu_unbind(struct device *dev)
{
struct mtk_iommu_data *data = dev_get_drvdata(dev);
component_unbind_all(dev, &data->smi_imu);
}
static const struct component_master_ops mtk_iommu_com_ops = {
.bind = mtk_iommu_bind,
.unbind = mtk_iommu_unbind,

drivers/iommu/mtk_iommu.h (new file, 77 lines)

@@ -0,0 +1,77 @@
/*
* Copyright (c) 2015-2016 MediaTek Inc.
* Author: Honghui Zhang <honghui.zhang@mediatek.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef _MTK_IOMMU_H_
#define _MTK_IOMMU_H_
#include <linux/clk.h>
#include <linux/component.h>
#include <linux/device.h>
#include <linux/io.h>
#include <linux/iommu.h>
#include <linux/list.h>
#include <linux/spinlock.h>
#include <soc/mediatek/smi.h>
#include "io-pgtable.h"
struct mtk_iommu_suspend_reg {
u32 standard_axi_mode;
u32 dcm_dis;
u32 ctrl_reg;
u32 int_control0;
u32 int_main_control;
};
struct mtk_iommu_client_priv {
struct list_head client;
unsigned int mtk_m4u_id;
struct device *m4udev;
};
struct mtk_iommu_domain;
struct mtk_iommu_data {
void __iomem *base;
int irq;
struct device *dev;
struct clk *bclk;
phys_addr_t protect_base; /* protect memory base */
struct mtk_iommu_suspend_reg reg;
struct mtk_iommu_domain *m4u_dom;
struct iommu_group *m4u_group;
struct mtk_smi_iommu smi_imu; /* SMI larb iommu info */
bool enable_4GB;
};
static int compare_of(struct device *dev, void *data)
{
return dev->of_node == data;
}
static int mtk_iommu_bind(struct device *dev)
{
struct mtk_iommu_data *data = dev_get_drvdata(dev);
return component_bind_all(dev, &data->smi_imu);
}
static void mtk_iommu_unbind(struct device *dev)
{
struct mtk_iommu_data *data = dev_get_drvdata(dev);
component_unbind_all(dev, &data->smi_imu);
}
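/*
 * Note: these component-framework helpers live in this shared header
 * because both mtk_iommu.c and mtk_iommu_v1.c include it and register the
 * same component master ops, avoiding a separate common module.
 */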
#endif


@@ -0,0 +1,727 @@
/*
* Copyright (c) 2015-2016 MediaTek Inc.
* Author: Honghui Zhang <honghui.zhang@mediatek.com>
*
* Based on driver/iommu/mtk_iommu.c
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/bootmem.h>
#include <linux/bug.h>
#include <linux/clk.h>
#include <linux/component.h>
#include <linux/device.h>
#include <linux/dma-iommu.h>
#include <linux/err.h>
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/iommu.h>
#include <linux/iopoll.h>
#include <linux/kmemleak.h>
#include <linux/list.h>
#include <linux/of_address.h>
#include <linux/of_iommu.h>
#include <linux/of_irq.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <asm/barrier.h>
#include <asm/dma-iommu.h>
#include <linux/module.h>
#include <dt-bindings/memory/mt2701-larb-port.h>
#include <soc/mediatek/smi.h>
#include "mtk_iommu.h"
#define REG_MMU_PT_BASE_ADDR 0x000
#define F_ALL_INVLD 0x2
#define F_MMU_INV_RANGE 0x1
#define F_INVLD_EN0 BIT(0)
#define F_INVLD_EN1 BIT(1)
#define F_MMU_FAULT_VA_MSK 0xfffff000
#define MTK_PROTECT_PA_ALIGN 128
#define REG_MMU_CTRL_REG 0x210
#define F_MMU_CTRL_COHERENT_EN BIT(8)
#define REG_MMU_IVRP_PADDR 0x214
#define REG_MMU_INT_CONTROL 0x220
#define F_INT_TRANSLATION_FAULT BIT(0)
#define F_INT_MAIN_MULTI_HIT_FAULT BIT(1)
#define F_INT_INVALID_PA_FAULT BIT(2)
#define F_INT_ENTRY_REPLACEMENT_FAULT BIT(3)
#define F_INT_TABLE_WALK_FAULT BIT(4)
#define F_INT_TLB_MISS_FAULT BIT(5)
#define F_INT_PFH_DMA_FIFO_OVERFLOW BIT(6)
#define F_INT_MISS_DMA_FIFO_OVERFLOW BIT(7)
#define F_MMU_TF_PROTECT_SEL(prot) (((prot) & 0x3) << 5)
#define F_INT_CLR_BIT BIT(12)
#define REG_MMU_FAULT_ST 0x224
#define REG_MMU_FAULT_VA 0x228
#define REG_MMU_INVLD_PA 0x22C
#define REG_MMU_INT_ID 0x388
#define REG_MMU_INVALIDATE 0x5c0
#define REG_MMU_INVLD_START_A 0x5c4
#define REG_MMU_INVLD_END_A 0x5c8
#define REG_MMU_INV_SEL 0x5d8
#define REG_MMU_STANDARD_AXI_MODE 0x5e8
#define REG_MMU_DCM 0x5f0
#define F_MMU_DCM_ON BIT(1)
#define REG_MMU_CPE_DONE 0x60c
#define F_DESC_VALID 0x2
#define F_DESC_NONSEC BIT(3)
#define MT2701_M4U_TF_LARB(TF) (6 - (((TF) >> 13) & 0x7))
#define MT2701_M4U_TF_PORT(TF) (((TF) >> 8) & 0xF)
/* MTK generation one iommu HW only supports 4K size mappings */
#define MT2701_IOMMU_PAGE_SHIFT 12
#define MT2701_IOMMU_PAGE_SIZE (1UL << MT2701_IOMMU_PAGE_SHIFT)
/*
* The MTK m4u supports a 4GB iova address space and only 4K page mappings,
* so the flat pagetable needs 4GB / 4KB = 1M u32 entries, i.e. exactly 4M.
*/
#define M2701_IOMMU_PGT_SIZE SZ_4M
struct mtk_iommu_domain {
spinlock_t pgtlock; /* lock for page table */
struct iommu_domain domain;
u32 *pgt_va;
dma_addr_t pgt_pa;
struct mtk_iommu_data *data;
};
static struct mtk_iommu_domain *to_mtk_domain(struct iommu_domain *dom)
{
return container_of(dom, struct mtk_iommu_domain, domain);
}
static const int mt2701_m4u_in_larb[] = {
LARB0_PORT_OFFSET, LARB1_PORT_OFFSET,
LARB2_PORT_OFFSET, LARB3_PORT_OFFSET
};
static inline int mt2701_m4u_to_larb(int id)
{
int i;
for (i = ARRAY_SIZE(mt2701_m4u_in_larb) - 1; i >= 0; i--)
if ((id) >= mt2701_m4u_in_larb[i])
return i;
return 0;
}
static inline int mt2701_m4u_to_port(int id)
{
int larb = mt2701_m4u_to_larb(id);
return id - mt2701_m4u_in_larb[larb];
}
static void mtk_iommu_tlb_flush_all(struct mtk_iommu_data *data)
{
writel_relaxed(F_INVLD_EN1 | F_INVLD_EN0,
data->base + REG_MMU_INV_SEL);
writel_relaxed(F_ALL_INVLD, data->base + REG_MMU_INVALIDATE);
wmb(); /* Make sure the tlb flush all done */
}
static void mtk_iommu_tlb_flush_range(struct mtk_iommu_data *data,
unsigned long iova, size_t size)
{
int ret;
u32 tmp;
writel_relaxed(F_INVLD_EN1 | F_INVLD_EN0,
data->base + REG_MMU_INV_SEL);
writel_relaxed(iova & F_MMU_FAULT_VA_MSK,
data->base + REG_MMU_INVLD_START_A);
writel_relaxed((iova + size - 1) & F_MMU_FAULT_VA_MSK,
data->base + REG_MMU_INVLD_END_A);
writel_relaxed(F_MMU_INV_RANGE, data->base + REG_MMU_INVALIDATE);
ret = readl_poll_timeout_atomic(data->base + REG_MMU_CPE_DONE,
tmp, tmp != 0, 10, 100000);
if (ret) {
dev_warn(data->dev,
"Partial TLB flush timed out, falling back to full flush\n");
mtk_iommu_tlb_flush_all(data);
}
/* Clear the CPE status */
writel_relaxed(0, data->base + REG_MMU_CPE_DONE);
}
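/*
 * Note: readl_poll_timeout_atomic() above polls REG_MMU_CPE_DONE every
 * 10 us for up to 100 ms (its last two arguments); on timeout the driver
 * falls back to a full TLB flush rather than risk stale entries.
 */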
static irqreturn_t mtk_iommu_isr(int irq, void *dev_id)
{
struct mtk_iommu_data *data = dev_id;
struct mtk_iommu_domain *dom = data->m4u_dom;
u32 int_state, regval, fault_iova, fault_pa;
unsigned int fault_larb, fault_port;
/* Read error information from registers */
int_state = readl_relaxed(data->base + REG_MMU_FAULT_ST);
fault_iova = readl_relaxed(data->base + REG_MMU_FAULT_VA);
fault_iova &= F_MMU_FAULT_VA_MSK;
fault_pa = readl_relaxed(data->base + REG_MMU_INVLD_PA);
regval = readl_relaxed(data->base + REG_MMU_INT_ID);
fault_larb = MT2701_M4U_TF_LARB(regval);
fault_port = MT2701_M4U_TF_PORT(regval);
/*
* The MTK v1 iommu HW cannot determine whether a fault was a read or a
* write, so report every fault as a read fault.
*/
if (report_iommu_fault(&dom->domain, data->dev, fault_iova,
IOMMU_FAULT_READ))
dev_err_ratelimited(data->dev,
"fault type=0x%x iova=0x%x pa=0x%x larb=%d port=%d\n",
int_state, fault_iova, fault_pa,
fault_larb, fault_port);
/* Interrupt clear */
regval = readl_relaxed(data->base + REG_MMU_INT_CONTROL);
regval |= F_INT_CLR_BIT;
writel_relaxed(regval, data->base + REG_MMU_INT_CONTROL);
mtk_iommu_tlb_flush_all(data);
return IRQ_HANDLED;
}
static void mtk_iommu_config(struct mtk_iommu_data *data,
struct device *dev, bool enable)
{
struct mtk_iommu_client_priv *head, *cur, *next;
struct mtk_smi_larb_iommu *larb_mmu;
unsigned int larbid, portid;
head = dev->archdata.iommu;
list_for_each_entry_safe(cur, next, &head->client, client) {
larbid = mt2701_m4u_to_larb(cur->mtk_m4u_id);
portid = mt2701_m4u_to_port(cur->mtk_m4u_id);
larb_mmu = &data->smi_imu.larb_imu[larbid];
dev_dbg(dev, "%s iommu port: %d\n",
enable ? "enable" : "disable", portid);
if (enable)
larb_mmu->mmu |= MTK_SMI_MMU_EN(portid);
else
larb_mmu->mmu &= ~MTK_SMI_MMU_EN(portid);
}
}
static int mtk_iommu_domain_finalise(struct mtk_iommu_data *data)
{
struct mtk_iommu_domain *dom = data->m4u_dom;
spin_lock_init(&dom->pgtlock);
dom->pgt_va = dma_zalloc_coherent(data->dev,
M2701_IOMMU_PGT_SIZE,
&dom->pgt_pa, GFP_KERNEL);
if (!dom->pgt_va)
return -ENOMEM;
writel(dom->pgt_pa, data->base + REG_MMU_PT_BASE_ADDR);
dom->data = data;
return 0;
}
static struct iommu_domain *mtk_iommu_domain_alloc(unsigned type)
{
struct mtk_iommu_domain *dom;
if (type != IOMMU_DOMAIN_UNMANAGED)
return NULL;
dom = kzalloc(sizeof(*dom), GFP_KERNEL);
if (!dom)
return NULL;
return &dom->domain;
}
static void mtk_iommu_domain_free(struct iommu_domain *domain)
{
struct mtk_iommu_domain *dom = to_mtk_domain(domain);
struct mtk_iommu_data *data = dom->data;
dma_free_coherent(data->dev, M2701_IOMMU_PGT_SIZE,
dom->pgt_va, dom->pgt_pa);
kfree(to_mtk_domain(domain));
}
static int mtk_iommu_attach_device(struct iommu_domain *domain,
struct device *dev)
{
struct mtk_iommu_domain *dom = to_mtk_domain(domain);
struct mtk_iommu_client_priv *priv = dev->archdata.iommu;
struct mtk_iommu_data *data;
int ret;
if (!priv)
return -ENODEV;
data = dev_get_drvdata(priv->m4udev);
if (!data->m4u_dom) {
data->m4u_dom = dom;
ret = mtk_iommu_domain_finalise(data);
if (ret) {
data->m4u_dom = NULL;
return ret;
}
}
mtk_iommu_config(data, dev, true);
return 0;
}
static void mtk_iommu_detach_device(struct iommu_domain *domain,
struct device *dev)
{
struct mtk_iommu_client_priv *priv = dev->archdata.iommu;
struct mtk_iommu_data *data;
if (!priv)
return;
data = dev_get_drvdata(priv->m4udev);
mtk_iommu_config(data, dev, false);
}
static int mtk_iommu_map(struct iommu_domain *domain, unsigned long iova,
phys_addr_t paddr, size_t size, int prot)
{
struct mtk_iommu_domain *dom = to_mtk_domain(domain);
unsigned int page_num = size >> MT2701_IOMMU_PAGE_SHIFT;
unsigned long flags;
unsigned int i;
u32 *pgt_base_iova = dom->pgt_va + (iova >> MT2701_IOMMU_PAGE_SHIFT);
u32 pabase = (u32)paddr;
int map_size = 0;
spin_lock_irqsave(&dom->pgtlock, flags);
for (i = 0; i < page_num; i++) {
if (pgt_base_iova[i]) {
memset(pgt_base_iova, 0, i * sizeof(u32));
break;
}
pgt_base_iova[i] = pabase | F_DESC_VALID | F_DESC_NONSEC;
pabase += MT2701_IOMMU_PAGE_SIZE;
map_size += MT2701_IOMMU_PAGE_SIZE;
}
spin_unlock_irqrestore(&dom->pgtlock, flags);
mtk_iommu_tlb_flush_range(dom->data, iova, size);
return map_size == size ? 0 : -EEXIST;
}
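/*
 * Illustration of the flat table above: for iova 0x40003000 the entry
 * index is iova >> MT2701_IOMMU_PAGE_SHIFT = 0x40003, and each u32 entry
 * stores the 4 KiB-aligned physical address OR'd with
 * F_DESC_VALID | F_DESC_NONSEC.
 */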
static size_t mtk_iommu_unmap(struct iommu_domain *domain,
unsigned long iova, size_t size)
{
struct mtk_iommu_domain *dom = to_mtk_domain(domain);
unsigned long flags;
u32 *pgt_base_iova = dom->pgt_va + (iova >> MT2701_IOMMU_PAGE_SHIFT);
unsigned int page_num = size >> MT2701_IOMMU_PAGE_SHIFT;
spin_lock_irqsave(&dom->pgtlock, flags);
memset(pgt_base_iova, 0, page_num * sizeof(u32));
spin_unlock_irqrestore(&dom->pgtlock, flags);
mtk_iommu_tlb_flush_range(dom->data, iova, size);
return size;
}
static phys_addr_t mtk_iommu_iova_to_phys(struct iommu_domain *domain,
dma_addr_t iova)
{
struct mtk_iommu_domain *dom = to_mtk_domain(domain);
unsigned long flags;
phys_addr_t pa;
spin_lock_irqsave(&dom->pgtlock, flags);
pa = *(dom->pgt_va + (iova >> MT2701_IOMMU_PAGE_SHIFT));
pa = pa & (~(MT2701_IOMMU_PAGE_SIZE - 1));
spin_unlock_irqrestore(&dom->pgtlock, flags);
return pa;
}
/*
* The MTK generation one iommu HW supports only one iommu domain, and all
* the clients share the same iova address space.
*/
static int mtk_iommu_create_mapping(struct device *dev,
struct of_phandle_args *args)
{
struct mtk_iommu_client_priv *head, *priv, *next;
struct platform_device *m4updev;
struct dma_iommu_mapping *mtk_mapping;
struct device *m4udev;
int ret;
if (args->args_count != 1) {
dev_err(dev, "invalid #iommu-cells(%d) property for IOMMU\n",
args->args_count);
return -EINVAL;
}
if (!dev->archdata.iommu) {
/* Get the m4u device */
m4updev = of_find_device_by_node(args->np);
if (WARN_ON(!m4updev))
return -EINVAL;
head = kzalloc(sizeof(*head), GFP_KERNEL);
if (!head)
return -ENOMEM;
dev->archdata.iommu = head;
INIT_LIST_HEAD(&head->client);
head->m4udev = &m4updev->dev;
} else {
head = dev->archdata.iommu;
}
priv = kzalloc(sizeof(*priv), GFP_KERNEL);
if (!priv) {
ret = -ENOMEM;
goto err_free_mem;
}
priv->mtk_m4u_id = args->args[0];
list_add_tail(&priv->client, &head->client);
m4udev = head->m4udev;
mtk_mapping = m4udev->archdata.iommu;
if (!mtk_mapping) {
/* MTK iommu support 4GB iova address space. */
mtk_mapping = arm_iommu_create_mapping(&platform_bus_type,
0, 1ULL << 32);
if (IS_ERR(mtk_mapping)) {
ret = PTR_ERR(mtk_mapping);
goto err_free_mem;
}
m4udev->archdata.iommu = mtk_mapping;
}
ret = arm_iommu_attach_device(dev, mtk_mapping);
if (ret)
goto err_release_mapping;
return 0;
err_release_mapping:
arm_iommu_release_mapping(mtk_mapping);
m4udev->archdata.iommu = NULL;
err_free_mem:
list_for_each_entry_safe(priv, next, &head->client, client)
kfree(priv);
kfree(head);
dev->archdata.iommu = NULL;
return ret;
}
static int mtk_iommu_add_device(struct device *dev)
{
struct iommu_group *group;
struct of_phandle_args iommu_spec;
struct of_phandle_iterator it;
int err;
of_for_each_phandle(&it, err, dev->of_node, "iommus",
"#iommu-cells", 0) {
int count = of_phandle_iterator_args(&it, iommu_spec.args,
MAX_PHANDLE_ARGS);
iommu_spec.np = of_node_get(it.node);
iommu_spec.args_count = count;
mtk_iommu_create_mapping(dev, &iommu_spec);
of_node_put(iommu_spec.np);
}
if (!dev->archdata.iommu) /* Not an iommu client device */
return -ENODEV;
group = iommu_group_get_for_dev(dev);
if (IS_ERR(group))
return PTR_ERR(group);
iommu_group_put(group);
return 0;
}
static void mtk_iommu_remove_device(struct device *dev)
{
struct mtk_iommu_client_priv *head, *cur, *next;
head = dev->archdata.iommu;
if (!head)
return;
list_for_each_entry_safe(cur, next, &head->client, client) {
list_del(&cur->client);
kfree(cur);
}
kfree(head);
dev->archdata.iommu = NULL;
iommu_group_remove_device(dev);
}
static struct iommu_group *mtk_iommu_device_group(struct device *dev)
{
struct mtk_iommu_data *data;
struct mtk_iommu_client_priv *priv;
priv = dev->archdata.iommu;
if (!priv)
return ERR_PTR(-ENODEV);
/* All the client devices are in the same m4u iommu-group */
data = dev_get_drvdata(priv->m4udev);
if (!data->m4u_group) {
data->m4u_group = iommu_group_alloc();
if (IS_ERR(data->m4u_group))
dev_err(dev, "Failed to allocate M4U IOMMU group\n");
}
return data->m4u_group;
}
static int mtk_iommu_hw_init(const struct mtk_iommu_data *data)
{
u32 regval;
int ret;
ret = clk_prepare_enable(data->bclk);
if (ret) {
dev_err(data->dev, "Failed to enable iommu bclk(%d)\n", ret);
return ret;
}
regval = F_MMU_CTRL_COHERENT_EN | F_MMU_TF_PROTECT_SEL(2);
writel_relaxed(regval, data->base + REG_MMU_CTRL_REG);
regval = F_INT_TRANSLATION_FAULT |
F_INT_MAIN_MULTI_HIT_FAULT |
F_INT_INVALID_PA_FAULT |
F_INT_ENTRY_REPLACEMENT_FAULT |
F_INT_TABLE_WALK_FAULT |
F_INT_TLB_MISS_FAULT |
F_INT_PFH_DMA_FIFO_OVERFLOW |
F_INT_MISS_DMA_FIFO_OVERFLOW;
writel_relaxed(regval, data->base + REG_MMU_INT_CONTROL);
/* Protect memory: HW will write here on a translation fault */
writel_relaxed(data->protect_base,
data->base + REG_MMU_IVRP_PADDR);
writel_relaxed(F_MMU_DCM_ON, data->base + REG_MMU_DCM);
if (devm_request_irq(data->dev, data->irq, mtk_iommu_isr, 0,
dev_name(data->dev), (void *)data)) {
writel_relaxed(0, data->base + REG_MMU_PT_BASE_ADDR);
clk_disable_unprepare(data->bclk);
dev_err(data->dev, "Failed @ IRQ-%d Request\n", data->irq);
return -ENODEV;
}
return 0;
}
static struct iommu_ops mtk_iommu_ops = {
.domain_alloc = mtk_iommu_domain_alloc,
.domain_free = mtk_iommu_domain_free,
.attach_dev = mtk_iommu_attach_device,
.detach_dev = mtk_iommu_detach_device,
.map = mtk_iommu_map,
.unmap = mtk_iommu_unmap,
.map_sg = default_iommu_map_sg,
.iova_to_phys = mtk_iommu_iova_to_phys,
.add_device = mtk_iommu_add_device,
.remove_device = mtk_iommu_remove_device,
.device_group = mtk_iommu_device_group,
.pgsize_bitmap = ~0UL << MT2701_IOMMU_PAGE_SHIFT,
};
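/*
 * Note: the pgsize_bitmap above advertises every power-of-two size from
 * 4 KiB up; that is safe because mtk_iommu_map() decomposes any such size
 * into individual 4 KiB entries of the flat table.
 */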
static const struct of_device_id mtk_iommu_of_ids[] = {
{ .compatible = "mediatek,mt2701-m4u", },
{}
};
static const struct component_master_ops mtk_iommu_com_ops = {
.bind = mtk_iommu_bind,
.unbind = mtk_iommu_unbind,
};
static int mtk_iommu_probe(struct platform_device *pdev)
{
struct mtk_iommu_data *data;
struct device *dev = &pdev->dev;
struct resource *res;
struct component_match *match = NULL;
struct of_phandle_args larb_spec;
struct of_phandle_iterator it;
void *protect;
int larb_nr, ret, err;
data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL);
if (!data)
return -ENOMEM;
data->dev = dev;
/* Protect memory. HW will access here on a translation fault. */
protect = devm_kzalloc(dev, MTK_PROTECT_PA_ALIGN * 2,
GFP_KERNEL | GFP_DMA);
if (!protect)
return -ENOMEM;
data->protect_base = ALIGN(virt_to_phys(protect), MTK_PROTECT_PA_ALIGN);

	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	data->base = devm_ioremap_resource(dev, res);
	if (IS_ERR(data->base))
		return PTR_ERR(data->base);

	data->irq = platform_get_irq(pdev, 0);
	if (data->irq < 0)
		return data->irq;

	data->bclk = devm_clk_get(dev, "bclk");
	if (IS_ERR(data->bclk))
		return PTR_ERR(data->bclk);

	larb_nr = 0;
	of_for_each_phandle(&it, err, dev->of_node,
			    "mediatek,larbs", NULL, 0) {
		struct platform_device *plarbdev;
		int count = of_phandle_iterator_args(&it, larb_spec.args,
						     MAX_PHANDLE_ARGS);

		if (count)
			continue;

		larb_spec.np = of_node_get(it.node);
		if (!of_device_is_available(larb_spec.np))
			continue;

		plarbdev = of_find_device_by_node(larb_spec.np);
		of_node_put(larb_spec.np);
		if (!plarbdev) {
			plarbdev = of_platform_device_create(
						larb_spec.np, NULL,
						platform_bus_type.dev_root);
			if (!plarbdev)
				return -EPROBE_DEFER;
		}

		data->smi_imu.larb_imu[larb_nr].dev = &plarbdev->dev;
		component_match_add(dev, &match, compare_of, larb_spec.np);
		larb_nr++;
	}

	data->smi_imu.larb_nr = larb_nr;

	platform_set_drvdata(pdev, data);

	ret = mtk_iommu_hw_init(data);
	if (ret)
		return ret;

	if (!iommu_present(&platform_bus_type))
		bus_set_iommu(&platform_bus_type, &mtk_iommu_ops);

	return component_master_add_with_match(dev, &mtk_iommu_com_ops, match);
}

static int mtk_iommu_remove(struct platform_device *pdev)
{
	struct mtk_iommu_data *data = platform_get_drvdata(pdev);

	if (iommu_present(&platform_bus_type))
		bus_set_iommu(&platform_bus_type, NULL);

	clk_disable_unprepare(data->bclk);
	devm_free_irq(&pdev->dev, data->irq, data);
	component_master_del(&pdev->dev, &mtk_iommu_com_ops);
	return 0;
}

static int __maybe_unused mtk_iommu_suspend(struct device *dev)
{
	struct mtk_iommu_data *data = dev_get_drvdata(dev);
	struct mtk_iommu_suspend_reg *reg = &data->reg;
	void __iomem *base = data->base;

	reg->standard_axi_mode = readl_relaxed(base +
					       REG_MMU_STANDARD_AXI_MODE);
	reg->dcm_dis = readl_relaxed(base + REG_MMU_DCM);
	reg->ctrl_reg = readl_relaxed(base + REG_MMU_CTRL_REG);
	reg->int_control0 = readl_relaxed(base + REG_MMU_INT_CONTROL);
	return 0;
}

static int __maybe_unused mtk_iommu_resume(struct device *dev)
{
	struct mtk_iommu_data *data = dev_get_drvdata(dev);
	struct mtk_iommu_suspend_reg *reg = &data->reg;
	void __iomem *base = data->base;

	writel_relaxed(data->m4u_dom->pgt_pa, base + REG_MMU_PT_BASE_ADDR);
	writel_relaxed(reg->standard_axi_mode,
		       base + REG_MMU_STANDARD_AXI_MODE);
	writel_relaxed(reg->dcm_dis, base + REG_MMU_DCM);
	writel_relaxed(reg->ctrl_reg, base + REG_MMU_CTRL_REG);
	writel_relaxed(reg->int_control0, base + REG_MMU_INT_CONTROL);
	writel_relaxed(data->protect_base, base + REG_MMU_IVRP_PADDR);
	return 0;
}

static const struct dev_pm_ops mtk_iommu_pm_ops = {
	SET_SYSTEM_SLEEP_PM_OPS(mtk_iommu_suspend, mtk_iommu_resume)
};

static struct platform_driver mtk_iommu_driver = {
	.probe	= mtk_iommu_probe,
	.remove	= mtk_iommu_remove,
	.driver	= {
		.name = "mtk-iommu",
		.of_match_table = mtk_iommu_of_ids,
		.pm = &mtk_iommu_pm_ops,
	}
};

static int __init m4u_init(void)
{
	return platform_driver_register(&mtk_iommu_driver);
}

static void __exit m4u_exit(void)
{
	return platform_driver_unregister(&mtk_iommu_driver);
}

subsys_initcall(m4u_init);
module_exit(m4u_exit);

MODULE_DESCRIPTION("IOMMU API for MTK architected m4u v1 implementations");
MODULE_AUTHOR("Honghui Zhang <honghui.zhang@mediatek.com>");
MODULE_LICENSE("GPL v2");

drivers/iommu/rockchip-iommu.c

@@ -4,11 +4,10 @@
  * published by the Free Software Foundation.
  */
 
-#include <asm/cacheflush.h>
-#include <asm/pgtable.h>
 #include <linux/compiler.h>
 #include <linux/delay.h>
 #include <linux/device.h>
+#include <linux/dma-iommu.h>
 #include <linux/errno.h>
 #include <linux/interrupt.h>
 #include <linux/io.h>
@@ -77,7 +76,9 @@
 
 struct rk_iommu_domain {
 	struct list_head iommus;
+	struct platform_device *pdev;
 	u32 *dt; /* page directory table */
+	dma_addr_t dt_dma;
 	spinlock_t iommus_lock; /* lock for iommus list */
 	spinlock_t dt_lock; /* lock for modifying page directory table */
@@ -93,14 +94,12 @@ struct rk_iommu {
 	struct iommu_domain *domain; /* domain to which iommu is attached */
 };
 
-static inline void rk_table_flush(u32 *va, unsigned int count)
+static inline void rk_table_flush(struct rk_iommu_domain *dom, dma_addr_t dma,
+				  unsigned int count)
 {
-	phys_addr_t pa_start = virt_to_phys(va);
-	phys_addr_t pa_end = virt_to_phys(va + count);
-	size_t size = pa_end - pa_start;
-
-	__cpuc_flush_dcache_area(va, size);
-	outer_flush_range(pa_start, pa_end);
+	size_t size = count * sizeof(u32); /* count of u32 entry */
+
+	dma_sync_single_for_device(&dom->pdev->dev, dma, size, DMA_TO_DEVICE);
 }
 
 static struct rk_iommu_domain *to_rk_domain(struct iommu_domain *dom)
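
(Here dma_sync_single_for_device() replaces the ARM-specific
__cpuc_flush_dcache_area()/outer_flush_range() pair: the streaming DMA
API performs whatever inner/outer cache maintenance the platform needs
before the IOMMU hardware walks the tables.)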
@@ -183,10 +182,9 @@ static inline bool rk_dte_is_pt_valid(u32 dte)
 	return dte & RK_DTE_PT_VALID;
 }
 
-static u32 rk_mk_dte(u32 *pt)
+static inline u32 rk_mk_dte(dma_addr_t pt_dma)
 {
-	phys_addr_t pt_phys = virt_to_phys(pt);
-	return (pt_phys & RK_DTE_PT_ADDRESS_MASK) | RK_DTE_PT_VALID;
+	return (pt_dma & RK_DTE_PT_ADDRESS_MASK) | RK_DTE_PT_VALID;
 }
 
 /*
@@ -603,13 +601,16 @@ static void rk_iommu_zap_iova_first_last(struct rk_iommu_domain *rk_domain,
 static u32 *rk_dte_get_page_table(struct rk_iommu_domain *rk_domain,
 				  dma_addr_t iova)
 {
+	struct device *dev = &rk_domain->pdev->dev;
 	u32 *page_table, *dte_addr;
-	u32 dte;
+	u32 dte_index, dte;
 	phys_addr_t pt_phys;
+	dma_addr_t pt_dma;
 
 	assert_spin_locked(&rk_domain->dt_lock);
 
-	dte_addr = &rk_domain->dt[rk_iova_dte_index(iova)];
+	dte_index = rk_iova_dte_index(iova);
+	dte_addr = &rk_domain->dt[dte_index];
 	dte = *dte_addr;
 	if (rk_dte_is_pt_valid(dte))
 		goto done;
@@ -618,19 +619,27 @@ static u32 *rk_dte_get_page_table(struct rk_iommu_domain *rk_domain,
 	if (!page_table)
 		return ERR_PTR(-ENOMEM);
 
-	dte = rk_mk_dte(page_table);
+	pt_dma = dma_map_single(dev, page_table, SPAGE_SIZE, DMA_TO_DEVICE);
+	if (dma_mapping_error(dev, pt_dma)) {
+		dev_err(dev, "DMA mapping error while allocating page table\n");
+		free_page((unsigned long)page_table);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	dte = rk_mk_dte(pt_dma);
 	*dte_addr = dte;
 
-	rk_table_flush(page_table, NUM_PT_ENTRIES);
-	rk_table_flush(dte_addr, 1);
+	rk_table_flush(rk_domain, pt_dma, NUM_PT_ENTRIES);
+	rk_table_flush(rk_domain,
+		       rk_domain->dt_dma + dte_index * sizeof(u32), 1);
 done:
 	pt_phys = rk_dte_pt_address(dte);
 	return (u32 *)phys_to_virt(pt_phys);
 }
 
 static size_t rk_iommu_unmap_iova(struct rk_iommu_domain *rk_domain,
-				  u32 *pte_addr, dma_addr_t iova, size_t size)
+				  u32 *pte_addr, dma_addr_t pte_dma,
+				  size_t size)
 {
 	unsigned int pte_count;
 	unsigned int pte_total = size / SPAGE_SIZE;
@@ -645,14 +654,14 @@ static size_t rk_iommu_unmap_iova(struct rk_iommu_domain *rk_domain,
 		pte_addr[pte_count] = rk_mk_pte_invalid(pte);
 	}
 
-	rk_table_flush(pte_addr, pte_count);
+	rk_table_flush(rk_domain, pte_dma, pte_count);
 
 	return pte_count * SPAGE_SIZE;
 }
 
 static int rk_iommu_map_iova(struct rk_iommu_domain *rk_domain, u32 *pte_addr,
-			     dma_addr_t iova, phys_addr_t paddr, size_t size,
-			     int prot)
+			     dma_addr_t pte_dma, dma_addr_t iova,
+			     phys_addr_t paddr, size_t size, int prot)
 {
 	unsigned int pte_count;
 	unsigned int pte_total = size / SPAGE_SIZE;
@@ -671,7 +680,7 @@ static int rk_iommu_map_iova(struct rk_iommu_domain *rk_domain, u32 *pte_addr,
 		paddr += SPAGE_SIZE;
 	}
 
-	rk_table_flush(pte_addr, pte_count);
+	rk_table_flush(rk_domain, pte_dma, pte_total);
 
 	/*
 	 * Zap the first and last iova to evict from iotlb any previously
@@ -684,7 +693,8 @@ static int rk_iommu_map_iova(struct rk_iommu_domain *rk_domain, u32 *pte_addr,
 	return 0;
 unwind:
 	/* Unmap the range of iovas that we just mapped */
-	rk_iommu_unmap_iova(rk_domain, pte_addr, iova, pte_count * SPAGE_SIZE);
+	rk_iommu_unmap_iova(rk_domain, pte_addr, pte_dma,
+			    pte_count * SPAGE_SIZE);
 
 	iova += pte_count * SPAGE_SIZE;
 	page_phys = rk_pte_page_address(pte_addr[pte_count]);
@@ -699,8 +709,9 @@ static int rk_iommu_map(struct iommu_domain *domain, unsigned long _iova,
 {
 	struct rk_iommu_domain *rk_domain = to_rk_domain(domain);
 	unsigned long flags;
-	dma_addr_t iova = (dma_addr_t)_iova;
+	dma_addr_t pte_dma, iova = (dma_addr_t)_iova;
 	u32 *page_table, *pte_addr;
+	u32 dte_index, pte_index;
 	int ret;
 
 	spin_lock_irqsave(&rk_domain->dt_lock, flags);
@@ -718,8 +729,13 @@ static int rk_iommu_map(struct iommu_domain *domain, unsigned long _iova,
 		return PTR_ERR(page_table);
 	}
 
-	pte_addr = &page_table[rk_iova_pte_index(iova)];
-	ret = rk_iommu_map_iova(rk_domain, pte_addr, iova, paddr, size, prot);
+	dte_index = rk_domain->dt[rk_iova_dte_index(iova)];
+	pte_index = rk_iova_pte_index(iova);
+	pte_addr = &page_table[pte_index];
+	pte_dma = rk_dte_pt_address(dte_index) + pte_index * sizeof(u32);
+	ret = rk_iommu_map_iova(rk_domain, pte_addr, pte_dma, iova,
+				paddr, size, prot);
+
 	spin_unlock_irqrestore(&rk_domain->dt_lock, flags);
 
 	return ret;
@@ -730,7 +746,7 @@ static size_t rk_iommu_unmap(struct iommu_domain *domain, unsigned long _iova,
 {
 	struct rk_iommu_domain *rk_domain = to_rk_domain(domain);
 	unsigned long flags;
-	dma_addr_t iova = (dma_addr_t)_iova;
+	dma_addr_t pte_dma, iova = (dma_addr_t)_iova;
 	phys_addr_t pt_phys;
 	u32 dte;
 	u32 *pte_addr;
@@ -754,7 +770,8 @@ static size_t rk_iommu_unmap(struct iommu_domain *domain, unsigned long _iova,
 	pt_phys = rk_dte_pt_address(dte);
 	pte_addr = (u32 *)phys_to_virt(pt_phys) + rk_iova_pte_index(iova);
-	unmap_size = rk_iommu_unmap_iova(rk_domain, pte_addr, iova, size);
+	pte_dma = pt_phys + rk_iova_pte_index(iova) * sizeof(u32);
+	unmap_size = rk_iommu_unmap_iova(rk_domain, pte_addr, pte_dma, size);
 
 	spin_unlock_irqrestore(&rk_domain->dt_lock, flags);
@@ -787,7 +804,6 @@ static int rk_iommu_attach_device(struct iommu_domain *domain,
 	struct rk_iommu_domain *rk_domain = to_rk_domain(domain);
 	unsigned long flags;
 	int ret, i;
-	phys_addr_t dte_addr;
 
 	/*
 	 * Allow 'virtual devices' (e.g., drm) to attach to domain.
@@ -807,14 +823,14 @@
 
 	iommu->domain = domain;
 
-	ret = devm_request_irq(dev, iommu->irq, rk_iommu_irq,
+	ret = devm_request_irq(iommu->dev, iommu->irq, rk_iommu_irq,
			       IRQF_SHARED, dev_name(dev), iommu);
 	if (ret)
 		return ret;
 
-	dte_addr = virt_to_phys(rk_domain->dt);
 	for (i = 0; i < iommu->num_mmu; i++) {
-		rk_iommu_write(iommu->bases[i], RK_MMU_DTE_ADDR, dte_addr);
+		rk_iommu_write(iommu->bases[i], RK_MMU_DTE_ADDR,
+			       rk_domain->dt_dma);
 		rk_iommu_base_command(iommu->bases[i], RK_MMU_CMD_ZAP_CACHE);
 		rk_iommu_write(iommu->bases[i], RK_MMU_INT_MASK, RK_MMU_IRQ_MASK);
 	}
@@ -860,7 +876,7 @@ static void rk_iommu_detach_device(struct iommu_domain *domain,
 	}
 	rk_iommu_disable_stall(iommu);
 
-	devm_free_irq(dev, iommu->irq, iommu);
+	devm_free_irq(iommu->dev, iommu->irq, iommu);
 
 	iommu->domain = NULL;
@@ -870,13 +886,29 @@ static void rk_iommu_detach_device(struct iommu_domain *domain,
 static struct iommu_domain *rk_iommu_domain_alloc(unsigned type)
 {
 	struct rk_iommu_domain *rk_domain;
+	struct platform_device *pdev;
+	struct device *iommu_dev;
 
-	if (type != IOMMU_DOMAIN_UNMANAGED)
+	if (type != IOMMU_DOMAIN_UNMANAGED && type != IOMMU_DOMAIN_DMA)
 		return NULL;
 
-	rk_domain = kzalloc(sizeof(*rk_domain), GFP_KERNEL);
+	/*
+	 * Register a pdev per domain, so the DMA API has a struct device
+	 * to work with even when a virtual master has no IOMMU slave.
+	 */
+	pdev = platform_device_register_simple("rk_iommu_domain",
+					       PLATFORM_DEVID_AUTO, NULL, 0);
+	if (IS_ERR(pdev))
+		return NULL;
+
+	rk_domain = devm_kzalloc(&pdev->dev, sizeof(*rk_domain), GFP_KERNEL);
 	if (!rk_domain)
-		return NULL;
+		goto err_unreg_pdev;
+
+	rk_domain->pdev = pdev;
+
+	if (type == IOMMU_DOMAIN_DMA &&
+	    iommu_get_dma_cookie(&rk_domain->domain))
+		goto err_unreg_pdev;
 
 	/*
 	 * rk32xx iommus use a 2 level pagetable.
@@ -885,18 +917,36 @@ static struct iommu_domain *rk_iommu_domain_alloc(unsigned type)
 	 */
 	rk_domain->dt = (u32 *)get_zeroed_page(GFP_KERNEL | GFP_DMA32);
 	if (!rk_domain->dt)
-		goto err_dt;
+		goto err_put_cookie;
 
-	rk_table_flush(rk_domain->dt, NUM_DT_ENTRIES);
+	iommu_dev = &pdev->dev;
+	rk_domain->dt_dma = dma_map_single(iommu_dev, rk_domain->dt,
+					   SPAGE_SIZE, DMA_TO_DEVICE);
+	if (dma_mapping_error(iommu_dev, rk_domain->dt_dma)) {
+		dev_err(iommu_dev, "DMA map error for DT\n");
+		goto err_free_dt;
+	}
+
+	rk_table_flush(rk_domain, rk_domain->dt_dma, NUM_DT_ENTRIES);
 
 	spin_lock_init(&rk_domain->iommus_lock);
 	spin_lock_init(&rk_domain->dt_lock);
 	INIT_LIST_HEAD(&rk_domain->iommus);
 
+	rk_domain->domain.geometry.aperture_start = 0;
+	rk_domain->domain.geometry.aperture_end = DMA_BIT_MASK(32);
+	rk_domain->domain.geometry.force_aperture = true;
+
 	return &rk_domain->domain;
 
-err_dt:
-	kfree(rk_domain);
+err_free_dt:
+	free_page((unsigned long)rk_domain->dt);
+err_put_cookie:
+	if (type == IOMMU_DOMAIN_DMA)
+		iommu_put_dma_cookie(&rk_domain->domain);
+err_unreg_pdev:
+	platform_device_unregister(pdev);
+
 	return NULL;
 }
@@ -912,12 +962,20 @@ static void rk_iommu_domain_free(struct iommu_domain *domain)
 		if (rk_dte_is_pt_valid(dte)) {
 			phys_addr_t pt_phys = rk_dte_pt_address(dte);
 			u32 *page_table = phys_to_virt(pt_phys);
+
+			dma_unmap_single(&rk_domain->pdev->dev, pt_phys,
+					 SPAGE_SIZE, DMA_TO_DEVICE);
 			free_page((unsigned long)page_table);
 		}
 	}
 
+	dma_unmap_single(&rk_domain->pdev->dev, rk_domain->dt_dma,
+			 SPAGE_SIZE, DMA_TO_DEVICE);
 	free_page((unsigned long)rk_domain->dt);
-	kfree(rk_domain);
+
+	if (domain->type == IOMMU_DOMAIN_DMA)
+		iommu_put_dma_cookie(&rk_domain->domain);
+
+	platform_device_unregister(rk_domain->pdev);
 }
 
 static bool rk_iommu_is_dev_iommu_master(struct device *dev)
@@ -1022,17 +1080,43 @@ static const struct iommu_ops rk_iommu_ops = {
 	.detach_dev = rk_iommu_detach_device,
 	.map = rk_iommu_map,
 	.unmap = rk_iommu_unmap,
+	.map_sg = default_iommu_map_sg,
 	.add_device = rk_iommu_add_device,
 	.remove_device = rk_iommu_remove_device,
 	.iova_to_phys = rk_iommu_iova_to_phys,
 	.pgsize_bitmap = RK_IOMMU_PGSIZE_BITMAP,
 };
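
(The rk_iommu_domain_probe() stub below exists only to give each
per-domain pdev real dma_ops: as its own comment notes, without
arch_setup_dma_ops() the device would be left with dummy_dma_ops, and
the dma_map_single()/dma_sync_single_for_device() calls on the page
tables above would fail.)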
 
+static int rk_iommu_domain_probe(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+
+	dev->dma_parms = devm_kzalloc(dev, sizeof(*dev->dma_parms), GFP_KERNEL);
+	if (!dev->dma_parms)
+		return -ENOMEM;
+
+	/* Set dma_ops for dev, otherwise it would be dummy_dma_ops */
+	arch_setup_dma_ops(dev, 0, DMA_BIT_MASK(32), NULL, false);
+
+	dma_set_max_seg_size(dev, DMA_BIT_MASK(32));
+	dma_coerce_mask_and_coherent(dev, DMA_BIT_MASK(32));
+
+	return 0;
+}
+
+static struct platform_driver rk_iommu_domain_driver = {
+	.probe = rk_iommu_domain_probe,
+	.driver = {
+		   .name = "rk_iommu_domain",
+	},
+};
 
 static int rk_iommu_probe(struct platform_device *pdev)
 {
 	struct device *dev = &pdev->dev;
 	struct rk_iommu *iommu;
 	struct resource *res;
+	int num_res = pdev->num_resources;
 	int i;
 
 	iommu = devm_kzalloc(dev, sizeof(*iommu), GFP_KERNEL);
@@ -1042,12 +1126,13 @@ static int rk_iommu_probe(struct platform_device *pdev)
 	platform_set_drvdata(pdev, iommu);
 	iommu->dev = dev;
 	iommu->num_mmu = 0;
-	iommu->bases = devm_kzalloc(dev, sizeof(*iommu->bases) * iommu->num_mmu,
+	iommu->bases = devm_kzalloc(dev, sizeof(*iommu->bases) * num_res,
 				    GFP_KERNEL);
 	if (!iommu->bases)
 		return -ENOMEM;
 
-	for (i = 0; i < pdev->num_resources; i++) {
+	for (i = 0; i < num_res; i++) {
 		res = platform_get_resource(pdev, IORESOURCE_MEM, i);
 		if (!res)
 			continue;
@@ -1103,11 +1188,19 @@ static int __init rk_iommu_init(void)
 	if (ret)
 		return ret;
 
-	return platform_driver_register(&rk_iommu_driver);
+	ret = platform_driver_register(&rk_iommu_domain_driver);
+	if (ret)
+		return ret;
+
+	ret = platform_driver_register(&rk_iommu_driver);
+	if (ret)
+		platform_driver_unregister(&rk_iommu_domain_driver);
+	return ret;
 }
+
 static void __exit rk_iommu_exit(void)
 {
 	platform_driver_unregister(&rk_iommu_driver);
+	platform_driver_unregister(&rk_iommu_domain_driver);
 }
 
 subsys_initcall(rk_iommu_init);

drivers/memory/mtk-smi.c

@@ -21,19 +21,50 @@
 #include <linux/platform_device.h>
 #include <linux/pm_runtime.h>
 #include <soc/mediatek/smi.h>
+#include <dt-bindings/memory/mt2701-larb-port.h>
 
 #define SMI_LARB_MMU_EN		0xf00
+#define REG_SMI_SECUR_CON_BASE		0x5c0
+
+/* every register controls 8 ports, with a register offset of 0x4 */
+#define REG_SMI_SECUR_CON_OFFSET(id)	(((id) >> 3) << 2)
+#define REG_SMI_SECUR_CON_ADDR(id)	\
+	(REG_SMI_SECUR_CON_BASE + REG_SMI_SECUR_CON_OFFSET(id))
+
+/*
+ * every port has 4 bits of control: bit[port + 3] selects virtual or
+ * physical, bit[port + 2 : port + 1] selects the domain, and bit[port]
+ * selects secure or non-secure.
+ */
+#define SMI_SECUR_CON_VAL_MSK(id)	(~(0xf << (((id) & 0x7) << 2)))
+#define SMI_SECUR_CON_VAL_VIRT(id)	BIT((((id) & 0x7) << 2) + 3)
+/* mt2701 domain should be set to 3 */
+#define SMI_SECUR_CON_VAL_DOMAIN(id)	(0x3 << ((((id) & 0x7) << 2) + 1))
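
(For example, m4u port id 10 lands in register 0x5c0 + ((10 >> 3) << 2)
= 0x5c4, with its nibble starting at bit (10 & 0x7) << 2 = 8: mask
~(0xf << 8), virtual-enable bit BIT(11), and domain value 0x3 << 9.)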
+struct mtk_smi_larb_gen {
+	int port_in_larb[MTK_LARB_NR_MAX + 1];
+	void (*config_port)(struct device *);
+};
 
 struct mtk_smi {
-	struct device	*dev;
-	struct clk	*clk_apb, *clk_smi;
+	struct device			*dev;
+	struct clk			*clk_apb, *clk_smi;
+	struct clk			*clk_async; /* only needed by mt2701 */
+	void __iomem			*smi_ao_base;
 };
 
 struct mtk_smi_larb { /* larb: local arbiter */
-	struct mtk_smi	smi;
-	void __iomem	*base;
-	struct device	*smi_common_dev;
-	u32		*mmu;
+	struct mtk_smi			smi;
+	void __iomem			*base;
+	struct device			*smi_common_dev;
+	const struct mtk_smi_larb_gen	*larb_gen;
+	int				larbid;
+	u32				*mmu;
 };
 
+enum mtk_smi_gen {
+	MTK_SMI_GEN1,
+	MTK_SMI_GEN2
+};
 
 static int mtk_smi_enable(const struct mtk_smi *smi)
@@ -71,6 +102,7 @@ static void mtk_smi_disable(const struct mtk_smi *smi)
 int mtk_smi_larb_get(struct device *larbdev)
 {
 	struct mtk_smi_larb *larb = dev_get_drvdata(larbdev);
+	const struct mtk_smi_larb_gen *larb_gen = larb->larb_gen;
 	struct mtk_smi *common = dev_get_drvdata(larb->smi_common_dev);
 	int ret;
 
@@ -87,7 +119,7 @@ int mtk_smi_larb_get(struct device *larbdev)
 	}
 
 	/* Configure the iommu info for this larb */
-	writel(*larb->mmu, larb->base + SMI_LARB_MMU_EN);
+	larb_gen->config_port(larbdev);
 
 	return 0;
 }
 
@@ -126,6 +158,45 @@ mtk_smi_larb_bind(struct device *dev, struct device *master, void *data)
 	return -ENODEV;
 }
 
+static void mtk_smi_larb_config_port(struct device *dev)
+{
+	struct mtk_smi_larb *larb = dev_get_drvdata(dev);
+
+	writel(*larb->mmu, larb->base + SMI_LARB_MMU_EN);
+}
+
+static void mtk_smi_larb_config_port_gen1(struct device *dev)
+{
+	struct mtk_smi_larb *larb = dev_get_drvdata(dev);
+	const struct mtk_smi_larb_gen *larb_gen = larb->larb_gen;
+	struct mtk_smi *common = dev_get_drvdata(larb->smi_common_dev);
+	int i, m4u_port_id, larb_port_num;
+	u32 sec_con_val, reg_val;
+
+	m4u_port_id = larb_gen->port_in_larb[larb->larbid];
+	larb_port_num = larb_gen->port_in_larb[larb->larbid + 1]
+			- larb_gen->port_in_larb[larb->larbid];
+
+	for (i = 0; i < larb_port_num; i++, m4u_port_id++) {
+		if (*larb->mmu & BIT(i)) {
+			/* bit[port + 3] selects virtual or physical */
+			sec_con_val = SMI_SECUR_CON_VAL_VIRT(m4u_port_id);
+		} else {
+			/* no need to enable the m4u for this port */
+			continue;
+		}
+		reg_val = readl(common->smi_ao_base
+				+ REG_SMI_SECUR_CON_ADDR(m4u_port_id));
+		reg_val &= SMI_SECUR_CON_VAL_MSK(m4u_port_id);
+		reg_val |= sec_con_val;
+		reg_val |= SMI_SECUR_CON_VAL_DOMAIN(m4u_port_id);
+		writel(reg_val,
+		       common->smi_ao_base
+		       + REG_SMI_SECUR_CON_ADDR(m4u_port_id));
+	}
+}
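
(Unlike mtk_smi_larb_config_port(), which flips bits in the larb's own
SMI_LARB_MMU_EN register, the gen1 path programs the 4-bit per-port
fields in the always-on SMI common block.)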
 
 static void
 mtk_smi_larb_unbind(struct device *dev, struct device *master, void *data)
 {
@@ -137,6 +208,31 @@ static const struct component_ops mtk_smi_larb_component_ops = {
 	.unbind = mtk_smi_larb_unbind,
 };
 
+static const struct mtk_smi_larb_gen mtk_smi_larb_mt8173 = {
+	/* mt8173 does not need the port offsets in the larb */
+	.config_port = mtk_smi_larb_config_port,
+};
+
+static const struct mtk_smi_larb_gen mtk_smi_larb_mt2701 = {
+	.port_in_larb = {
+		LARB0_PORT_OFFSET, LARB1_PORT_OFFSET,
+		LARB2_PORT_OFFSET, LARB3_PORT_OFFSET
+	},
+	.config_port = mtk_smi_larb_config_port_gen1,
+};
+
+static const struct of_device_id mtk_smi_larb_of_ids[] = {
+	{
+		.compatible = "mediatek,mt8173-smi-larb",
+		.data = &mtk_smi_larb_mt8173
+	},
+	{
+		.compatible = "mediatek,mt2701-smi-larb",
+		.data = &mtk_smi_larb_mt2701
+	},
+	{}
+};
 
 static int mtk_smi_larb_probe(struct platform_device *pdev)
 {
 	struct mtk_smi_larb *larb;
@@ -144,14 +240,20 @@ static int mtk_smi_larb_probe(struct platform_device *pdev)
 	struct device *dev = &pdev->dev;
 	struct device_node *smi_node;
 	struct platform_device *smi_pdev;
+	const struct of_device_id *of_id;
 
 	if (!dev->pm_domain)
 		return -EPROBE_DEFER;
 
+	of_id = of_match_node(mtk_smi_larb_of_ids, pdev->dev.of_node);
+	if (!of_id)
+		return -EINVAL;
+
 	larb = devm_kzalloc(dev, sizeof(*larb), GFP_KERNEL);
 	if (!larb)
 		return -ENOMEM;
+	larb->larb_gen = of_id->data;
 
 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
 	larb->base = devm_ioremap_resource(dev, res);
 	if (IS_ERR(larb->base))
@@ -191,24 +293,34 @@ static int mtk_smi_larb_remove(struct platform_device *pdev)
 	return 0;
 }
 
-static const struct of_device_id mtk_smi_larb_of_ids[] = {
-	{ .compatible = "mediatek,mt8173-smi-larb",},
-	{}
-};
-
 static struct platform_driver mtk_smi_larb_driver = {
 	.probe	= mtk_smi_larb_probe,
-	.remove = mtk_smi_larb_remove,
+	.remove	= mtk_smi_larb_remove,
 	.driver	= {
 		.name = "mtk-smi-larb",
 		.of_match_table = mtk_smi_larb_of_ids,
 	}
 };
 
+static const struct of_device_id mtk_smi_common_of_ids[] = {
+	{
+		.compatible = "mediatek,mt8173-smi-common",
+		.data = (void *)MTK_SMI_GEN2
+	},
+	{
+		.compatible = "mediatek,mt2701-smi-common",
+		.data = (void *)MTK_SMI_GEN1
+	},
+	{}
+};
+
 static int mtk_smi_common_probe(struct platform_device *pdev)
 {
 	struct device *dev = &pdev->dev;
 	struct mtk_smi *common;
 	struct resource *res;
+	const struct of_device_id *of_id;
+	enum mtk_smi_gen smi_gen;
 
 	if (!dev->pm_domain)
 		return -EPROBE_DEFER;
@@ -226,6 +338,29 @@ static int mtk_smi_common_probe(struct platform_device *pdev)
 	if (IS_ERR(common->clk_smi))
 		return PTR_ERR(common->clk_smi);
 
+	of_id = of_match_node(mtk_smi_common_of_ids, pdev->dev.of_node);
+	if (!of_id)
+		return -EINVAL;
+
+	/*
+	 * For MTK SMI gen1 we need the always-on (AO) base to configure
+	 * the m4u ports, and the async clock must be enabled to bridge
+	 * the SMI clock into the EMI clock domain. MTK SMI gen2 has no
+	 * SMI AO base.
+	 */
+	smi_gen = (enum mtk_smi_gen)of_id->data;
+	if (smi_gen == MTK_SMI_GEN1) {
+		res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+		common->smi_ao_base = devm_ioremap_resource(dev, res);
+		if (IS_ERR(common->smi_ao_base))
+			return PTR_ERR(common->smi_ao_base);
+
+		common->clk_async = devm_clk_get(dev, "async");
+		if (IS_ERR(common->clk_async))
+			return PTR_ERR(common->clk_async);
+
+		clk_prepare_enable(common->clk_async);
+	}
 
 	pm_runtime_enable(dev);
 	platform_set_drvdata(pdev, common);
 	return 0;
@@ -237,11 +372,6 @@ static int mtk_smi_common_remove(struct platform_device *pdev)
 	return 0;
 }
 
-static const struct of_device_id mtk_smi_common_of_ids[] = {
-	{ .compatible = "mediatek,mt8173-smi-common", },
-	{}
-};
-
 static struct platform_driver mtk_smi_common_driver = {
 	.probe	= mtk_smi_common_probe,
 	.remove	= mtk_smi_common_remove,
@@ -272,4 +402,5 @@ err_unreg_smi:
 	platform_driver_unregister(&mtk_smi_common_driver);
 	return ret;
 }
+
 subsys_initcall(mtk_smi_init);

include/dt-bindings/memory/mt2701-larb-port.h

@@ -0,0 +1,85 @@
+/*
+ * Copyright (c) 2015 MediaTek Inc.
+ * Author: Honghui Zhang <honghui.zhang@mediatek.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef _MT2701_LARB_PORT_H_
+#define _MT2701_LARB_PORT_H_
+
+/*
+ * Mediatek m4u generation 1, such as mt2701, has flat m4u port numbers:
+ * the first port id of larb[N] is the last port id of larb[N - 1] plus
+ * one, and larb[0]'s first port number is 0. The MT2701_M4U_ID_LARBx
+ * definitions follow the HW register spec.
+ * m4u generation 2, such as mt8173, numbers its ports differently: it
+ * uses a fixed offset per larb, so the first port id of larb[N] is
+ * (N * 32).
+ */
+#define LARB0_PORT_OFFSET		0
+#define LARB1_PORT_OFFSET		11
+#define LARB2_PORT_OFFSET		21
+#define LARB3_PORT_OFFSET		43
+
+#define MT2701_M4U_ID_LARB0(port)	((port) + LARB0_PORT_OFFSET)
+#define MT2701_M4U_ID_LARB1(port)	((port) + LARB1_PORT_OFFSET)
+#define MT2701_M4U_ID_LARB2(port)	((port) + LARB2_PORT_OFFSET)
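
(For example, MT2701_M4U_ID_LARB1(3) resolves to the flat port id
11 + 3 = 14, whereas a generation 2 m4u such as mt8173 would place the
same port at 1 * 32 + 3 = 35.)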
+
+/* Port define for larb0 */
+#define MT2701_M4U_PORT_DISP_OVL_0		MT2701_M4U_ID_LARB0(0)
+#define MT2701_M4U_PORT_DISP_RDMA1		MT2701_M4U_ID_LARB0(1)
+#define MT2701_M4U_PORT_DISP_RDMA		MT2701_M4U_ID_LARB0(2)
+#define MT2701_M4U_PORT_DISP_WDMA		MT2701_M4U_ID_LARB0(3)
+#define MT2701_M4U_PORT_MM_CMDQ			MT2701_M4U_ID_LARB0(4)
+#define MT2701_M4U_PORT_MDP_RDMA		MT2701_M4U_ID_LARB0(5)
+#define MT2701_M4U_PORT_MDP_WDMA		MT2701_M4U_ID_LARB0(6)
+#define MT2701_M4U_PORT_MDP_ROTO		MT2701_M4U_ID_LARB0(7)
+#define MT2701_M4U_PORT_MDP_ROTCO		MT2701_M4U_ID_LARB0(8)
+#define MT2701_M4U_PORT_MDP_ROTVO		MT2701_M4U_ID_LARB0(9)
+#define MT2701_M4U_PORT_MDP_RDMA1		MT2701_M4U_ID_LARB0(10)
+
+/* Port define for larb1 */
+#define MT2701_M4U_PORT_VDEC_MC_EXT		MT2701_M4U_ID_LARB1(0)
+#define MT2701_M4U_PORT_VDEC_PP_EXT		MT2701_M4U_ID_LARB1(1)
+#define MT2701_M4U_PORT_VDEC_PPWRAP_EXT		MT2701_M4U_ID_LARB1(2)
+#define MT2701_M4U_PORT_VDEC_AVC_MV_EXT		MT2701_M4U_ID_LARB1(3)
+#define MT2701_M4U_PORT_VDEC_PRED_RD_EXT	MT2701_M4U_ID_LARB1(4)
+#define MT2701_M4U_PORT_VDEC_PRED_WR_EXT	MT2701_M4U_ID_LARB1(5)
+#define MT2701_M4U_PORT_VDEC_VLD_EXT		MT2701_M4U_ID_LARB1(6)
+#define MT2701_M4U_PORT_VDEC_VLD2_EXT		MT2701_M4U_ID_LARB1(7)
+#define MT2701_M4U_PORT_VDEC_TILE_EXT		MT2701_M4U_ID_LARB1(8)
+#define MT2701_M4U_PORT_VDEC_IMG_RESZ_EXT	MT2701_M4U_ID_LARB1(9)
+
+/* Port define for larb2 */
+#define MT2701_M4U_PORT_VENC_RCPU		MT2701_M4U_ID_LARB2(0)
+#define MT2701_M4U_PORT_VENC_REC_FRM		MT2701_M4U_ID_LARB2(1)
+#define MT2701_M4U_PORT_VENC_BSDMA		MT2701_M4U_ID_LARB2(2)
+#define MT2701_M4U_PORT_JPGENC_RDMA		MT2701_M4U_ID_LARB2(3)
+#define MT2701_M4U_PORT_VENC_LT_RCPU		MT2701_M4U_ID_LARB2(4)
+#define MT2701_M4U_PORT_VENC_LT_REC_FRM		MT2701_M4U_ID_LARB2(5)
+#define MT2701_M4U_PORT_VENC_LT_BSDMA		MT2701_M4U_ID_LARB2(6)
+#define MT2701_M4U_PORT_JPGDEC_BSDMA		MT2701_M4U_ID_LARB2(7)
+#define MT2701_M4U_PORT_VENC_SV_COMV		MT2701_M4U_ID_LARB2(8)
+#define MT2701_M4U_PORT_VENC_RD_COMV		MT2701_M4U_ID_LARB2(9)
+#define MT2701_M4U_PORT_JPGENC_BSDMA		MT2701_M4U_ID_LARB2(10)
+#define MT2701_M4U_PORT_VENC_CUR_LUMA		MT2701_M4U_ID_LARB2(11)
+#define MT2701_M4U_PORT_VENC_CUR_CHROMA		MT2701_M4U_ID_LARB2(12)
+#define MT2701_M4U_PORT_VENC_REF_LUMA		MT2701_M4U_ID_LARB2(13)
+#define MT2701_M4U_PORT_VENC_REF_CHROMA		MT2701_M4U_ID_LARB2(14)
+#define MT2701_M4U_PORT_IMG_RESZ		MT2701_M4U_ID_LARB2(15)
+#define MT2701_M4U_PORT_VENC_LT_SV_COMV		MT2701_M4U_ID_LARB2(16)
+#define MT2701_M4U_PORT_VENC_LT_RD_COMV		MT2701_M4U_ID_LARB2(17)
+#define MT2701_M4U_PORT_VENC_LT_CUR_LUMA	MT2701_M4U_ID_LARB2(18)
+#define MT2701_M4U_PORT_VENC_LT_CUR_CHROMA	MT2701_M4U_ID_LARB2(19)
+#define MT2701_M4U_PORT_VENC_LT_REF_LUMA	MT2701_M4U_ID_LARB2(20)
+#define MT2701_M4U_PORT_VENC_LT_REF_CHROMA	MT2701_M4U_ID_LARB2(21)
+#define MT2701_M4U_PORT_JPGDEC_WDMA		MT2701_M4U_ID_LARB2(22)
+
+#endif

include/linux/iommu.h

@@ -152,6 +152,7 @@ struct iommu_dm_region {
  * @domain_set_attr: Change domain attributes
  * @get_dm_regions: Request list of direct mapping requirements for a device
  * @put_dm_regions: Free list of direct mapping requirements for a device
+ * @apply_dm_region: Temporary helper call-back for iova reserved ranges
  * @domain_window_enable: Configure and enable a particular window for a domain
  * @domain_window_disable: Disable a particular window for a domain
  * @domain_set_windows: Set the number of windows for a domain
@@ -186,6 +187,8 @@ struct iommu_ops {
 	/* Request/Free a list of direct mapping requirements for a device */
 	void (*get_dm_regions)(struct device *dev, struct list_head *list);
 	void (*put_dm_regions)(struct device *dev, struct list_head *list);
+	void (*apply_dm_region)(struct device *dev, struct iommu_domain *domain,
+				struct iommu_dm_region *region);
 
 	/* Window handling functions */
 	int (*domain_window_enable)(struct iommu_domain *domain, u32 wnd_nr,