OpenCloudOS-Kernel/drivers/iommu/Kconfig

# SPDX-License-Identifier: GPL-2.0-only
# The IOVA library may also be used by non-IOMMU_API users
config IOMMU_IOVA
tristate
# The IOASID library may also be used by non-IOMMU_API users
config IOASID
tristate
# IOMMU_API always gets selected by whoever wants it.
config IOMMU_API
bool
menuconfig IOMMU_SUPPORT
bool "IOMMU Hardware Support"
depends on MMU
default y
---help---
Say Y here if you want to compile device drivers for IO Memory
Management Units into the kernel. These devices usually allow
remapping of DMA requests and/or interrupts from other devices on the
system.
if IOMMU_SUPPORT
menu "Generic IOMMU Pagetable Support"
# Selected by the actual pagetable implementations
config IOMMU_IO_PGTABLE
bool
config IOMMU_IO_PGTABLE_LPAE
bool "ARMv7/v8 Long Descriptor Format"
select IOMMU_IO_PGTABLE
depends on ARM || ARM64 || (COMPILE_TEST && !GENERIC_ATOMIC64)
help
Enable support for the ARM long descriptor pagetable format.
This allocator supports 4K/2M/1G, 16K/32M and 64K/512M page
sizes at both stage-1 and stage-2, as well as address spaces
up to 48-bits in size.
config IOMMU_IO_PGTABLE_LPAE_SELFTEST
bool "LPAE selftests"
depends on IOMMU_IO_PGTABLE_LPAE
help
Enable self-tests for the LPAE page table allocator. This performs
a series of page-table consistency checks during boot.
If unsure, say N here.
config IOMMU_IO_PGTABLE_ARMV7S
bool "ARMv7/v8 Short Descriptor Format"
select IOMMU_IO_PGTABLE
depends on ARM || ARM64 || COMPILE_TEST
help
Enable support for the ARM Short-descriptor pagetable format.
This supports 32-bit virtual and physical addresses mapped using
2-level tables with 4KB pages/1MB sections, and contiguous entries
for 64KB pages/16MB supersections if indicated by the IOMMU driver.
config IOMMU_IO_PGTABLE_ARMV7S_SELFTEST
bool "ARMv7s selftests"
depends on IOMMU_IO_PGTABLE_ARMV7S
help
Enable self-tests for the ARMv7s page table allocator. This performs
a series of page-table consistency checks during boot.
If unsure, say N here.
endmenu
config IOMMU_DEBUGFS
bool "Export IOMMU internals in DebugFS"
depends on DEBUG_FS
help
Allows exposure of IOMMU device internals. This option enables
the use of debugfs by IOMMU drivers as required. Devices can,
at initialization time, cause the IOMMU code to create a top-level
debug/iommu directory, and then populate a subdirectory with
entries as required.
config IOMMU_DEFAULT_PASSTHROUGH
bool "IOMMU passthrough by default"
depends on IOMMU_API
help
Enable passthrough by default, removing the need to pass in
iommu.passthrough=on or iommu=pt on the command line. If this
is enabled, you can still disable with iommu.passthrough=off
or iommu=nopt depending on the architecture.
If unsure, say N here.
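# Illustrative only (not part of the upstream file): with the option above, a
# .config fragment such as
#   CONFIG_IOMMU_SUPPORT=y
#   CONFIG_IOMMU_DEFAULT_PASSTHROUGH=y
# gives the same boot-time behaviour as passing iommu.passthrough=on (or
# iommu=pt, depending on the architecture), and iommu.passthrough=off or
# iommu=nopt can still re-enable translation, as described in the help text.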
config OF_IOMMU
def_bool y
depends on OF && IOMMU_API
# IOMMU-agnostic DMA-mapping layer
config IOMMU_DMA
bool
select IOMMU_API
select IOMMU_IOVA
select IRQ_MSI_IOMMU
select NEED_SG_DMA_LENGTH
config FSL_PAMU
bool "Freescale IOMMU support"
depends on PCI
depends on PHYS_64BIT
depends on PPC_E500MC || (COMPILE_TEST && PPC)
select IOMMU_API
select GENERIC_ALLOCATOR
help
Freescale PAMU support. PAMU is the IOMMU present on Freescale QorIQ platforms.
PAMU can authorize memory access, remap the memory address, and remap I/O
transaction types.
# MSM IOMMU support
config MSM_IOMMU
bool "MSM IOMMU Support"
depends on ARM
depends on ARCH_MSM8X60 || ARCH_MSM8960 || COMPILE_TEST
select IOMMU_API
select IOMMU_IO_PGTABLE_ARMV7S
help
Support for the IOMMUs found on certain Qualcomm SOCs.
These IOMMUs allow virtualization of the address space used by most
cores within the multimedia subsystem.
If unsure, say N here.
config IOMMU_PGTABLES_L2
def_bool y
depends on MSM_IOMMU && MMU && SMP && CPU_DCACHE_DISABLE=n
# AMD IOMMU support
config AMD_IOMMU
bool "AMD IOMMU support"
select SWIOTLB
select PCI_MSI
select PCI_ATS
select PCI_PRI
select PCI_PASID
select IOMMU_API
select IOMMU_IOVA
select IOMMU_DMA
depends on X86_64 && PCI && ACPI
---help---
With this option you can enable support for AMD IOMMU hardware in
your system. An IOMMU is a hardware component which provides
remapping of DMA memory accesses from devices. With an AMD IOMMU you
can isolate the DMA memory of different devices and protect the
system from misbehaving device drivers or hardware.
You can find out if your system has an AMD IOMMU if you look into
your BIOS for an option to enable it or if you have an IVRS ACPI
table.
config AMD_IOMMU_V2
tristate "AMD IOMMU Version 2 driver"
depends on AMD_IOMMU
select MMU_NOTIFIER
---help---
This option enables support for the AMD IOMMUv2 features of the IOMMU
hardware. Select this option if you want to use devices that support
the PCI PRI and PASID interface.
config AMD_IOMMU_DEBUGFS
bool "Enable AMD IOMMU internals in DebugFS"
depends on AMD_IOMMU && IOMMU_DEBUGFS
---help---
!!!WARNING!!! !!!WARNING!!! !!!WARNING!!! !!!WARNING!!!
DO NOT ENABLE THIS OPTION UNLESS YOU REALLY, -REALLY- KNOW WHAT YOU ARE DOING!!!
Exposes AMD IOMMU device internals in DebugFS.
This option is -NOT- intended for production environments, and should
not generally be enabled.
# Intel IOMMU support
config DMAR_TABLE
bool
config INTEL_IOMMU
bool "Support for Intel IOMMU using DMA Remapping Devices"
depends on PCI_MSI && ACPI && (X86 || IA64)
select IOMMU_API
select IOMMU_IOVA
select NEED_DMA_MAP_STATE
select DMAR_TABLE
select SWIOTLB
help
DMA remapping (DMAR) devices support enables independent address
translations for Direct Memory Access (DMA) from devices.
These DMA remapping devices are reported via ACPI tables
and include PCI device scope covered by these DMA
remapping devices.
config INTEL_IOMMU_DEBUGFS
bool "Export Intel IOMMU internals in Debugfs"
depends on INTEL_IOMMU && IOMMU_DEBUGFS
help
!!!WARNING!!!
DO NOT ENABLE THIS OPTION UNLESS YOU REALLY KNOW WHAT YOU ARE DOING!!!
Expose Intel IOMMU internals in Debugfs.
This option is -NOT- intended for production environments, and should
only be enabled for debugging Intel IOMMU.
config INTEL_IOMMU_SVM
bool "Support for Shared Virtual Memory with Intel IOMMU"
depends on INTEL_IOMMU && X86
select PCI_PASID
select PCI_PRI
select MMU_NOTIFIER
select IOASID
help
Shared Virtual Memory (SVM) provides a facility for devices
to access DMA resources through process address space by
means of a Process Address Space ID (PASID).
config INTEL_IOMMU_DEFAULT_ON
def_bool y
prompt "Enable Intel DMA Remapping Devices by default"
depends on INTEL_IOMMU
help
Selecting this option will enable a DMAR device at boot time if
one is found. If this option is not selected, DMAR support can
be enabled by passing intel_iommu=on to the kernel.
config INTEL_IOMMU_BROKEN_GFX_WA
bool "Workaround broken graphics drivers (going away soon)"
depends on INTEL_IOMMU && BROKEN && X86
---help---
Current Graphics drivers tend to use physical address
for DMA and avoid using DMA APIs. Setting this config
option permits the IOMMU driver to set a unity map for
all the OS-visible memory. Hence the driver can continue
to use physical addresses for DMA, at least until this
option is removed in the 2.6.32 kernel.
config INTEL_IOMMU_FLOPPY_WA
def_bool y
depends on INTEL_IOMMU && X86
---help---
Floppy disk drivers are known to bypass DMA API calls
thereby failing to work when IOMMU is enabled. This
workaround will setup a 1:1 mapping for the first
16MiB to make floppy (an ISA device) work.
config INTEL_IOMMU_SCALABLE_MODE_DEFAULT_ON
bool "Enable Intel IOMMU scalable mode by default"
depends on INTEL_IOMMU
help
Selecting this option will enable by default the scalable mode if
hardware presents the capability. The scalable mode is defined in
VT-d 3.0. The scalable mode capability could be checked by reading
/sys/devices/virtual/iommu/dmar*/intel-iommu/ecap. If this option
is not selected, scalable mode support could also be enabled by
passing intel_iommu=sm_on to the kernel. If not sure, please use
the default value.
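# Illustrative only (not part of the upstream file): a .config fragment that
# makes DMA remapping and scalable mode the boot-time default, using only the
# symbols defined above:
#   CONFIG_INTEL_IOMMU=y
#   CONFIG_INTEL_IOMMU_DEFAULT_ON=y
#   CONFIG_INTEL_IOMMU_SCALABLE_MODE_DEFAULT_ON=y
# Without the two *_DEFAULT_ON options, the same behaviour can be requested at
# boot with intel_iommu=on and intel_iommu=sm_on, as noted in the help texts.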
config IRQ_REMAP
bool "Support for Interrupt Remapping"
depends on X86_64 && X86_IO_APIC && PCI_MSI && ACPI
select DMAR_TABLE
---help---
Supports Interrupt remapping for IO-APIC and MSI devices.
To use x2APIC mode on CPUs which support x2APIC enhancements, or
to support platforms with CPUs having more than 8-bit APIC IDs, say Y.
# OMAP IOMMU support
config OMAP_IOMMU
bool "OMAP IOMMU Support"
depends on ARM && MMU
depends on ARCH_OMAP2PLUS || COMPILE_TEST
select IOMMU_API
---help---
The OMAP3 media platform drivers depend on iommu support,
if you need them say Y here.
config OMAP_IOMMU_DEBUG
bool "Export OMAP IOMMU internals in DebugFS"
depends on OMAP_IOMMU && DEBUG_FS
---help---
Select this to see extensive information about
the internal state of OMAP IOMMU in debugfs.
Say N unless you know you need this.
config ROCKCHIP_IOMMU
bool "Rockchip IOMMU Support"
depends on ARM || ARM64
depends on ARCH_ROCKCHIP || COMPILE_TEST
select IOMMU_API
select ARM_DMA_USE_IOMMU
help
Support for IOMMUs found on Rockchip rk32xx SOCs.
These IOMMUs allow virtualization of the address space used by most
cores within the multimedia subsystem.
Say Y here if you are using a Rockchip SoC that includes an IOMMU
device.
config TEGRA_IOMMU_GART
bool "Tegra GART IOMMU Support"
depends on ARCH_TEGRA_2x_SOC
depends on TEGRA_MC
select IOMMU_API
help
Enables support for remapping discontiguous physical memory
shared with the operating system into contiguous I/O virtual
space through the GART (Graphics Address Relocation Table)
hardware included on Tegra SoCs.
config TEGRA_IOMMU_SMMU
bool "NVIDIA Tegra SMMU Support"
depends on ARCH_TEGRA
depends on TEGRA_AHB
depends on TEGRA_MC
select IOMMU_API
help
This driver supports the IOMMU hardware (SMMU) found on NVIDIA Tegra
SoCs (Tegra30 up to Tegra210).
config EXYNOS_IOMMU
bool "Exynos IOMMU Support"
depends on ARCH_EXYNOS && MMU
depends on !CPU_BIG_ENDIAN # revisit driver if we can enable big-endian ptes
select IOMMU_API
select ARM_DMA_USE_IOMMU
help
Support for the IOMMU (System MMU) of Samsung Exynos application
processor family. This enables H/W multimedia accelerators to see
non-linear physical memory chunks as linear memory in their
address space.
If unsure, say N here.
config EXYNOS_IOMMU_DEBUG
bool "Debugging log for Exynos IOMMU"
depends on EXYNOS_IOMMU
help
Select this to see the detailed log message that shows what
happens in the IOMMU driver.
Say N unless you need kernel log message for IOMMU debugging.
config IPMMU_VMSA
bool "Renesas VMSA-compatible IPMMU"
depends on ARM || IOMMU_DMA
depends on ARCH_RENESAS || (COMPILE_TEST && !GENERIC_ATOMIC64)
select IOMMU_API
select IOMMU_IO_PGTABLE_LPAE
select ARM_DMA_USE_IOMMU
help
Support for the Renesas VMSA-compatible IPMMU found in the R-Mobile
APE6, R-Car Gen2, and R-Car Gen3 SoCs.
If unsure, say N.
config SPAPR_TCE_IOMMU
bool "sPAPR TCE IOMMU Support"
depends on PPC_POWERNV || PPC_PSERIES
select IOMMU_API
help
Enables bits of IOMMU API required by VFIO. The iommu_ops
is not implemented as it is not necessary for VFIO.
# ARM IOMMU support
config ARM_SMMU
tristate "ARM Ltd. System MMU (SMMU) Support"
depends on (ARM64 || ARM) && MMU
select IOMMU_API
select IOMMU_IO_PGTABLE_LPAE
select ARM_DMA_USE_IOMMU if ARM
help
Support for implementations of the ARM System MMU architecture
versions 1 and 2.
Say Y here if your SoC includes an IOMMU device implementing
the ARM SMMU architecture.
config ARM_SMMU_LEGACY_DT_BINDINGS
bool "Support the legacy \"mmu-masters\" devicetree bindings"
depends on ARM_SMMU=y && OF
help
Support for the badly designed and deprecated "mmu-masters"
devicetree bindings. This allows some DMA masters to attach
to the SMMU but does not provide any support via the DMA API.
If you're lucky, you might be able to get VFIO up and running.
If you say Y here then you'll make me very sad. Instead, say N
and move your firmware to the utopian future that was 2016.
config ARM_SMMU_DISABLE_BYPASS_BY_DEFAULT
bool "Default to disabling bypass on ARM SMMU v1 and v2"
depends on ARM_SMMU
default y
help
Say Y here to (by default) disable bypass streams such that
incoming transactions from devices that are not attached to
an iommu domain will report an abort back to the device and
will not be allowed to pass through the SMMU.
Any old kernels that existed before this KConfig was
introduced would default to _allowing_ bypass (AKA the
equivalent of NO for this config). However the default for
this option is YES because the old behavior is insecure.
There are few reasons to allow unmatched stream bypass, and
even fewer good ones. If saying YES here breaks your board
you should work on fixing your board. This KConfig option
is expected to be removed in the future and we'll simply
hardcode the bypass disable in the code.
NOTE: the kernel command line parameter
'arm-smmu.disable_bypass' will continue to override this
config.
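# Illustrative only (not part of the upstream file): the command line
# parameter mentioned above overrides the build-time default, e.g. booting
# with
#   arm-smmu.disable_bypass=0
# re-allows unmatched stream bypass even when this option is set to Y, while
#   arm-smmu.disable_bypass=1
# forces aborts for unattached devices even when it is set to N.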
config ARM_SMMU_V3
tristate "ARM Ltd. System MMU Version 3 (SMMUv3) Support"
depends on ARM64
select IOMMU_API
select IOMMU_IO_PGTABLE_LPAE
select GENERIC_MSI_IRQ_DOMAIN
help
Support for implementations of the ARM System MMU architecture
version 3 providing translation support to a PCIe root complex.
Say Y here if your system includes an IOMMU device implementing
the ARM SMMUv3 architecture.
config S390_IOMMU
def_bool y if S390 && PCI
depends on S390 && PCI
select IOMMU_API
help
Support for the IOMMU API for s390 PCI devices.
config S390_CCW_IOMMU
bool "S390 CCW IOMMU Support"
depends on S390 && CCW
select IOMMU_API
help
Enables bits of IOMMU API required by VFIO. The iommu_ops
is not implemented as it is not necessary for VFIO.
config S390_AP_IOMMU
bool "S390 AP IOMMU Support"
depends on S390 && ZCRYPT
select IOMMU_API
help
Enables bits of IOMMU API required by VFIO. The iommu_ops
is not implemented as it is not necessary for VFIO.
config MTK_IOMMU
bool "MTK IOMMU Support"
depends on ARM || ARM64
depends on ARCH_MEDIATEK || COMPILE_TEST
select ARM_DMA_USE_IOMMU
select IOMMU_API
select IOMMU_DMA
select IOMMU_IO_PGTABLE_ARMV7S
select MEMORY
select MTK_SMI
help
Support for the M4U on certain MediaTek SoCs. M4U is the Multimedia
Memory Management Unit. This option enables remapping of DMA memory
accesses for the multimedia subsystem.
If unsure, say N here.
config MTK_IOMMU_V1
bool "MTK IOMMU Version 1 (M4U gen1) Support"
depends on ARM
depends on ARCH_MEDIATEK || COMPILE_TEST
select ARM_DMA_USE_IOMMU
select IOMMU_API
select MEMORY
select MTK_SMI
help
Support for the M4U on certain MediaTek SoCs. M4U generation 1 HW is the
Multimedia Memory Management Unit. This option enables remapping of
DMA memory accesses for the multimedia subsystem.
If unsure, say N here.
config QCOM_IOMMU
# Note: iommu drivers cannot (yet?) be built as modules
bool "Qualcomm IOMMU Support"
depends on ARCH_QCOM || (COMPILE_TEST && !GENERIC_ATOMIC64)
select IOMMU_API
select IOMMU_IO_PGTABLE_LPAE
select ARM_DMA_USE_IOMMU
help
Support for IOMMU on certain Qualcomm SoCs.
config HYPERV_IOMMU
bool "Hyper-V x2APIC IRQ Handling"
depends on HYPERV && X86
select IOMMU_API
default HYPERV
help
Stub IOMMU driver to handle IRQs so as to allow Hyper-V Linux
guests to run with x2APIC mode enabled.
config VIRTIO_IOMMU
bool "Virtio IOMMU driver"
depends on VIRTIO=y
depends on ARM64
select IOMMU_API
select INTERVAL_TREE
help
Para-virtualised IOMMU driver with virtio.
Say Y here if you intend to run this kernel as a guest.
endif # IOMMU_SUPPORT