Commit Graph

199 Commits

Author SHA1 Message Date
Dan Carpenter 1fb260bc00 iommu/amd: Remove an unneeded condition
get_device_id() returns an unsigned short device id.  It never fails and
never returns a negative value, so we can remove this condition.

Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2016-01-07 13:08:07 +01:00
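Illustrative sketch of the removed pattern (the helper here is a stand-in, not the driver's actual get_device_id()): a negative check on a value that comes from an unsigned short can never fire.

    #include <linux/device.h>
    #include <linux/errno.h>
    #include <linux/types.h>

    /* Stand-in for the driver helper: it returns an unsigned short id and
     * cannot fail. */
    static u16 get_device_id_sketch(struct device *dev)
    {
            return 0x10;
    }

    static int lookup_devid_sketch(struct device *dev)
    {
            int devid = get_device_id_sketch(dev);

            if (devid < 0)          /* always false: the id is unsigned ...    */
                    return -ENODEV; /* ... so this whole branch can be removed */

            return devid;
    }
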
Joerg Roedel a639a8eecf iommu/amd: Preallocate dma_ops apertures based on dma_mask
Preallocate between 4 and 8 apertures when a device gets its
dma_mask. With more apertures we reduce the lock contention
of the domain lock significantly.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28 17:18:54 +01:00
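A minimal sketch of the sizing idea; only the 4-to-8 range comes from the commit text, while the 128 MB aperture size and the helper name are assumptions.

    #include <linux/kernel.h>
    #include <linux/types.h>

    #define APERTURE_SHIFT_SKETCH   27      /* assumed: 128 MB per aperture */

    /* How many apertures to preallocate for a given dma_mask, clamped to the
     * 4..8 range described above. */
    static unsigned long apertures_for_dma_mask(u64 dma_mask)
    {
            unsigned long wanted = (dma_mask >> APERTURE_SHIFT_SKETCH) + 1;

            return clamp(wanted, 4UL, 8UL);
    }
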
Joerg Roedel 7b5e25b84e iommu/amd: Use trylock to acquire bitmap_lock
First search for a non-contended aperture with trylock
before spinning.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28 17:18:54 +01:00
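A sketch of the trylock-first search; the struct layout and the helper are assumptions, only bitmap_lock and the trylock-before-spinning idea come from the commits above.

    #include <linux/spinlock.h>

    struct aperture_sketch {
            spinlock_t bitmap_lock;
            /* ... bitmap, offset, next_bit ... */
    };

    /* First pass: grab whichever aperture is uncontended right now; only when
     * every trylock fails do we actually spin on one of the locks. */
    static struct aperture_sketch *lock_any_aperture(struct aperture_sketch **ap,
                                                     int naps,
                                                     unsigned long *flags)
    {
            int i;

            for (i = 0; i < naps; i++)
                    if (spin_trylock_irqsave(&ap[i]->bitmap_lock, *flags))
                            return ap[i];

            /* all contended: fall back to blocking on the first one */
            spin_lock_irqsave(&ap[0]->bitmap_lock, *flags);
            return ap[0];
    }
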
Joerg Roedel 5f6bed5005 iommu/amd: Make dma_ops_domain->next_index percpu
Make this pointer percpu so that we start searching for new
addresses in the range where we last stopped, which has a
higher probability of still being in the cache.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28 17:18:54 +01:00
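A sketch of a percpu cursor, assuming it is allocated with alloc_percpu(); the commit text only says next_index becomes percpu.

    #include <linux/errno.h>
    #include <linux/percpu.h>

    struct dma_ops_domain_sketch {
            u32 __percpu *next_index;       /* one search cursor per CPU */
    };

    static int domain_init_cursor(struct dma_ops_domain_sketch *dom)
    {
            dom->next_index = alloc_percpu(u32);
            return dom->next_index ? 0 : -ENOMEM;
    }

    /* Resume searching where this CPU last stopped ... */
    static u32 domain_get_cursor(struct dma_ops_domain_sketch *dom)
    {
            return this_cpu_read(*dom->next_index);
    }

    /* ... and remember where the search ended for the next allocation. */
    static void domain_set_cursor(struct dma_ops_domain_sketch *dom, u32 index)
    {
            this_cpu_write(*dom->next_index, index);
    }
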
Joerg Roedel 92d420ec02 iommu/amd: Relax locking in dma_ops path
Remove the long holding times of the domain->lock and rely
on the bitmap_lock instead.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28 17:18:54 +01:00
Joerg Roedel a73c156665 iommu/amd: Initialize new aperture range before making it visible
Make sure the aperture range is fully initialized before it
is visible to the address allocator.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28 17:18:53 +01:00
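A sketch of the publish-after-init ordering (field names and the slot pointer are assumptions): the range is fully set up before its pointer becomes visible to other CPUs.

    #include <linux/errno.h>
    #include <linux/slab.h>
    #include <linux/spinlock.h>

    struct aperture_range_sketch {
            spinlock_t bitmap_lock;
            unsigned long offset;
            unsigned long *bitmap;
    };

    static int publish_aperture(struct aperture_range_sketch **slot,
                                unsigned long offset, unsigned long *bitmap)
    {
            struct aperture_range_sketch *range;

            range = kzalloc(sizeof(*range), GFP_KERNEL);
            if (!range)
                    return -ENOMEM;

            spin_lock_init(&range->bitmap_lock);    /* initialize fully first ... */
            range->offset = offset;
            range->bitmap = bitmap;

            smp_store_release(slot, range);         /* ... then make it visible */
            return 0;
    }
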
Joerg Roedel 7bfa5bd270 iommu/amd: Build io page-tables with cmpxchg64
This allows building up the page-tables without holding any
locks. As a consequence it removes the need to pre-populate
dma_ops page-tables.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28 17:18:53 +01:00
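A sketch of the lock-free install; the PTE encoding is a simplification, not the real AMD IOMMU page-table format.

    #include <linux/atomic.h>
    #include <linux/types.h>

    #define SKETCH_PTE_PRESENT      1ULL    /* assumed present bit */

    /* Returns true if we installed our freshly allocated table, false if
     * another CPU won the race, in which case the caller frees its table and
     * walks the one that is already in place. */
    static bool install_pte_page_sketch(u64 *pte, u64 new_table_phys)
    {
            u64 newval = new_table_phys | SKETCH_PTE_PRESENT;

            return cmpxchg64(pte, 0ULL, newval) == 0ULL;
    }
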
Joerg Roedel 266a3bd28f iommu/amd: Allocate new aperture ranges in dma_ops_alloc_addresses
It really belongs there and not in __map_single.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28 17:18:53 +01:00
Joerg Roedel 4eeca8c5e7 iommu/amd: Optimize dma_ops_free_addresses
Don't flush the iommu tlb when we free something behind the
current next_bit pointer. Update the next_bit pointer
instead and let the flush happen on the next wraparound in
the allocation path.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28 17:18:53 +01:00
Joerg Roedel ab7032bb9c iommu/amd: Remove need_flush from struct dma_ops_domain
The flushing of iommu tlbs is now done on a per-range basis.
So there is no need anymore for domain-wide flush tracking.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28 17:18:53 +01:00
Joerg Roedel 2a87442c5b iommu/amd: Iterate over all aperture ranges in dma_ops_area_alloc
This way we don't need to care about the next_index wrapping
around in dma_ops_alloc_addresses.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28 17:18:52 +01:00
Joerg Roedel d41ab09896 iommu/amd: Flush iommu tlb in dma_ops_free_addresses
Instead of setting need_flush, do the flush directly in
dma_ops_free_addresses.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28 17:18:52 +01:00
Joerg Roedel ebaecb423b iommu/amd: Rename dma_ops_domain->next_address to next_index
It points to the next aperture index to allocate from. We
don't need the full address anymore because this is now
tracked in struct aperture_range.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28 17:18:52 +01:00
Joerg Roedel 05ab49e005 iommu/amd: Remove 'start' parameter from dma_ops_area_alloc
The parameter is not needed because the value is part of the
struct dma_ops_domain that is already passed in.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28 17:18:52 +01:00
Joerg Roedel ccb50e03da iommu/amd: Flush iommu tlb in dma_ops_aperture_alloc()
Since the allocator wraparound happens in this function now,
flush the iommu tlb there too.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28 17:18:51 +01:00
Joerg Roedel 60e6a7cb44 iommu/amd: Retry address allocation within one aperture
Instead of skipping to the next aperture, first try again in
the current one.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28 17:18:51 +01:00
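A sketch combining the two commits above (the aperture size, fields and flush callback are assumptions): search from the cursor, and on failure flush the IOTLB and retry the same aperture from the start before giving up, so the caller only then moves on to the next aperture.

    #include <linux/bitmap.h>
    #include <linux/types.h>

    #define APERTURE_PAGES_SKETCH   (1UL << 15)     /* assumed pages per aperture */

    struct aperture_alloc_sketch {
            unsigned long *bitmap;  /* caller holds this aperture's bitmap_lock */
            unsigned long next_bit;
    };

    static unsigned long alloc_in_aperture_sketch(struct aperture_alloc_sketch *ap,
                                                  unsigned int pages,
                                                  void (*flush_iotlb)(void))
    {
            unsigned long idx;

            idx = bitmap_find_next_zero_area(ap->bitmap, APERTURE_PAGES_SKETCH,
                                             ap->next_bit, pages, 0);
            if (idx >= APERTURE_PAGES_SKETCH) {
                    /* wrapped around: flush so freed ranges are safe to reuse */
                    flush_iotlb();
                    idx = bitmap_find_next_zero_area(ap->bitmap,
                                                     APERTURE_PAGES_SKETCH,
                                                     0, pages, 0);
                    if (idx >= APERTURE_PAGES_SKETCH)
                            return -1UL;    /* caller moves to the next aperture */
            }

            bitmap_set(ap->bitmap, idx, pages);
            ap->next_bit = idx + pages;
            return idx;
    }
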
Joerg Roedel ae62d49c7a iommu/amd: Move aperture_range.offset to another cache-line
Moving it before the pte_pages array puts it into the same
cache-line as the spin-lock and the bitmap array pointer.
This should save a cache-miss.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28 17:18:51 +01:00
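The layout idea in sketch form; the exact field set is an assumption. The allocator's fast path touches the lock, the cursor, the bitmap pointer and the offset, so keeping them adjacent means one cache-line instead of two.

    #include <linux/spinlock.h>
    #include <linux/types.h>

    struct aperture_layout_sketch {
            /* hot: touched by the allocator on every allocation */
            spinlock_t bitmap_lock;
            unsigned long offset;   /* moved up, next to the lock */
            unsigned long next_bit;
            unsigned long *bitmap;

            /* cold: only touched when new page-table pages are mapped */
            u64 *pte_pages[64];
    };
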
Joerg Roedel a0f51447f4 iommu/amd: Add dma_ops_aperture_alloc() function
Make this a wrapper around iommu_ops_area_alloc() for now
and add more logic to this function later on.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28 17:18:51 +01:00
Joerg Roedel b57c3c802e iommu/amd: Pass correct shift to iommu_area_alloc()
The page-offset of the aperture must be passed instead of 0.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28 17:18:50 +01:00
Joerg Roedel 84b3a0bc88 iommu/amd: Flush the IOMMU TLB before the addresses are freed
This allows holding the bitmap_lock only for a very short
period of time.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28 17:18:50 +01:00
Joerg Roedel 53b3b65aa5 iommu/amd: Flush IOMMU TLB on __map_single error path
There may have been PTEs present which in theory could have
made it into the IOMMU TLB. Flush the addresses out on the error
path to make sure no stale entries remain.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28 17:18:50 +01:00
Joerg Roedel 08c5fb938e iommu/amd: Introduce bitmap_lock in struct aperture_range
This lock only protects the address allocation bitmap in one
aperture.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28 17:18:50 +01:00
Joerg Roedel 007b74bab2 iommu/amd: Move 'struct dma_ops_domain' definition to amd_iommu.c
It is only used in this file anyway, so keep it there. Same
with 'struct aperture_range'.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28 17:18:50 +01:00
Joerg Roedel a7fb668fd8 iommu/amd: Warn only once on unexpected pte value
This prevents possible flooding of the kernel log.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28 17:18:50 +01:00
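A sketch of the rate-limited report (the check itself is a placeholder): WARN_ONCE fires with a backtrace on the first unexpected PTE and stays silent afterwards.

    #include <linux/bug.h>
    #include <linux/types.h>

    static void check_pte_sketch(u64 pte, u64 expected)
    {
            if (pte == expected)
                    return;

            /* prints the message plus a backtrace once, then stays silent */
            WARN_ONCE(1, "unexpected PTE value: 0x%llx (expected 0x%llx)\n",
                      (unsigned long long)pte, (unsigned long long)expected);
    }
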
Mel Gorman d0164adc89 mm, page_alloc: distinguish between being unable to sleep, unwilling to sleep and avoiding waking kswapd
__GFP_WAIT has been used to identify atomic context in callers that hold
spinlocks or are in interrupts.  They are expected to be high priority and
have access to one of two watermarks lower than "min" which can be referred
to as the "atomic reserve".  __GFP_HIGH users get access to the first
lower watermark and can be called the "high priority reserve".

Over time, callers had a requirement to not block when fallback options
were available.  Some have abused __GFP_WAIT leading to a situation where
an optimistic allocation with a fallback option can access atomic
reserves.

This patch uses __GFP_ATOMIC to identify callers that are truly atomic,
cannot sleep and have no alternative.  High priority users continue to use
__GFP_HIGH.  __GFP_DIRECT_RECLAIM identifies callers that can sleep and
are willing to enter direct reclaim.  __GFP_KSWAPD_RECLAIM identifies
callers that want to wake kswapd for background reclaim.  __GFP_WAIT is
redefined as a caller that is willing to enter direct reclaim and wake
kswapd for background reclaim.

This patch then converts a number of sites:

o __GFP_ATOMIC is used by callers that are high priority and have memory
  pools for those requests. GFP_ATOMIC uses this flag.

o Callers that have a limited mempool to guarantee forward progress clear
  __GFP_DIRECT_RECLAIM but keep __GFP_KSWAPD_RECLAIM. bio allocations fall
  into this category where kswapd will still be woken but atomic reserves
  are not used as there is a one-entry mempool to guarantee progress.

o Callers that are checking if they are non-blocking should use the
  helper gfpflags_allow_blocking() where possible. This is because
  checking for __GFP_WAIT, as was done historically, can now trigger false
  positives. Some exceptions like dm-crypt.c exist where the code intent
  is clearer if __GFP_DIRECT_RECLAIM is used instead of the helper due to
  flag manipulations.

o Callers that built their own GFP flags instead of starting with GFP_KERNEL
  and friends now also need to specify __GFP_KSWAPD_RECLAIM.

The first key hazard to watch out for is callers that removed __GFP_WAIT
and were depending on access to atomic reserves for inconspicuous reasons.
In some cases it may be appropriate for them to use __GFP_HIGH.

The second key hazard is callers that assembled their own combination of
GFP flags instead of starting with something like GFP_KERNEL.  They may
now wish to specify __GFP_KSWAPD_RECLAIM.  It's almost certainly harmless
if it's missed in most cases as other activity will wake kswapd.

Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Christoph Lameter <cl@linux.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Vitaly Wool <vitalywool@gmail.com>
Cc: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2015-11-06 17:50:42 -08:00
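A sketch of the conversion pattern described above, using the helper and flags this patch introduces; the allocation function around them is hypothetical.

    #include <linux/gfp.h>
    #include <linux/slab.h>

    static void *alloc_buffer_sketch(size_t size, gfp_t gfp)
    {
            /* New style: ask whether this caller may block instead of poking
             * at __GFP_WAIT directly. */
            if (gfpflags_allow_blocking(gfp))
                    return kmalloc(size, gfp);      /* may enter direct reclaim */

            /* Non-blocking path: GFP_NOWAIT still wakes kswapd via
             * __GFP_KSWAPD_RECLAIM, but never sleeps and does not dip into
             * the atomic reserves the way GFP_ATOMIC would. */
            return kmalloc(size, GFP_NOWAIT);
    }
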
Joerg Roedel b67ad2f7c7 Merge branches 'x86/vt-d', 'arm/omap', 'arm/smmu', 's390', 'core' and 'x86/amd' into next
Conflicts:
	drivers/iommu/amd_iommu_types.h
2015-11-02 20:03:34 +09:00
Joerg Roedel a960fadbe6 iommu: Add device_group call-back to x86 iommu drivers
Set the device_group call-back to pci_device_group() for the
Intel VT-d and the AMD IOMMU driver.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-10-22 00:00:49 +02:00
Joerg Roedel deba4bce16 iommu/amd: Remove cmd_buf_size and evt_buf_size from struct amd_iommu
The driver always uses a constant size for these buffers
anyway, so there is no need to waste memory to store the
sizes.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-10-21 11:30:33 +02:00
Joerg Roedel 61289cbaf6 iommu/amd: Remove old alias handling code
This mostly removes the code to create dev_data structures
for alias device ids. They are not necessary anymore, as
they were only created for device ids which have no struct
pci_dev associated with them.  But these device ids are
handled in a simpler way now, so there is no need for this
code anymore.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-10-21 11:30:32 +02:00
Joerg Roedel e25bfb56ea iommu/amd: Set alias DTE in do_attach/do_detach
With this we don't have to create dev_data entries for
non-existent devices (which only exist as request-ids).

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-10-21 11:30:32 +02:00
Joerg Roedel 272e4f99e9 iommu/amd: WARN when __[attach|detach]_device are called with irqs enabled
These functions rely on being called with IRQs disabled. Add
a WARN_ON to detect early when it's not.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-10-21 11:29:27 +02:00
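The assertion pattern in sketch form; the surrounding function body is a placeholder.

    #include <linux/bug.h>
    #include <linux/irqflags.h>

    static void __attach_device_sketch(void)
    {
            /* This path relies on the caller having disabled interrupts;
             * report violations early and loudly, with a backtrace. */
            WARN_ON(!irqs_disabled());

            /* ... work that assumes IRQ-off context ... */
    }
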
Joerg Roedel f1dd0a8bcd iommu/amd: Don't disable IRQs in __detach_device
This function is already called with IRQs disabled, so there
is no need to disable them again.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-10-21 11:29:27 +02:00
Joerg Roedel 150952f969 iommu/amd: Do not iterate over alias-list in __[attach|detach]_device
The alias list is already handled by the iommu core code. There is
no need anymore to handle it in this part of the AMD IOMMU code.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-10-21 11:29:27 +02:00
Joerg Roedel f34c73f55a iommu/amd: Do not BUG_ON in __detach_device()
The condition in the BUG_ON is an indicator of a BUG, but no
reason to kill the code path. Turn it into a WARN_ON and
bail out if it is hit.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-10-21 11:29:26 +02:00
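A sketch of the softened check (the dev_data layout is an assumption): the inconsistency is still reported with a backtrace, but the path bails out instead of killing the machine.

    #include <linux/bug.h>

    struct dev_data_sketch {
            struct protection_domain_sketch *domain;
    };

    static void __detach_device_sketch(struct dev_data_sketch *dev_data)
    {
            if (WARN_ON(!dev_data->domain)) /* was: BUG_ON(!dev_data->domain) */
                    return;                 /* bail out instead of crashing   */

            /* ... the normal detach work would follow here ... */
    }
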
Joerg Roedel cbf3ccd09d iommu/amd: Don't clear DTE flags when modifying it
During device assignment/deassignment the flags in the DTE
get lost, which might cause spurious faults, for example
when the device tries to access the system management range.
Fix this by not clearing the flags with the rest of the DTE.

Reported-by: G. Richard Bellamy <rbellamy@pteradigm.com>
Tested-by: G. Richard Bellamy <rbellamy@pteradigm.com>
Cc: stable@vger.kernel.org
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-10-21 11:29:06 +02:00
Joerg Roedel 5adad99154 iommu/amd: Fix NULL pointer deref on device detach
When a device group is detached from its domain, the iommu
core code calls into the iommu driver to detach each device
individually.

Before this functionality went into the iommu core code, it
was implemented in the drivers, also in the AMD IOMMU
driver as the device alias handling code.

This code is still present, as there might be aliases that
don't exist as real PCI devices (and are therefore invisible
to the iommu core code).

Unfortunately it might now happen that a device is unbound
multiple times from its domain, first by the alias handling
code and then by the iommu core code (or vice versa).

This ends up in the do_detach function which dereferences
the dev_data->domain pointer. When the device is already
detached, this pointer is NULL and we get a kernel oops.

Removing the alias code completely is not an option, as that
would also remove the code which handles invisible aliases.
The code could be simplified, but this is too big of a
change outside the merge window.

For now, just check the dev_data->domain pointer in
do_detach and bail out if it is NULL.

Reported-by: Andreas Hartmann <andihartmann@freenet.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-10-09 17:59:33 +02:00
Joerg Roedel 23d3a98c13 iommu/amd: Use BUG_ON instead of if () BUG()
Found by a coccicheck script.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-08-13 19:49:14 +02:00
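The coccicheck cleanup in sketch form; behavior is identical, and the condition becomes visible right at the BUG_ON() call.

    #include <linux/bug.h>
    #include <linux/types.h>

    static void sanity_check_sketch(bool broken)
    {
            /* before:
             *      if (broken)
             *              BUG();
             */
            BUG_ON(broken); /* after: one line, same semantics */
    }
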
Joerg Roedel a130e69f28 iommu/amd: Simplify allocation in irq_remapping_alloc()
Allocate the irq data only in the loop.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-08-13 19:49:05 +02:00
Joerg Roedel 1c1cc454aa iommu/amd: Allow non-ATS devices in IOMMUv2 domains
With the grouping of multi-function devices a non-ATS
capable device might also end up in the same domain as an
IOMMUv2 capable device.
So handle this situation gracefully and don't consider it a
bug anymore.

Tested-by: Oded Gabbay <oded.gabbay@gmail.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-07-31 15:15:41 +02:00
Joerg Roedel 5271782835 iommu/amd: Set global dma_ops if swiotlb is disabled
Some AMD systems also have non-PCI devices which can do DMA.
Those can't be handled by the AMD IOMMU, as the hardware can
only handle PCI. These devices would end up with no dma_ops,
as neither the per-device nor the global dma_ops will get
set. SWIOTLB provides global dma_ops when it is active, so
make sure there are global dma_ops too when swiotlb is
disabled.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-07-30 10:28:48 +02:00
Joerg Roedel 3230232457 iommu/amd: Use swiotlb in passthrough mode
In passthrough mode (iommu=pt) all devices are identity
mapped. If a device does not support 64bit DMA it might
still need remapping. Make sure swiotlb is initialized to
provide this remapping.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-07-30 10:28:48 +02:00
Joerg Roedel 02ca20212f iommu/amd: Allow non-IOMMUv2 devices in IOMMUv2 domains
Since devices with IOMMUv2 functionality might be in the
same group as devices without it, allow those devices in
IOMMUv2 domains too.
Otherwise attaching the group with the IOMMUv2 device to the
domain will fail.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-07-30 10:28:48 +02:00
Joerg Roedel 1e6a7b04c0 iommu/amd: Use iommu core for passthrough mode
Remove the AMD IOMMU driver implementation for passthrough
mode and rely on the new iommu core features for that.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-07-30 10:28:48 +02:00
Linus Torvalds 44b061f77f IOMMU Fixes for Linux v4.2-rc0
Four fixes have queued up to fix regressions introduced after v4.1:
 
 	* Don't fail IOMMU driver initialization when the add_device
 	  call-back returns -ENODEV, as that just means that the device
 	  is not translated by the IOMMU. This is pretty common on ARM.
 
 	* Two fixes for the ARM-SMMU driver for a wrong feature check
 	  and to remove a redundant NULL check.
 
 	* A fix for the AMD IOMMU driver to fix a boot panic on systems
 	  where the BIOS requests Unity Mappings in the IVRS table.

Merge tag 'iommu-fixes-v4.2' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu

Pull IOMMU fixes from Joerg Roedel:
 "Four fixes have queued up to fix regressions introduced after v4.1:

   - Don't fail IOMMU driver initialization when the add_device
     call-back returns -ENODEV, as that just means that the device is
     not translated by the IOMMU.  This is pretty common on ARM.

   - Two fixes for the ARM-SMMU driver for a wrong feature check and to
     remove a redundant NULL check.

   - A fix for the AMD IOMMU driver to fix a boot panic on systems where
     the BIOS requests Unity Mappings in the IVRS table"

* tag 'iommu-fixes-v4.2' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu:
  iommu/amd: Introduce protection_domain_init() function
  iommu/arm-smmu: Delete an unnecessary check before the function call "free_io_pgtable_ops"
  iommu/arm-smmu: Fix broken ATOS check
  iommu: Ignore -ENODEV errors from add_device call-back
2015-07-01 14:44:22 -07:00
Joerg Roedel 7a5a566eab iommu/amd: Introduce protection_domain_init() function
This function contains the common parts between the
initialization of dma_ops_domains and usual protection
domains. This also fixes a long-standing bug which was
uncovered by recent changes, in which the api_lock was not
initialized for dma_ops_domains.

Reported-by: George Wang <xuw2015@gmail.com>
Tested-by: George Wang <xuw2015@gmail.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-07-01 08:43:07 +02:00
Linus Torvalds 6eae81a5e2 IOMMU Updates for Linux v4.2
This time with bigger changes than usual:
 
 	* A new IOMMU driver for the ARM SMMUv3. This IOMMU is pretty
 	  different from SMMUv1 and v2 in that it is configured through
 	  in-memory structures and not through the MMIO register region.
 	  The ARM SMMUv3 also supports IO demand paging for PCI devices
 	  with PRI/PASID capabilities, but this is not implemented in
 	  the driver yet.
 
 	* Lots of cleanups and device-tree support for the Exynos IOMMU
 	  driver. This is part of the effort to bring Exynos DRM support
 	  upstream.
 
 	* Introduction of default domains into the IOMMU core code. The
 	  rationale behind this is to move functionality out of the
 	  IOMMU drivers to common code to get to a unified behavior
 	  between different drivers.
 	  The patches here introduce a default domain for iommu-groups
 	  (isolation groups). A device will now always be attached to a
 	  domain, either the default domain or another domain handled by
 	  the device driver. The IOMMU drivers have to be modified to
 	  make use of that feature. So far, the AMD IOMMU driver has been
 	  converted, with others to follow.
 
 	* Patches for the Intel VT-d driver to fix DMAR faults that
 	  happen when a kdump kernel boots. When the kdump kernel boots
 	  it re-initializes the IOMMU hardware, which destroys all
 	  mappings from the crashed kernel. As this happens before
 	  the endpoint devices are re-initialized, any in-flight DMA
 	  causes a DMAR fault. These faults cause PCI master aborts,
 	  which some devices can't handle properly and go into an
 	  undefined state, so that the device driver in the kdump kernel
 	  fails to initialize them and the dump fails.
 	  This is now fixed by copying over the mapping structures (only
 	  context tables and interrupt remapping tables) from the old
 	  kernel and keeping the old mappings in place until the device
 	  driver of the new kernel takes over. This emulates the
 	  behavior without an IOMMU to the best degree possible.
 
 	* A couple of other small fixes and cleanups.

Merge tag 'iommu-updates-v4.2' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu

Pull IOMMU updates from Joerg Roedel:
 "This time with bigger changes than usual:

   - A new IOMMU driver for the ARM SMMUv3.

     This IOMMU is pretty different from SMMUv1 and v2 in that it is
     configured through in-memory structures and not through the MMIO
     register region.  The ARM SMMUv3 also supports IO demand paging for
     PCI devices with PRI/PASID capabilities, but this is not
     implemented in the driver yet.

   - Lots of cleanups and device-tree support for the Exynos IOMMU
     driver.  This is part of the effort to bring Exynos DRM support
     upstream.

   - Introduction of default domains into the IOMMU core code.

     The rationale behind this is to move functionality out of the IOMMU
     drivers to common code to get to a unified behavior between
     different drivers.  The patches here introduce a default domain for
     iommu-groups (isolation groups).

     A device will now always be attached to a domain, either the
     default domain or another domain handled by the device driver.  The
     IOMMU drivers have to be modified to make use of that feature.  So
     far, the AMD IOMMU driver has been converted, with others to follow.

   - Patches for the Intel VT-d driver to fix DMAR faults that happen
     when a kdump kernel boots.

     When the kdump kernel boots it re-initializes the IOMMU hardware,
     which destroys all mappings from the crashed kernel.  As this
     happens before the endpoint devices are re-initialized, any
     in-flight DMA causes a DMAR fault.  These faults cause PCI master
     aborts, which some devices can't handle properly and go into an
     undefined state, so that the device driver in the kdump kernel
     fails to initialize them and the dump fails.

     This is now fixed by copying over the mapping structures (only
     context tables and interrupt remapping tables) from the old kernel
     and keeping the old mappings in place until the device driver of the
     new kernel takes over.  This emulates the behavior without an
     IOMMU to the best degree possible.

   - A couple of other small fixes and cleanups"

* tag 'iommu-updates-v4.2' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu: (69 commits)
  iommu/amd: Handle large pages correctly in free_pagetable
  iommu/vt-d: Don't disable IR when it was previously enabled
  iommu/vt-d: Make sure copied over IR entries are not reused
  iommu/vt-d: Copy IR table from old kernel when in kdump mode
  iommu/vt-d: Set IRTA in intel_setup_irq_remapping
  iommu/vt-d: Disable IRQ remapping in intel_prepare_irq_remapping
  iommu/vt-d: Move QI initialization to intel_setup_irq_remapping
  iommu/vt-d: Move EIM detection to intel_prepare_irq_remapping
  iommu/vt-d: Enable Translation only if it was previously disabled
  iommu/vt-d: Don't disable translation prior to OS handover
  iommu/vt-d: Don't copy translation tables if RTT bit needs to be changed
  iommu/vt-d: Don't do early domain assignment if kdump kernel
  iommu/vt-d: Allocate si_domain in init_dmars()
  iommu/vt-d: Mark copied context entries
  iommu/vt-d: Do not re-use domain-ids from the old kernel
  iommu/vt-d: Copy translation tables from old kernel
  iommu/vt-d: Detect pre enabled translation
  iommu/vt-d: Make root entry visible for hardware right after allocation
  iommu/vt-d: Init QI before root entry is allocated
  iommu/vt-d: Cleanup log messages
  ...
2015-06-23 18:27:19 -07:00
Linus Torvalds d70b3ef54c Merge branch 'x86-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 core updates from Ingo Molnar:
 "There were so many changes in the x86/asm, x86/apic and x86/mm topics
  in this cycle that the topical separation of -tip broke down somewhat -
  so the result is a more traditional architecture pull request,
  collected into the 'x86/core' topic.

  The topics were still maintained separately as far as possible, so
  bisectability and conceptual separation should still be pretty good -
  but there were a handful of merge points to avoid excessive
  dependencies (and conflicts) that would have been poorly tested in the
  end.

  The next cycle will hopefully be much more quiet (or at least will
  have fewer dependencies).

  The main changes in this cycle were:

   * x86/apic changes, with related IRQ core changes: (Jiang Liu, Thomas
     Gleixner)

     - This is the second and most intrusive part of changes to the x86
       interrupt handling - full conversion to hierarchical interrupt
       domains:

          [IOAPIC domain]   -----
                                 |
          [MSI domain]      --------[Remapping domain] ----- [ Vector domain ]
                                 |   (optional)          |
          [HPET MSI domain] -----                        |
                                                         |
          [DMAR domain]     -----------------------------
                                                         |
          [Legacy domain]   -----------------------------

       This now reflects the actual hardware and allowed us to disentangle
       the domain specific code from the underlying parent domain, which
       can be optional in the case of interrupt remapping.  It's a clear
       separation of functionality and removes quite some duct tape
       constructs which plugged the remap code between ioapic/msi/hpet
       and the vector management.

     - Intel IOMMU IRQ remapping enhancements, to allow direct interrupt
       injection into guests (Feng Wu)

   * x86/asm changes:

     - Tons of cleanups and small speedups, micro-optimizations.  This
       is in preparation to move a good chunk of the low level entry
       code from assembly to C code (Denys Vlasenko, Andy Lutomirski,
       Brian Gerst)

     - Moved all system entry related code to a new home under
       arch/x86/entry/ (Ingo Molnar)

     - Removal of the fragile and ugly CFI dwarf debuginfo annotations.
       Conversion to C will reintroduce many of them - but meanwhile
       they are only getting in the way, and the upstream kernel does
       not rely on them (Ingo Molnar)

     - NOP handling refinements. (Borislav Petkov)

   * x86/mm changes:

     - Big PAT and MTRR rework: making the code more robust and
       preparing to phase out exposing direct MTRR interfaces to drivers -
       in favor of using PAT driven interfaces (Toshi Kani, Luis R
       Rodriguez, Borislav Petkov)

     - New ioremap_wt()/set_memory_wt() interfaces to support
       Write-Through cached memory mappings.  This is especially
       important for good performance on NVDIMM hardware (Toshi Kani)

   * x86/ras changes:

     - Add support for deferred errors on AMD (Aravind Gopalakrishnan)

       This is an important RAS feature which adds hardware support for
       poisoned data.  That means roughly that the hardware marks data
       which it has detected as corrupted but wasn't able to correct, as
       poisoned data and raises an APIC interrupt to signal that in the
       form of a deferred error.  It is the OS's responsibility then to
       take proper recovery action and thus prolong system lifetime as
       far as possible.

     - Add support for Intel "Local MCE"s: upcoming CPUs will support
       CPU-local MCE interrupts, as opposed to the traditional system-
       wide broadcasted MCE interrupts (Ashok Raj)

     - Misc cleanups (Borislav Petkov)

   * x86/platform changes:

     - Intel Atom SoC updates

  ... and lots of other cleanups, fixlets and other changes - see the
  shortlog and the Git log for details"

* 'x86-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (222 commits)
  x86/hpet: Use proper hpet device number for MSI allocation
  x86/hpet: Check for irq==0 when allocating hpet MSI interrupts
  x86/mm/pat, drivers/infiniband/ipath: Use arch_phys_wc_add() and require PAT disabled
  x86/mm/pat, drivers/media/ivtv: Use arch_phys_wc_add() and require PAT disabled
  x86/platform/intel/baytrail: Add comments about why we disabled HPET on Baytrail
  genirq: Prevent crash in irq_move_irq()
  genirq: Enhance irq_data_to_desc() to support hierarchy irqdomain
  iommu, x86: Properly handle posted interrupts for IOMMU hotplug
  iommu, x86: Provide irq_remapping_cap() interface
  iommu, x86: Setup Posted-Interrupts capability for Intel iommu
  iommu, x86: Add cap_pi_support() to detect VT-d PI capability
  iommu, x86: Avoid migrating VT-d posted interrupts
  iommu, x86: Save the mode (posted or remapped) of an IRTE
  iommu, x86: Implement irq_set_vcpu_affinity for intel_ir_chip
  iommu: dmar: Provide helper to copy shared irte fields
  iommu: dmar: Extend struct irte for VT-d Posted-Interrupts
  iommu: Add new member capability to struct irq_remap_ops
  x86/asm/entry/64: Disentangle error_entry/exit gsbase/ebx/usermode code
  x86/asm/entry/32: Shorten __audit_syscall_entry() args preparation
  x86/asm/entry/32: Explain reloading of registers after __audit_syscall_entry()
  ...
2015-06-22 17:59:09 -07:00
Joerg Roedel 5ffde2f671 Merge branches 'arm/rockchip', 'arm/exynos', 'arm/smmu', 'x86/vt-d', 'x86/amd', 'default-domains' and 'core' into next 2015-06-19 17:17:47 +02:00
Joerg Roedel 0b3fff54bc iommu/amd: Handle large pages correctly in free_pagetable
Make sure that we are skipping over large PTEs while walking
the page-table tree.

Cc: stable@kernel.org
Fixes: 5c34c403b7 ("iommu/amd: Fix memory leak in free_pagetable")
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-06-19 17:17:34 +02:00
Joerg Roedel 4d58b8a6de iommu/amd: Handle errors returned from iommu_init_device
Without this patch only -ENOTSUPP is handled, but there are
other possible errors. Handle them too.

Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-06-11 09:42:25 +02:00