Merge tag 'stable/for-linus-3.13-rc0-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip

Pull Xen updates from Konrad Rzeszutek Wilk:
 "This has tons of fixes and two major features which are concentrated
  around the Xen SWIOTLB library.

  The short <blurb> is that the tracing facility (just one function)
  has been added to SWIOTLB to make it easier to track I/O progress.
  Additionally under Xen and ARM (32 & 64) the Xen-SWIOTLB driver "is
  used to translate physical to machine and machine to physical
  addresses of foreign[guest] pages for DMA operations" (Stefano) when
  booting under hardware without proper IOMMU.

  There are also bug-fixes, cleanups, compile warning fixes, etc.

  The commit times for some of the commits is a bit fresh - that is b/c
  we wanted to make sure we have the Ack's from the ARM folks - which
  with the string of back-to-back conferences took a bit of time.
  Rest assured - the code has been stewing in #linux-next for some time.

  Features:
   - SWIOTLB has tracing added when doing bounce buffer.
   - Xen ARM/ARM64 can use Xen-SWIOTLB. This work allows Linux to
     safely program real devices for DMA operations when running as a
     guest on Xen on ARM, without IOMMU support. [*1]
   - xen_raw_printk works with PVHVM guests if needed.

  Bug-fixes:
   - Make memory ballooning work under HVM with large MMIO region.
   - Inform hypervisor of MCFG regions found in ACPI DSDT.
   - Remove deprecated IRQF_DISABLED.
   - Remove deprecated __cpuinit.

  [*1]: "On arm and arm64 all Xen guests, including dom0, run with
  second stage translation enabled. As a consequence when dom0 programs
  a device for a DMA operation is going to use (pseudo) physical
  addresses instead machine addresses. This work introduces two trees
  to track physical to machine and machine to physical mappings of
  foreign pages. Local pages are assumed mapped 1:1 (physical address ==
  machine address). It enables the SWIOTLB-Xen driver on ARM and ARM64,
  so that Linux can translate physical addresses to machine addresses
  for dma operations when necessary." (Stefano)"

* tag 'stable/for-linus-3.13-rc0-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip: (32 commits)
  xen/arm: pfn_to_mfn and mfn_to_pfn return the argument if nothing is in the p2m
  arm,arm64/include/asm/io.h: define struct bio_vec
  swiotlb-xen: missing include dma-direction.h
  pci-swiotlb-xen: call pci_request_acs only ifdef CONFIG_PCI
  arm: make SWIOTLB available
  xen: delete new instances of added __cpuinit
  xen/balloon: Set balloon's initial state to number of existing RAM pages
  xen/mcfg: Call PHYSDEVOP_pci_mmcfg_reserved for MCFG areas.
  xen: remove deprecated IRQF_DISABLED
  x86/xen: remove deprecated IRQF_DISABLED
  swiotlb-xen: fix error code returned by xen_swiotlb_map_sg_attrs
  swiotlb-xen: static inline xen_phys_to_bus, xen_bus_to_phys, xen_virt_to_bus and range_straddles_page_boundary
  grant-table: call set_phys_to_machine after mapping grant refs
  arm,arm64: do not always merge biovec if we are running on Xen
  swiotlb: print a warning when the swiotlb is full
  swiotlb-xen: use xen_dma_map/unmap_page, xen_dma_sync_single_for_cpu/device
  xen: introduce xen_dma_map/unmap_page and xen_dma_sync_single_for_cpu/device
  tracing/events: Fix swiotlb tracepoint creation
  swiotlb-xen: use xen_alloc/free_coherent_pages
  xen: introduce xen_alloc/free_coherent_pages
  ...
commit eda670c626
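The mechanism described in footnote [*1] shows up most compactly in the new ARM pfn_to_mfn() helper: foreign (grant-mapped) pages are looked up in an rbtree, and anything not found there falls back to the 1:1 local mapping. The following is a condensed excerpt of that helper from the arch/arm/include/asm/xen/page.h hunk in the diff below, with explanatory comments added; it is illustrative only, not a substitute for the full change.

static inline unsigned long pfn_to_mfn(unsigned long pfn)
{
	unsigned long mfn;

	if (phys_to_mach.rb_node != NULL) {	/* any foreign mappings tracked? */
		mfn = __pfn_to_mfn(pfn);	/* rbtree lookup, implemented in arch/arm/xen/p2m.c */
		if (mfn != INVALID_P2M_ENTRY)
			return mfn;
	}

	return pfn;	/* local pages are assumed mapped 1:1 */
}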
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -1862,6 +1862,12 @@ config CC_STACKPROTECTOR
 	  neutralized via a kernel panic.
 	  This feature requires gcc version 4.2 or above.
 
+config SWIOTLB
+	def_bool y
+
+config IOMMU_HELPER
+	def_bool SWIOTLB
+
 config XEN_DOM0
 	def_bool y
 	depends on XEN
@@ -1872,6 +1878,7 @@ config XEN
 	depends on CPU_V7 && !CPU_V6
 	depends on !GENERIC_ATOMIC64
 	select ARM_PSCI
+	select SWIOTLB_XEN
 	help
 	  Say Y if you want to run Linux in a Virtual Machine on Xen on ARM.
 
--- a/arch/arm/include/asm/dma-mapping.h
+++ b/arch/arm/include/asm/dma-mapping.h
@@ -11,17 +11,28 @@
 #include <asm-generic/dma-coherent.h>
 #include <asm/memory.h>
 
+#include <xen/xen.h>
+#include <asm/xen/hypervisor.h>
+
 #define DMA_ERROR_CODE (~0)
 extern struct dma_map_ops arm_dma_ops;
 extern struct dma_map_ops arm_coherent_dma_ops;
 
-static inline struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline struct dma_map_ops *__generic_dma_ops(struct device *dev)
 {
 	if (dev && dev->archdata.dma_ops)
 		return dev->archdata.dma_ops;
 	return &arm_dma_ops;
 }
 
+static inline struct dma_map_ops *get_dma_ops(struct device *dev)
+{
+	if (xen_initial_domain())
+		return xen_dma_ops;
+	else
+		return __generic_dma_ops(dev);
+}
+
 static inline void set_dma_ops(struct device *dev, struct dma_map_ops *ops)
 {
 	BUG_ON(!dev);
@@ -94,6 +105,39 @@ static inline unsigned long dma_max_pfn(struct device *dev)
 }
 #define dma_max_pfn(dev) dma_max_pfn(dev)
 
+static inline dma_addr_t phys_to_dma(struct device *dev, phys_addr_t paddr)
+{
+	unsigned int offset = paddr & ~PAGE_MASK;
+	return pfn_to_dma(dev, __phys_to_pfn(paddr)) + offset;
+}
+
+static inline phys_addr_t dma_to_phys(struct device *dev, dma_addr_t dev_addr)
+{
+	unsigned int offset = dev_addr & ~PAGE_MASK;
+	return __pfn_to_phys(dma_to_pfn(dev, dev_addr)) + offset;
+}
+
+static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t size)
+{
+	u64 limit, mask;
+
+	if (!dev->dma_mask)
+		return 0;
+
+	mask = *dev->dma_mask;
+
+	limit = (mask + 1) & ~mask;
+	if (limit && size > limit)
+		return 0;
+
+	if ((addr | (addr + size - 1)) & ~mask)
+		return 0;
+
+	return 1;
+}
+
+static inline void dma_mark_clean(void *addr, size_t size) { }
+
 /*
  * DMA errors are defined by all-bits-set in the DMA address.
  */
--- a/arch/arm/include/asm/io.h
+++ b/arch/arm/include/asm/io.h
@@ -24,9 +24,11 @@
 #ifdef __KERNEL__
 
 #include <linux/types.h>
+#include <linux/blk_types.h>
 #include <asm/byteorder.h>
 #include <asm/memory.h>
 #include <asm-generic/pci_iomap.h>
+#include <xen/xen.h>
 
 /*
  * ISA I/O bus memory addresses are 1:1 with the physical address.
@@ -372,6 +374,13 @@ extern void pci_iounmap(struct pci_dev *dev, void __iomem *addr);
 #define BIOVEC_MERGEABLE(vec1, vec2) \
 	((bvec_to_phys((vec1)) + (vec1)->bv_len) == bvec_to_phys((vec2)))
 
+struct bio_vec;
+extern bool xen_biovec_phys_mergeable(const struct bio_vec *vec1,
+				      const struct bio_vec *vec2);
+#define BIOVEC_PHYS_MERGEABLE(vec1, vec2) \
+	(__BIOVEC_PHYS_MERGEABLE(vec1, vec2) && \
+	 (!xen_domain() || xen_biovec_phys_mergeable(vec1, vec2)))
+
 #ifdef CONFIG_MMU
 #define ARCH_HAS_VALID_PHYS_ADDR_RANGE
 extern int valid_phys_addr_range(phys_addr_t addr, size_t size);
--- a/arch/arm/include/asm/xen/hypervisor.h
+++ b/arch/arm/include/asm/xen/hypervisor.h
@@ -16,4 +16,6 @@ static inline enum paravirt_lazy_mode paravirt_get_lazy_mode(void)
 	return PARAVIRT_LAZY_NONE;
 }
 
+extern struct dma_map_ops *xen_dma_ops;
+
 #endif /* _ASM_ARM_XEN_HYPERVISOR_H */
--- /dev/null
+++ b/arch/arm/include/asm/xen/page-coherent.h
@@ -0,0 +1,50 @@
+#ifndef _ASM_ARM_XEN_PAGE_COHERENT_H
+#define _ASM_ARM_XEN_PAGE_COHERENT_H
+
+#include <asm/page.h>
+#include <linux/dma-attrs.h>
+#include <linux/dma-mapping.h>
+
+static inline void *xen_alloc_coherent_pages(struct device *hwdev, size_t size,
+		dma_addr_t *dma_handle, gfp_t flags,
+		struct dma_attrs *attrs)
+{
+	return __generic_dma_ops(hwdev)->alloc(hwdev, size, dma_handle, flags, attrs);
+}
+
+static inline void xen_free_coherent_pages(struct device *hwdev, size_t size,
+		void *cpu_addr, dma_addr_t dma_handle,
+		struct dma_attrs *attrs)
+{
+	__generic_dma_ops(hwdev)->free(hwdev, size, cpu_addr, dma_handle, attrs);
+}
+
+static inline void xen_dma_map_page(struct device *hwdev, struct page *page,
+		unsigned long offset, size_t size, enum dma_data_direction dir,
+		struct dma_attrs *attrs)
+{
+	__generic_dma_ops(hwdev)->map_page(hwdev, page, offset, size, dir, attrs);
+}
+
+static inline void xen_dma_unmap_page(struct device *hwdev, dma_addr_t handle,
+		size_t size, enum dma_data_direction dir,
+		struct dma_attrs *attrs)
+{
+	if (__generic_dma_ops(hwdev)->unmap_page)
+		__generic_dma_ops(hwdev)->unmap_page(hwdev, handle, size, dir, attrs);
+}
+
+static inline void xen_dma_sync_single_for_cpu(struct device *hwdev,
+		dma_addr_t handle, size_t size, enum dma_data_direction dir)
+{
+	if (__generic_dma_ops(hwdev)->sync_single_for_cpu)
+		__generic_dma_ops(hwdev)->sync_single_for_cpu(hwdev, handle, size, dir);
+}
+
+static inline void xen_dma_sync_single_for_device(struct device *hwdev,
+		dma_addr_t handle, size_t size, enum dma_data_direction dir)
+{
+	if (__generic_dma_ops(hwdev)->sync_single_for_device)
+		__generic_dma_ops(hwdev)->sync_single_for_device(hwdev, handle, size, dir);
+}
+#endif /* _ASM_ARM_XEN_PAGE_COHERENT_H */
--- a/arch/arm/include/asm/xen/page.h
+++ b/arch/arm/include/asm/xen/page.h
@@ -6,12 +6,12 @@
 
 #include <linux/pfn.h>
 #include <linux/types.h>
+#include <linux/dma-mapping.h>
 
+#include <xen/xen.h>
 #include <xen/interface/grant_table.h>
 
-#define pfn_to_mfn(pfn) (pfn)
 #define phys_to_machine_mapping_valid(pfn) (1)
-#define mfn_to_pfn(mfn) (mfn)
 #define mfn_to_virt(m) (__va(mfn_to_pfn(m) << PAGE_SHIFT))
 
 #define pte_mfn pte_pfn
@@ -32,6 +32,38 @@ typedef struct xpaddr {
 
 #define INVALID_P2M_ENTRY (~0UL)
 
+unsigned long __pfn_to_mfn(unsigned long pfn);
+unsigned long __mfn_to_pfn(unsigned long mfn);
+extern struct rb_root phys_to_mach;
+
+static inline unsigned long pfn_to_mfn(unsigned long pfn)
+{
+	unsigned long mfn;
+
+	if (phys_to_mach.rb_node != NULL) {
+		mfn = __pfn_to_mfn(pfn);
+		if (mfn != INVALID_P2M_ENTRY)
+			return mfn;
+	}
+
+	return pfn;
+}
+
+static inline unsigned long mfn_to_pfn(unsigned long mfn)
+{
+	unsigned long pfn;
+
+	if (phys_to_mach.rb_node != NULL) {
+		pfn = __mfn_to_pfn(mfn);
+		if (pfn != INVALID_P2M_ENTRY)
+			return pfn;
+	}
+
+	return mfn;
+}
+
+#define mfn_to_local_pfn(mfn) mfn_to_pfn(mfn)
+
 static inline xmaddr_t phys_to_machine(xpaddr_t phys)
 {
 	unsigned offset = phys.paddr & ~PAGE_MASK;
@@ -76,11 +108,9 @@ static inline int m2p_remove_override(struct page *page, bool clear_pte)
 	return 0;
 }
 
-static inline bool __set_phys_to_machine(unsigned long pfn, unsigned long mfn)
-{
-	BUG_ON(pfn != mfn && mfn != INVALID_P2M_ENTRY);
-	return true;
-}
+bool __set_phys_to_machine(unsigned long pfn, unsigned long mfn);
+bool __set_phys_to_machine_multi(unsigned long pfn, unsigned long mfn,
+		unsigned long nr_pages);
 
 static inline bool set_phys_to_machine(unsigned long pfn, unsigned long mfn)
 {
--- a/arch/arm/xen/Makefile
+++ b/arch/arm/xen/Makefile
@@ -1 +1 @@
-obj-y := enlighten.o hypercall.o grant-table.o
+obj-y := enlighten.o hypercall.o grant-table.o p2m.o mm.o
--- /dev/null
+++ b/arch/arm/xen/mm.c
@@ -0,0 +1,65 @@
+#include <linux/bootmem.h>
+#include <linux/gfp.h>
+#include <linux/export.h>
+#include <linux/slab.h>
+#include <linux/types.h>
+#include <linux/dma-mapping.h>
+#include <linux/vmalloc.h>
+#include <linux/swiotlb.h>
+
+#include <xen/xen.h>
+#include <xen/interface/memory.h>
+#include <xen/swiotlb-xen.h>
+
+#include <asm/cacheflush.h>
+#include <asm/xen/page.h>
+#include <asm/xen/hypercall.h>
+#include <asm/xen/interface.h>
+
+int xen_create_contiguous_region(phys_addr_t pstart, unsigned int order,
+				 unsigned int address_bits,
+				 dma_addr_t *dma_handle)
+{
+	if (!xen_initial_domain())
+		return -EINVAL;
+
+	/* we assume that dom0 is mapped 1:1 for now */
+	*dma_handle = pstart;
+	return 0;
+}
+EXPORT_SYMBOL_GPL(xen_create_contiguous_region);
+
+void xen_destroy_contiguous_region(phys_addr_t pstart, unsigned int order)
+{
+	return;
+}
+EXPORT_SYMBOL_GPL(xen_destroy_contiguous_region);
+
+struct dma_map_ops *xen_dma_ops;
+EXPORT_SYMBOL_GPL(xen_dma_ops);
+
+static struct dma_map_ops xen_swiotlb_dma_ops = {
+	.mapping_error = xen_swiotlb_dma_mapping_error,
+	.alloc = xen_swiotlb_alloc_coherent,
+	.free = xen_swiotlb_free_coherent,
+	.sync_single_for_cpu = xen_swiotlb_sync_single_for_cpu,
+	.sync_single_for_device = xen_swiotlb_sync_single_for_device,
+	.sync_sg_for_cpu = xen_swiotlb_sync_sg_for_cpu,
+	.sync_sg_for_device = xen_swiotlb_sync_sg_for_device,
+	.map_sg = xen_swiotlb_map_sg_attrs,
+	.unmap_sg = xen_swiotlb_unmap_sg_attrs,
+	.map_page = xen_swiotlb_map_page,
+	.unmap_page = xen_swiotlb_unmap_page,
+	.dma_supported = xen_swiotlb_dma_supported,
+	.set_dma_mask = xen_swiotlb_set_dma_mask,
+};
+
+int __init xen_mm_init(void)
+{
+	if (!xen_initial_domain())
+		return 0;
+	xen_swiotlb_init(1, false);
+	xen_dma_ops = &xen_swiotlb_dma_ops;
+	return 0;
+}
+arch_initcall(xen_mm_init);
@ -0,0 +1,208 @@
|
||||||
|
#include <linux/bootmem.h>
|
||||||
|
#include <linux/gfp.h>
|
||||||
|
#include <linux/export.h>
|
||||||
|
#include <linux/rwlock.h>
|
||||||
|
#include <linux/slab.h>
|
||||||
|
#include <linux/types.h>
|
||||||
|
#include <linux/dma-mapping.h>
|
||||||
|
#include <linux/vmalloc.h>
|
||||||
|
#include <linux/swiotlb.h>
|
||||||
|
|
||||||
|
#include <xen/xen.h>
|
||||||
|
#include <xen/interface/memory.h>
|
||||||
|
#include <xen/swiotlb-xen.h>
|
||||||
|
|
||||||
|
#include <asm/cacheflush.h>
|
||||||
|
#include <asm/xen/page.h>
|
||||||
|
#include <asm/xen/hypercall.h>
|
||||||
|
#include <asm/xen/interface.h>
|
||||||
|
|
||||||
|
struct xen_p2m_entry {
|
||||||
|
unsigned long pfn;
|
||||||
|
unsigned long mfn;
|
||||||
|
unsigned long nr_pages;
|
||||||
|
struct rb_node rbnode_mach;
|
||||||
|
struct rb_node rbnode_phys;
|
||||||
|
};
|
||||||
|
|
||||||
|
rwlock_t p2m_lock;
|
||||||
|
struct rb_root phys_to_mach = RB_ROOT;
|
||||||
|
static struct rb_root mach_to_phys = RB_ROOT;
|
||||||
|
|
||||||
|
static int xen_add_phys_to_mach_entry(struct xen_p2m_entry *new)
|
||||||
|
{
|
||||||
|
struct rb_node **link = &phys_to_mach.rb_node;
|
||||||
|
struct rb_node *parent = NULL;
|
||||||
|
struct xen_p2m_entry *entry;
|
||||||
|
int rc = 0;
|
||||||
|
|
||||||
|
while (*link) {
|
||||||
|
parent = *link;
|
||||||
|
entry = rb_entry(parent, struct xen_p2m_entry, rbnode_phys);
|
||||||
|
|
||||||
|
if (new->mfn == entry->mfn)
|
||||||
|
goto err_out;
|
||||||
|
if (new->pfn == entry->pfn)
|
||||||
|
goto err_out;
|
||||||
|
|
||||||
|
if (new->pfn < entry->pfn)
|
||||||
|
link = &(*link)->rb_left;
|
||||||
|
else
|
||||||
|
link = &(*link)->rb_right;
|
||||||
|
}
|
||||||
|
rb_link_node(&new->rbnode_phys, parent, link);
|
||||||
|
rb_insert_color(&new->rbnode_phys, &phys_to_mach);
|
||||||
|
goto out;
|
||||||
|
|
||||||
|
err_out:
|
||||||
|
rc = -EINVAL;
|
||||||
|
pr_warn("%s: cannot add pfn=%pa -> mfn=%pa: pfn=%pa -> mfn=%pa already exists\n",
|
||||||
|
__func__, &new->pfn, &new->mfn, &entry->pfn, &entry->mfn);
|
||||||
|
out:
|
||||||
|
return rc;
|
||||||
|
}
|
||||||
|
|
||||||
|
unsigned long __pfn_to_mfn(unsigned long pfn)
|
||||||
|
{
|
||||||
|
struct rb_node *n = phys_to_mach.rb_node;
|
||||||
|
struct xen_p2m_entry *entry;
|
||||||
|
unsigned long irqflags;
|
||||||
|
|
||||||
|
read_lock_irqsave(&p2m_lock, irqflags);
|
||||||
|
while (n) {
|
||||||
|
entry = rb_entry(n, struct xen_p2m_entry, rbnode_phys);
|
||||||
|
if (entry->pfn <= pfn &&
|
||||||
|
entry->pfn + entry->nr_pages > pfn) {
|
||||||
|
read_unlock_irqrestore(&p2m_lock, irqflags);
|
||||||
|
return entry->mfn + (pfn - entry->pfn);
|
||||||
|
}
|
||||||
|
if (pfn < entry->pfn)
|
||||||
|
n = n->rb_left;
|
||||||
|
else
|
||||||
|
n = n->rb_right;
|
||||||
|
}
|
||||||
|
read_unlock_irqrestore(&p2m_lock, irqflags);
|
||||||
|
|
||||||
|
return INVALID_P2M_ENTRY;
|
||||||
|
}
|
||||||
|
EXPORT_SYMBOL_GPL(__pfn_to_mfn);
|
||||||
|
|
||||||
|
static int xen_add_mach_to_phys_entry(struct xen_p2m_entry *new)
|
||||||
|
{
|
||||||
|
struct rb_node **link = &mach_to_phys.rb_node;
|
||||||
|
struct rb_node *parent = NULL;
|
||||||
|
struct xen_p2m_entry *entry;
|
||||||
|
int rc = 0;
|
||||||
|
|
||||||
|
while (*link) {
|
||||||
|
parent = *link;
|
||||||
|
entry = rb_entry(parent, struct xen_p2m_entry, rbnode_mach);
|
||||||
|
|
||||||
|
if (new->mfn == entry->mfn)
|
||||||
|
goto err_out;
|
||||||
|
if (new->pfn == entry->pfn)
|
||||||
|
goto err_out;
|
||||||
|
|
||||||
|
if (new->mfn < entry->mfn)
|
||||||
|
link = &(*link)->rb_left;
|
||||||
|
else
|
||||||
|
link = &(*link)->rb_right;
|
||||||
|
}
|
||||||
|
rb_link_node(&new->rbnode_mach, parent, link);
|
||||||
|
rb_insert_color(&new->rbnode_mach, &mach_to_phys);
|
||||||
|
goto out;
|
||||||
|
|
||||||
|
err_out:
|
||||||
|
rc = -EINVAL;
|
||||||
|
pr_warn("%s: cannot add pfn=%pa -> mfn=%pa: pfn=%pa -> mfn=%pa already exists\n",
|
||||||
|
__func__, &new->pfn, &new->mfn, &entry->pfn, &entry->mfn);
|
||||||
|
out:
|
||||||
|
return rc;
|
||||||
|
}
|
||||||
|
|
||||||
|
unsigned long __mfn_to_pfn(unsigned long mfn)
|
||||||
|
{
|
||||||
|
struct rb_node *n = mach_to_phys.rb_node;
|
||||||
|
struct xen_p2m_entry *entry;
|
||||||
|
unsigned long irqflags;
|
||||||
|
|
||||||
|
read_lock_irqsave(&p2m_lock, irqflags);
|
||||||
|
while (n) {
|
||||||
|
entry = rb_entry(n, struct xen_p2m_entry, rbnode_mach);
|
||||||
|
if (entry->mfn <= mfn &&
|
||||||
|
entry->mfn + entry->nr_pages > mfn) {
|
||||||
|
read_unlock_irqrestore(&p2m_lock, irqflags);
|
||||||
|
return entry->pfn + (mfn - entry->mfn);
|
||||||
|
}
|
||||||
|
if (mfn < entry->mfn)
|
||||||
|
n = n->rb_left;
|
||||||
|
else
|
||||||
|
n = n->rb_right;
|
||||||
|
}
|
||||||
|
read_unlock_irqrestore(&p2m_lock, irqflags);
|
||||||
|
|
||||||
|
return INVALID_P2M_ENTRY;
|
||||||
|
}
|
||||||
|
EXPORT_SYMBOL_GPL(__mfn_to_pfn);
|
||||||
|
|
||||||
|
bool __set_phys_to_machine_multi(unsigned long pfn,
|
||||||
|
unsigned long mfn, unsigned long nr_pages)
|
||||||
|
{
|
||||||
|
int rc;
|
||||||
|
unsigned long irqflags;
|
||||||
|
struct xen_p2m_entry *p2m_entry;
|
||||||
|
struct rb_node *n = phys_to_mach.rb_node;
|
||||||
|
|
||||||
|
if (mfn == INVALID_P2M_ENTRY) {
|
||||||
|
write_lock_irqsave(&p2m_lock, irqflags);
|
||||||
|
while (n) {
|
||||||
|
p2m_entry = rb_entry(n, struct xen_p2m_entry, rbnode_phys);
|
||||||
|
if (p2m_entry->pfn <= pfn &&
|
||||||
|
p2m_entry->pfn + p2m_entry->nr_pages > pfn) {
|
||||||
|
rb_erase(&p2m_entry->rbnode_mach, &mach_to_phys);
|
||||||
|
rb_erase(&p2m_entry->rbnode_phys, &phys_to_mach);
|
||||||
|
write_unlock_irqrestore(&p2m_lock, irqflags);
|
||||||
|
kfree(p2m_entry);
|
||||||
|
return true;
|
||||||
|
}
|
||||||
|
if (pfn < p2m_entry->pfn)
|
||||||
|
n = n->rb_left;
|
||||||
|
else
|
||||||
|
n = n->rb_right;
|
||||||
|
}
|
||||||
|
write_unlock_irqrestore(&p2m_lock, irqflags);
|
||||||
|
return true;
|
||||||
|
}
|
||||||
|
|
||||||
|
p2m_entry = kzalloc(sizeof(struct xen_p2m_entry), GFP_NOWAIT);
|
||||||
|
if (!p2m_entry) {
|
||||||
|
pr_warn("cannot allocate xen_p2m_entry\n");
|
||||||
|
return false;
|
||||||
|
}
|
||||||
|
p2m_entry->pfn = pfn;
|
||||||
|
p2m_entry->nr_pages = nr_pages;
|
||||||
|
p2m_entry->mfn = mfn;
|
||||||
|
|
||||||
|
write_lock_irqsave(&p2m_lock, irqflags);
|
||||||
|
if ((rc = xen_add_phys_to_mach_entry(p2m_entry) < 0) ||
|
||||||
|
(rc = xen_add_mach_to_phys_entry(p2m_entry) < 0)) {
|
||||||
|
write_unlock_irqrestore(&p2m_lock, irqflags);
|
||||||
|
return false;
|
||||||
|
}
|
||||||
|
write_unlock_irqrestore(&p2m_lock, irqflags);
|
||||||
|
return true;
|
||||||
|
}
|
||||||
|
EXPORT_SYMBOL_GPL(__set_phys_to_machine_multi);
|
||||||
|
|
||||||
|
bool __set_phys_to_machine(unsigned long pfn, unsigned long mfn)
|
||||||
|
{
|
||||||
|
return __set_phys_to_machine_multi(pfn, mfn, 1);
|
||||||
|
}
|
||||||
|
EXPORT_SYMBOL_GPL(__set_phys_to_machine);
|
||||||
|
|
||||||
|
int p2m_init(void)
|
||||||
|
{
|
||||||
|
rwlock_init(&p2m_lock);
|
||||||
|
return 0;
|
||||||
|
}
|
||||||
|
arch_initcall(p2m_init);
|
|
@ -220,6 +220,7 @@ config XEN_DOM0
|
||||||
config XEN
|
config XEN
|
||||||
bool "Xen guest support on ARM64 (EXPERIMENTAL)"
|
bool "Xen guest support on ARM64 (EXPERIMENTAL)"
|
||||||
depends on ARM64 && OF
|
depends on ARM64 && OF
|
||||||
|
select SWIOTLB_XEN
|
||||||
help
|
help
|
||||||
Say Y if you want to run Linux in a Virtual Machine on Xen on ARM64.
|
Say Y if you want to run Linux in a Virtual Machine on Xen on ARM64.
|
||||||
|
|
||||||
|
|
|
@ -23,11 +23,15 @@
|
||||||
|
|
||||||
#include <asm-generic/dma-coherent.h>
|
#include <asm-generic/dma-coherent.h>
|
||||||
|
|
||||||
|
#include <xen/xen.h>
|
||||||
|
#include <asm/xen/hypervisor.h>
|
||||||
|
|
||||||
#define ARCH_HAS_DMA_GET_REQUIRED_MASK
|
#define ARCH_HAS_DMA_GET_REQUIRED_MASK
|
||||||
|
|
||||||
|
#define DMA_ERROR_CODE (~(dma_addr_t)0)
|
||||||
extern struct dma_map_ops *dma_ops;
|
extern struct dma_map_ops *dma_ops;
|
||||||
|
|
||||||
static inline struct dma_map_ops *get_dma_ops(struct device *dev)
|
static inline struct dma_map_ops *__generic_dma_ops(struct device *dev)
|
||||||
{
|
{
|
||||||
if (unlikely(!dev) || !dev->archdata.dma_ops)
|
if (unlikely(!dev) || !dev->archdata.dma_ops)
|
||||||
return dma_ops;
|
return dma_ops;
|
||||||
|
@ -35,6 +39,14 @@ static inline struct dma_map_ops *get_dma_ops(struct device *dev)
|
||||||
return dev->archdata.dma_ops;
|
return dev->archdata.dma_ops;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
static inline struct dma_map_ops *get_dma_ops(struct device *dev)
|
||||||
|
{
|
||||||
|
if (xen_initial_domain())
|
||||||
|
return xen_dma_ops;
|
||||||
|
else
|
||||||
|
return __generic_dma_ops(dev);
|
||||||
|
}
|
||||||
|
|
||||||
#include <asm-generic/dma-mapping-common.h>
|
#include <asm-generic/dma-mapping-common.h>
|
||||||
|
|
||||||
static inline dma_addr_t phys_to_dma(struct device *dev, phys_addr_t paddr)
|
static inline dma_addr_t phys_to_dma(struct device *dev, phys_addr_t paddr)
|
||||||
|
|
|
@ -22,11 +22,14 @@
|
||||||
#ifdef __KERNEL__
|
#ifdef __KERNEL__
|
||||||
|
|
||||||
#include <linux/types.h>
|
#include <linux/types.h>
|
||||||
|
#include <linux/blk_types.h>
|
||||||
|
|
||||||
#include <asm/byteorder.h>
|
#include <asm/byteorder.h>
|
||||||
#include <asm/barrier.h>
|
#include <asm/barrier.h>
|
||||||
#include <asm/pgtable.h>
|
#include <asm/pgtable.h>
|
||||||
|
|
||||||
|
#include <xen/xen.h>
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* Generic IO read/write. These perform native-endian accesses.
|
* Generic IO read/write. These perform native-endian accesses.
|
||||||
*/
|
*/
|
||||||
|
@ -263,5 +266,12 @@ extern int devmem_is_allowed(unsigned long pfn);
|
||||||
*/
|
*/
|
||||||
#define xlate_dev_kmem_ptr(p) p
|
#define xlate_dev_kmem_ptr(p) p
|
||||||
|
|
||||||
|
struct bio_vec;
|
||||||
|
extern bool xen_biovec_phys_mergeable(const struct bio_vec *vec1,
|
||||||
|
const struct bio_vec *vec2);
|
||||||
|
#define BIOVEC_PHYS_MERGEABLE(vec1, vec2) \
|
||||||
|
(__BIOVEC_PHYS_MERGEABLE(vec1, vec2) && \
|
||||||
|
(!xen_domain() || xen_biovec_phys_mergeable(vec1, vec2)))
|
||||||
|
|
||||||
#endif /* __KERNEL__ */
|
#endif /* __KERNEL__ */
|
||||||
#endif /* __ASM_IO_H */
|
#endif /* __ASM_IO_H */
|
||||||
|
|
|
@ -0,0 +1,47 @@
|
||||||
|
#ifndef _ASM_ARM64_XEN_PAGE_COHERENT_H
|
||||||
|
#define _ASM_ARM64_XEN_PAGE_COHERENT_H
|
||||||
|
|
||||||
|
#include <asm/page.h>
|
||||||
|
#include <linux/dma-attrs.h>
|
||||||
|
#include <linux/dma-mapping.h>
|
||||||
|
|
||||||
|
static inline void *xen_alloc_coherent_pages(struct device *hwdev, size_t size,
|
||||||
|
dma_addr_t *dma_handle, gfp_t flags,
|
||||||
|
struct dma_attrs *attrs)
|
||||||
|
{
|
||||||
|
return __generic_dma_ops(hwdev)->alloc(hwdev, size, dma_handle, flags, attrs);
|
||||||
|
}
|
||||||
|
|
||||||
|
static inline void xen_free_coherent_pages(struct device *hwdev, size_t size,
|
||||||
|
void *cpu_addr, dma_addr_t dma_handle,
|
||||||
|
struct dma_attrs *attrs)
|
||||||
|
{
|
||||||
|
__generic_dma_ops(hwdev)->free(hwdev, size, cpu_addr, dma_handle, attrs);
|
||||||
|
}
|
||||||
|
|
||||||
|
static inline void xen_dma_map_page(struct device *hwdev, struct page *page,
|
||||||
|
unsigned long offset, size_t size, enum dma_data_direction dir,
|
||||||
|
struct dma_attrs *attrs)
|
||||||
|
{
|
||||||
|
__generic_dma_ops(hwdev)->map_page(hwdev, page, offset, size, dir, attrs);
|
||||||
|
}
|
||||||
|
|
||||||
|
static inline void xen_dma_unmap_page(struct device *hwdev, dma_addr_t handle,
|
||||||
|
size_t size, enum dma_data_direction dir,
|
||||||
|
struct dma_attrs *attrs)
|
||||||
|
{
|
||||||
|
__generic_dma_ops(hwdev)->unmap_page(hwdev, handle, size, dir, attrs);
|
||||||
|
}
|
||||||
|
|
||||||
|
static inline void xen_dma_sync_single_for_cpu(struct device *hwdev,
|
||||||
|
dma_addr_t handle, size_t size, enum dma_data_direction dir)
|
||||||
|
{
|
||||||
|
__generic_dma_ops(hwdev)->sync_single_for_cpu(hwdev, handle, size, dir);
|
||||||
|
}
|
||||||
|
|
||||||
|
static inline void xen_dma_sync_single_for_device(struct device *hwdev,
|
||||||
|
dma_addr_t handle, size_t size, enum dma_data_direction dir)
|
||||||
|
{
|
||||||
|
__generic_dma_ops(hwdev)->sync_single_for_device(hwdev, handle, size, dir);
|
||||||
|
}
|
||||||
|
#endif /* _ASM_ARM64_XEN_PAGE_COHERENT_H */
|
|
@ -1,2 +1,2 @@
|
||||||
xen-arm-y += $(addprefix ../../arm/xen/, enlighten.o grant-table.o)
|
xen-arm-y += $(addprefix ../../arm/xen/, enlighten.o grant-table.o p2m.o mm.o)
|
||||||
obj-y := xen-arm.o hypercall.o
|
obj-y := xen-arm.o hypercall.o
|
||||||
|
|
|
@ -0,0 +1,38 @@
|
||||||
|
#ifndef _ASM_IA64_XEN_PAGE_COHERENT_H
|
||||||
|
#define _ASM_IA64_XEN_PAGE_COHERENT_H
|
||||||
|
|
||||||
|
#include <asm/page.h>
|
||||||
|
#include <linux/dma-attrs.h>
|
||||||
|
#include <linux/dma-mapping.h>
|
||||||
|
|
||||||
|
static inline void *xen_alloc_coherent_pages(struct device *hwdev, size_t size,
|
||||||
|
dma_addr_t *dma_handle, gfp_t flags,
|
||||||
|
struct dma_attrs *attrs)
|
||||||
|
{
|
||||||
|
void *vstart = (void*)__get_free_pages(flags, get_order(size));
|
||||||
|
*dma_handle = virt_to_phys(vstart);
|
||||||
|
return vstart;
|
||||||
|
}
|
||||||
|
|
||||||
|
static inline void xen_free_coherent_pages(struct device *hwdev, size_t size,
|
||||||
|
void *cpu_addr, dma_addr_t dma_handle,
|
||||||
|
struct dma_attrs *attrs)
|
||||||
|
{
|
||||||
|
free_pages((unsigned long) cpu_addr, get_order(size));
|
||||||
|
}
|
||||||
|
|
||||||
|
static inline void xen_dma_map_page(struct device *hwdev, struct page *page,
|
||||||
|
unsigned long offset, size_t size, enum dma_data_direction dir,
|
||||||
|
struct dma_attrs *attrs) { }
|
||||||
|
|
||||||
|
static inline void xen_dma_unmap_page(struct device *hwdev, dma_addr_t handle,
|
||||||
|
size_t size, enum dma_data_direction dir,
|
||||||
|
struct dma_attrs *attrs) { }
|
||||||
|
|
||||||
|
static inline void xen_dma_sync_single_for_cpu(struct device *hwdev,
|
||||||
|
dma_addr_t handle, size_t size, enum dma_data_direction dir) { }
|
||||||
|
|
||||||
|
static inline void xen_dma_sync_single_for_device(struct device *hwdev,
|
||||||
|
dma_addr_t handle, size_t size, enum dma_data_direction dir) { }
|
||||||
|
|
||||||
|
#endif /* _ASM_IA64_XEN_PAGE_COHERENT_H */
|
|
@ -0,0 +1,38 @@
|
||||||
|
#ifndef _ASM_X86_XEN_PAGE_COHERENT_H
|
||||||
|
#define _ASM_X86_XEN_PAGE_COHERENT_H
|
||||||
|
|
||||||
|
#include <asm/page.h>
|
||||||
|
#include <linux/dma-attrs.h>
|
||||||
|
#include <linux/dma-mapping.h>
|
||||||
|
|
||||||
|
static inline void *xen_alloc_coherent_pages(struct device *hwdev, size_t size,
|
||||||
|
dma_addr_t *dma_handle, gfp_t flags,
|
||||||
|
struct dma_attrs *attrs)
|
||||||
|
{
|
||||||
|
void *vstart = (void*)__get_free_pages(flags, get_order(size));
|
||||||
|
*dma_handle = virt_to_phys(vstart);
|
||||||
|
return vstart;
|
||||||
|
}
|
||||||
|
|
||||||
|
static inline void xen_free_coherent_pages(struct device *hwdev, size_t size,
|
||||||
|
void *cpu_addr, dma_addr_t dma_handle,
|
||||||
|
struct dma_attrs *attrs)
|
||||||
|
{
|
||||||
|
free_pages((unsigned long) cpu_addr, get_order(size));
|
||||||
|
}
|
||||||
|
|
||||||
|
static inline void xen_dma_map_page(struct device *hwdev, struct page *page,
|
||||||
|
unsigned long offset, size_t size, enum dma_data_direction dir,
|
||||||
|
struct dma_attrs *attrs) { }
|
||||||
|
|
||||||
|
static inline void xen_dma_unmap_page(struct device *hwdev, dma_addr_t handle,
|
||||||
|
size_t size, enum dma_data_direction dir,
|
||||||
|
struct dma_attrs *attrs) { }
|
||||||
|
|
||||||
|
static inline void xen_dma_sync_single_for_cpu(struct device *hwdev,
|
||||||
|
dma_addr_t handle, size_t size, enum dma_data_direction dir) { }
|
||||||
|
|
||||||
|
static inline void xen_dma_sync_single_for_device(struct device *hwdev,
|
||||||
|
dma_addr_t handle, size_t size, enum dma_data_direction dir) { }
|
||||||
|
|
||||||
|
#endif /* _ASM_X86_XEN_PAGE_COHERENT_H */
|
|
@ -468,8 +468,8 @@ PV_CALLEE_SAVE_REGS_THUNK(xen_pgd_val);
|
||||||
* 3 PCD PWT UC UC UC
|
* 3 PCD PWT UC UC UC
|
||||||
* 4 PAT WB WC WB
|
* 4 PAT WB WC WB
|
||||||
* 5 PAT PWT WC WP WT
|
* 5 PAT PWT WC WP WT
|
||||||
* 6 PAT PCD UC- UC UC-
|
* 6 PAT PCD UC- rsv UC-
|
||||||
* 7 PAT PCD PWT UC UC UC
|
* 7 PAT PCD PWT UC rsv UC
|
||||||
*/
|
*/
|
||||||
|
|
||||||
void xen_set_pat(u64 pat)
|
void xen_set_pat(u64 pat)
|
||||||
|
@ -2328,12 +2328,14 @@ static int xen_exchange_memory(unsigned long extents_in, unsigned int order_in,
|
||||||
return success;
|
return success;
|
||||||
}
|
}
|
||||||
|
|
||||||
int xen_create_contiguous_region(unsigned long vstart, unsigned int order,
|
int xen_create_contiguous_region(phys_addr_t pstart, unsigned int order,
|
||||||
unsigned int address_bits)
|
unsigned int address_bits,
|
||||||
|
dma_addr_t *dma_handle)
|
||||||
{
|
{
|
||||||
unsigned long *in_frames = discontig_frames, out_frame;
|
unsigned long *in_frames = discontig_frames, out_frame;
|
||||||
unsigned long flags;
|
unsigned long flags;
|
||||||
int success;
|
int success;
|
||||||
|
unsigned long vstart = (unsigned long)phys_to_virt(pstart);
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* Currently an auto-translated guest will not perform I/O, nor will
|
* Currently an auto-translated guest will not perform I/O, nor will
|
||||||
|
@ -2368,15 +2370,17 @@ int xen_create_contiguous_region(unsigned long vstart, unsigned int order,
|
||||||
|
|
||||||
spin_unlock_irqrestore(&xen_reservation_lock, flags);
|
spin_unlock_irqrestore(&xen_reservation_lock, flags);
|
||||||
|
|
||||||
|
*dma_handle = virt_to_machine(vstart).maddr;
|
||||||
return success ? 0 : -ENOMEM;
|
return success ? 0 : -ENOMEM;
|
||||||
}
|
}
|
||||||
EXPORT_SYMBOL_GPL(xen_create_contiguous_region);
|
EXPORT_SYMBOL_GPL(xen_create_contiguous_region);
|
||||||
|
|
||||||
void xen_destroy_contiguous_region(unsigned long vstart, unsigned int order)
|
void xen_destroy_contiguous_region(phys_addr_t pstart, unsigned int order)
|
||||||
{
|
{
|
||||||
unsigned long *out_frames = discontig_frames, in_frame;
|
unsigned long *out_frames = discontig_frames, in_frame;
|
||||||
unsigned long flags;
|
unsigned long flags;
|
||||||
int success;
|
int success;
|
||||||
|
unsigned long vstart;
|
||||||
|
|
||||||
if (xen_feature(XENFEAT_auto_translated_physmap))
|
if (xen_feature(XENFEAT_auto_translated_physmap))
|
||||||
return;
|
return;
|
||||||
|
@ -2384,6 +2388,7 @@ void xen_destroy_contiguous_region(unsigned long vstart, unsigned int order)
|
||||||
if (unlikely(order > MAX_CONTIG_ORDER))
|
if (unlikely(order > MAX_CONTIG_ORDER))
|
||||||
return;
|
return;
|
||||||
|
|
||||||
|
vstart = (unsigned long)phys_to_virt(pstart);
|
||||||
memset((void *) vstart, 0, PAGE_SIZE << order);
|
memset((void *) vstart, 0, PAGE_SIZE << order);
|
||||||
|
|
||||||
spin_lock_irqsave(&xen_reservation_lock, flags);
|
spin_lock_irqsave(&xen_reservation_lock, flags);
|
||||||
|
|
|
@ -799,10 +799,10 @@ bool __set_phys_to_machine(unsigned long pfn, unsigned long mfn)
|
||||||
{
|
{
|
||||||
unsigned topidx, mididx, idx;
|
unsigned topidx, mididx, idx;
|
||||||
|
|
||||||
if (unlikely(xen_feature(XENFEAT_auto_translated_physmap))) {
|
/* don't track P2M changes in autotranslate guests */
|
||||||
BUG_ON(pfn != mfn && mfn != INVALID_P2M_ENTRY);
|
if (unlikely(xen_feature(XENFEAT_auto_translated_physmap)))
|
||||||
return true;
|
return true;
|
||||||
}
|
|
||||||
if (unlikely(pfn >= MAX_P2M_PFN)) {
|
if (unlikely(pfn >= MAX_P2M_PFN)) {
|
||||||
BUG_ON(mfn != INVALID_P2M_ENTRY);
|
BUG_ON(mfn != INVALID_P2M_ENTRY);
|
||||||
return true;
|
return true;
|
||||||
|
|
|
@ -75,8 +75,10 @@ void __init pci_xen_swiotlb_init(void)
|
||||||
xen_swiotlb_init(1, true /* early */);
|
xen_swiotlb_init(1, true /* early */);
|
||||||
dma_ops = &xen_swiotlb_dma_ops;
|
dma_ops = &xen_swiotlb_dma_ops;
|
||||||
|
|
||||||
|
#ifdef CONFIG_PCI
|
||||||
/* Make sure ACS will be enabled */
|
/* Make sure ACS will be enabled */
|
||||||
pci_request_acs();
|
pci_request_acs();
|
||||||
|
#endif
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
@ -92,8 +94,10 @@ int pci_xen_swiotlb_init_late(void)
|
||||||
return rc;
|
return rc;
|
||||||
|
|
||||||
dma_ops = &xen_swiotlb_dma_ops;
|
dma_ops = &xen_swiotlb_dma_ops;
|
||||||
|
#ifdef CONFIG_PCI
|
||||||
/* Make sure ACS will be enabled */
|
/* Make sure ACS will be enabled */
|
||||||
pci_request_acs();
|
pci_request_acs();
|
||||||
|
#endif
|
||||||
|
|
||||||
return 0;
|
return 0;
|
||||||
}
|
}
|
||||||
|
|
|
@ -556,7 +556,7 @@ void xen_enable_syscall(void)
|
||||||
}
|
}
|
||||||
#endif /* CONFIG_X86_64 */
|
#endif /* CONFIG_X86_64 */
|
||||||
}
|
}
|
||||||
void __cpuinit xen_enable_nmi(void)
|
void xen_enable_nmi(void)
|
||||||
{
|
{
|
||||||
#ifdef CONFIG_X86_64
|
#ifdef CONFIG_X86_64
|
||||||
if (register_callback(CALLBACKTYPE_nmi, nmi))
|
if (register_callback(CALLBACKTYPE_nmi, nmi))
|
||||||
|
|
|
@ -149,7 +149,7 @@ static int xen_smp_intr_init(unsigned int cpu)
|
||||||
rc = bind_ipi_to_irqhandler(XEN_RESCHEDULE_VECTOR,
|
rc = bind_ipi_to_irqhandler(XEN_RESCHEDULE_VECTOR,
|
||||||
cpu,
|
cpu,
|
||||||
xen_reschedule_interrupt,
|
xen_reschedule_interrupt,
|
||||||
IRQF_DISABLED|IRQF_PERCPU|IRQF_NOBALANCING,
|
IRQF_PERCPU|IRQF_NOBALANCING,
|
||||||
resched_name,
|
resched_name,
|
||||||
NULL);
|
NULL);
|
||||||
if (rc < 0)
|
if (rc < 0)
|
||||||
|
@ -161,7 +161,7 @@ static int xen_smp_intr_init(unsigned int cpu)
|
||||||
rc = bind_ipi_to_irqhandler(XEN_CALL_FUNCTION_VECTOR,
|
rc = bind_ipi_to_irqhandler(XEN_CALL_FUNCTION_VECTOR,
|
||||||
cpu,
|
cpu,
|
||||||
xen_call_function_interrupt,
|
xen_call_function_interrupt,
|
||||||
IRQF_DISABLED|IRQF_PERCPU|IRQF_NOBALANCING,
|
IRQF_PERCPU|IRQF_NOBALANCING,
|
||||||
callfunc_name,
|
callfunc_name,
|
||||||
NULL);
|
NULL);
|
||||||
if (rc < 0)
|
if (rc < 0)
|
||||||
|
@ -171,7 +171,7 @@ static int xen_smp_intr_init(unsigned int cpu)
|
||||||
|
|
||||||
debug_name = kasprintf(GFP_KERNEL, "debug%d", cpu);
|
debug_name = kasprintf(GFP_KERNEL, "debug%d", cpu);
|
||||||
rc = bind_virq_to_irqhandler(VIRQ_DEBUG, cpu, xen_debug_interrupt,
|
rc = bind_virq_to_irqhandler(VIRQ_DEBUG, cpu, xen_debug_interrupt,
|
||||||
IRQF_DISABLED | IRQF_PERCPU | IRQF_NOBALANCING,
|
IRQF_PERCPU | IRQF_NOBALANCING,
|
||||||
debug_name, NULL);
|
debug_name, NULL);
|
||||||
if (rc < 0)
|
if (rc < 0)
|
||||||
goto fail;
|
goto fail;
|
||||||
|
@ -182,7 +182,7 @@ static int xen_smp_intr_init(unsigned int cpu)
|
||||||
rc = bind_ipi_to_irqhandler(XEN_CALL_FUNCTION_SINGLE_VECTOR,
|
rc = bind_ipi_to_irqhandler(XEN_CALL_FUNCTION_SINGLE_VECTOR,
|
||||||
cpu,
|
cpu,
|
||||||
xen_call_function_single_interrupt,
|
xen_call_function_single_interrupt,
|
||||||
IRQF_DISABLED|IRQF_PERCPU|IRQF_NOBALANCING,
|
IRQF_PERCPU|IRQF_NOBALANCING,
|
||||||
callfunc_name,
|
callfunc_name,
|
||||||
NULL);
|
NULL);
|
||||||
if (rc < 0)
|
if (rc < 0)
|
||||||
|
@ -201,7 +201,7 @@ static int xen_smp_intr_init(unsigned int cpu)
|
||||||
rc = bind_ipi_to_irqhandler(XEN_IRQ_WORK_VECTOR,
|
rc = bind_ipi_to_irqhandler(XEN_IRQ_WORK_VECTOR,
|
||||||
cpu,
|
cpu,
|
||||||
xen_irq_work_interrupt,
|
xen_irq_work_interrupt,
|
||||||
IRQF_DISABLED|IRQF_PERCPU|IRQF_NOBALANCING,
|
IRQF_PERCPU|IRQF_NOBALANCING,
|
||||||
callfunc_name,
|
callfunc_name,
|
||||||
NULL);
|
NULL);
|
||||||
if (rc < 0)
|
if (rc < 0)
|
||||||
|
|
|
@ -234,7 +234,7 @@ void xen_init_lock_cpu(int cpu)
|
||||||
irq = bind_ipi_to_irqhandler(XEN_SPIN_UNLOCK_VECTOR,
|
irq = bind_ipi_to_irqhandler(XEN_SPIN_UNLOCK_VECTOR,
|
||||||
cpu,
|
cpu,
|
||||||
dummy_handler,
|
dummy_handler,
|
||||||
IRQF_DISABLED|IRQF_PERCPU|IRQF_NOBALANCING,
|
IRQF_PERCPU|IRQF_NOBALANCING,
|
||||||
name,
|
name,
|
||||||
NULL);
|
NULL);
|
||||||
|
|
||||||
|
|
|
@ -443,8 +443,7 @@ void xen_setup_timer(int cpu)
|
||||||
name = "<timer kasprintf failed>";
|
name = "<timer kasprintf failed>";
|
||||||
|
|
||||||
irq = bind_virq_to_irqhandler(VIRQ_TIMER, cpu, xen_timer_interrupt,
|
irq = bind_virq_to_irqhandler(VIRQ_TIMER, cpu, xen_timer_interrupt,
|
||||||
IRQF_DISABLED|IRQF_PERCPU|
|
IRQF_PERCPU|IRQF_NOBALANCING|IRQF_TIMER|
|
||||||
IRQF_NOBALANCING|IRQF_TIMER|
|
|
||||||
IRQF_FORCE_RESUME,
|
IRQF_FORCE_RESUME,
|
||||||
name, NULL);
|
name, NULL);
|
||||||
|
|
||||||
|
|
|
@ -183,7 +183,7 @@ static int dom0_write_console(uint32_t vtermno, const char *str, int len)
|
||||||
{
|
{
|
||||||
int rc = HYPERVISOR_console_io(CONSOLEIO_write, len, (char *)str);
|
int rc = HYPERVISOR_console_io(CONSOLEIO_write, len, (char *)str);
|
||||||
if (rc < 0)
|
if (rc < 0)
|
||||||
return 0;
|
return rc;
|
||||||
|
|
||||||
return len;
|
return len;
|
||||||
}
|
}
|
||||||
|
@ -642,7 +642,22 @@ struct console xenboot_console = {
|
||||||
|
|
||||||
void xen_raw_console_write(const char *str)
|
void xen_raw_console_write(const char *str)
|
||||||
{
|
{
|
||||||
dom0_write_console(0, str, strlen(str));
|
ssize_t len = strlen(str);
|
||||||
|
int rc = 0;
|
||||||
|
|
||||||
|
if (xen_domain()) {
|
||||||
|
rc = dom0_write_console(0, str, len);
|
||||||
|
#ifdef CONFIG_X86
|
||||||
|
if (rc == -ENOSYS && xen_hvm_domain())
|
||||||
|
goto outb_print;
|
||||||
|
|
||||||
|
} else if (xen_cpuid_base()) {
|
||||||
|
int i;
|
||||||
|
outb_print:
|
||||||
|
for (i = 0; i < len; i++)
|
||||||
|
outb(str[i], 0xe9);
|
||||||
|
#endif
|
||||||
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
void xen_raw_printk(const char *fmt, ...)
|
void xen_raw_printk(const char *fmt, ...)
|
||||||
|
|
|
@ -140,7 +140,6 @@ config XEN_GRANT_DEV_ALLOC
|
||||||
|
|
||||||
config SWIOTLB_XEN
|
config SWIOTLB_XEN
|
||||||
def_bool y
|
def_bool y
|
||||||
depends on PCI && X86
|
|
||||||
select SWIOTLB
|
select SWIOTLB
|
||||||
|
|
||||||
config XEN_TMEM
|
config XEN_TMEM
|
||||||
|
|
|
@ -596,7 +596,7 @@ static void __init balloon_add_region(unsigned long start_pfn,
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
static int __cpuinit balloon_cpu_notify(struct notifier_block *self,
|
static int balloon_cpu_notify(struct notifier_block *self,
|
||||||
unsigned long action, void *hcpu)
|
unsigned long action, void *hcpu)
|
||||||
{
|
{
|
||||||
int cpu = (long)hcpu;
|
int cpu = (long)hcpu;
|
||||||
|
@ -616,7 +616,7 @@ static int __cpuinit balloon_cpu_notify(struct notifier_block *self,
|
||||||
return NOTIFY_OK;
|
return NOTIFY_OK;
|
||||||
}
|
}
|
||||||
|
|
||||||
static struct notifier_block balloon_cpu_notifier __cpuinitdata = {
|
static struct notifier_block balloon_cpu_notifier = {
|
||||||
.notifier_call = balloon_cpu_notify,
|
.notifier_call = balloon_cpu_notify,
|
||||||
};
|
};
|
||||||
|
|
||||||
|
@ -641,7 +641,7 @@ static int __init balloon_init(void)
|
||||||
|
|
||||||
balloon_stats.current_pages = xen_pv_domain()
|
balloon_stats.current_pages = xen_pv_domain()
|
||||||
? min(xen_start_info->nr_pages - xen_released_pages, max_pfn)
|
? min(xen_start_info->nr_pages - xen_released_pages, max_pfn)
|
||||||
: max_pfn;
|
: get_num_physpages();
|
||||||
balloon_stats.target_pages = balloon_stats.current_pages;
|
balloon_stats.target_pages = balloon_stats.current_pages;
|
||||||
balloon_stats.balloon_low = 0;
|
balloon_stats.balloon_low = 0;
|
||||||
balloon_stats.balloon_high = 0;
|
balloon_stats.balloon_high = 0;
|
||||||
|
|
|
@ -305,7 +305,7 @@ static int evtchn_bind_to_user(struct per_user_data *u, int port)
|
||||||
if (rc < 0)
|
if (rc < 0)
|
||||||
goto err;
|
goto err;
|
||||||
|
|
||||||
rc = bind_evtchn_to_irqhandler(port, evtchn_interrupt, IRQF_DISABLED,
|
rc = bind_evtchn_to_irqhandler(port, evtchn_interrupt, 0,
|
||||||
u->name, evtchn);
|
u->name, evtchn);
|
||||||
if (rc < 0)
|
if (rc < 0)
|
||||||
goto err;
|
goto err;
|
||||||
|
|
|
@ -49,6 +49,7 @@
|
||||||
#include <xen/grant_table.h>
|
#include <xen/grant_table.h>
|
||||||
#include <xen/interface/memory.h>
|
#include <xen/interface/memory.h>
|
||||||
#include <xen/hvc-console.h>
|
#include <xen/hvc-console.h>
|
||||||
|
#include <xen/swiotlb-xen.h>
|
||||||
#include <asm/xen/hypercall.h>
|
#include <asm/xen/hypercall.h>
|
||||||
#include <asm/xen/interface.h>
|
#include <asm/xen/interface.h>
|
||||||
|
|
||||||
|
@ -898,8 +899,16 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
|
||||||
gnttab_retry_eagain_gop(GNTTABOP_map_grant_ref, map_ops + i,
|
gnttab_retry_eagain_gop(GNTTABOP_map_grant_ref, map_ops + i,
|
||||||
&map_ops[i].status, __func__);
|
&map_ops[i].status, __func__);
|
||||||
|
|
||||||
if (xen_feature(XENFEAT_auto_translated_physmap))
|
/* this is basically a nop on x86 */
|
||||||
|
if (xen_feature(XENFEAT_auto_translated_physmap)) {
|
||||||
|
for (i = 0; i < count; i++) {
|
||||||
|
if (map_ops[i].status)
|
||||||
|
continue;
|
||||||
|
set_phys_to_machine(map_ops[i].host_addr >> PAGE_SHIFT,
|
||||||
|
map_ops[i].dev_bus_addr >> PAGE_SHIFT);
|
||||||
|
}
|
||||||
return ret;
|
return ret;
|
||||||
|
}
|
||||||
|
|
||||||
if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
|
if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
|
||||||
arch_enter_lazy_mmu_mode();
|
arch_enter_lazy_mmu_mode();
|
||||||
|
@ -942,8 +951,14 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
|
||||||
if (ret)
|
if (ret)
|
||||||
return ret;
|
return ret;
|
||||||
|
|
||||||
if (xen_feature(XENFEAT_auto_translated_physmap))
|
/* this is basically a nop on x86 */
|
||||||
|
if (xen_feature(XENFEAT_auto_translated_physmap)) {
|
||||||
|
for (i = 0; i < count; i++) {
|
||||||
|
set_phys_to_machine(unmap_ops[i].host_addr >> PAGE_SHIFT,
|
||||||
|
INVALID_P2M_ENTRY);
|
||||||
|
}
|
||||||
return ret;
|
return ret;
|
||||||
|
}
|
||||||
|
|
||||||
if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
|
if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
|
||||||
arch_enter_lazy_mmu_mode();
|
arch_enter_lazy_mmu_mode();
|
||||||
|
|
|
@ -26,6 +26,7 @@
|
||||||
#include <asm/xen/hypervisor.h>
|
#include <asm/xen/hypervisor.h>
|
||||||
#include <asm/xen/hypercall.h>
|
#include <asm/xen/hypercall.h>
|
||||||
#include "../pci/pci.h"
|
#include "../pci/pci.h"
|
||||||
|
#include <asm/pci_x86.h>
|
||||||
|
|
||||||
static bool __read_mostly pci_seg_supported = true;
|
static bool __read_mostly pci_seg_supported = true;
|
||||||
|
|
||||||
|
@ -192,3 +193,49 @@ static int __init register_xen_pci_notifier(void)
|
||||||
}
|
}
|
||||||
|
|
||||||
arch_initcall(register_xen_pci_notifier);
|
arch_initcall(register_xen_pci_notifier);
|
||||||
|
|
||||||
|
#ifdef CONFIG_PCI_MMCONFIG
|
||||||
|
static int __init xen_mcfg_late(void)
|
||||||
|
{
|
||||||
|
struct pci_mmcfg_region *cfg;
|
||||||
|
int rc;
|
||||||
|
|
||||||
|
if (!xen_initial_domain())
|
||||||
|
return 0;
|
||||||
|
|
||||||
|
if ((pci_probe & PCI_PROBE_MMCONF) == 0)
|
||||||
|
return 0;
|
||||||
|
|
||||||
|
if (list_empty(&pci_mmcfg_list))
|
||||||
|
return 0;
|
||||||
|
|
||||||
|
/* Check whether they are in the right area. */
|
||||||
|
list_for_each_entry(cfg, &pci_mmcfg_list, list) {
|
||||||
|
struct physdev_pci_mmcfg_reserved r;
|
||||||
|
|
||||||
|
r.address = cfg->address;
|
||||||
|
r.segment = cfg->segment;
|
||||||
|
r.start_bus = cfg->start_bus;
|
||||||
|
r.end_bus = cfg->end_bus;
|
||||||
|
r.flags = XEN_PCI_MMCFG_RESERVED;
|
||||||
|
|
||||||
|
rc = HYPERVISOR_physdev_op(PHYSDEVOP_pci_mmcfg_reserved, &r);
|
||||||
|
switch (rc) {
|
||||||
|
case 0:
|
||||||
|
case -ENOSYS:
|
||||||
|
continue;
|
||||||
|
|
||||||
|
default:
|
||||||
|
pr_warn("Failed to report MMCONFIG reservation"
|
||||||
|
" state for %s to hypervisor"
|
||||||
|
" (%d)\n",
|
||||||
|
cfg->name, rc);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return 0;
|
||||||
|
}
|
||||||
|
/*
|
||||||
|
* Needs to be done after acpi_init which are subsys_initcall.
|
||||||
|
*/
|
||||||
|
subsys_initcall_sync(xen_mcfg_late);
|
||||||
|
#endif
|
||||||
|
|
|
@ -84,7 +84,7 @@ static irqreturn_t do_hvm_evtchn_intr(int irq, void *dev_id)
|
||||||
static int xen_allocate_irq(struct pci_dev *pdev)
|
static int xen_allocate_irq(struct pci_dev *pdev)
|
||||||
{
|
{
|
||||||
return request_irq(pdev->irq, do_hvm_evtchn_intr,
|
return request_irq(pdev->irq, do_hvm_evtchn_intr,
|
||||||
IRQF_DISABLED | IRQF_NOBALANCING | IRQF_TRIGGER_RISING,
|
IRQF_NOBALANCING | IRQF_TRIGGER_RISING,
|
||||||
"xen-platform-pci", pdev);
|
"xen-platform-pci", pdev);
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
|
@ -42,12 +42,31 @@
|
||||||
#include <xen/page.h>
|
#include <xen/page.h>
|
||||||
#include <xen/xen-ops.h>
|
#include <xen/xen-ops.h>
|
||||||
#include <xen/hvc-console.h>
|
#include <xen/hvc-console.h>
|
||||||
|
|
||||||
|
#include <asm/dma-mapping.h>
|
||||||
|
#include <asm/xen/page-coherent.h>
|
||||||
|
|
||||||
|
#include <trace/events/swiotlb.h>
|
||||||
/*
|
/*
|
||||||
* Used to do a quick range check in swiotlb_tbl_unmap_single and
|
* Used to do a quick range check in swiotlb_tbl_unmap_single and
|
||||||
* swiotlb_tbl_sync_single_*, to see if the memory was in fact allocated by this
|
* swiotlb_tbl_sync_single_*, to see if the memory was in fact allocated by this
|
||||||
* API.
|
* API.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
|
#ifndef CONFIG_X86
|
||||||
|
static unsigned long dma_alloc_coherent_mask(struct device *dev,
|
||||||
|
gfp_t gfp)
|
||||||
|
{
|
||||||
|
unsigned long dma_mask = 0;
|
||||||
|
|
||||||
|
dma_mask = dev->coherent_dma_mask;
|
||||||
|
if (!dma_mask)
|
||||||
|
dma_mask = (gfp & GFP_DMA) ? DMA_BIT_MASK(24) : DMA_BIT_MASK(32);
|
||||||
|
|
||||||
|
return dma_mask;
|
||||||
|
}
|
||||||
|
#endif
|
||||||
|
|
||||||
static char *xen_io_tlb_start, *xen_io_tlb_end;
|
static char *xen_io_tlb_start, *xen_io_tlb_end;
|
||||||
static unsigned long xen_io_tlb_nslabs;
|
static unsigned long xen_io_tlb_nslabs;
|
||||||
/*
|
/*
|
||||||
|
@ -56,17 +75,17 @@ static unsigned long xen_io_tlb_nslabs;
|
||||||
|
|
||||||
static u64 start_dma_addr;
|
static u64 start_dma_addr;
|
||||||
|
|
||||||
static dma_addr_t xen_phys_to_bus(phys_addr_t paddr)
|
static inline dma_addr_t xen_phys_to_bus(phys_addr_t paddr)
|
||||||
{
|
{
|
||||||
return phys_to_machine(XPADDR(paddr)).maddr;
|
return phys_to_machine(XPADDR(paddr)).maddr;
|
||||||
}
|
}
|
||||||
|
|
||||||
static phys_addr_t xen_bus_to_phys(dma_addr_t baddr)
|
static inline phys_addr_t xen_bus_to_phys(dma_addr_t baddr)
|
||||||
{
|
{
|
||||||
return machine_to_phys(XMADDR(baddr)).paddr;
|
return machine_to_phys(XMADDR(baddr)).paddr;
|
||||||
}
|
}
|
||||||
|
|
||||||
static dma_addr_t xen_virt_to_bus(void *address)
|
static inline dma_addr_t xen_virt_to_bus(void *address)
|
||||||
{
|
{
|
||||||
return xen_phys_to_bus(virt_to_phys(address));
|
return xen_phys_to_bus(virt_to_phys(address));
|
||||||
}
|
}
|
||||||
|
@ -89,7 +108,7 @@ static int check_pages_physically_contiguous(unsigned long pfn,
|
||||||
return 1;
|
return 1;
|
||||||
}
|
}
|
||||||
|
|
||||||
static int range_straddles_page_boundary(phys_addr_t p, size_t size)
|
static inline int range_straddles_page_boundary(phys_addr_t p, size_t size)
|
||||||
{
|
{
|
||||||
unsigned long pfn = PFN_DOWN(p);
|
unsigned long pfn = PFN_DOWN(p);
|
||||||
unsigned int offset = p & ~PAGE_MASK;
|
unsigned int offset = p & ~PAGE_MASK;
|
||||||
|
@ -126,6 +145,8 @@ xen_swiotlb_fixup(void *buf, size_t size, unsigned long nslabs)
|
||||||
{
|
{
|
||||||
int i, rc;
|
int i, rc;
|
||||||
int dma_bits;
|
int dma_bits;
|
||||||
|
dma_addr_t dma_handle;
|
||||||
|
phys_addr_t p = virt_to_phys(buf);
|
||||||
|
|
||||||
dma_bits = get_order(IO_TLB_SEGSIZE << IO_TLB_SHIFT) + PAGE_SHIFT;
|
dma_bits = get_order(IO_TLB_SEGSIZE << IO_TLB_SHIFT) + PAGE_SHIFT;
|
||||||
|
|
||||||
|
@ -135,9 +156,9 @@ xen_swiotlb_fixup(void *buf, size_t size, unsigned long nslabs)
|
||||||
|
|
||||||
do {
|
do {
|
||||||
rc = xen_create_contiguous_region(
|
rc = xen_create_contiguous_region(
|
||||||
(unsigned long)buf + (i << IO_TLB_SHIFT),
|
p + (i << IO_TLB_SHIFT),
|
||||||
get_order(slabs << IO_TLB_SHIFT),
|
get_order(slabs << IO_TLB_SHIFT),
|
||||||
dma_bits);
|
dma_bits, &dma_handle);
|
||||||
} while (rc && dma_bits++ < max_dma_bits);
|
} while (rc && dma_bits++ < max_dma_bits);
|
||||||
if (rc)
|
if (rc)
|
||||||
return rc;
|
return rc;
|
||||||
|
@ -263,7 +284,6 @@ xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
|
||||||
void *ret;
|
void *ret;
|
||||||
int order = get_order(size);
|
int order = get_order(size);
|
||||||
u64 dma_mask = DMA_BIT_MASK(32);
|
u64 dma_mask = DMA_BIT_MASK(32);
|
||||||
unsigned long vstart;
|
|
||||||
phys_addr_t phys;
|
phys_addr_t phys;
|
||||||
dma_addr_t dev_addr;
|
dma_addr_t dev_addr;
|
||||||
|
|
||||||
|
@ -278,8 +298,12 @@ xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
|
||||||
if (dma_alloc_from_coherent(hwdev, size, dma_handle, &ret))
|
if (dma_alloc_from_coherent(hwdev, size, dma_handle, &ret))
|
||||||
return ret;
|
return ret;
|
||||||
|
|
||||||
vstart = __get_free_pages(flags, order);
|
/* On ARM this function returns an ioremap'ped virtual address for
|
||||||
ret = (void *)vstart;
|
* which virt_to_phys doesn't return the corresponding physical
|
||||||
|
* address. In fact on ARM virt_to_phys only works for kernel direct
|
||||||
|
* mapped RAM memory. Also see comment below.
|
||||||
|
*/
|
||||||
|
ret = xen_alloc_coherent_pages(hwdev, size, dma_handle, flags, attrs);
|
||||||
|
|
||||||
if (!ret)
|
if (!ret)
|
||||||
return ret;
|
return ret;
|
||||||
|
@@ -287,18 +311,21 @@ xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
 	if (hwdev && hwdev->coherent_dma_mask)
 		dma_mask = dma_alloc_coherent_mask(hwdev, flags);
 
-	phys = virt_to_phys(ret);
+	/* At this point dma_handle is the physical address, next we are
+	 * going to set it to the machine address.
+	 * Do not use virt_to_phys(ret) because on ARM it doesn't correspond
+	 * to *dma_handle. */
+	phys = *dma_handle;
 	dev_addr = xen_phys_to_bus(phys);
 	if (((dev_addr + size - 1 <= dma_mask)) &&
 	    !range_straddles_page_boundary(phys, size))
 		*dma_handle = dev_addr;
 	else {
-		if (xen_create_contiguous_region(vstart, order,
-						 fls64(dma_mask)) != 0) {
-			free_pages(vstart, order);
+		if (xen_create_contiguous_region(phys, order,
+						 fls64(dma_mask), dma_handle) != 0) {
+			xen_free_coherent_pages(hwdev, size, ret, (dma_addr_t)phys, attrs);
 			return NULL;
 		}
-		*dma_handle = virt_to_machine(ret).maddr;
 	}
 	memset(ret, 0, size);
 	return ret;
@@ -319,13 +346,15 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
 	if (hwdev && hwdev->coherent_dma_mask)
 		dma_mask = hwdev->coherent_dma_mask;
 
-	phys = virt_to_phys(vaddr);
+	/* do not use virt_to_phys because on ARM it doesn't return you the
+	 * physical address */
+	phys = xen_bus_to_phys(dev_addr);
 
 	if (((dev_addr + size - 1 > dma_mask)) ||
 	    range_straddles_page_boundary(phys, size))
-		xen_destroy_contiguous_region((unsigned long)vaddr, order);
+		xen_destroy_contiguous_region(phys, order);
 
-	free_pages((unsigned long)vaddr, order);
+	xen_free_coherent_pages(hwdev, size, vaddr, (dma_addr_t)phys, attrs);
 }
 EXPORT_SYMBOL_GPL(xen_swiotlb_free_coherent);
 
@@ -352,16 +381,25 @@ dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
 	 * buffering it.
 	 */
 	if (dma_capable(dev, dev_addr, size) &&
-	    !range_straddles_page_boundary(phys, size) && !swiotlb_force)
+	    !range_straddles_page_boundary(phys, size) && !swiotlb_force) {
+		/* we are not interested in the dma_addr returned by
+		 * xen_dma_map_page, only in the potential cache flushes executed
+		 * by the function. */
+		xen_dma_map_page(dev, page, offset, size, dir, attrs);
 		return dev_addr;
+	}
 
 	/*
 	 * Oh well, have to allocate and map a bounce buffer.
 	 */
+	trace_swiotlb_bounced(dev, dev_addr, size, swiotlb_force);
+
 	map = swiotlb_tbl_map_single(dev, start_dma_addr, phys, size, dir);
 	if (map == SWIOTLB_MAP_ERROR)
 		return DMA_ERROR_CODE;
 
+	xen_dma_map_page(dev, pfn_to_page(map >> PAGE_SHIFT),
+					map & ~PAGE_MASK, size, dir, attrs);
 	dev_addr = xen_phys_to_bus(map);
 
 	/*
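On the fast path above the buffer is already reachable by the device, but the new xen_dma_map_page() call is still made so that any architecture-specific cache maintenance happens; on the bounce path the same helper is invoked on the bounce page. On a cache-coherent architecture these helpers can collapse to no-ops. Below is a hedged sketch of such stubs, with signatures inferred only from the call sites in this hunk; the actual headers introduced by this series may differ.

static inline void xen_dma_map_page(struct device *hwdev, struct page *page,
				    unsigned long offset, size_t size,
				    enum dma_data_direction dir,
				    struct dma_attrs *attrs) { }

static inline void xen_dma_unmap_page(struct device *hwdev, dma_addr_t handle,
				      size_t size, enum dma_data_direction dir,
				      struct dma_attrs *attrs) { }

static inline void xen_dma_sync_single_for_cpu(struct device *hwdev,
						dma_addr_t handle, size_t size,
						enum dma_data_direction dir) { }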
@@ -384,12 +422,15 @@ EXPORT_SYMBOL_GPL(xen_swiotlb_map_page);
  * whatever the device wrote there.
  */
 static void xen_unmap_single(struct device *hwdev, dma_addr_t dev_addr,
-			     size_t size, enum dma_data_direction dir)
+			     size_t size, enum dma_data_direction dir,
+			     struct dma_attrs *attrs)
 {
 	phys_addr_t paddr = xen_bus_to_phys(dev_addr);
 
 	BUG_ON(dir == DMA_NONE);
 
+	xen_dma_unmap_page(hwdev, paddr, size, dir, attrs);
+
 	/* NOTE: We use dev_addr here, not paddr! */
 	if (is_xen_swiotlb_buffer(dev_addr)) {
 		swiotlb_tbl_unmap_single(hwdev, paddr, size, dir);
@@ -412,7 +453,7 @@ void xen_swiotlb_unmap_page(struct device *hwdev, dma_addr_t dev_addr,
 			    size_t size, enum dma_data_direction dir,
 			    struct dma_attrs *attrs)
 {
-	xen_unmap_single(hwdev, dev_addr, size, dir);
+	xen_unmap_single(hwdev, dev_addr, size, dir, attrs);
 }
 EXPORT_SYMBOL_GPL(xen_swiotlb_unmap_page);
 
@@ -435,11 +476,15 @@ xen_swiotlb_sync_single(struct device *hwdev, dma_addr_t dev_addr,
 
 	BUG_ON(dir == DMA_NONE);
 
+	if (target == SYNC_FOR_CPU)
+		xen_dma_sync_single_for_cpu(hwdev, paddr, size, dir);
+
 	/* NOTE: We use dev_addr here, not paddr! */
-	if (is_xen_swiotlb_buffer(dev_addr)) {
+	if (is_xen_swiotlb_buffer(dev_addr))
 		swiotlb_tbl_sync_single(hwdev, paddr, size, dir, target);
-		return;
-	}
+
+	if (target == SYNC_FOR_DEVICE)
+		xen_dma_sync_single_for_cpu(hwdev, paddr, size, dir);
 
 	if (dir != DMA_FROM_DEVICE)
 		return;
@@ -502,16 +547,26 @@ xen_swiotlb_map_sg_attrs(struct device *hwdev, struct scatterlist *sgl,
 								 sg->length,
 								 dir);
 			if (map == SWIOTLB_MAP_ERROR) {
+				dev_warn(hwdev, "swiotlb buffer is full\n");
 				/* Don't panic here, we expect map_sg users
 				   to do proper error handling. */
 				xen_swiotlb_unmap_sg_attrs(hwdev, sgl, i, dir,
 							   attrs);
 				sg_dma_len(sgl) = 0;
-				return DMA_ERROR_CODE;
+				return 0;
 			}
 			sg->dma_address = xen_phys_to_bus(map);
-		} else
+		} else {
+			/* we are not interested in the dma_addr returned by
+			 * xen_dma_map_page, only in the potential cache flushes executed
+			 * by the function. */
+			xen_dma_map_page(hwdev, pfn_to_page(paddr >> PAGE_SHIFT),
+						paddr & ~PAGE_MASK,
+						sg->length,
+						dir,
+						attrs);
 			sg->dma_address = dev_addr;
+		}
 		sg_dma_len(sg) = sg->length;
 	}
 	return nelems;
@@ -533,7 +588,7 @@ xen_swiotlb_unmap_sg_attrs(struct device *hwdev, struct scatterlist *sgl,
 	BUG_ON(dir == DMA_NONE);
 
 	for_each_sg(sgl, sg, nelems, i)
-		xen_unmap_single(hwdev, sg->dma_address, sg_dma_len(sg), dir);
+		xen_unmap_single(hwdev, sg->dma_address, sg_dma_len(sg), dir, attrs);
 
 }
 EXPORT_SYMBOL_GPL(xen_swiotlb_unmap_sg_attrs);
@@ -593,3 +648,15 @@ xen_swiotlb_dma_supported(struct device *hwdev, u64 mask)
 	return xen_virt_to_bus(xen_io_tlb_end - 1) <= mask;
 }
 EXPORT_SYMBOL_GPL(xen_swiotlb_dma_supported);
+
+int
+xen_swiotlb_set_dma_mask(struct device *dev, u64 dma_mask)
+{
+	if (!dev->dma_mask || !xen_swiotlb_dma_supported(dev, dma_mask))
+		return -EIO;
+
+	*dev->dma_mask = dma_mask;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(xen_swiotlb_set_dma_mask);
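xen_swiotlb_set_dma_mask() validates the requested mask against the bounce-buffer window via xen_swiotlb_dma_supported() before storing it, so a device cannot be handed a mask the SWIOTLB cannot satisfy. A hypothetical caller, shown only to illustrate the return-value contract; the surrounding driver context is an assumption, not part of this series:

static int example_setup_dma(struct device *dev)
{
	/* Hypothetical call site: request a 32-bit mask through xen-swiotlb. */
	int rc = xen_swiotlb_set_dma_mask(dev, DMA_BIT_MASK(32));

	if (rc)
		dev_warn(dev, "32-bit DMA mask rejected by xen-swiotlb (%d)\n", rc);
	return rc;
}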
@@ -0,0 +1,46 @@
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM swiotlb
+
+#if !defined(_TRACE_SWIOTLB_H) || defined(TRACE_HEADER_MULTI_READ)
+#define _TRACE_SWIOTLB_H
+
+#include <linux/tracepoint.h>
+
+TRACE_EVENT(swiotlb_bounced,
+
+	TP_PROTO(struct device *dev,
+		 dma_addr_t dev_addr,
+		 size_t size,
+		 int swiotlb_force),
+
+	TP_ARGS(dev, dev_addr, size, swiotlb_force),
+
+	TP_STRUCT__entry(
+		__string(	dev_name,	dev_name(dev)	)
+		__field(	u64,	dma_mask	)
+		__field(	dma_addr_t,	dev_addr	)
+		__field(	size_t,	size	)
+		__field(	int,	swiotlb_force	)
+	),
+
+	TP_fast_assign(
+		__assign_str(dev_name, dev_name(dev));
+		__entry->dma_mask = (dev->dma_mask ? *dev->dma_mask : 0);
+		__entry->dev_addr = dev_addr;
+		__entry->size = size;
+		__entry->swiotlb_force = swiotlb_force;
+	),
+
+	TP_printk("dev_name: %s dma_mask=%llx dev_addr=%llx "
+		"size=%zu %s",
+		__get_str(dev_name),
+		__entry->dma_mask,
+		(unsigned long long)__entry->dev_addr,
+		__entry->size,
+		__entry->swiotlb_force ? "swiotlb_force" : "" )
+);
+
+#endif /* _TRACE_SWIOTLB_H */
+
+/* This part must be outside protection */
+#include <trace/define_trace.h>
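The new header above only declares the swiotlb_bounced event; it is instantiated where CREATE_TRACE_POINTS is defined before including it (the lib/swiotlb.c hunks further down do exactly that) and fired with trace_swiotlb_bounced(). Once built in, the event can be enabled at runtime through the usual tracing interface (events/swiotlb/swiotlb_bounced). As a sketch of the consumer side, a hypothetical built-in probe could also attach to it directly; everything below except the event name and its TP_PROTO arguments is an assumption:

#include <linux/init.h>
#include <linux/device.h>
#include <trace/events/swiotlb.h>	/* no CREATE_TRACE_POINTS: consumer only */

/* Probe signature: a void *data pointer first, then the TP_PROTO arguments. */
static void example_swiotlb_bounced_probe(void *ignore, struct device *dev,
					  dma_addr_t dev_addr, size_t size,
					  int swiotlb_force)
{
	dev_info(dev, "swiotlb bounce: %zu bytes, dev_addr=%llx%s\n",
		 size, (unsigned long long)dev_addr,
		 swiotlb_force ? " (swiotlb_force)" : "");
}

static int __init example_swiotlb_probe_init(void)
{
	/* register_trace_<name>() is generated by the TRACE_EVENT() above. */
	return register_trace_swiotlb_bounced(example_swiotlb_bounced_probe, NULL);
}
device_initcall(example_swiotlb_probe_init);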
@@ -231,6 +231,17 @@ struct physdev_get_free_pirq {
 #define XEN_PCI_DEV_VIRTFN 0x2
 #define XEN_PCI_DEV_PXM 0x4
 
+#define XEN_PCI_MMCFG_RESERVED 0x1
+
+#define PHYSDEVOP_pci_mmcfg_reserved 24
+struct physdev_pci_mmcfg_reserved {
+	uint64_t address;
+	uint16_t segment;
+	uint8_t start_bus;
+	uint8_t end_bus;
+	uint32_t flags;
+};
+
 #define PHYSDEVOP_pci_device_add 25
 struct physdev_pci_device_add {
 	/* IN */
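These additions give the kernel a way to tell the hypervisor about the PCI MMCONFIG (MCFG) areas it finds in ACPI, matching the "Inform hypervisor of MCFG regions found in ACPI DSDT" item in this pull. A hedged sketch of how such a report might be issued through the standard physdev hypercall follows; the wrapper function and the origin of its parameters are assumptions, only the op number, struct layout and flag come from the hunk above:

#include <xen/interface/physdev.h>
#include <asm/xen/hypercall.h>

static int example_report_mcfg(u64 address, u16 segment,
			       u8 start_bus, u8 end_bus)
{
	struct physdev_pci_mmcfg_reserved r = {
		.address	= address,
		.segment	= segment,
		.start_bus	= start_bus,
		.end_bus	= end_bus,
		.flags		= XEN_PCI_MMCFG_RESERVED,
	};

	/* Tell Xen this MCFG area is reserved (e.g. in ACPI/E820) and usable. */
	return HYPERVISOR_physdev_op(PHYSDEVOP_pci_mmcfg_reserved, &r);
}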
@@ -1,6 +1,7 @@
 #ifndef __LINUX_SWIOTLB_XEN_H
 #define __LINUX_SWIOTLB_XEN_H
 
+#include <linux/dma-direction.h>
 #include <linux/swiotlb.h>
 
 extern int xen_swiotlb_init(int verbose, bool early);
@@ -55,4 +56,6 @@ xen_swiotlb_dma_mapping_error(struct device *hwdev, dma_addr_t dma_addr);
 extern int
 xen_swiotlb_dma_supported(struct device *hwdev, u64 mask);
 
+extern int
+xen_swiotlb_set_dma_mask(struct device *dev, u64 dma_mask);
 #endif /* __LINUX_SWIOTLB_XEN_H */
@@ -19,10 +19,11 @@ void xen_arch_resume(void);
 int xen_setup_shutdown_event(void);
 
 extern unsigned long *xen_contiguous_bitmap;
-int xen_create_contiguous_region(unsigned long vstart, unsigned int order,
-				unsigned int address_bits);
+int xen_create_contiguous_region(phys_addr_t pstart, unsigned int order,
+				unsigned int address_bits,
+				dma_addr_t *dma_handle);
 
-void xen_destroy_contiguous_region(unsigned long vstart, unsigned int order);
+void xen_destroy_contiguous_region(phys_addr_t pstart, unsigned int order);
 
 struct vm_area_struct;
 int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
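This prototype change is the heart of the ARM enablement: callers now pass a physical address and receive the machine (bus) address back through dma_handle, instead of deriving it from the direct map afterwards with virt_to_machine(). A minimal sketch of the new calling convention, mirroring what the xen_swiotlb_fixup() hunk earlier does; the wrapper function and its error handling are illustrative assumptions, not code from the commit:

static int example_make_dma_contiguous(void *buf, unsigned int order,
					unsigned int address_bits)
{
	dma_addr_t dma_handle;
	phys_addr_t p = virt_to_phys(buf);	/* buf is kernel direct-mapped RAM */
	int rc;

	rc = xen_create_contiguous_region(p, order, address_bits, &dma_handle);
	if (rc)
		return rc;

	/* dma_handle now holds the machine address to program into the device;
	 * the region is later torn down with the same physical address:
	 * xen_destroy_contiguous_region(p, order); */
	return 0;
}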
@@ -38,6 +38,9 @@
 #include <linux/bootmem.h>
 #include <linux/iommu-helper.h>
 
+#define CREATE_TRACE_POINTS
+#include <trace/events/swiotlb.h>
+
 #define OFFSET(val,align) ((unsigned long)	\
 	( (val) & ( (align) - 1)))
 
@@ -502,6 +505,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev,
 
 not_found:
 	spin_unlock_irqrestore(&io_tlb_lock, flags);
+	dev_warn(hwdev, "swiotlb buffer is full\n");
 	return SWIOTLB_MAP_ERROR;
 found:
 	spin_unlock_irqrestore(&io_tlb_lock, flags);
@@ -726,6 +730,8 @@ dma_addr_t swiotlb_map_page(struct device *dev, struct page *page,
 	if (dma_capable(dev, dev_addr, size) && !swiotlb_force)
 		return dev_addr;
 
+	trace_swiotlb_bounced(dev, dev_addr, size, swiotlb_force);
+
 	/* Oh well, have to allocate and map a bounce buffer. */
 	map = map_single(dev, phys, size, dir);
 	if (map == SWIOTLB_MAP_ERROR) {