commit 425553209b
Merge tag 'pci-v3.16-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci into next

Pull PCI changes from Bjorn Helgaas:

 "Enumeration
    - Notify driver before and after device reset (Keith Busch)
    - Use reset notification in NVMe (Keith Busch)

  NUMA
    - Warn if we have to guess host bridge node information (Myron Stowe)
    - Work around AMD Fam15h BIOSes that fail to provide _PXM (Suravee Suthikulpanit)
    - Clean up and mark early_root_info_init() as deprecated (Suravee Suthikulpanit)

  Driver binding
    - Add "driver_override" to force specific binding (Alex Williamson)
    - Fail "new_id" addition for devices we already know about (Bandan Das)

  Resource management
    - Support BAR sizes up to 8GB (Nikhil Rao, Alan Cox)
    - Don't move IORESOURCE_PCI_FIXED resources (Bjorn Helgaas)
    - Mark SBx00 HPET BAR as IORESOURCE_PCI_FIXED (Bjorn Helgaas)
    - Fail safely if we can't handle BARs larger than 4GB (Bjorn Helgaas)
    - Reject BAR above 4GB if dma_addr_t is too small (Bjorn Helgaas)
    - Don't convert BAR address to resource if dma_addr_t is too small (Bjorn Helgaas)
    - Don't set BAR to zero if dma_addr_t is too small (Bjorn Helgaas)
    - Don't print anything while decoding is disabled (Bjorn Helgaas)
    - Don't add disabled subtractive decode bus resources (Bjorn Helgaas)
    - Add resource allocation comments (Bjorn Helgaas)
    - Restrict 64-bit prefetchable bridge windows to 64-bit resources (Yinghai Lu)
    - Assign i82875p_edac PCI resources before adding device (Yinghai Lu)

  PCI device hotplug
    - Remove unnecessary "dev->bus" test (Bjorn Helgaas)
    - Use PCI_EXP_SLTCAP_PSN define (Bjorn Helgaas)
    - Fix rphahp endianness issues (Laurent Dufour)
    - Acknowledge spurious "cmd completed" event (Rajat Jain)
    - Allow hotplug service drivers to operate in polling mode (Rajat Jain)
    - Fix cpqphp possible NULL dereference (Rickard Strandqvist)

  MSI
    - Replace pci_enable_msi_block() by pci_enable_msi_exact() (Alexander Gordeev)
    - Replace pci_enable_msix() by pci_enable_msix_exact() (Alexander Gordeev)
    - Simplify populate_msi_sysfs() (Jan Beulich)

  Virtualization
    - Add Intel Patsburg (X79) root port ACS quirk (Alex Williamson)
    - Mark RTL8110SC INTx masking as broken (Alex Williamson)

  Generic host bridge driver
    - Add generic PCI host controller driver (Will Deacon)

  Freescale i.MX6
    - Use new clock names (Lucas Stach)
    - Drop old IRQ mapping (Lucas Stach)
    - Remove optional (and unused) IRQs (Lucas Stach)
    - Add support for MSI (Lucas Stach)
    - Fix imx6_add_pcie_port() section mismatch warning (Sachin Kamat)

  Renesas R-Car
    - Add gen2 device tree support (Ben Dooks)
    - Use new OF interrupt mapping when possible (Lucas Stach)
    - Add PCIe driver (Phil Edworthy)
    - Add PCIe MSI support (Phil Edworthy)
    - Add PCIe device tree bindings (Phil Edworthy)

  Samsung Exynos
    - Remove unnecessary OOM messages (Jingoo Han)
    - Fix add_pcie_port() section mismatch warning (Sachin Kamat)

  Synopsys DesignWare
    - Make MSI ISR shared IRQ aware (Lucas Stach)

  Miscellaneous
    - Check for broken config space aliasing (Alex Williamson)
    - Update email address (Ben Hutchings)
    - Fix Broadcom CNB20LE unintended sign extension (Bjorn Helgaas)
    - Fix incorrect vgaarb conditional in WARN_ON() (Bjorn Helgaas)
    - Remove unnecessary __ref annotations (Bjorn Helgaas)
    - Add arch/x86/kernel/quirks.c to MAINTAINERS PCI file patterns (Bjorn Helgaas)
    - Fix use of uninitialized MPS value (Bjorn Helgaas)
    - Tidy x86/gart messages (Bjorn Helgaas)
    - Fix return value from pci_user_{read,write}_config_*() (Gavin Shan)
    - Turn pcibios_penalize_isa_irq() into a weak function (Hanjun Guo)
    - Remove unused serial device IDs (Jean Delvare)
    - Use designated initialization in PCI_VDEVICE (Mark Rustad)
    - Fix powerpc NULL dereference in pci_root_buses traversal (Mike Qiu)
    - Configure MPS on ARM (Murali Karicheri)
    - Remove unnecessary includes of <linux/init.h> (Paul Gortmaker)
    - Move Open Firmware devspec attribute to PCI common code (Sebastian Ott)
    - Use pdev->dev.groups for attribute creation on s390 (Sebastian Ott)
    - Remove pcibios_add_platform_entries() (Sebastian Ott)
    - Add new ID for Intel GPU "spurious interrupt" quirk (Thomas Jarosch)
    - Rename pci_is_bridge() to pci_has_subordinate() (Yijing Wang)
    - Add and use new pci_is_bridge() interface (Yijing Wang)
    - Make pci_bus_add_device() void (Yijing Wang)

  DMA API
    - Clarify physical/bus address distinction in docs (Bjorn Helgaas)
    - Fix typos in docs (Emilio López)
    - Update dma_pool_create() and dma_pool_alloc() descriptions (Gioh Kim)
    - Change dma_declare_coherent_memory() CPU address to phys_addr_t (Bjorn Helgaas)
    - Pass GAPSPCI_DMA_BASE CPU & bus address to dma_declare_coherent_memory() (Bjorn Helgaas)"

* tag 'pci-v3.16-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (92 commits)
  MAINTAINERS: Add generic PCI host controller driver
  PCI: generic: Add generic PCI host controller driver
  PCI: imx6: Add support for MSI
  PCI: designware: Make MSI ISR shared IRQ aware
  PCI: imx6: Remove optional (and unused) IRQs
  PCI: imx6: Drop old IRQ mapping
  PCI: imx6: Use new clock names
  i82875p_edac: Assign PCI resources before adding device
  ARM/PCI: Call pcie_bus_configure_settings() to set MPS
  PCI: imx6: Fix imx6_add_pcie_port() section mismatch warning
  PCI: Make pci_bus_add_device() void
  PCI: exynos: Fix add_pcie_port() section mismatch warning
  PCI: Introduce new device binding path using pci_dev.driver_override
  PCI: rcar: Add gen2 device tree support
  PCI: cpqphp: Fix possible null pointer dereference
  PCI: rcar: Add R-Car PCIe device tree bindings
  PCI: rcar: Add MSI support for PCIe
  PCI: rcar: Add Renesas R-Car PCIe driver
  PCI: Fix return value from pci_user_{read,write}_config_*()
  PCI: exynos: Remove unnecessary OOM messages
  ...
---
@@ -250,3 +250,24 @@ Description:
 		valid. For example, writing a 2 to this file when sriov_numvfs
 		is not 0 and not 2 already will return an error. Writing a 10
 		when the value of sriov_totalvfs is 8 will return an error.
+
+What:		/sys/bus/pci/devices/.../driver_override
+Date:		April 2014
+Contact:	Alex Williamson <alex.williamson@redhat.com>
+Description:
+		This file allows the driver for a device to be specified which
+		will override standard static and dynamic ID matching.  When
+		specified, only a driver with a name matching the value written
+		to driver_override will have an opportunity to bind to the
+		device.  The override is specified by writing a string to the
+		driver_override file (echo pci-stub > driver_override) and
+		may be cleared with an empty string (echo > driver_override).
+		This returns the device to standard matching rules binding.
+		Writing to driver_override does not automatically unbind the
+		device from its current driver or make any attempt to
+		automatically load the specified driver.  If no driver with a
+		matching name is currently loaded in the kernel, the device
+		will not bind to any driver.  This also allows devices to
+		opt-out of driver binding using a driver_override name such as
+		"none".  Only a single driver may be specified in the override,
+		there is no support for parsing delimiters.
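For illustration, the same override can be driven from a program rather than a shell.  Below is a minimal user-space sketch of the flow the ABI text above describes; the device address 0000:00:1f.0 and the pci-stub target are assumptions for the example, and error handling is abbreviated:

	#include <stdio.h>

	int main(void)
	{
		/* Hypothetical device; adjust the path for your system. */
		const char *path =
			"/sys/bus/pci/devices/0000:00:1f.0/driver_override";
		FILE *f = fopen(path, "w");

		if (!f) {
			perror("fopen");
			return 1;
		}
		/* From now on only a driver named "pci-stub" may bind. */
		fputs("pci-stub", f);
		fclose(f);

		/* Writing an empty string (echo > driver_override) would
		 * restore standard static/dynamic ID matching. */
		return 0;
	}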
@@ -9,16 +9,76 @@ This is a guide to device driver writers on how to use the DMA API
 with example pseudo-code.  For a concise description of the API, see
 DMA-API.txt.
 
-Most of the 64bit platforms have special hardware that translates bus
-addresses (DMA addresses) into physical addresses.  This is similar to
-how page tables and/or a TLB translates virtual addresses to physical
-addresses on a CPU.  This is needed so that e.g. PCI devices can
-access with a Single Address Cycle (32bit DMA address) any page in the
-64bit physical address space.  Previously in Linux those 64bit
-platforms had to set artificial limits on the maximum RAM size in the
-system, so that the virt_to_bus() static scheme works (the DMA address
-translation tables were simply filled on bootup to map each bus
-address to the physical page __pa(bus_to_virt())).
+CPU and DMA addresses
+
+There are several kinds of addresses involved in the DMA API, and it's
+important to understand the differences.
+
+The kernel normally uses virtual addresses.  Any address returned by
+kmalloc(), vmalloc(), and similar interfaces is a virtual address and can
+be stored in a "void *".
+
+The virtual memory system (TLB, page tables, etc.) translates virtual
+addresses to CPU physical addresses, which are stored as "phys_addr_t" or
+"resource_size_t".  The kernel manages device resources like registers as
+physical addresses.  These are the addresses in /proc/iomem.  The physical
+address is not directly useful to a driver; it must use ioremap() to map
+the space and produce a virtual address.
+
+I/O devices use a third kind of address: a "bus address" or "DMA address".
+If a device has registers at an MMIO address, or if it performs DMA to read
+or write system memory, the addresses used by the device are bus addresses.
+In some systems, bus addresses are identical to CPU physical addresses, but
+in general they are not.  IOMMUs and host bridges can produce arbitrary
+mappings between physical and bus addresses.
+
+Here's a picture and some examples:
+
+               CPU                  CPU                  Bus
+             Virtual              Physical             Address
+             Address              Address               Space
+              Space                Space
+
+            +-------+             +------+             +------+
+            |       |             |MMIO  |   Offset    |      |
+            |       |  Virtual    |Space |   applied   |      |
+          C +-------+ --------> B +------+ ----------> +------+ A
+            |       |  mapping    |      |   by host   |      |
+  +-----+   |       |             |      |   bridge    |      |   +--------+
+  |     |   |       |             +------+             |      |   |        |
+  | CPU |   |       |             | RAM  |             |      |   | Device |
+  |     |   |       |             |      |             |      |   |        |
+  +-----+   +-------+             +------+             +------+   +--------+
+            |       |  Virtual    |Buffer|   Mapping   |      |
+          X +-------+ --------> Y +------+ <---------- +------+ Z
+            |       |  mapping    | RAM  |   by IOMMU
+            |       |             |      |
+            |       |             |      |
+            +-------+             +------+
+
+During the enumeration process, the kernel learns about I/O devices and
+their MMIO space and the host bridges that connect them to the system.  For
+example, if a PCI device has a BAR, the kernel reads the bus address (A)
+from the BAR and converts it to a CPU physical address (B).  The address B
+is stored in a struct resource and usually exposed via /proc/iomem.  When a
+driver claims a device, it typically uses ioremap() to map physical address
+B at a virtual address (C).  It can then use, e.g., ioread32(C), to access
+the device registers at bus address A.
+
+If the device supports DMA, the driver sets up a buffer using kmalloc() or
+a similar interface, which returns a virtual address (X).  The virtual
+memory system maps X to a physical address (Y) in system RAM.  The driver
+can use virtual address X to access the buffer, but the device itself
+cannot because DMA doesn't go through the CPU virtual memory system.
+
+In some simple systems, the device can do DMA directly to physical address
+Y.  But in many others, there is IOMMU hardware that translates bus
+addresses to physical addresses, e.g., it translates Z to Y.  This is part
+of the reason for the DMA API: the driver can give a virtual address X to
+an interface like dma_map_single(), which sets up any required IOMMU
+mapping and returns the bus address Z.  The driver then tells the device to
+do DMA to Z, and the IOMMU maps it to the buffer at address Y in system
+RAM.
 
 So that Linux can use the dynamic DMA mapping, it needs some help from the
 drivers, namely it has to take into account that DMA addresses should be
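As a concrete illustration of the X/Y/Z flow just described — kmalloc() gives the driver virtual address X, dma_map_single() returns bus address Z for the device — here is a minimal sketch.  The mydev_send() name and the DMA_TO_DEVICE direction are assumptions for the example:

	#include <linux/dma-mapping.h>
	#include <linux/slab.h>

	static int mydev_send(struct device *dev, size_t size)
	{
		void *buf = kmalloc(size, GFP_KERNEL); /* virtual address X */
		dma_addr_t handle;                     /* bus address Z */

		if (!buf)
			return -ENOMEM;

		/* Set up any IOMMU mapping needed so the device can reach
		 * the buffer (Y in system RAM) through bus address Z. */
		handle = dma_map_single(dev, buf, size, DMA_TO_DEVICE);
		if (dma_mapping_error(dev, handle)) {
			kfree(buf);
			return -EIO;
		}

		/* ... program the device with "handle", run the DMA ... */

		dma_unmap_single(dev, handle, size, DMA_TO_DEVICE);
		kfree(buf);
		return 0;
	}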
@@ -29,17 +89,17 @@ The following API will work of course even on platforms where no such
 hardware exists.
 
 Note that the DMA API works with any bus independent of the underlying
-microprocessor architecture. You should use the DMA API rather than
-the bus specific DMA API (e.g. pci_dma_*).
+microprocessor architecture.  You should use the DMA API rather than the
+bus-specific DMA API, i.e., use the dma_map_*() interfaces rather than the
+pci_map_*() interfaces.
 
 First of all, you should make sure
 
 #include <linux/dma-mapping.h>
 
-is in your driver. This file will obtain for you the definition of the
-dma_addr_t (which can hold any valid DMA address for the platform)
-type which should be used everywhere you hold a DMA (bus) address
-returned from the DMA mapping functions.
+is in your driver, which provides the definition of dma_addr_t.  This type
+can hold any valid DMA or bus address for the platform and should be used
+everywhere you hold a DMA address returned from the DMA mapping functions.
 
 What memory is DMA'able?
@@ -123,9 +183,9 @@ Here, dev is a pointer to the device struct of your device, and mask
 is a bit mask describing which bits of an address your device
 supports.  It returns zero if your card can perform DMA properly on
 the machine given the address mask you provided.  In general, the
-device struct of your device is embedded in the bus specific device
-struct of your device.  For example, a pointer to the device struct of
-your PCI device is pdev->dev (pdev is a pointer to the PCI device
+device struct of your device is embedded in the bus-specific device
+struct of your device.  For example, &pdev->dev is a pointer to the
+device struct of a PCI device (pdev is a pointer to the PCI device
 struct of your device).
 
 If it returns non-zero, your device cannot perform DMA properly on
@@ -147,8 +207,7 @@ exactly why.
 The standard 32-bit addressing device would do something like this:
 
 	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32))) {
-		printk(KERN_WARNING
-		       "mydev: No suitable DMA available.\n");
+		dev_warn(dev, "mydev: No suitable DMA available\n");
 		goto ignore_this_device;
 	}
@@ -170,8 +229,7 @@ all 64-bits when accessing streaming DMA:
 	} else if (!dma_set_mask(dev, DMA_BIT_MASK(32))) {
 		using_dac = 0;
 	} else {
-		printk(KERN_WARNING
-		       "mydev: No suitable DMA available.\n");
+		dev_warn(dev, "mydev: No suitable DMA available\n");
 		goto ignore_this_device;
 	}
@@ -187,22 +245,20 @@ the case would look like this:
 		using_dac = 0;
 		consistent_using_dac = 0;
 	} else {
-		printk(KERN_WARNING
-		       "mydev: No suitable DMA available.\n");
+		dev_warn(dev, "mydev: No suitable DMA available\n");
 		goto ignore_this_device;
 	}
 
-The coherent coherent mask will always be able to set the same or a
-smaller mask as the streaming mask.  However for the rare case that a
-device driver only uses consistent allocations, one would have to
-check the return value from dma_set_coherent_mask().
+The coherent mask will always be able to set the same or a smaller mask as
+the streaming mask.  However for the rare case that a device driver only
+uses consistent allocations, one would have to check the return value from
+dma_set_coherent_mask().
 
 Finally, if your device can only drive the low 24-bits of
 address you might do something like:
 
 	if (dma_set_mask(dev, DMA_BIT_MASK(24))) {
-		printk(KERN_WARNING
-		       "mydev: 24-bit DMA addressing not available.\n");
+		dev_warn(dev, "mydev: 24-bit DMA addressing not available\n");
 		goto ignore_this_device;
 	}
@@ -232,14 +288,14 @@ Here is pseudo-code showing how this might be done:
 		card->playback_enabled = 1;
 	} else {
 		card->playback_enabled = 0;
-		printk(KERN_WARNING "%s: Playback disabled due to DMA limitations.\n",
+		dev_warn(dev, "%s: Playback disabled due to DMA limitations\n",
 		       card->name);
 	}
 	if (!dma_set_mask(dev, RECORD_ADDRESS_BITS)) {
 		card->record_enabled = 1;
 	} else {
 		card->record_enabled = 0;
-		printk(KERN_WARNING "%s: Record disabled due to DMA limitations.\n",
+		dev_warn(dev, "%s: Record disabled due to DMA limitations\n",
 		       card->name);
 	}
@@ -331,7 +387,7 @@ context with the GFP_ATOMIC flag.
 Size is the length of the region you want to allocate, in bytes.
 
 This routine will allocate RAM for that region, so it acts similarly to
-__get_free_pages (but takes size instead of a page order).  If your
+__get_free_pages() (but takes size instead of a page order).  If your
 driver needs regions sized smaller than a page, you may prefer using
 the dma_pool interface, described below.
@@ -343,11 +399,11 @@ the consistent DMA mask has been explicitly changed via
 dma_set_coherent_mask().  This is true of the dma_pool interface as
 well.
 
-dma_alloc_coherent returns two values: the virtual address which you
+dma_alloc_coherent() returns two values: the virtual address which you
 can use to access it from the CPU and dma_handle which you pass to the
 card.
 
-The cpu return address and the DMA bus master address are both
+The CPU virtual address and the DMA bus address are both
 guaranteed to be aligned to the smallest PAGE_SIZE order which
 is greater than or equal to the requested size.  This invariant
 exists (for example) to guarantee that if you allocate a chunk
@@ -359,13 +415,13 @@ To unmap and free such a DMA region, you call:
 	dma_free_coherent(dev, size, cpu_addr, dma_handle);
 
 where dev, size are the same as in the above call and cpu_addr and
-dma_handle are the values dma_alloc_coherent returned to you.
+dma_handle are the values dma_alloc_coherent() returned to you.
 This function may not be called in interrupt context.
 
 If your driver needs lots of smaller memory regions, you can write
-custom code to subdivide pages returned by dma_alloc_coherent,
+custom code to subdivide pages returned by dma_alloc_coherent(),
 or you can use the dma_pool API to do that.  A dma_pool is like
-a kmem_cache, but it uses dma_alloc_coherent not __get_free_pages.
+a kmem_cache, but it uses dma_alloc_coherent(), not __get_free_pages().
 Also, it understands common hardware constraints for alignment,
 like queue heads needing to be aligned on N byte boundaries.
@@ -373,37 +429,37 @@ Create a dma_pool like this:
 
 	struct dma_pool *pool;
 
-	pool = dma_pool_create(name, dev, size, align, alloc);
+	pool = dma_pool_create(name, dev, size, align, boundary);
 
 The "name" is for diagnostics (like a kmem_cache name); dev and size
 are as above.  The device's hardware alignment requirement for this
 type of data is "align" (which is expressed in bytes, and must be a
 power of two).  If your device has no boundary crossing restrictions,
-pass 0 for alloc; passing 4096 says memory allocated from this pool
+pass 0 for boundary; passing 4096 says memory allocated from this pool
 must not cross 4KByte boundaries (but at that time it may be better to
-go for dma_alloc_coherent directly instead).
+use dma_alloc_coherent() directly instead).
 
-Allocate memory from a dma pool like this:
+Allocate memory from a DMA pool like this:
 
 	cpu_addr = dma_pool_alloc(pool, flags, &dma_handle);
 
-flags are SLAB_KERNEL if blocking is permitted (not in_interrupt nor
-holding SMP locks), SLAB_ATOMIC otherwise.  Like dma_alloc_coherent,
+flags are GFP_KERNEL if blocking is permitted (not in_interrupt nor
+holding SMP locks), GFP_ATOMIC otherwise.  Like dma_alloc_coherent(),
 this returns two values, cpu_addr and dma_handle.
 
 Free memory that was allocated from a dma_pool like this:
 
 	dma_pool_free(pool, cpu_addr, dma_handle);
 
-where pool is what you passed to dma_pool_alloc, and cpu_addr and
-dma_handle are the values dma_pool_alloc returned.  This function
+where pool is what you passed to dma_pool_alloc(), and cpu_addr and
+dma_handle are the values dma_pool_alloc() returned.  This function
 may be called in interrupt context.
 
 Destroy a dma_pool by calling:
 
 	dma_pool_destroy(pool);
 
-Make sure you've called dma_pool_free for all memory allocated
+Make sure you've called dma_pool_free() for all memory allocated
 from a pool before you destroy the pool.  This function may not
 be called in interrupt context.
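Tying the four dma_pool calls together, here is a minimal sketch of the full lifecycle.  The struct and function names are invented for the example, and the 64-byte/16-byte sizing is an arbitrary assumption:

	#include <linux/dmapool.h>

	struct mydev {
		struct device	*dev;
		struct dma_pool	*desc_pool;
	};

	static int mydev_pool_demo(struct mydev *md)
	{
		void *desc;		/* CPU address of one descriptor */
		dma_addr_t desc_dma;	/* bus address the device uses */

		/* 64-byte descriptors, 16-byte aligned, no boundary limit */
		md->desc_pool = dma_pool_create("mydev-desc", md->dev,
						64, 16, 0);
		if (!md->desc_pool)
			return -ENOMEM;

		desc = dma_pool_alloc(md->desc_pool, GFP_KERNEL, &desc_dma);
		if (!desc) {
			dma_pool_destroy(md->desc_pool);
			return -ENOMEM;
		}

		/* ... fill the descriptor, hand desc_dma to the device ... */

		dma_pool_free(md->desc_pool, desc, desc_dma);
		dma_pool_destroy(md->desc_pool); /* all memory returned first */
		return 0;
	}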
@@ -418,7 +474,7 @@ one of the following values:
  DMA_FROM_DEVICE
  DMA_NONE
 
-One should provide the exact DMA direction if you know it.
+You should provide the exact DMA direction if you know it.
 
 DMA_TO_DEVICE means "from main memory to the device"
 DMA_FROM_DEVICE means "from the device to main memory"
@@ -489,14 +545,14 @@ and to unmap it:
 	dma_unmap_single(dev, dma_handle, size, direction);
 
 You should call dma_mapping_error() as dma_map_single() could fail and return
-error. Not all dma implementations support dma_mapping_error() interface.
+error.  Not all DMA implementations support the dma_mapping_error() interface.
 However, it is a good practice to call dma_mapping_error() interface, which
 will invoke the generic mapping error check interface. Doing so will ensure
-that the mapping code will work correctly on all dma implementations without
+that the mapping code will work correctly on all DMA implementations without
 any dependency on the specifics of the underlying implementation. Using the
 returned address without checking for errors could result in failures ranging
 from panics to silent data corruption.  A couple of examples of incorrect ways
-to check for errors that make assumptions about the underlying dma
+to check for errors that make assumptions about the underlying DMA
 implementation are as follows and these are applicable to dma_map_page() as
 well.
@@ -516,13 +572,13 @@ Incorrect example 2:
 		goto map_error;
 	}
 
-You should call dma_unmap_single when the DMA activity is finished, e.g.
+You should call dma_unmap_single() when the DMA activity is finished, e.g.,
 from the interrupt which told you that the DMA transfer is done.
 
-Using cpu pointers like this for single mappings has a disadvantage,
+Using CPU pointers like this for single mappings has a disadvantage:
 you cannot reference HIGHMEM memory in this way.  Thus, there is a
-map/unmap interface pair akin to dma_{map,unmap}_single.  These
-interfaces deal with page/offset pairs instead of cpu pointers.
+map/unmap interface pair akin to dma_{map,unmap}_single().  These
+interfaces deal with page/offset pairs instead of CPU pointers.
 Specifically:
 
 	struct device *dev = &my_dev->dev;
@@ -550,7 +606,7 @@ Here, "offset" means byte offset within the given page.
 You should call dma_mapping_error() as dma_map_page() could fail and return
 error as outlined under the dma_map_single() discussion.
 
-You should call dma_unmap_page when the DMA activity is finished, e.g.
+You should call dma_unmap_page() when the DMA activity is finished, e.g.,
 from the interrupt which told you that the DMA transfer is done.
 
 With scatterlists, you map a region gathered from several regions by:
@@ -588,18 +644,16 @@ PLEASE NOTE:  The 'nents' argument to the dma_unmap_sg call must be
               it should _NOT_ be the 'count' value _returned_ from the
               dma_map_sg call.
 
-Every dma_map_{single,sg} call should have its dma_unmap_{single,sg}
-counterpart, because the bus address space is a shared resource (although
-in some ports the mapping is per each BUS so less devices contend for the
-same bus address space) and you could render the machine unusable by eating
-all bus addresses.
+Every dma_map_{single,sg}() call should have its dma_unmap_{single,sg}()
+counterpart, because the bus address space is a shared resource and
+you could render the machine unusable by consuming all bus addresses.
 
 If you need to use the same streaming DMA region multiple times and touch
 the data in between the DMA transfers, the buffer needs to be synced
-properly in order for the cpu and device to see the most uptodate and
+properly in order for the CPU and device to see the most up-to-date and
 correct copy of the DMA buffer.
 
-So, firstly, just map it with dma_map_{single,sg}, and after each DMA
+So, firstly, just map it with dma_map_{single,sg}(), and after each DMA
 transfer call either:
 
 	dma_sync_single_for_cpu(dev, dma_handle, size, direction);
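To make the scatterlist rules concrete — in particular that dma_unmap_sg() takes the original nents, not the count dma_map_sg() returned — here is a minimal sketch; the function name and the DMA_FROM_DEVICE direction are assumptions, and the hardware-programming step is only a comment:

	#include <linux/dma-mapping.h>
	#include <linux/scatterlist.h>

	static int mydev_map_sg(struct device *dev, struct scatterlist *sgl,
				int nents)
	{
		struct scatterlist *sg;
		int i, count;

		count = dma_map_sg(dev, sgl, nents, DMA_FROM_DEVICE);
		if (count == 0)
			return -EIO;	/* mapping failed */

		/* Possibly fewer than nents entries if the IOMMU merged
		 * adjacent segments; iterate over the mapped count. */
		for_each_sg(sgl, sg, count, i) {
			/* hand sg_dma_address(sg)/sg_dma_len(sg) to the
			 * hardware for segment i */
		}

		/* ... DMA runs ... then unmap with the ORIGINAL nents: */
		dma_unmap_sg(dev, sgl, nents, DMA_FROM_DEVICE);
		return 0;
	}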
@@ -611,7 +665,7 @@ or:
 as appropriate.
 
 Then, if you wish to let the device get at the DMA area again,
-finish accessing the data with the cpu, and then before actually
+finish accessing the data with the CPU, and then before actually
 giving the buffer to the hardware call either:
 
 	dma_sync_single_for_device(dev, dma_handle, size, direction);
@@ -623,9 +677,9 @@ or:
 as appropriate.
 
 After the last DMA transfer call one of the DMA unmap routines
-dma_unmap_{single,sg}. If you don't touch the data from the first dma_map_*
-call till dma_unmap_*, then you don't have to call the dma_sync_*
-routines at all.
+dma_unmap_{single,sg}().  If you don't touch the data from the first
+dma_map_*() call till dma_unmap_*(), then you don't have to call the
+dma_sync_*() routines at all.
 
 Here is pseudo code which shows a situation in which you would need
 to use the dma_sync_*() interfaces.
@@ -690,12 +744,12 @@ to use the dma_sync_*() interfaces.
 		}
 	}
 
-Drivers converted fully to this interface should not use virt_to_bus any
-longer, nor should they use bus_to_virt. Some drivers have to be changed a
-little bit, because there is no longer an equivalent to bus_to_virt in the
+Drivers converted fully to this interface should not use virt_to_bus() any
+longer, nor should they use bus_to_virt().  Some drivers have to be changed a
+little bit, because there is no longer an equivalent to bus_to_virt() in the
 dynamic DMA mapping scheme - you have to always store the DMA addresses
-returned by the dma_alloc_coherent, dma_pool_alloc, and dma_map_single
-calls (dma_map_sg stores them in the scatterlist itself if the platform
+returned by the dma_alloc_coherent(), dma_pool_alloc(), and dma_map_single()
+calls (dma_map_sg() stores them in the scatterlist itself if the platform
 supports dynamic DMA mapping in hardware) in your driver structures and/or
 in the card registers.
@@ -709,9 +763,9 @@ as it is impossible to correctly support them.
 DMA address space is limited on some architectures and an allocation
 failure can be determined by:
 
-- checking if dma_alloc_coherent returns NULL or dma_map_sg returns 0
+- checking if dma_alloc_coherent() returns NULL or dma_map_sg returns 0
 
-- checking the returned dma_addr_t of dma_map_single and dma_map_page
+- checking the dma_addr_t returned from dma_map_single() and dma_map_page()
   by using dma_mapping_error():
 
 	dma_addr_t dma_handle;
@@ -794,7 +848,7 @@ Example 2: (if buffers are allocated in a loop, unmap all mapped buffers when
 		dma_unmap_single(array[i].dma_addr);
 	}
 
-Networking drivers must call dev_kfree_skb to free the socket buffer
+Networking drivers must call dev_kfree_skb() to free the socket buffer
 and return NETDEV_TX_OK if the DMA mapping fails on the transmit hook
 (ndo_start_xmit).  This means that the socket buffer is just dropped in
 the failure case.
@@ -831,7 +885,7 @@ transform some example code.
 		DEFINE_DMA_UNMAP_LEN(len);
 	};
 
-2) Use dma_unmap_{addr,len}_set to set these values.
+2) Use dma_unmap_{addr,len}_set() to set these values.
    Example, before:
 
 	ringp->mapping = FOO;
@@ -842,7 +896,7 @@ transform some example code.
 	dma_unmap_addr_set(ringp, mapping, FOO);
 	dma_unmap_len_set(ringp, len, BAR);
 
-3) Use dma_unmap_{addr,len} to access these values.
+3) Use dma_unmap_{addr,len}() to access these values.
    Example, before:
 
 	dma_unmap_single(dev, ringp->mapping, ringp->len,
@@ -4,22 +4,26 @@
 		James E.J. Bottomley <James.Bottomley@HansenPartnership.com>
 
 This document describes the DMA API.  For a more gentle introduction
-of the API (and actual examples) see
-Documentation/DMA-API-HOWTO.txt.
+of the API (and actual examples), see Documentation/DMA-API-HOWTO.txt.
 
-This API is split into two pieces.  Part I describes the API.  Part II
-describes the extensions to the API for supporting non-consistent
-memory machines.  Unless you know that your driver absolutely has to
-support non-consistent platforms (this is usually only legacy
-platforms) you should only use the API described in part I.
+This API is split into two pieces.  Part I describes the basic API.
+Part II describes extensions for supporting non-consistent memory
+machines.  Unless you know that your driver absolutely has to support
+non-consistent platforms (this is usually only legacy platforms) you
+should only use the API described in part I.
 
 Part I - dma_ API
 -------------------------------------
 
-To get the dma_ API, you must #include <linux/dma-mapping.h>
+To get the dma_ API, you must #include <linux/dma-mapping.h>.  This
+provides dma_addr_t and the interfaces described below.
+
+A dma_addr_t can hold any valid DMA or bus address for the platform.  It
+can be given to a device to use as a DMA source or target.  A CPU cannot
+reference a dma_addr_t directly because there may be translation between
+its physical address space and the bus address space.
 
-Part Ia - Using large dma-coherent buffers
+Part Ia - Using large DMA-coherent buffers
 ------------------------------------------
 
 void *
@@ -33,20 +37,21 @@ to make sure to flush the processor's write buffers before telling
 devices to read that memory.)
 
 This routine allocates a region of <size> bytes of consistent memory.
-It also returns a <dma_handle> which may be cast to an unsigned
-integer the same width as the bus and used as the physical address
-base of the region.
 
-Returns: a pointer to the allocated region (in the processor's virtual
+It returns a pointer to the allocated region (in the processor's virtual
 address space) or NULL if the allocation failed.
 
+It also returns a <dma_handle> which may be cast to an unsigned integer the
+same width as the bus and given to the device as the bus address base of
+the region.
+
 Note: consistent memory can be expensive on some platforms, and the
 minimum allocation length may be as big as a page, so you should
 consolidate your requests for consistent memory as much as possible.
 The simplest way to do that is to use the dma_pool calls (see below).
 
-The flag parameter (dma_alloc_coherent only) allows the caller to
-specify the GFP_ flags (see kmalloc) for the allocation (the
+The flag parameter (dma_alloc_coherent() only) allows the caller to
+specify the GFP_ flags (see kmalloc()) for the allocation (the
 implementation may choose to ignore flags that affect the location of
 the returned memory, like GFP_DMA).
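As a brief illustration of the allocate/free pair described above, here is a minimal sketch; the function name and the one-page size are assumptions for the example:

	#include <linux/dma-mapping.h>

	static int mydev_setup_ring(struct device *dev)
	{
		dma_addr_t ring_dma;	/* bus address for the device */
		void *ring;		/* CPU virtual address */

		ring = dma_alloc_coherent(dev, PAGE_SIZE, &ring_dma,
					  GFP_KERNEL);
		if (!ring)
			return -ENOMEM;

		/* CPU accesses go through "ring"; the device is
		 * programmed with ring_dma. */

		dma_free_coherent(dev, PAGE_SIZE, ring, ring_dma);
		return 0;
	}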
@@ -61,24 +66,24 @@ void
 dma_free_coherent(struct device *dev, size_t size, void *cpu_addr,
 		  dma_addr_t dma_handle)
 
-Free the region of consistent memory you previously allocated.  dev,
-size and dma_handle must all be the same as those passed into the
-consistent allocate.  cpu_addr must be the virtual address returned by
-the consistent allocate.
+Free a region of consistent memory you previously allocated.  dev,
+size and dma_handle must all be the same as those passed into
+dma_alloc_coherent().  cpu_addr must be the virtual address returned by
+the dma_alloc_coherent().
 
 Note that unlike their sibling allocation calls, these routines
 may only be called with IRQs enabled.
 
 
-Part Ib - Using small dma-coherent buffers
+Part Ib - Using small DMA-coherent buffers
 ------------------------------------------
 
 To get this part of the dma_ API, you must #include <linux/dmapool.h>
 
-Many drivers need lots of small dma-coherent memory regions for DMA
+Many drivers need lots of small DMA-coherent memory regions for DMA
 descriptors or I/O buffers.  Rather than allocating in units of a page
 or more using dma_alloc_coherent(), you can use DMA pools.  These work
-much like a struct kmem_cache, except that they use the dma-coherent allocator,
+much like a struct kmem_cache, except that they use the DMA-coherent allocator,
 not __get_free_pages().  Also, they understand common hardware constraints
 for alignment, like queue heads needing to be aligned on N-byte boundaries.
@@ -87,7 +92,7 @@ for alignment, like queue heads needing to be aligned on N-byte boundaries.
 	dma_pool_create(const char *name, struct device *dev,
 			size_t size, size_t align, size_t alloc);
 
-The pool create() routines initialize a pool of dma-coherent buffers
+dma_pool_create() initializes a pool of DMA-coherent buffers
 for use with a given device.  It must be called in a context which
 can sleep.
@@ -102,25 +107,26 @@ from this pool must not cross 4KByte boundaries.
 	void *dma_pool_alloc(struct dma_pool *pool, gfp_t gfp_flags,
 			     dma_addr_t *dma_handle);
 
-This allocates memory from the pool; the returned memory will meet the size
-and alignment requirements specified at creation time.  Pass GFP_ATOMIC to
-prevent blocking, or if it's permitted (not in_interrupt, not holding SMP locks),
-pass GFP_KERNEL to allow blocking.  Like dma_alloc_coherent(), this returns
-two values: an address usable by the cpu, and the dma address usable by the
-pool's device.
+This allocates memory from the pool; the returned memory will meet the
+size and alignment requirements specified at creation time.  Pass
+GFP_ATOMIC to prevent blocking, or if it's permitted (not
+in_interrupt, not holding SMP locks), pass GFP_KERNEL to allow
+blocking.  Like dma_alloc_coherent(), this returns two values: an
+address usable by the CPU, and the DMA address usable by the pool's
+device.
 
 
 	void dma_pool_free(struct dma_pool *pool, void *vaddr,
 			   dma_addr_t addr);
 
 This puts memory back into the pool.  The pool is what was passed to
-the pool allocation routine; the cpu (vaddr) and dma addresses are what
+dma_pool_alloc(); the CPU (vaddr) and DMA addresses are what
 were returned when that routine allocated the memory being freed.
 
 
 	void dma_pool_destroy(struct dma_pool *pool);
 
-The pool destroy() routines free the resources of the pool.  They must be
+dma_pool_destroy() frees the resources of the pool.  It must be
 called in a context which can sleep.  Make sure you've freed all allocated
 memory back to the pool before you destroy it.
@@ -187,9 +193,9 @@ dma_map_single(struct device *dev, void *cpu_addr, size_t size,
 	       enum dma_data_direction direction)
 
 Maps a piece of processor virtual memory so it can be accessed by the
-device and returns the physical handle of the memory.
+device and returns the bus address of the memory.
 
-The direction for both api's may be converted freely by casting.
+The direction for both APIs may be converted freely by casting.
 However the dma_ API uses a strongly typed enumerator for its
 direction:
@@ -198,31 +204,30 @@ DMA_TO_DEVICE		data is going from the memory to the device
 DMA_FROM_DEVICE		data is coming from the device to the memory
 DMA_BIDIRECTIONAL	direction isn't known
 
-Notes:  Not all memory regions in a machine can be mapped by this
-API.  Further, regions that appear to be physically contiguous in
-kernel virtual space may not be contiguous as physical memory.  Since
-this API does not provide any scatter/gather capability, it will fail
-if the user tries to map a non-physically contiguous piece of memory.
-For this reason, it is recommended that memory mapped by this API be
-obtained only from sources which guarantee it to be physically contiguous
-(like kmalloc).
+Notes:  Not all memory regions in a machine can be mapped by this API.
+Further, contiguous kernel virtual space may not be contiguous as
+physical memory.  Since this API does not provide any scatter/gather
+capability, it will fail if the user tries to map a non-physically
+contiguous piece of memory.  For this reason, memory to be mapped by
+this API should be obtained from sources which guarantee it to be
+physically contiguous (like kmalloc).
 
-Further, the physical address of the memory must be within the
-dma_mask of the device (the dma_mask represents a bit mask of the
-addressable region for the device.  I.e., if the physical address of
-the memory anded with the dma_mask is still equal to the physical
-address, then the device can perform DMA to the memory).  In order to
+Further, the bus address of the memory must be within the
+dma_mask of the device (the dma_mask is a bit mask of the
+addressable region for the device, i.e., if the bus address of
+the memory ANDed with the dma_mask is still equal to the bus
+address, then the device can perform DMA to the memory).  To
 ensure that the memory allocated by kmalloc is within the dma_mask,
 the driver may specify various platform-dependent flags to restrict
-the physical memory range of the allocation (e.g. on x86, GFP_DMA
-guarantees to be within the first 16Mb of available physical memory,
+the bus address range of the allocation (e.g., on x86, GFP_DMA
+guarantees to be within the first 16MB of available bus addresses,
 as required by ISA devices).
 
 Note also that the above constraints on physical contiguity and
 dma_mask may not apply if the platform has an IOMMU (a device which
-supplies a physical to virtual mapping between the I/O memory bus and
-the device).  However, to be portable, device driver writers may *not*
-assume that such an IOMMU exists.
+maps an I/O bus address to a physical memory address).  However, to be
+portable, device driver writers may *not* assume that such an IOMMU
+exists.
 
 Warnings:  Memory coherency operates at a granularity called the cache
 line width.  In order for memory mapped by this API to operate
@@ -281,9 +286,9 @@ cache width is.
 int
 dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
 
-In some circumstances dma_map_single and dma_map_page will fail to create
+In some circumstances dma_map_single() and dma_map_page() will fail to create
 a mapping.  A driver can check for these errors by testing the returned
-dma address with dma_mapping_error(). A non-zero return value means the mapping
+DMA address with dma_mapping_error().  A non-zero return value means the mapping
 could not be created and the driver should take appropriate action (e.g.
 reduce current DMA mapping usage or delay and try again later).
@@ -291,7 +296,7 @@ reduce current DMA mapping usage or delay and try again later).
 	dma_map_sg(struct device *dev, struct scatterlist *sg,
 		   int nents, enum dma_data_direction direction)
 
-Returns: the number of physical segments mapped (this may be shorter
+Returns: the number of bus address segments mapped (this may be shorter
 than <nents> passed in if some elements of the scatter/gather list are
 physically or virtually adjacent and an IOMMU maps them with a single
 entry).
@@ -299,7 +304,7 @@ entry).
 Please note that the sg cannot be mapped again if it has been mapped once.
 The mapping process is allowed to destroy information in the sg.
 
-As with the other mapping interfaces, dma_map_sg can fail. When it
+As with the other mapping interfaces, dma_map_sg() can fail. When it
 does, 0 is returned and a driver must take appropriate action. It is
 critical that the driver do something, in the case of a block driver
 aborting the request or even oopsing is better than doing nothing and
@@ -335,7 +340,7 @@ must be the same as those and passed in to the scatter/gather mapping
 API.
 
 Note: <nents> must be the number you passed in, *not* the number of
-physical entries returned.
+bus address entries returned.
 
 void
 dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle, size_t size,
@@ -350,7 +355,7 @@ void
 dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg, int nelems,
 		       enum dma_data_direction direction)
 
-Synchronise a single contiguous or scatter/gather mapping for the cpu
+Synchronise a single contiguous or scatter/gather mapping for the CPU
 and device. With the sync_sg API, all the parameters must be the same
 as those passed into the single mapping API. With the sync_single API,
 you can use dma_handle and size parameters that aren't identical to
@@ -391,10 +396,10 @@ The four functions above are just like the counterpart functions
 without the _attrs suffixes, except that they pass an optional
 struct dma_attrs*.
 
-struct dma_attrs encapsulates a set of "dma attributes". For the
+struct dma_attrs encapsulates a set of "DMA attributes". For the
 definition of struct dma_attrs see linux/dma-attrs.h.
 
-The interpretation of dma attributes is architecture-specific, and
+The interpretation of DMA attributes is architecture-specific, and
 each attribute should be documented in Documentation/DMA-attributes.txt.
 
 If struct dma_attrs* is NULL, the semantics of each of these
@@ -458,7 +463,7 @@ Note: where the platform can return consistent memory, it will
 guarantee that the sync points become nops.
 
 Warning:  Handling non-consistent memory is a real pain.  You should
-only ever use this API if you positively know your driver will be
+only use this API if you positively know your driver will be
 required to work on one of the rare (usually non-PCI) architectures
 that simply cannot make consistent memory.
@@ -492,30 +497,29 @@ continuing on for size.  Again, you *must* observe the cache line
 boundaries when doing this.
 
 int
-dma_declare_coherent_memory(struct device *dev, dma_addr_t bus_addr,
+dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
 			    dma_addr_t device_addr, size_t size, int
 			    flags)
 
-Declare region of memory to be handed out by dma_alloc_coherent when
+Declare region of memory to be handed out by dma_alloc_coherent() when
 it's asked for coherent memory for this device.
 
-bus_addr is the physical address to which the memory is currently
-assigned in the bus responding region (this will be used by the
-platform to perform the mapping).
+phys_addr is the CPU physical address to which the memory is currently
+assigned (this will be ioremapped so the CPU can access the region).
 
-device_addr is the physical address the device needs to be programmed
-with actually to address this memory (this will be handed out as the
+device_addr is the bus address the device needs to be programmed
+with to actually address this memory (this will be handed out as the
 dma_addr_t in dma_alloc_coherent()).
 
 size is the size of the area (must be multiples of PAGE_SIZE).
 
-flags can be or'd together and are:
+flags can be ORed together and are:
 
 DMA_MEMORY_MAP - request that the memory returned from
 dma_alloc_coherent() be directly writable.
 
 DMA_MEMORY_IO - request that the memory returned from
-dma_alloc_coherent() be addressable using read/write/memcpy_toio etc.
+dma_alloc_coherent() be addressable using read()/write()/memcpy_toio() etc.
 
 One or both of these flags must be present.
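As a hedged sketch of a call using the new phys_addr_t signature above — the two addresses and the 64 KB size are invented purely for illustration, and in this era of the API a return value of 0 indicates failure:

	#include <linux/dma-mapping.h>

	static int mydev_declare_window(struct device *dev)
	{
		int rc;

		rc = dma_declare_coherent_memory(dev,
				0xfe000000,	/* phys_addr: CPU physical */
				0x00000000,	/* device_addr: bus address */
				0x10000,	/* size: PAGE_SIZE multiple */
				DMA_MEMORY_MAP | DMA_MEMORY_IO);
		if (!rc)		/* 0 means the declaration failed */
			return -ENOMEM;
		return 0;
	}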
@@ -572,7 +576,7 @@ region is occupied.
 Part III - Debug drivers use of the DMA-API
 -------------------------------------------
 
-The DMA-API as described above as some constraints. DMA addresses must be
+The DMA-API as described above has some constraints. DMA addresses must be
 released with the corresponding function with the same size for example. With
 the advent of hardware IOMMUs it becomes more and more important that drivers
 do not violate those constraints. In the worst case such a violation can
@@ -690,11 +694,11 @@ architectural default.
 void debug_dmap_mapping_error(struct device *dev, dma_addr_t dma_addr);
 
 dma-debug interface debug_dma_mapping_error() to debug drivers that fail
-to check dma mapping errors on addresses returned by dma_map_single() and
+to check DMA mapping errors on addresses returned by dma_map_single() and
 dma_map_page() interfaces. This interface clears a flag set by
 debug_dma_map_page() to indicate that dma_mapping_error() has been called by
 the driver. When driver does unmap, debug_dma_unmap() checks the flag and if
 this flag is still set, prints warning message that includes call trace that
 leads up to the unmap. This interface can be called from dma_mapping_error()
-routines to enable dma mapping error check debugging.
+routines to enable DMA mapping error check debugging.
@@ -16,7 +16,7 @@ To do ISA style DMA you need to include two headers:
 #include <asm/dma.h>
 
 The first is the generic DMA API used to convert virtual addresses to
-physical addresses (see Documentation/DMA-API.txt for details).
+bus addresses (see Documentation/DMA-API.txt for details).
 
 The second contains the routines specific to ISA DMA transfers. Since
 this is not present on all platforms make sure you construct your
@@ -50,7 +50,7 @@ early as possible and not release it until the driver is unloaded.)
 Part III - Address translation
 ------------------------------
 
-To translate the virtual address to a physical use the normal DMA
+To translate the virtual address to a bus address, use the normal DMA
 API. Do _not_ use isa_virt_to_phys() even though it does the same
 thing. The reason for this is that the function isa_virt_to_phys()
 will require a Kconfig dependency to ISA, not just ISA_DMA_API which
@ -0,0 +1,100 @@
|
|||
* Generic PCI host controller
|
||||
|
||||
Firmware-initialised PCI host controllers and PCI emulations, such as the
|
||||
virtio-pci implementations found in kvmtool and other para-virtualised
|
||||
systems, do not require driver support for complexities such as regulator
|
||||
and clock management. In fact, the controller may not even require the
|
||||
configuration of a control interface by the operating system, instead
|
||||
presenting a set of fixed windows describing a subset of IO, Memory and
|
||||
Configuration Spaces.
|
||||
|
||||
Such a controller can be described purely in terms of the standardized device
|
||||
tree bindings communicated in pci.txt:
|
||||
|
||||
|
||||
Properties of the host controller node:
|
||||
|
||||
- compatible : Must be "pci-host-cam-generic" or "pci-host-ecam-generic"
|
||||
depending on the layout of configuration space (CAM vs
|
||||
ECAM respectively).
|
||||
|
||||
- device_type : Must be "pci".
|
||||
|
||||
- ranges : As described in IEEE Std 1275-1994, but must provide
|
||||
at least a definition of non-prefetchable memory. One
|
||||
or both of prefetchable Memory and IO Space may also
|
||||
be provided.
|
||||
|
||||
- bus-range : Optional property (also described in IEEE Std 1275-1994)
|
||||
to indicate the range of bus numbers for this controller.
|
||||
If absent, defaults to <0 255> (i.e. all buses).
|
||||
|
||||
- #address-cells : Must be 3.
|
||||
|
||||
- #size-cells : Must be 2.
|
||||
|
||||
- reg : The Configuration Space base address and size, as accessed
|
||||
from the parent bus.
|
||||
|
||||
|
||||
Properties of the /chosen node:
|
||||
|
||||
- linux,pci-probe-only
|
||||
: Optional property which takes a single-cell argument.
|
||||
If '0', then Linux will assign devices in its usual manner,
|
||||
otherwise it will not try to assign devices and instead use
|
||||
them as they are configured already.
|
||||
|
||||
Configuration Space is assumed to be memory-mapped (as opposed to being
|
||||
accessed via an ioport) and laid out with a direct correspondence to the
|
||||
geography of a PCI bus address by concatenating the various components to
|
||||
form an offset.
|
||||
|
||||
For CAM, this 24-bit offset is:
|
||||
|
||||
cfg_offset(bus, device, function, register) =
|
||||
bus << 16 | device << 11 | function << 8 | register
|
||||
|
||||
Whilst ECAM extends this by 4 bits to accomodate 4k of function space:
|
||||
|
||||
cfg_offset(bus, device, function, register) =
|
||||
bus << 20 | device << 15 | function << 12 | register
|
||||

Interrupt mapping is exactly as described in `Open Firmware Recommended
Practice: Interrupt Mapping' and requires the following properties:

- #interrupt-cells   : Must be 1

- interrupt-map      : <see aforementioned specification>

- interrupt-map-mask : <see aforementioned specification>

Example:

pci {
    compatible = "pci-host-cam-generic";
    device_type = "pci";
    #address-cells = <3>;
    #size-cells = <2>;
    bus-range = <0x0 0x1>;

    // CPU_PHYSICAL(2)  SIZE(2)
    reg = <0x0 0x40000000  0x0 0x1000000>;

    // BUS_ADDRESS(3)  CPU_PHYSICAL(2)  SIZE(2)
    ranges = <0x01000000 0x0 0x01000000  0x0 0x01000000  0x0 0x00010000>,
             <0x02000000 0x0 0x41000000  0x0 0x41000000  0x0 0x3f000000>;

    #interrupt-cells = <0x1>;

    // PCI_DEVICE(3)  INT#(1)  CONTROLLER(PHANDLE)  CONTROLLER_DATA(3)
    interrupt-map = <  0x0 0x0 0x0  0x1  &gic  0x0 0x4 0x1
                     0x800 0x0 0x0  0x1  &gic  0x0 0x5 0x1
                    0x1000 0x0 0x0  0x1  &gic  0x0 0x6 0x1
                    0x1800 0x0 0x0  0x1  &gic  0x0 0x7 0x1>;

    // PCI_DEVICE(3)  INT#(1)
    interrupt-map-mask = <0xf800 0x0 0x0  0x7>;
};
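For illustration, the lookup that this interrupt-map performs can be modelled
in a few lines of C. This is a deliberate simplification of what
of_irq_parse_and_map_pci() does at runtime (it ignores swizzling for devices
behind bridges); the table values mirror the example above:

	#include <stdint.h>
	#include <stdio.h>

	/* One row of the interrupt-map: masked (devfn-in-phys.hi, pin) key on
	 * the left, GIC interrupt number on the right (illustrative only). */
	struct imap_row {
		uint32_t phys_hi;	/* PCI_DEVICE cell 0: devfn << 8 */
		uint32_t pin;		/* INT# 1..4 */
		uint32_t gic_irq;	/* CONTROLLER_DATA cell 1 */
	};

	static const struct imap_row imap[] = {
		{ 0x0000, 1, 0x4 }, { 0x0800, 1, 0x5 },
		{ 0x1000, 1, 0x6 }, { 0x1800, 1, 0x7 },
	};

	/* Mask cells come from interrupt-map-mask = <0xf800 0x0 0x0 0x7> */
	static int map_pci_irq(uint32_t devfn, uint32_t pin)
	{
		uint32_t key = (devfn << 8) & 0xf800;	/* keep the slot bits */

		for (unsigned i = 0; i < sizeof(imap) / sizeof(imap[0]); i++)
			if (imap[i].phys_hi == key && imap[i].pin == (pin & 0x7))
				return imap[i].gic_irq;
		return -1;
	}

	int main(void)
	{
		/* slot 2, function 0, INTA -> phys.hi 0x1000 -> GIC 0x6 */
		printf("GIC irq: %d\n", map_pci_irq(2 << 3, 1));
		return 0;
	}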
@@ -0,0 +1,66 @@
Renesas AHB to PCI bridge
-------------------------

This is the bridge used internally to connect the USB controllers to the
AHB. There is one bridge instance per USB port connected to the internal
OHCI and EHCI controllers.

Required properties:
- compatible: "renesas,pci-r8a7790" for the R8A7790 SoC;
              "renesas,pci-r8a7791" for the R8A7791 SoC.
- reg: A list of physical regions to access the device: the first is
       the operational registers for the OHCI/EHCI controllers and the
       second is for the bridge configuration and control registers.
- interrupts: interrupt for the device.
- clocks: The reference to the device clock.
- bus-range: The PCI bus number range; as this is a single bus, the range
             should be specified as the same value twice.
- #address-cells: must be 3.
- #size-cells: must be 2.
- #interrupt-cells: must be 1.
- interrupt-map: standard property used to define the mapping of the PCI
                 interrupts to the GIC interrupts.
- interrupt-map-mask: standard property that helps to define the interrupt
                      mapping.

Example SoC configuration:

	pci0: pci@ee090000 {
		compatible = "renesas,pci-r8a7790";
		clocks = <&mstp7_clks R8A7790_CLK_EHCI>;
		reg = <0x0 0xee090000 0x0 0xc00>,
		      <0x0 0xee080000 0x0 0x1100>;
		interrupts = <0 108 IRQ_TYPE_LEVEL_HIGH>;
		status = "disabled";

		bus-range = <0 0>;
		#address-cells = <3>;
		#size-cells = <2>;
		#interrupt-cells = <1>;
		interrupt-map-mask = <0xff00 0 0 0x7>;
		interrupt-map = <0x0000 0 0 1 &gic 0 108 IRQ_TYPE_LEVEL_HIGH
				 0x0800 0 0 1 &gic 0 108 IRQ_TYPE_LEVEL_HIGH
				 0x1000 0 0 2 &gic 0 108 IRQ_TYPE_LEVEL_HIGH>;

		pci@0,1 {
			reg = <0x800 0 0 0 0>;
			device_type = "pci";
			phys = <&usbphy 0 0>;
			phy-names = "usb";
		};

		pci@0,2 {
			reg = <0x1000 0 0 0 0>;
			device_type = "pci";
			phys = <&usbphy 0 0>;
			phy-names = "usb";
		};
	};

Example board setup:

	&pci0 {
		status = "okay";
		pinctrl-0 = <&usb0_pins>;
		pinctrl-names = "default";
	};
@@ -0,0 +1,47 @@
* Renesas R-Car PCIe interface

Required properties:
- compatible: should contain one of the following
	"renesas,pcie-r8a7779", "renesas,pcie-r8a7790", "renesas,pcie-r8a7791"
- reg: base address and length of the PCIe controller registers.
- #address-cells: set to <3>
- #size-cells: set to <2>
- bus-range: PCI bus numbers covered
- device_type: set to "pci"
- ranges: ranges for the PCI memory and I/O regions.
- dma-ranges: ranges for the inbound memory regions.
- interrupts: two interrupt sources for MSI interrupts, followed by interrupt
	source for hardware-related interrupts (e.g. link speed change).
- #interrupt-cells: set to <1>
- interrupt-map-mask and interrupt-map: standard PCI properties
	to define the mapping of the PCIe interface to interrupt
	numbers.
- clocks: from common clock binding: clock specifiers for the PCIe controller
	and PCIe bus clocks.
- clock-names: from common clock binding: should be "pcie" and "pcie_bus".

Example:

SoC-specific DT entry:

	pcie: pcie@fe000000 {
		compatible = "renesas,pcie-r8a7791";
		reg = <0 0xfe000000 0 0x80000>;
		#address-cells = <3>;
		#size-cells = <2>;
		bus-range = <0x00 0xff>;
		device_type = "pci";
		ranges = <0x01000000 0 0x00000000 0 0xfe100000 0 0x00100000
			  0x02000000 0 0xfe200000 0 0xfe200000 0 0x00200000
			  0x02000000 0 0x30000000 0 0x30000000 0 0x08000000
			  0x42000000 0 0x38000000 0 0x38000000 0 0x08000000>;
		dma-ranges = <0x42000000 0 0x40000000 0 0x40000000 0 0x40000000
			      0x42000000 2 0x00000000 2 0x00000000 0 0x40000000>;
		interrupts = <0 116 4>, <0 117 4>, <0 118 4>;
		#interrupt-cells = <1>;
		interrupt-map-mask = <0 0 0 0>;
		interrupt-map = <0 0 0 0 &gic 0 116 4>;
		clocks = <&mstp3_clks R8A7791_CLK_PCIE>, <&pcie_bus_clk>;
		clock-names = "pcie", "pcie_bus";
		status = "disabled";
	};
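A note on reading the ranges/dma-ranges entries in the bindings above: the
first (phys.hi) cell of each child address encodes the space type and flags
per the IEEE Std 1275 PCI binding. A small illustrative decoder (editorial,
names are ours):

	#include <stdint.h>
	#include <stdio.h>

	/* Decode the first (phys.hi) cell of an IEEE 1275 PCI address:
	 * bit 31 = non-relocatable, bit 30 = prefetchable,
	 * bits 25:24 = space code (1 = I/O, 2 = 32-bit mem, 3 = 64-bit mem). */
	static void decode_phys_hi(uint32_t hi)
	{
		static const char *space[] = { "config", "I/O", "mem32", "mem64" };

		printf("0x%08x: %s%s\n", hi, space[(hi >> 24) & 0x3],
		       (hi & 0x40000000) ? " (prefetchable)" : "");
	}

	int main(void)
	{
		decode_phys_hi(0x01000000);	/* I/O window */
		decode_phys_hi(0x02000000);	/* non-prefetchable memory */
		decode_phys_hi(0x42000000);	/* prefetchable memory */
		return 0;
	}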
@@ -6710,6 +6710,7 @@ F:	Documentation/PCI/
 F:	drivers/pci/
 F:	include/linux/pci*
 F:	arch/x86/pci/
+F:	arch/x86/kernel/quirks.c

 PCI DRIVER FOR IMX6
 M:	Richard Zhu <r65037@freescale.com>

@@ -6757,6 +6758,14 @@ L:	linux-pci@vger.kernel.org
 S:	Maintained
 F:	drivers/pci/host/*designware*

+PCI DRIVER FOR GENERIC OF HOSTS
+M:	Will Deacon <will.deacon@arm.com>
+L:	linux-pci@vger.kernel.org
+L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
+S:	Maintained
+F:	Documentation/devicetree/bindings/pci/host-generic-pci.txt
+F:	drivers/pci/host/pci-host-generic.c
+
 PCMCIA SUBSYSTEM
 P:	Linux PCMCIA Team
 L:	linux-pcmcia@lists.infradead.org
@ -59,11 +59,6 @@ struct pci_controller {
|
|||
|
||||
extern void pcibios_set_master(struct pci_dev *dev);
|
||||
|
||||
extern inline void pcibios_penalize_isa_irq(int irq, int active)
|
||||
{
|
||||
/* We don't do dynamic PCI IRQ allocation */
|
||||
}
|
||||
|
||||
/* IOMMU controls. */
|
||||
|
||||
/* The PCI address space does not equal the physical memory address space.
|
||||
|
|
|
@ -31,11 +31,6 @@ static inline int pci_proc_domain(struct pci_bus *bus)
|
|||
}
|
||||
#endif /* CONFIG_PCI_DOMAINS */
|
||||
|
||||
static inline void pcibios_penalize_isa_irq(int irq, int active)
|
||||
{
|
||||
/* We don't do dynamic PCI IRQ allocation */
|
||||
}
|
||||
|
||||
/*
|
||||
* The PCI address space does equal the physical memory address space.
|
||||
* The networking and block device layers use this boolean for bounce
|
||||
|
|
|
@ -545,6 +545,18 @@ void pci_common_init_dev(struct device *parent, struct hw_pci *hw)
|
|||
*/
|
||||
pci_bus_add_devices(bus);
|
||||
}
|
||||
|
||||
list_for_each_entry(sys, &head, node) {
|
||||
struct pci_bus *bus = sys->bus;
|
||||
|
||||
/* Configure PCI Express settings */
|
||||
if (bus && !pci_has_flag(PCI_PROBE_ONLY)) {
|
||||
struct pci_bus *child;
|
||||
|
||||
list_for_each_entry(child, &bus->children, node)
|
||||
pcie_bus_configure_settings(child);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
#ifndef CONFIG_PCI_HOST_ITE8152
|
||||
|
|
|
@ -10,9 +10,4 @@
|
|||
#define PCIBIOS_MIN_IO 0x00001000
|
||||
#define PCIBIOS_MIN_MEM 0x10000000
|
||||
|
||||
static inline void pcibios_penalize_isa_irq(int irq)
|
||||
{
|
||||
/* We don't do dynamic PCI IRQ allocation */
|
||||
}
|
||||
|
||||
#endif /* _ASM_BFIN_PCI_H */
|
||||
|
|
|
@ -20,7 +20,6 @@ void pcibios_config_init(void);
|
|||
struct pci_bus * pcibios_scan_root(int bus);
|
||||
|
||||
void pcibios_set_master(struct pci_dev *dev);
|
||||
void pcibios_penalize_isa_irq(int irq);
|
||||
struct irq_routing_table *pcibios_get_irq_routing_table(void);
|
||||
int pcibios_set_irq_routing(struct pci_dev *dev, int pin, int irq);
|
||||
|
||||
|
|
|
@ -24,8 +24,6 @@ struct pci_dev;
|
|||
|
||||
extern void pcibios_set_master(struct pci_dev *dev);
|
||||
|
||||
extern void pcibios_penalize_isa_irq(int irq);
|
||||
|
||||
#ifdef CONFIG_MMU
|
||||
extern void *consistent_alloc(gfp_t gfp, size_t size, dma_addr_t *dma_handle);
|
||||
extern void consistent_free(void *vaddr);
|
||||
|
|
|
@ -55,10 +55,6 @@ void __init pcibios_fixup_irqs(void)
|
|||
}
|
||||
}
|
||||
|
||||
void __init pcibios_penalize_isa_irq(int irq)
|
||||
{
|
||||
}
|
||||
|
||||
void pcibios_enable_irq(struct pci_dev *dev)
|
||||
{
|
||||
pci_write_config_byte(dev, PCI_INTERRUPT_LINE, dev->irq);
|
||||
|
|
|
@ -50,12 +50,6 @@ struct pci_dev;
|
|||
extern unsigned long ia64_max_iommu_merge_mask;
|
||||
#define PCI_DMA_BUS_IS_PHYS (ia64_max_iommu_merge_mask == ~0UL)
|
||||
|
||||
static inline void
|
||||
pcibios_penalize_isa_irq (int irq, int active)
|
||||
{
|
||||
/* We don't do dynamic PCI IRQ allocation */
|
||||
}
|
||||
|
||||
#include <asm-generic/pci-dma-compat.h>
|
||||
|
||||
#ifdef CONFIG_PCI
|
||||
|
|
|
@ -49,9 +49,7 @@ static void pci_fixup_video(struct pci_dev *pdev)
|
|||
* type BRIDGE, or CARDBUS. Host to PCI controllers use
|
||||
* PCI header type NORMAL.
|
||||
*/
|
||||
if (bridge
|
||||
&&((bridge->hdr_type == PCI_HEADER_TYPE_BRIDGE)
|
||||
||(bridge->hdr_type == PCI_HEADER_TYPE_CARDBUS))) {
|
||||
if (bridge && (pci_is_bridge(bridge))) {
|
||||
pci_read_config_word(bridge, PCI_BRIDGE_CONTROL,
|
||||
&config);
|
||||
if (!(config & PCI_BRIDGE_CTL_VGA))
|
||||
|
|
|
@ -44,11 +44,6 @@ struct pci_dev;
|
|||
*/
|
||||
#define pcibios_assign_all_busses() 0
|
||||
|
||||
static inline void pcibios_penalize_isa_irq(int irq, int active)
|
||||
{
|
||||
/* We don't do dynamic PCI IRQ allocation */
|
||||
}
|
||||
|
||||
#ifdef CONFIG_PCI
|
||||
extern void set_pci_dma_ops(struct dma_map_ops *dma_ops);
|
||||
extern struct dma_map_ops *get_pci_dma_ops(void);
|
||||
|
|
|
@ -168,26 +168,6 @@ struct pci_controller *pci_find_hose_for_OF_device(struct device_node *node)
|
|||
return NULL;
|
||||
}
|
||||
|
||||
static ssize_t pci_show_devspec(struct device *dev,
|
||||
struct device_attribute *attr, char *buf)
|
||||
{
|
||||
struct pci_dev *pdev;
|
||||
struct device_node *np;
|
||||
|
||||
pdev = to_pci_dev(dev);
|
||||
np = pci_device_to_OF_node(pdev);
|
||||
if (np == NULL || np->full_name == NULL)
|
||||
return 0;
|
||||
return sprintf(buf, "%s", np->full_name);
|
||||
}
|
||||
static DEVICE_ATTR(devspec, S_IRUGO, pci_show_devspec, NULL);
|
||||
|
||||
/* Add sysfs properties */
|
||||
int pcibios_add_platform_entries(struct pci_dev *pdev)
|
||||
{
|
||||
return device_create_file(&pdev->dev, &dev_attr_devspec);
|
||||
}
|
||||
|
||||
void pcibios_set_master(struct pci_dev *dev)
|
||||
{
|
||||
/* No special bus mastering setup handling */
|
||||
|
|
|
@ -73,11 +73,6 @@ extern unsigned long PCIBIOS_MIN_MEM;
|
|||
|
||||
extern void pcibios_set_master(struct pci_dev *dev);
|
||||
|
||||
static inline void pcibios_penalize_isa_irq(int irq, int active)
|
||||
{
|
||||
/* We don't do dynamic PCI IRQ allocation */
|
||||
}
|
||||
|
||||
#define HAVE_PCI_MMAP
|
||||
|
||||
extern int pci_mmap_page_range(struct pci_dev *dev, struct vm_area_struct *vma,
|
||||
|
|
|
@ -48,7 +48,6 @@ extern void unit_pci_init(void);
|
|||
#define PCIBIOS_MIN_MEM 0xB8000000
|
||||
|
||||
void pcibios_set_master(struct pci_dev *dev);
|
||||
void pcibios_penalize_isa_irq(int irq);
|
||||
|
||||
/* Dynamic DMA mapping stuff.
|
||||
* i386 has everything mapped statically.
|
||||
|
|
|
@ -40,10 +40,6 @@ void __init pcibios_fixup_irqs(void)
|
|||
}
|
||||
}
|
||||
|
||||
void __init pcibios_penalize_isa_irq(int irq)
|
||||
{
|
||||
}
|
||||
|
||||
void pcibios_enable_irq(struct pci_dev *dev)
|
||||
{
|
||||
pci_write_config_byte(dev, PCI_INTERRUPT_LINE, dev->irq);
|
||||
|
|
|
@ -215,11 +215,6 @@ static inline void pci_dma_burst_advice(struct pci_dev *pdev,
|
|||
}
|
||||
#endif
|
||||
|
||||
static inline void pcibios_penalize_isa_irq(int irq, int active)
|
||||
{
|
||||
/* We don't need to penalize isa irq's */
|
||||
}
|
||||
|
||||
static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel)
|
||||
{
|
||||
return channel ? 15 : 14;
|
||||
|
|
|
@ -46,11 +46,6 @@ struct pci_dev;
|
|||
#define pcibios_assign_all_busses() \
|
||||
(pci_has_flag(PCI_REASSIGN_ALL_BUS))
|
||||
|
||||
static inline void pcibios_penalize_isa_irq(int irq, int active)
|
||||
{
|
||||
/* We don't do dynamic PCI IRQ allocation */
|
||||
}
|
||||
|
||||
#define HAVE_ARCH_PCI_GET_LEGACY_IDE_IRQ
|
||||
static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel)
|
||||
{
|
||||
|
|
|
@ -201,26 +201,6 @@ struct pci_controller* pci_find_hose_for_OF_device(struct device_node* node)
|
|||
return NULL;
|
||||
}
|
||||
|
||||
static ssize_t pci_show_devspec(struct device *dev,
|
||||
struct device_attribute *attr, char *buf)
|
||||
{
|
||||
struct pci_dev *pdev;
|
||||
struct device_node *np;
|
||||
|
||||
pdev = to_pci_dev (dev);
|
||||
np = pci_device_to_OF_node(pdev);
|
||||
if (np == NULL || np->full_name == NULL)
|
||||
return 0;
|
||||
return sprintf(buf, "%s", np->full_name);
|
||||
}
|
||||
static DEVICE_ATTR(devspec, S_IRUGO, pci_show_devspec, NULL);
|
||||
|
||||
/* Add sysfs properties */
|
||||
int pcibios_add_platform_entries(struct pci_dev *pdev)
|
||||
{
|
||||
return device_create_file(&pdev->dev, &dev_attr_devspec);
|
||||
}
|
||||
|
||||
/*
|
||||
* Reads the interrupt pin to determine if interrupt is use by card.
|
||||
* If the interrupt is used, then gets the interrupt line from the
|
||||
|
|
|
@ -98,8 +98,7 @@ void pcibios_add_pci_devices(struct pci_bus * bus)
|
|||
max = bus->busn_res.start;
|
||||
for (pass = 0; pass < 2; pass++) {
|
||||
list_for_each_entry(dev, &bus->devices, bus_list) {
|
||||
if (dev->hdr_type == PCI_HEADER_TYPE_BRIDGE ||
|
||||
dev->hdr_type == PCI_HEADER_TYPE_CARDBUS)
|
||||
if (pci_is_bridge(dev))
|
||||
max = pci_scan_bridge(bus, dev,
|
||||
max, pass);
|
||||
}
|
||||
|
|
|
@ -362,8 +362,7 @@ static void __of_scan_bus(struct device_node *node, struct pci_bus *bus,
|
|||
|
||||
/* Now scan child busses */
|
||||
list_for_each_entry(dev, &bus->devices, bus_list) {
|
||||
if (dev->hdr_type == PCI_HEADER_TYPE_BRIDGE ||
|
||||
dev->hdr_type == PCI_HEADER_TYPE_CARDBUS) {
|
||||
if (pci_is_bridge(dev)) {
|
||||
of_scan_pci_bridge(dev);
|
||||
}
|
||||
}
|
||||
|
|
|
@ -120,6 +120,8 @@ static inline bool zdev_enabled(struct zpci_dev *zdev)
|
|||
return (zdev->fh & (1UL << 31)) ? true : false;
|
||||
}
|
||||
|
||||
extern const struct attribute_group *zpci_attr_groups[];
|
||||
|
||||
/* -----------------------------------------------------------------------------
|
||||
Prototypes
|
||||
----------------------------------------------------------------------------- */
|
||||
|
@ -166,10 +168,6 @@ static inline void zpci_exit_slot(struct zpci_dev *zdev) {}
|
|||
struct zpci_dev *get_zdev(struct pci_dev *);
|
||||
struct zpci_dev *get_zdev_by_fid(u32);
|
||||
|
||||
/* sysfs */
|
||||
int zpci_sysfs_add_device(struct device *);
|
||||
void zpci_sysfs_remove_device(struct device *);
|
||||
|
||||
/* DMA */
|
||||
int zpci_dma_init(void);
|
||||
void zpci_dma_exit(void);
|
||||
|
|
|
@ -530,11 +530,6 @@ static void zpci_unmap_resources(struct zpci_dev *zdev)
|
|||
}
|
||||
}
|
||||
|
||||
int pcibios_add_platform_entries(struct pci_dev *pdev)
|
||||
{
|
||||
return zpci_sysfs_add_device(&pdev->dev);
|
||||
}
|
||||
|
||||
static int __init zpci_irq_init(void)
|
||||
{
|
||||
int rc;
|
||||
|
@ -671,6 +666,7 @@ int pcibios_add_device(struct pci_dev *pdev)
|
|||
int i;
|
||||
|
||||
zdev->pdev = pdev;
|
||||
pdev->dev.groups = zpci_attr_groups;
|
||||
zpci_map_resources(zdev);
|
||||
|
||||
for (i = 0; i < PCI_BAR_COUNT; i++) {
|
||||
|
|
|
@ -72,36 +72,18 @@ static ssize_t store_recover(struct device *dev, struct device_attribute *attr,
|
|||
}
|
||||
static DEVICE_ATTR(recover, S_IWUSR, NULL, store_recover);
|
||||
|
||||
static struct device_attribute *zpci_dev_attrs[] = {
|
||||
&dev_attr_function_id,
|
||||
&dev_attr_function_handle,
|
||||
&dev_attr_pchid,
|
||||
&dev_attr_pfgid,
|
||||
&dev_attr_recover,
|
||||
static struct attribute *zpci_dev_attrs[] = {
|
||||
&dev_attr_function_id.attr,
|
||||
&dev_attr_function_handle.attr,
|
||||
&dev_attr_pchid.attr,
|
||||
&dev_attr_pfgid.attr,
|
||||
&dev_attr_recover.attr,
|
||||
NULL,
|
||||
};
|
||||
static struct attribute_group zpci_attr_group = {
|
||||
.attrs = zpci_dev_attrs,
|
||||
};
|
||||
const struct attribute_group *zpci_attr_groups[] = {
|
||||
&zpci_attr_group,
|
||||
NULL,
|
||||
};
|
||||
|
||||
int zpci_sysfs_add_device(struct device *dev)
|
||||
{
|
||||
int i, rc = 0;
|
||||
|
||||
for (i = 0; zpci_dev_attrs[i]; i++) {
|
||||
rc = device_create_file(dev, zpci_dev_attrs[i]);
|
||||
if (rc)
|
||||
goto error;
|
||||
}
|
||||
return 0;
|
||||
|
||||
error:
|
||||
while (--i >= 0)
|
||||
device_remove_file(dev, zpci_dev_attrs[i]);
|
||||
return rc;
|
||||
}
|
||||
|
||||
void zpci_sysfs_remove_device(struct device *dev)
|
||||
{
|
||||
int i;
|
||||
|
||||
for (i = 0; zpci_dev_attrs[i]; i++)
|
||||
device_remove_file(dev, zpci_dev_attrs[i]);
|
||||
}
|
||||
|
|
|
@ -31,6 +31,8 @@
|
|||
static void gapspci_fixup_resources(struct pci_dev *dev)
|
||||
{
|
||||
struct pci_channel *p = dev->sysdata;
|
||||
struct resource res;
|
||||
struct pci_bus_region region;
|
||||
|
||||
printk(KERN_NOTICE "PCI: Fixing up device %s\n", pci_name(dev));
|
||||
|
||||
|
@ -50,11 +52,21 @@ static void gapspci_fixup_resources(struct pci_dev *dev)
|
|||
|
||||
/*
|
||||
* Redirect dma memory allocations to special memory window.
|
||||
*
|
||||
* If this GAPSPCI region were mapped by a BAR, the CPU
|
||||
* phys_addr_t would be pci_resource_start(), and the bus
|
||||
* address would be pci_bus_address(pci_resource_start()).
|
||||
* But apparently there's no BAR mapping it, so we just
|
||||
* "know" its CPU address is GAPSPCI_DMA_BASE.
|
||||
*/
|
||||
res.start = GAPSPCI_DMA_BASE;
|
||||
res.end = GAPSPCI_DMA_BASE + GAPSPCI_DMA_SIZE - 1;
|
||||
res.flags = IORESOURCE_MEM;
|
||||
pcibios_resource_to_bus(dev->bus, ®ion, &res);
|
||||
BUG_ON(!dma_declare_coherent_memory(&dev->dev,
|
||||
GAPSPCI_DMA_BASE,
|
||||
GAPSPCI_DMA_BASE,
|
||||
GAPSPCI_DMA_SIZE,
|
||||
res.start,
|
||||
region.start,
|
||||
resource_size(&res),
|
||||
DMA_MEMORY_MAP |
|
||||
DMA_MEMORY_EXCLUSIVE));
|
||||
break;
|
||||
|
|
|
@ -70,11 +70,6 @@ extern int pci_mmap_page_range(struct pci_dev *dev, struct vm_area_struct *vma,
|
|||
enum pci_mmap_state mmap_state, int write_combine);
|
||||
extern void pcibios_set_master(struct pci_dev *dev);
|
||||
|
||||
static inline void pcibios_penalize_isa_irq(int irq, int active)
|
||||
{
|
||||
/* We don't do dynamic PCI IRQ allocation */
|
||||
}
|
||||
|
||||
/* Dynamic DMA mapping stuff.
|
||||
* SuperH has everything mapped statically like x86.
|
||||
*/
|
||||
|
|
|
@ -16,11 +16,6 @@
|
|||
|
||||
#define PCI_IRQ_NONE 0xffffffff
|
||||
|
||||
static inline void pcibios_penalize_isa_irq(int irq, int active)
|
||||
{
|
||||
/* We don't do dynamic PCI IRQ allocation */
|
||||
}
|
||||
|
||||
/* Dynamic DMA mapping stuff.
|
||||
*/
|
||||
#define PCI_DMA_BUS_IS_PHYS (0)
|
||||
|
|
|
@ -16,11 +16,6 @@
|
|||
|
||||
#define PCI_IRQ_NONE 0xffffffff
|
||||
|
||||
static inline void pcibios_penalize_isa_irq(int irq, int active)
|
||||
{
|
||||
/* We don't do dynamic PCI IRQ allocation */
|
||||
}
|
||||
|
||||
/* The PCI address space does not equal the physical memory
|
||||
* address space. The networking and block device layers use
|
||||
* this boolean for bounce buffer decisions.
|
||||
|
|
|
@ -543,8 +543,7 @@ static void pci_of_scan_bus(struct pci_pbm_info *pbm,
|
|||
printk("PCI: dev header type: %x\n",
|
||||
dev->hdr_type);
|
||||
|
||||
if (dev->hdr_type == PCI_HEADER_TYPE_BRIDGE ||
|
||||
dev->hdr_type == PCI_HEADER_TYPE_CARDBUS)
|
||||
if (pci_is_bridge(dev))
|
||||
of_scan_pci_bridge(pbm, child, dev);
|
||||
}
|
||||
}
|
||||
|
|
|
@ -18,11 +18,6 @@
|
|||
#include <asm-generic/pci.h>
|
||||
#include <mach/hardware.h> /* for PCIBIOS_MIN_* */
|
||||
|
||||
static inline void pcibios_penalize_isa_irq(int irq, int active)
|
||||
{
|
||||
/* We don't do dynamic PCI IRQ allocation */
|
||||
}
|
||||
|
||||
#ifdef CONFIG_PCI
|
||||
static inline void pci_dma_burst_advice(struct pci_dev *pdev,
|
||||
enum pci_dma_burst_strategy *strat,
|
||||
|
|
|
@ -68,7 +68,6 @@ void pcibios_config_init(void);
|
|||
void pcibios_scan_root(int bus);
|
||||
|
||||
void pcibios_set_master(struct pci_dev *dev);
|
||||
void pcibios_penalize_isa_irq(int irq, int active);
|
||||
struct irq_routing_table *pcibios_get_irq_routing_table(void);
|
||||
int pcibios_set_irq_routing(struct pci_dev *dev, int pin, int irq);
|
||||
|
||||
|
|
|
@ -10,6 +10,8 @@
|
|||
*
|
||||
* Copyright 2002 Andi Kleen, SuSE Labs.
|
||||
*/
|
||||
#define pr_fmt(fmt) "AGP: " fmt
|
||||
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/types.h>
|
||||
#include <linux/init.h>
|
||||
|
@ -75,14 +77,13 @@ static u32 __init allocate_aperture(void)
|
|||
addr = memblock_find_in_range(GART_MIN_ADDR, GART_MAX_ADDR,
|
||||
aper_size, aper_size);
|
||||
if (!addr) {
|
||||
printk(KERN_ERR
|
||||
"Cannot allocate aperture memory hole (%lx,%uK)\n",
|
||||
addr, aper_size>>10);
|
||||
pr_err("Cannot allocate aperture memory hole [mem %#010lx-%#010lx] (%uKB)\n",
|
||||
addr, addr + aper_size - 1, aper_size >> 10);
|
||||
return 0;
|
||||
}
|
||||
memblock_reserve(addr, aper_size);
|
||||
printk(KERN_INFO "Mapping aperture over %d KB of RAM @ %lx\n",
|
||||
aper_size >> 10, addr);
|
||||
pr_info("Mapping aperture over RAM [mem %#010lx-%#010lx] (%uKB)\n",
|
||||
addr, addr + aper_size - 1, aper_size >> 10);
|
||||
register_nosave_region(addr >> PAGE_SHIFT,
|
||||
(addr+aper_size) >> PAGE_SHIFT);
|
||||
|
||||
|
@ -126,10 +127,11 @@ static u32 __init read_agp(int bus, int slot, int func, int cap, u32 *order)
|
|||
u64 aper;
|
||||
u32 old_order;
|
||||
|
||||
printk(KERN_INFO "AGP bridge at %02x:%02x:%02x\n", bus, slot, func);
|
||||
pr_info("pci 0000:%02x:%02x:%02x: AGP bridge\n", bus, slot, func);
|
||||
apsizereg = read_pci_config_16(bus, slot, func, cap + 0x14);
|
||||
if (apsizereg == 0xffffffff) {
|
||||
printk(KERN_ERR "APSIZE in AGP bridge unreadable\n");
|
||||
pr_err("pci 0000:%02x:%02x.%d: APSIZE unreadable\n",
|
||||
bus, slot, func);
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@ -153,16 +155,18 @@ static u32 __init read_agp(int bus, int slot, int func, int cap, u32 *order)
|
|||
* On some sick chips, APSIZE is 0. It means it wants 4G
|
||||
* so let double check that order, and lets trust AMD NB settings:
|
||||
*/
|
||||
printk(KERN_INFO "Aperture from AGP @ %Lx old size %u MB\n",
|
||||
aper, 32 << old_order);
|
||||
pr_info("pci 0000:%02x:%02x.%d: AGP aperture [bus addr %#010Lx-%#010Lx] (old size %uMB)\n",
|
||||
bus, slot, func, aper, aper + (32ULL << (old_order + 20)) - 1,
|
||||
32 << old_order);
|
||||
if (aper + (32ULL<<(20 + *order)) > 0x100000000ULL) {
|
||||
printk(KERN_INFO "Aperture size %u MB (APSIZE %x) is not right, using settings from NB\n",
|
||||
32 << *order, apsizereg);
|
||||
pr_info("pci 0000:%02x:%02x.%d: AGP aperture size %uMB (APSIZE %#x) is not right, using settings from NB\n",
|
||||
bus, slot, func, 32 << *order, apsizereg);
|
||||
*order = old_order;
|
||||
}
|
||||
|
||||
printk(KERN_INFO "Aperture from AGP @ %Lx size %u MB (APSIZE %x)\n",
|
||||
aper, 32 << *order, apsizereg);
|
||||
pr_info("pci 0000:%02x:%02x.%d: AGP aperture [bus addr %#010Lx-%#010Lx] (%uMB, APSIZE %#x)\n",
|
||||
bus, slot, func, aper, aper + (32ULL << (*order + 20)) - 1,
|
||||
32 << *order, apsizereg);
|
||||
|
||||
if (!aperture_valid(aper, (32*1024*1024) << *order, 32<<20))
|
||||
return 0;
|
||||
|
@ -218,7 +222,7 @@ static u32 __init search_agp_bridge(u32 *order, int *valid_agp)
|
|||
}
|
||||
}
|
||||
}
|
||||
printk(KERN_INFO "No AGP bridge found\n");
|
||||
pr_info("No AGP bridge found\n");
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
@ -310,7 +314,8 @@ void __init early_gart_iommu_check(void)
|
|||
if (e820_any_mapped(aper_base, aper_base + aper_size,
|
||||
E820_RAM)) {
|
||||
/* reserve it, so we can reuse it in second kernel */
|
||||
printk(KERN_INFO "update e820 for GART\n");
|
||||
pr_info("e820: reserve [mem %#010Lx-%#010Lx] for GART\n",
|
||||
aper_base, aper_base + aper_size - 1);
|
||||
e820_add_region(aper_base, aper_size, E820_RESERVED);
|
||||
update_e820();
|
||||
}
|
||||
|
@ -354,7 +359,7 @@ int __init gart_iommu_hole_init(void)
|
|||
!early_pci_allowed())
|
||||
return -ENODEV;
|
||||
|
||||
printk(KERN_INFO "Checking aperture...\n");
|
||||
pr_info("Checking aperture...\n");
|
||||
|
||||
if (!fallback_aper_force)
|
||||
agp_aper_base = search_agp_bridge(&agp_aper_order, &valid_agp);
|
||||
|
@ -395,8 +400,9 @@ int __init gart_iommu_hole_init(void)
|
|||
aper_base = read_pci_config(bus, slot, 3, AMD64_GARTAPERTUREBASE) & 0x7fff;
|
||||
aper_base <<= 25;
|
||||
|
||||
printk(KERN_INFO "Node %d: aperture @ %Lx size %u MB\n",
|
||||
node, aper_base, aper_size >> 20);
|
||||
pr_info("Node %d: aperture [bus addr %#010Lx-%#010Lx] (%uMB)\n",
|
||||
node, aper_base, aper_base + aper_size - 1,
|
||||
aper_size >> 20);
|
||||
node++;
|
||||
|
||||
if (!aperture_valid(aper_base, aper_size, 64<<20)) {
|
||||
|
@ -407,9 +413,9 @@ int __init gart_iommu_hole_init(void)
|
|||
if (!no_iommu &&
|
||||
max_pfn > MAX_DMA32_PFN &&
|
||||
!printed_gart_size_msg) {
|
||||
printk(KERN_ERR "you are using iommu with agp, but GART size is less than 64M\n");
|
||||
printk(KERN_ERR "please increase GART size in your BIOS setup\n");
|
||||
printk(KERN_ERR "if BIOS doesn't have that option, contact your HW vendor!\n");
|
||||
pr_err("you are using iommu with agp, but GART size is less than 64MB\n");
|
||||
pr_err("please increase GART size in your BIOS setup\n");
|
||||
pr_err("if BIOS doesn't have that option, contact your HW vendor!\n");
|
||||
printed_gart_size_msg = 1;
|
||||
}
|
||||
} else {
|
||||
|
@ -446,13 +452,10 @@ out:
|
|||
force_iommu ||
|
||||
valid_agp ||
|
||||
fallback_aper_force) {
|
||||
printk(KERN_INFO
|
||||
"Your BIOS doesn't leave a aperture memory hole\n");
|
||||
printk(KERN_INFO
|
||||
"Please enable the IOMMU option in the BIOS setup\n");
|
||||
printk(KERN_INFO
|
||||
"This costs you %d MB of RAM\n",
|
||||
32 << fallback_aper_order);
|
||||
pr_info("Your BIOS doesn't leave a aperture memory hole\n");
|
||||
pr_info("Please enable the IOMMU option in the BIOS setup\n");
|
||||
pr_info("This costs you %dMB of RAM\n",
|
||||
32 << fallback_aper_order);
|
||||
|
||||
aper_order = fallback_aper_order;
|
||||
aper_alloc = allocate_aperture();
|
||||
|
|
|
@@ -489,8 +489,12 @@ struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root)
 	}

 	node = acpi_get_node(device->handle);
-	if (node == NUMA_NO_NODE)
+	if (node == NUMA_NO_NODE) {
 		node = x86_pci_root_bus_node(busnum);
+		if (node != 0 && node != NUMA_NO_NODE)
+			dev_info(&device->dev, FW_BUG "no _PXM; falling back to node %d from hardware (may be inconsistent with ACPI node numbers)\n",
+				 node);
+	}

 	if (node != NUMA_NO_NODE && !node_online(node))
 		node = NUMA_NO_NODE;
@ -11,27 +11,33 @@
|
|||
|
||||
#include "bus_numa.h"
|
||||
|
||||
/*
|
||||
* This discovers the pcibus <-> node mapping on AMD K8.
|
||||
* also get peer root bus resource for io,mmio
|
||||
*/
|
||||
#define AMD_NB_F0_NODE_ID 0x60
|
||||
#define AMD_NB_F0_UNIT_ID 0x64
|
||||
#define AMD_NB_F1_CONFIG_MAP_REG 0xe0
|
||||
|
||||
struct pci_hostbridge_probe {
|
||||
#define RANGE_NUM 16
|
||||
#define AMD_NB_F1_CONFIG_MAP_RANGES 4
|
||||
|
||||
struct amd_hostbridge {
|
||||
u32 bus;
|
||||
u32 slot;
|
||||
u32 vendor;
|
||||
u32 device;
|
||||
};
|
||||
|
||||
static struct pci_hostbridge_probe pci_probes[] __initdata = {
|
||||
{ 0, 0x18, PCI_VENDOR_ID_AMD, 0x1100 },
|
||||
{ 0, 0x18, PCI_VENDOR_ID_AMD, 0x1200 },
|
||||
{ 0xff, 0, PCI_VENDOR_ID_AMD, 0x1200 },
|
||||
{ 0, 0x18, PCI_VENDOR_ID_AMD, 0x1300 },
|
||||
/*
|
||||
* IMPORTANT NOTE:
|
||||
* hb_probes[] and early_root_info_init() is in maintenance mode.
|
||||
* It only supports K8, Fam10h, Fam11h, and Fam15h_00h-0fh .
|
||||
* Future processor will rely on information in ACPI.
|
||||
*/
|
||||
static struct amd_hostbridge hb_probes[] __initdata = {
|
||||
{ 0, 0x18, 0x1100 }, /* K8 */
|
||||
{ 0, 0x18, 0x1200 }, /* Family10h */
|
||||
{ 0xff, 0, 0x1200 }, /* Family10h */
|
||||
{ 0, 0x18, 0x1300 }, /* Family11h */
|
||||
{ 0, 0x18, 0x1600 }, /* Family15h */
|
||||
};
|
||||
|
||||
#define RANGE_NUM 16
|
||||
|
||||
static struct pci_root_info __init *find_pci_root_info(int node, int link)
|
||||
{
|
||||
struct pci_root_info *info;
|
||||
|
@ -45,12 +51,12 @@ static struct pci_root_info __init *find_pci_root_info(int node, int link)
|
|||
}
|
||||
|
||||
/**
|
||||
* early_fill_mp_bus_to_node()
|
||||
* early_root_info_init()
|
||||
* called before pcibios_scan_root and pci_scan_bus
|
||||
* fills the mp_bus_to_cpumask array based according to the LDT Bus Number
|
||||
* Registers found in the K8 northbridge
|
||||
* fills the mp_bus_to_cpumask array based according
|
||||
* to the LDT Bus Number Registers found in the northbridge.
|
||||
*/
|
||||
static int __init early_fill_mp_bus_info(void)
|
||||
static int __init early_root_info_init(void)
|
||||
{
|
||||
int i;
|
||||
unsigned bus;
|
||||
|
@ -75,19 +81,21 @@ static int __init early_fill_mp_bus_info(void)
|
|||
return -1;
|
||||
|
||||
found = false;
|
||||
for (i = 0; i < ARRAY_SIZE(pci_probes); i++) {
|
||||
for (i = 0; i < ARRAY_SIZE(hb_probes); i++) {
|
||||
u32 id;
|
||||
u16 device;
|
||||
u16 vendor;
|
||||
|
||||
bus = pci_probes[i].bus;
|
||||
slot = pci_probes[i].slot;
|
||||
bus = hb_probes[i].bus;
|
||||
slot = hb_probes[i].slot;
|
||||
id = read_pci_config(bus, slot, 0, PCI_VENDOR_ID);
|
||||
|
||||
vendor = id & 0xffff;
|
||||
device = (id>>16) & 0xffff;
|
||||
if (pci_probes[i].vendor == vendor &&
|
||||
pci_probes[i].device == device) {
|
||||
|
||||
if (vendor != PCI_VENDOR_ID_AMD)
|
||||
continue;
|
||||
|
||||
if (hb_probes[i].device == device) {
|
||||
found = true;
|
||||
break;
|
||||
}
|
||||
|
@ -96,10 +104,16 @@ static int __init early_fill_mp_bus_info(void)
|
|||
if (!found)
|
||||
return 0;
|
||||
|
||||
for (i = 0; i < 4; i++) {
|
||||
/*
|
||||
* We should learn topology and routing information from _PXM and
|
||||
* _CRS methods in the ACPI namespace. We extract node numbers
|
||||
* here to work around BIOSes that don't supply _PXM.
|
||||
*/
|
||||
for (i = 0; i < AMD_NB_F1_CONFIG_MAP_RANGES; i++) {
|
||||
int min_bus;
|
||||
int max_bus;
|
||||
reg = read_pci_config(bus, slot, 1, 0xe0 + (i << 2));
|
||||
reg = read_pci_config(bus, slot, 1,
|
||||
AMD_NB_F1_CONFIG_MAP_REG + (i << 2));
|
||||
|
||||
/* Check if that register is enabled for bus range */
|
||||
if ((reg & 7) != 3)
|
||||
|
@ -113,10 +127,21 @@ static int __init early_fill_mp_bus_info(void)
|
|||
info = alloc_pci_root_info(min_bus, max_bus, node, link);
|
||||
}
|
||||
|
||||
/*
|
||||
* The following code extracts routing information for use on old
|
||||
* systems where Linux doesn't automatically use host bridge _CRS
|
||||
* methods (or when the user specifies "pci=nocrs").
|
||||
*
|
||||
* We only do this through Fam11h, because _CRS should be enough on
|
||||
* newer systems.
|
||||
*/
|
||||
if (boot_cpu_data.x86 > 0x11)
|
||||
return 0;
|
||||
|
||||
/* get the default node and link for left over res */
|
||||
reg = read_pci_config(bus, slot, 0, 0x60);
|
||||
reg = read_pci_config(bus, slot, 0, AMD_NB_F0_NODE_ID);
|
||||
def_node = (reg >> 8) & 0x07;
|
||||
reg = read_pci_config(bus, slot, 0, 0x64);
|
||||
reg = read_pci_config(bus, slot, 0, AMD_NB_F0_UNIT_ID);
|
||||
def_link = (reg >> 8) & 0x03;
|
||||
|
||||
memset(range, 0, sizeof(range));
|
||||
|
@ -363,7 +388,7 @@ static int __init pci_io_ecs_init(void)
|
|||
int cpu;
|
||||
|
||||
/* assume all cpus from fam10h have IO ECS */
|
||||
if (boot_cpu_data.x86 < 0x10)
|
||||
if (boot_cpu_data.x86 < 0x10)
|
||||
return 0;
|
||||
|
||||
/* Try the PCI method first. */
|
||||
|
@ -387,7 +412,7 @@ static int __init amd_postcore_init(void)
|
|||
if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD)
|
||||
return 0;
|
||||
|
||||
early_fill_mp_bus_info();
|
||||
early_root_info_init();
|
||||
pci_io_ecs_init();
|
||||
|
||||
return 0;
|
||||
|
|
|
@@ -60,8 +60,8 @@ static void __init cnb20le_res(u8 bus, u8 slot, u8 func)
 	word1 = read_pci_config_16(bus, slot, func, 0xc4);
 	word2 = read_pci_config_16(bus, slot, func, 0xc6);
 	if (word1 != word2) {
-		res.start = (word1 << 16) | 0x0000;
-		res.end   = (word2 << 16) | 0xffff;
+		res.start = ((resource_size_t) word1 << 16) | 0x0000;
+		res.end   = ((resource_size_t) word2 << 16) | 0xffff;
 		res.flags = IORESOURCE_MEM | IORESOURCE_PREFETCH;
 		update_res(info, res.start, res.end, res.flags, 0);
 	}
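The sign-extension bug fixed here is easy to reproduce in isolation: word2 is
a u16, so the shift happens on a promoted (signed) int, and a negative
intermediate sign-extends when widened to the 64-bit resource_size_t. A
standalone sketch with a made-up window address:

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		uint16_t word2 = 0x8000;	/* top half of a window address */

		/* word2 is promoted to (signed) int; the shift sets bit 31
		 * (formally undefined, in practice negative), and widening
		 * the negative int to 64 bits sign-extends it. */
		uint64_t bad  = (word2 << 16) | 0xffff;

		/* Casting first keeps the arithmetic unsigned and 64 bits wide. */
		uint64_t good = ((uint64_t) word2 << 16) | 0xffff;

		printf("bad:  %#018llx\n", (unsigned long long) bad);  /* 0xffffffff8000ffff */
		printf("good: %#018llx\n", (unsigned long long) good); /* 0x000000008000ffff */
		return 0;
	}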
@@ -6,6 +6,7 @@
 #include <linux/dmi.h>
 #include <linux/pci.h>
 #include <linux/vgaarb.h>
+#include <asm/hpet.h>
 #include <asm/pci_x86.h>

 static void pci_fixup_i450nx(struct pci_dev *d)

@@ -337,9 +338,7 @@ static void pci_fixup_video(struct pci_dev *pdev)
	 * type BRIDGE, or CARDBUS. Host to PCI controllers use
	 * PCI header type NORMAL.
	 */
-	if (bridge
-	    && ((bridge->hdr_type == PCI_HEADER_TYPE_BRIDGE)
-	       || (bridge->hdr_type == PCI_HEADER_TYPE_CARDBUS))) {
+	if (bridge && (pci_is_bridge(bridge))) {
		pci_read_config_word(bridge, PCI_BRIDGE_CONTROL,
				     &config);
		if (!(config & PCI_BRIDGE_CTL_VGA))

@@ -526,6 +525,19 @@ static void sb600_disable_hpet_bar(struct pci_dev *dev)
 }
 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_ATI, 0x4385, sb600_disable_hpet_bar);

+#ifdef CONFIG_HPET_TIMER
+static void sb600_hpet_quirk(struct pci_dev *dev)
+{
+	struct resource *r = &dev->resource[1];
+
+	if (r->flags & IORESOURCE_MEM && r->start == hpet_address) {
+		r->flags |= IORESOURCE_PCI_FIXED;
+		dev_info(&dev->dev, "reg 0x14 contains HPET; making it immovable\n");
+	}
+}
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATI, 0x4385, sb600_hpet_quirk);
+#endif
+
 /*
  * Twinhead H12Y needs us to block out a region otherwise we map devices
  * there and any access kills the box.
@@ -271,11 +271,16 @@ static void pcibios_allocate_dev_resources(struct pci_dev *dev, int pass)
 				"BAR %d: reserving %pr (d=%d, p=%d)\n",
 				idx, r, disabled, pass);
 			if (pci_claim_resource(dev, idx) < 0) {
-				/* We'll assign a new address later */
-				pcibios_save_fw_addr(dev,
-						idx, r->start);
-				r->end -= r->start;
-				r->start = 0;
+				if (r->flags & IORESOURCE_PCI_FIXED) {
+					dev_info(&dev->dev, "BAR %d %pR is immovable\n",
+						 idx, r);
+				} else {
+					/* We'll assign a new address later */
+					pcibios_save_fw_addr(dev,
+							idx, r->start);
+					r->end -= r->start;
+					r->start = 0;
+				}
 			}
 		}
 	}

@@ -356,6 +361,12 @@ static int __init pcibios_assign_resources(void)
 	return 0;
 }

+/**
+ * called in fs_initcall (one below subsys_initcall),
+ * give a chance for motherboard reserve resources
+ */
+fs_initcall(pcibios_assign_resources);
+
 void pcibios_resource_survey_bus(struct pci_bus *bus)
 {
 	dev_printk(KERN_DEBUG, &bus->dev, "Allocating resources\n");

@@ -392,12 +403,6 @@ void __init pcibios_resource_survey(void)
 	ioapic_insert_resources();
 }

-/**
- * called in fs_initcall (one below subsys_initcall),
- * give a chance for motherboard reserve resources
- */
-fs_initcall(pcibios_assign_resources);
-
 static const struct vm_operations_struct pci_mmap_ops = {
 	.access = generic_access_phys,
 };
@ -22,11 +22,6 @@
|
|||
|
||||
extern struct pci_controller* pcibios_alloc_controller(void);
|
||||
|
||||
static inline void pcibios_penalize_isa_irq(int irq)
|
||||
{
|
||||
/* We don't do dynamic PCI IRQ allocation */
|
||||
}
|
||||
|
||||
/* Assume some values. (We should revise them, if necessary) */
|
||||
|
||||
#define PCIBIOS_MIN_IO 0x2000
|
||||
|
|
|
@ -10,13 +10,13 @@
|
|||
struct dma_coherent_mem {
|
||||
void *virt_base;
|
||||
dma_addr_t device_base;
|
||||
phys_addr_t pfn_base;
|
||||
unsigned long pfn_base;
|
||||
int size;
|
||||
int flags;
|
||||
unsigned long *bitmap;
|
||||
};
|
||||
|
||||
int dma_declare_coherent_memory(struct device *dev, dma_addr_t bus_addr,
|
||||
int dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
|
||||
dma_addr_t device_addr, size_t size, int flags)
|
||||
{
|
||||
void __iomem *mem_base = NULL;
|
||||
|
@ -32,7 +32,7 @@ int dma_declare_coherent_memory(struct device *dev, dma_addr_t bus_addr,
|
|||
|
||||
/* FIXME: this routine just ignores DMA_MEMORY_INCLUDES_CHILDREN */
|
||||
|
||||
mem_base = ioremap(bus_addr, size);
|
||||
mem_base = ioremap(phys_addr, size);
|
||||
if (!mem_base)
|
||||
goto out;
|
||||
|
||||
|
@ -45,7 +45,7 @@ int dma_declare_coherent_memory(struct device *dev, dma_addr_t bus_addr,
|
|||
|
||||
dev->dma_mem->virt_base = mem_base;
|
||||
dev->dma_mem->device_base = device_addr;
|
||||
dev->dma_mem->pfn_base = PFN_DOWN(bus_addr);
|
||||
dev->dma_mem->pfn_base = PFN_DOWN(phys_addr);
|
||||
dev->dma_mem->size = pages;
|
||||
dev->dma_mem->flags = flags;
|
||||
|
||||
|
@ -208,7 +208,7 @@ int dma_mmap_from_coherent(struct device *dev, struct vm_area_struct *vma,
|
|||
|
||||
*ret = -ENXIO;
|
||||
if (off < count && user_count <= count - off) {
|
||||
unsigned pfn = mem->pfn_base + start + off;
|
||||
unsigned long pfn = mem->pfn_base + start + off;
|
||||
*ret = remap_pfn_range(vma, vma->vm_start, pfn,
|
||||
user_count << PAGE_SHIFT,
|
||||
vma->vm_page_prot);
|
||||
|
|
|
@ -175,7 +175,7 @@ static void dmam_coherent_decl_release(struct device *dev, void *res)
|
|||
/**
|
||||
* dmam_declare_coherent_memory - Managed dma_declare_coherent_memory()
|
||||
* @dev: Device to declare coherent memory for
|
||||
* @bus_addr: Bus address of coherent memory to be declared
|
||||
* @phys_addr: Physical address of coherent memory to be declared
|
||||
* @device_addr: Device address of coherent memory to be declared
|
||||
* @size: Size of coherent memory to be declared
|
||||
* @flags: Flags
|
||||
|
@ -185,7 +185,7 @@ static void dmam_coherent_decl_release(struct device *dev, void *res)
|
|||
* RETURNS:
|
||||
* 0 on success, -errno on failure.
|
||||
*/
|
||||
int dmam_declare_coherent_memory(struct device *dev, dma_addr_t bus_addr,
|
||||
int dmam_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
|
||||
dma_addr_t device_addr, size_t size, int flags)
|
||||
{
|
||||
void *res;
|
||||
|
@ -195,7 +195,7 @@ int dmam_declare_coherent_memory(struct device *dev, dma_addr_t bus_addr,
|
|||
if (!res)
|
||||
return -ENOMEM;
|
||||
|
||||
rc = dma_declare_coherent_memory(dev, bus_addr, device_addr, size,
|
||||
rc = dma_declare_coherent_memory(dev, phys_addr, device_addr, size,
|
||||
flags);
|
||||
if (rc == 0)
|
||||
devres_add(dev, res);
|
||||
|
|
|
@@ -2775,6 +2775,16 @@ static int nvme_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	return result;
 }

+static void nvme_reset_notify(struct pci_dev *pdev, bool prepare)
+{
+	struct nvme_dev *dev = pci_get_drvdata(pdev);
+
+	if (prepare)
+		nvme_dev_shutdown(dev);
+	else
+		nvme_dev_resume(dev);
+}
+
 static void nvme_shutdown(struct pci_dev *pdev)
 {
 	struct nvme_dev *dev = pci_get_drvdata(pdev);

@@ -2839,6 +2849,7 @@ static const struct pci_error_handlers nvme_err_handler = {
 	.link_reset = nvme_link_reset,
 	.slot_reset = nvme_slot_reset,
 	.resume = nvme_error_resume,
+	.reset_notify = nvme_reset_notify,
 };

 /* Move to pci_ids.h later */
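The driver half above pairs with a small hook on the core side, which
brackets a function reset with the new callback. Roughly (a sketch; see
drivers/pci/pci.c in this series for the authoritative version):

	/* Rough sketch of the core-side hook added with this series. */
	static void pci_reset_notify(struct pci_dev *dev, bool prepare)
	{
		const struct pci_error_handlers *err_handler =
			dev->driver ? dev->driver->err_handler : NULL;

		if (err_handler && err_handler->reset_notify)
			err_handler->reset_notify(dev, prepare);
	}

The core calls this with prepare == true immediately before the reset and
with prepare == false once the device is back, which is why nvme_reset_notify()
simply shuts down or resumes the controller.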
@@ -275,7 +275,6 @@ static int i82875p_setup_overfl_dev(struct pci_dev *pdev,
 {
 	struct pci_dev *dev;
 	void __iomem *window;
-	int err;

 	*ovrfl_pdev = NULL;
 	*ovrfl_window = NULL;

@@ -293,13 +292,8 @@ static int i82875p_setup_overfl_dev(struct pci_dev *pdev,
 		if (dev == NULL)
 			return 1;

-		err = pci_bus_add_device(dev);
-		if (err) {
-			i82875p_printk(KERN_ERR,
-				"%s(): pci_bus_add_device() Failed\n",
-				__func__);
-		}
 		pci_bus_assign_resources(dev->bus);
+		pci_bus_add_device(dev);
 	}

 	*ovrfl_pdev = dev;
@ -1011,13 +1011,13 @@ static phys_addr_t exynos_iommu_iova_to_phys(struct iommu_domain *domain,
|
|||
}
|
||||
|
||||
static struct iommu_ops exynos_iommu_ops = {
|
||||
.domain_init = &exynos_iommu_domain_init,
|
||||
.domain_destroy = &exynos_iommu_domain_destroy,
|
||||
.attach_dev = &exynos_iommu_attach_device,
|
||||
.detach_dev = &exynos_iommu_detach_device,
|
||||
.map = &exynos_iommu_map,
|
||||
.unmap = &exynos_iommu_unmap,
|
||||
.iova_to_phys = &exynos_iommu_iova_to_phys,
|
||||
.domain_init = exynos_iommu_domain_init,
|
||||
.domain_destroy = exynos_iommu_domain_destroy,
|
||||
.attach_dev = exynos_iommu_attach_device,
|
||||
.detach_dev = exynos_iommu_detach_device,
|
||||
.map = exynos_iommu_map,
|
||||
.unmap = exynos_iommu_unmap,
|
||||
.iova_to_phys = exynos_iommu_iova_to_phys,
|
||||
.pgsize_bitmap = SECT_SIZE | LPAGE_SIZE | SPAGE_SIZE,
|
||||
};
|
||||
|
||||
|
|
|
@@ -718,7 +718,7 @@ int genwqe_set_interrupt_capability(struct genwqe_dev *cd, int count)
 	int rc;
 	struct pci_dev *pci_dev = cd->pci_dev;

-	rc = pci_enable_msi_block(pci_dev, count);
+	rc = pci_enable_msi_exact(pci_dev, count);
 	if (rc == 0)
 		cd->flags |= GENWQE_FLAG_MSI_ENABLED;
 	return rc;
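Unlike the old pci_enable_msi_block(), which could indicate a smaller
feasible vector count via a positive return value, pci_enable_msi_exact()
is all-or-nothing: 0 only if exactly the requested number of vectors was
allocated, negative errno otherwise. A sketch of the typical
request-exact-then-fall-back pattern (error handling abbreviated):

	/* Hedged sketch: request exactly "want" MSI vectors, falling back
	 * to a single vector before giving up. */
	static int example_enable_irqs(struct pci_dev *pdev, int want)
	{
		int rc;

		rc = pci_enable_msi_exact(pdev, want);	/* 0 or -errno, never partial */
		if (rc == 0)
			return want;

		rc = pci_enable_msi_exact(pdev, 1);	/* plain single-MSI fallback */
		if (rc == 0)
			return 1;

		return rc;	/* caller falls back to legacy INTx */
	}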
@@ -148,7 +148,7 @@ static noinline void pci_wait_cfg(struct pci_dev *dev)
 int pci_user_read_config_##size \
 	(struct pci_dev *dev, int pos, type *val) \
 { \
-	int ret = 0; \
+	int ret = PCIBIOS_SUCCESSFUL; \
 	u32 data = -1; \
 	if (PCI_##size##_BAD) \
 		return -EINVAL; \

@@ -159,9 +159,7 @@ int pci_user_read_config_##size \
 		pos, sizeof(type), &data); \
 	raw_spin_unlock_irq(&pci_lock); \
 	*val = (type)data; \
-	if (ret > 0) \
-		ret = -EINVAL; \
-	return ret; \
+	return pcibios_err_to_errno(ret); \
 } \
 EXPORT_SYMBOL_GPL(pci_user_read_config_##size);

@@ -170,7 +168,7 @@ EXPORT_SYMBOL_GPL(pci_user_read_config_##size);
 int pci_user_write_config_##size \
 	(struct pci_dev *dev, int pos, type val) \
 { \
-	int ret = -EIO; \
+	int ret = PCIBIOS_SUCCESSFUL; \
 	if (PCI_##size##_BAD) \
 		return -EINVAL; \
 	raw_spin_lock_irq(&pci_lock); \

@@ -179,9 +177,7 @@ int pci_user_write_config_##size \
 	ret = dev->bus->ops->write(dev->bus, dev->devfn, \
 		pos, sizeof(type), val); \
 	raw_spin_unlock_irq(&pci_lock); \
-	if (ret > 0) \
-		ret = -EINVAL; \
-	return ret; \
+	return pcibios_err_to_errno(ret); \
 } \
 EXPORT_SYMBOL_GPL(pci_user_write_config_##size);
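The PCIBIOS_* codes returned by config-space accessors are small positive
values; pcibios_err_to_errno() maps them onto negative errnos so callers of
the user config accessors see conventional error codes. Approximately (a
sketch; the real helper lives in include/linux/pci.h):

	/* Hedged sketch of the mapping performed by pcibios_err_to_errno(). */
	static int pcibios_err_to_errno_sketch(int err)
	{
		if (err <= PCIBIOS_SUCCESSFUL)
			return err;		/* already 0 or -errno */

		switch (err) {
		case PCIBIOS_FUNC_NOT_SUPPORTED:
			return -ENOENT;
		case PCIBIOS_DEVICE_NOT_FOUND:
			return -ENODEV;
		case PCIBIOS_BAD_REGISTER_NUMBER:
			return -EFAULT;
		case PCIBIOS_SET_FAILED:
			return -EIO;
		case PCIBIOS_BUFFER_TOO_SMALL:
			return -ENOSPC;
		}
		return -ENOTTY;
	}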
@ -13,7 +13,6 @@
|
|||
#include <linux/errno.h>
|
||||
#include <linux/ioport.h>
|
||||
#include <linux/proc_fs.h>
|
||||
#include <linux/init.h>
|
||||
#include <linux/slab.h>
|
||||
|
||||
#include "pci.h"
|
||||
|
@ -236,7 +235,7 @@ void __weak pcibios_resource_survey_bus(struct pci_bus *bus) { }
|
|||
*
|
||||
* This adds add sysfs entries and start device drivers
|
||||
*/
|
||||
int pci_bus_add_device(struct pci_dev *dev)
|
||||
void pci_bus_add_device(struct pci_dev *dev)
|
||||
{
|
||||
int retval;
|
||||
|
||||
|
@ -253,8 +252,6 @@ int pci_bus_add_device(struct pci_dev *dev)
|
|||
WARN_ON(retval < 0);
|
||||
|
||||
dev->is_added = 1;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
/**
|
||||
|
@ -267,16 +264,12 @@ void pci_bus_add_devices(const struct pci_bus *bus)
|
|||
{
|
||||
struct pci_dev *dev;
|
||||
struct pci_bus *child;
|
||||
int retval;
|
||||
|
||||
list_for_each_entry(dev, &bus->devices, bus_list) {
|
||||
/* Skip already-added devices */
|
||||
if (dev->is_added)
|
||||
continue;
|
||||
retval = pci_bus_add_device(dev);
|
||||
if (retval)
|
||||
dev_err(&dev->dev, "Error adding device (%d)\n",
|
||||
retval);
|
||||
pci_bus_add_device(dev);
|
||||
}
|
||||
|
||||
list_for_each_entry(dev, &bus->devices, bus_list) {
|
||||
|
|
|
@ -3,7 +3,6 @@
|
|||
*/
|
||||
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/init.h>
|
||||
#include <linux/pci.h>
|
||||
#include <linux/module.h>
|
||||
|
||||
|
|
|
@@ -33,4 +33,17 @@ config PCI_RCAR_GEN2
 	  There are 3 internal PCI controllers available with a single
 	  built-in EHCI/OHCI host controller present on each one.

+config PCI_RCAR_GEN2_PCIE
+	bool "Renesas R-Car PCIe controller"
+	depends on ARCH_SHMOBILE || (ARM && COMPILE_TEST)
+	help
+	  Say Y here if you want PCIe controller support on R-Car Gen2 SoCs.
+
+config PCI_HOST_GENERIC
+	bool "Generic PCI host controller"
+	depends on ARM && OF
+	help
+	  Say Y here if you want to support a simple generic PCI host
+	  controller, such as the one emulated by kvmtool.
+
 endmenu

@@ -4,3 +4,5 @@ obj-$(CONFIG_PCI_IMX6) += pci-imx6.o
 obj-$(CONFIG_PCI_MVEBU) += pci-mvebu.o
 obj-$(CONFIG_PCI_TEGRA) += pci-tegra.o
 obj-$(CONFIG_PCI_RCAR_GEN2) += pci-rcar-gen2.o
+obj-$(CONFIG_PCI_RCAR_GEN2_PCIE) += pcie-rcar.o
+obj-$(CONFIG_PCI_HOST_GENERIC) += pci-host-generic.o
@ -415,9 +415,7 @@ static irqreturn_t exynos_pcie_msi_irq_handler(int irq, void *arg)
|
|||
{
|
||||
struct pcie_port *pp = arg;
|
||||
|
||||
dw_handle_msi_irq(pp);
|
||||
|
||||
return IRQ_HANDLED;
|
||||
return dw_handle_msi_irq(pp);
|
||||
}
|
||||
|
||||
static void exynos_pcie_msi_init(struct pcie_port *pp)
|
||||
|
@ -511,7 +509,8 @@ static struct pcie_host_ops exynos_pcie_host_ops = {
|
|||
.host_init = exynos_pcie_host_init,
|
||||
};
|
||||
|
||||
static int add_pcie_port(struct pcie_port *pp, struct platform_device *pdev)
|
||||
static int __init add_pcie_port(struct pcie_port *pp,
|
||||
struct platform_device *pdev)
|
||||
{
|
||||
int ret;
|
||||
|
||||
|
@ -568,10 +567,8 @@ static int __init exynos_pcie_probe(struct platform_device *pdev)
|
|||
|
||||
exynos_pcie = devm_kzalloc(&pdev->dev, sizeof(*exynos_pcie),
|
||||
GFP_KERNEL);
|
||||
if (!exynos_pcie) {
|
||||
dev_err(&pdev->dev, "no memory for exynos pcie\n");
|
||||
if (!exynos_pcie)
|
||||
return -ENOMEM;
|
||||
}
|
||||
|
||||
pp = &exynos_pcie->pp;
|
||||
|
||||
|
|
|
@ -0,0 +1,388 @@
|
|||
/*
|
||||
* Simple, generic PCI host controller driver targetting firmware-initialised
|
||||
* systems and virtual machines (e.g. the PCI emulation provided by kvmtool).
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2 as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
* GNU General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License
|
||||
* along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*
|
||||
* Copyright (C) 2014 ARM Limited
|
||||
*
|
||||
* Author: Will Deacon <will.deacon@arm.com>
|
||||
*/
|
||||
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/of_address.h>
|
||||
#include <linux/of_pci.h>
|
||||
#include <linux/platform_device.h>
|
||||
|
||||
struct gen_pci_cfg_bus_ops {
|
||||
u32 bus_shift;
|
||||
void __iomem *(*map_bus)(struct pci_bus *, unsigned int, int);
|
||||
};
|
||||
|
||||
struct gen_pci_cfg_windows {
|
||||
struct resource res;
|
||||
struct resource bus_range;
|
||||
void __iomem **win;
|
||||
|
||||
const struct gen_pci_cfg_bus_ops *ops;
|
||||
};
|
||||
|
||||
struct gen_pci {
|
||||
struct pci_host_bridge host;
|
||||
struct gen_pci_cfg_windows cfg;
|
||||
struct list_head resources;
|
||||
};
|
||||
|
||||
static void __iomem *gen_pci_map_cfg_bus_cam(struct pci_bus *bus,
|
||||
unsigned int devfn,
|
||||
int where)
|
||||
{
|
||||
struct pci_sys_data *sys = bus->sysdata;
|
||||
struct gen_pci *pci = sys->private_data;
|
||||
resource_size_t idx = bus->number - pci->cfg.bus_range.start;
|
||||
|
||||
return pci->cfg.win[idx] + ((devfn << 8) | where);
|
||||
}
|
||||
|
||||
static struct gen_pci_cfg_bus_ops gen_pci_cfg_cam_bus_ops = {
|
||||
.bus_shift = 16,
|
||||
.map_bus = gen_pci_map_cfg_bus_cam,
|
||||
};
|
||||
|
||||
static void __iomem *gen_pci_map_cfg_bus_ecam(struct pci_bus *bus,
|
||||
unsigned int devfn,
|
||||
int where)
|
||||
{
|
||||
struct pci_sys_data *sys = bus->sysdata;
|
||||
struct gen_pci *pci = sys->private_data;
|
||||
resource_size_t idx = bus->number - pci->cfg.bus_range.start;
|
||||
|
||||
return pci->cfg.win[idx] + ((devfn << 12) | where);
|
||||
}
|
||||
|
||||
static struct gen_pci_cfg_bus_ops gen_pci_cfg_ecam_bus_ops = {
|
||||
.bus_shift = 20,
|
||||
.map_bus = gen_pci_map_cfg_bus_ecam,
|
||||
};
|
||||
|
||||
static int gen_pci_config_read(struct pci_bus *bus, unsigned int devfn,
|
||||
int where, int size, u32 *val)
|
||||
{
|
||||
void __iomem *addr;
|
||||
struct pci_sys_data *sys = bus->sysdata;
|
||||
struct gen_pci *pci = sys->private_data;
|
||||
|
||||
addr = pci->cfg.ops->map_bus(bus, devfn, where);
|
||||
|
||||
switch (size) {
|
||||
case 1:
|
||||
*val = readb(addr);
|
||||
break;
|
||||
case 2:
|
||||
*val = readw(addr);
|
||||
break;
|
||||
default:
|
||||
*val = readl(addr);
|
||||
}
|
||||
|
||||
return PCIBIOS_SUCCESSFUL;
|
||||
}
|
||||
|
||||
static int gen_pci_config_write(struct pci_bus *bus, unsigned int devfn,
|
||||
int where, int size, u32 val)
|
||||
{
|
||||
void __iomem *addr;
|
||||
struct pci_sys_data *sys = bus->sysdata;
|
||||
struct gen_pci *pci = sys->private_data;
|
||||
|
||||
addr = pci->cfg.ops->map_bus(bus, devfn, where);
|
||||
|
||||
switch (size) {
|
||||
case 1:
|
||||
writeb(val, addr);
|
||||
break;
|
||||
case 2:
|
||||
writew(val, addr);
|
||||
break;
|
||||
default:
|
||||
writel(val, addr);
|
||||
}
|
||||
|
||||
return PCIBIOS_SUCCESSFUL;
|
||||
}
|
||||
|
||||
static struct pci_ops gen_pci_ops = {
|
||||
.read = gen_pci_config_read,
|
||||
.write = gen_pci_config_write,
|
||||
};
|
||||
|
||||
static const struct of_device_id gen_pci_of_match[] = {
|
||||
{ .compatible = "pci-host-cam-generic",
|
||||
.data = &gen_pci_cfg_cam_bus_ops },
|
||||
|
||||
{ .compatible = "pci-host-ecam-generic",
|
||||
.data = &gen_pci_cfg_ecam_bus_ops },
|
||||
|
||||
{ },
|
||||
};
|
||||
MODULE_DEVICE_TABLE(of, gen_pci_of_match);
|
||||
|
||||
static int gen_pci_calc_io_offset(struct device *dev,
|
||||
struct of_pci_range *range,
|
||||
struct resource *res,
|
||||
resource_size_t *offset)
|
||||
{
|
||||
static atomic_t wins = ATOMIC_INIT(0);
|
||||
int err, idx, max_win;
|
||||
unsigned int window;
|
||||
|
||||
if (!PAGE_ALIGNED(range->cpu_addr))
|
||||
return -EINVAL;
|
||||
|
||||
max_win = (IO_SPACE_LIMIT + 1) / SZ_64K;
|
||||
idx = atomic_inc_return(&wins);
|
||||
if (idx > max_win)
|
||||
return -ENOSPC;
|
||||
|
||||
window = (idx - 1) * SZ_64K;
|
||||
err = pci_ioremap_io(window, range->cpu_addr);
|
||||
if (err)
|
||||
return err;
|
||||
|
||||
of_pci_range_to_resource(range, dev->of_node, res);
|
||||
res->start = window;
|
||||
res->end = res->start + range->size - 1;
|
||||
*offset = window - range->pci_addr;
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int gen_pci_calc_mem_offset(struct device *dev,
|
||||
struct of_pci_range *range,
|
||||
struct resource *res,
|
||||
resource_size_t *offset)
|
||||
{
|
||||
of_pci_range_to_resource(range, dev->of_node, res);
|
||||
*offset = range->cpu_addr - range->pci_addr;
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void gen_pci_release_of_pci_ranges(struct gen_pci *pci)
|
||||
{
|
||||
struct pci_host_bridge_window *win;
|
||||
|
||||
list_for_each_entry(win, &pci->resources, list)
|
||||
release_resource(win->res);
|
||||
|
||||
pci_free_resource_list(&pci->resources);
|
||||
}
|
||||
|
||||
static int gen_pci_parse_request_of_pci_ranges(struct gen_pci *pci)
|
||||
{
|
||||
struct of_pci_range range;
|
||||
struct of_pci_range_parser parser;
|
||||
int err, res_valid = 0;
|
||||
	struct device *dev = pci->host.dev.parent;
	struct device_node *np = dev->of_node;

	if (of_pci_range_parser_init(&parser, np)) {
		dev_err(dev, "missing \"ranges\" property\n");
		return -EINVAL;
	}

	for_each_of_pci_range(&parser, &range) {
		struct resource *parent, *res;
		resource_size_t offset;
		u32 restype = range.flags & IORESOURCE_TYPE_BITS;

		res = devm_kmalloc(dev, sizeof(*res), GFP_KERNEL);
		if (!res) {
			err = -ENOMEM;
			goto out_release_res;
		}

		switch (restype) {
		case IORESOURCE_IO:
			parent = &ioport_resource;
			err = gen_pci_calc_io_offset(dev, &range, res, &offset);
			break;
		case IORESOURCE_MEM:
			parent = &iomem_resource;
			err = gen_pci_calc_mem_offset(dev, &range, res, &offset);
			res_valid |= !(res->flags & IORESOURCE_PREFETCH || err);
			break;
		default:
			err = -EINVAL;
			continue;
		}

		if (err) {
			dev_warn(dev,
				 "error %d: failed to add resource [type 0x%x, %lld bytes]\n",
				 err, restype, range.size);
			continue;
		}

		err = request_resource(parent, res);
		if (err)
			goto out_release_res;

		pci_add_resource_offset(&pci->resources, res, offset);
	}

	if (!res_valid) {
		dev_err(dev, "non-prefetchable memory resource required\n");
		err = -EINVAL;
		goto out_release_res;
	}

	return 0;

out_release_res:
	gen_pci_release_of_pci_ranges(pci);
	return err;
}

static int gen_pci_parse_map_cfg_windows(struct gen_pci *pci)
{
	int err;
	u8 bus_max;
	resource_size_t busn;
	struct resource *bus_range;
	struct device *dev = pci->host.dev.parent;
	struct device_node *np = dev->of_node;

	if (of_pci_parse_bus_range(np, &pci->cfg.bus_range))
		pci->cfg.bus_range = (struct resource) {
			.name	= np->name,
			.start	= 0,
			.end	= 0xff,
			.flags	= IORESOURCE_BUS,
		};

	err = of_address_to_resource(np, 0, &pci->cfg.res);
	if (err) {
		dev_err(dev, "missing \"reg\" property\n");
		return err;
	}

	pci->cfg.win = devm_kcalloc(dev, resource_size(&pci->cfg.bus_range),
				    sizeof(*pci->cfg.win), GFP_KERNEL);
	if (!pci->cfg.win)
		return -ENOMEM;

	/* Limit the bus-range to fit within reg */
	bus_max = pci->cfg.bus_range.start +
		  (resource_size(&pci->cfg.res) >> pci->cfg.ops->bus_shift) - 1;
	pci->cfg.bus_range.end = min_t(resource_size_t, pci->cfg.bus_range.end,
				       bus_max);

	/* Map our Configuration Space windows */
	if (!devm_request_mem_region(dev, pci->cfg.res.start,
				     resource_size(&pci->cfg.res),
				     "Configuration Space"))
		return -ENOMEM;

	bus_range = &pci->cfg.bus_range;
	for (busn = bus_range->start; busn <= bus_range->end; ++busn) {
		u32 idx = busn - bus_range->start;
		u32 sz = 1 << pci->cfg.ops->bus_shift;

		pci->cfg.win[idx] = devm_ioremap(dev,
						 pci->cfg.res.start + busn * sz,
						 sz);
		if (!pci->cfg.win[idx])
			return -ENOMEM;
	}

	/* Register bus resource */
	pci_add_resource(&pci->resources, bus_range);
	return 0;
}

static int gen_pci_setup(int nr, struct pci_sys_data *sys)
{
	struct gen_pci *pci = sys->private_data;
	list_splice_init(&pci->resources, &sys->resources);
	return 1;
}

static int gen_pci_probe(struct platform_device *pdev)
{
	int err;
	const char *type;
	const struct of_device_id *of_id;
	const int *prop;
	struct device *dev = &pdev->dev;
	struct device_node *np = dev->of_node;
	struct gen_pci *pci = devm_kzalloc(dev, sizeof(*pci), GFP_KERNEL);
	struct hw_pci hw = {
		.nr_controllers	= 1,
		.private_data	= (void **)&pci,
		.setup		= gen_pci_setup,
		.map_irq	= of_irq_parse_and_map_pci,
		.ops		= &gen_pci_ops,
	};

	if (!pci)
		return -ENOMEM;

	type = of_get_property(np, "device_type", NULL);
	if (!type || strcmp(type, "pci")) {
		dev_err(dev, "invalid \"device_type\" %s\n", type);
		return -EINVAL;
	}

	prop = of_get_property(of_chosen, "linux,pci-probe-only", NULL);
	if (prop) {
		if (*prop)
			pci_add_flags(PCI_PROBE_ONLY);
		else
			pci_clear_flags(PCI_PROBE_ONLY);
	}

	of_id = of_match_node(gen_pci_of_match, np);
	pci->cfg.ops = of_id->data;
	pci->host.dev.parent = dev;
	INIT_LIST_HEAD(&pci->host.windows);
	INIT_LIST_HEAD(&pci->resources);

	/* Parse our PCI ranges and request their resources */
	err = gen_pci_parse_request_of_pci_ranges(pci);
	if (err)
		return err;

	/* Parse and map our Configuration Space windows */
	err = gen_pci_parse_map_cfg_windows(pci);
	if (err) {
		gen_pci_release_of_pci_ranges(pci);
		return err;
	}

	pci_common_init_dev(dev, &hw);
	return 0;
}

static struct platform_driver gen_pci_driver = {
	.driver = {
		.name = "pci-host-generic",
		.owner = THIS_MODULE,
		.of_match_table = gen_pci_of_match,
	},
	.probe = gen_pci_probe,
};
module_platform_driver(gen_pci_driver);

MODULE_DESCRIPTION("Generic PCI host driver");
MODULE_AUTHOR("Will Deacon <will.deacon@arm.com>");
MODULE_LICENSE("GPLv2");

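For orientation, the accessor math implied by the per-bus windows mapped above: each bus gets one ioremapped window of 1 << bus_shift bytes, and a config transaction indexes into it by devfn and register offset. A minimal sketch under those assumptions (the helper name here is hypothetical; the real accessors are defined with gen_pci_ops earlier in this file):

static void __iomem *gen_pci_cfg_addr(struct gen_pci *pci,
				      unsigned int busn, unsigned int devfn,
				      int where)
{
	/* One window per bus; CAM uses bus_shift 16 (devfn << 8),
	 * ECAM uses bus_shift 20 (devfn << 12). */
	unsigned int idx = busn - pci->cfg.bus_range.start;

	return pci->cfg.win[idx] +
	       ((devfn << (pci->cfg.ops->bus_shift - 8)) | where);
}
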
@@ -25,6 +25,7 @@
 #include <linux/resource.h>
 #include <linux/signal.h>
 #include <linux/types.h>
 #include <linux/interrupt.h>

 #include "pcie-designware.h"

@@ -32,13 +33,9 @@
 struct imx6_pcie {
	int			reset_gpio;
	int			power_on_gpio;
	int			wake_up_gpio;
	int			disable_gpio;
	struct clk		*lvds_gate;
	struct clk		*sata_ref_100m;
	struct clk		*pcie_ref_125m;
	struct clk		*pcie_axi;
	struct clk		*pcie_bus;
	struct clk		*pcie_phy;
	struct clk		*pcie;
	struct pcie_port	pp;
	struct regmap		*iomuxc_gpr;
	void __iomem		*mem_base;

@@ -231,36 +228,27 @@ static int imx6_pcie_deassert_core_reset(struct pcie_port *pp)
	struct imx6_pcie *imx6_pcie = to_imx6_pcie(pp);
	int ret;

	if (gpio_is_valid(imx6_pcie->power_on_gpio))
		gpio_set_value(imx6_pcie->power_on_gpio, 1);

	regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR1,
			IMX6Q_GPR1_PCIE_TEST_PD, 0 << 18);
	regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR1,
			IMX6Q_GPR1_PCIE_REF_CLK_EN, 1 << 16);

	ret = clk_prepare_enable(imx6_pcie->sata_ref_100m);
	ret = clk_prepare_enable(imx6_pcie->pcie_phy);
	if (ret) {
		dev_err(pp->dev, "unable to enable sata_ref_100m\n");
		goto err_sata_ref;
		dev_err(pp->dev, "unable to enable pcie_phy clock\n");
		goto err_pcie_phy;
	}

	ret = clk_prepare_enable(imx6_pcie->pcie_ref_125m);
	ret = clk_prepare_enable(imx6_pcie->pcie_bus);
	if (ret) {
		dev_err(pp->dev, "unable to enable pcie_ref_125m\n");
		goto err_pcie_ref;
		dev_err(pp->dev, "unable to enable pcie_bus clock\n");
		goto err_pcie_bus;
	}

	ret = clk_prepare_enable(imx6_pcie->lvds_gate);
	ret = clk_prepare_enable(imx6_pcie->pcie);
	if (ret) {
		dev_err(pp->dev, "unable to enable lvds_gate\n");
		goto err_lvds_gate;
	}

	ret = clk_prepare_enable(imx6_pcie->pcie_axi);
	if (ret) {
		dev_err(pp->dev, "unable to enable pcie_axi\n");
		goto err_pcie_axi;
		dev_err(pp->dev, "unable to enable pcie clock\n");
		goto err_pcie;
	}

	/* allow the clocks to stabilize */

@@ -274,13 +262,11 @@ static int imx6_pcie_deassert_core_reset(struct pcie_port *pp)
	}
	return 0;

err_pcie_axi:
	clk_disable_unprepare(imx6_pcie->lvds_gate);
err_lvds_gate:
	clk_disable_unprepare(imx6_pcie->pcie_ref_125m);
err_pcie_ref:
	clk_disable_unprepare(imx6_pcie->sata_ref_100m);
err_sata_ref:
err_pcie:
	clk_disable_unprepare(imx6_pcie->pcie_bus);
err_pcie_bus:
	clk_disable_unprepare(imx6_pcie->pcie_phy);
err_pcie_phy:
	return ret;

}

@@ -329,6 +315,13 @@ static int imx6_pcie_wait_for_link(struct pcie_port *pp)
	return 0;
 }

static irqreturn_t imx6_pcie_msi_handler(int irq, void *arg)
{
	struct pcie_port *pp = arg;

	return dw_handle_msi_irq(pp);
}

static int imx6_pcie_start_link(struct pcie_port *pp)
{
	struct imx6_pcie *imx6_pcie = to_imx6_pcie(pp);

@@ -403,6 +396,9 @@ static void imx6_pcie_host_init(struct pcie_port *pp)
	dw_pcie_setup_rc(pp);

	imx6_pcie_start_link(pp);

	if (IS_ENABLED(CONFIG_PCI_MSI))
		dw_pcie_msi_init(pp);
}

static void imx6_pcie_reset_phy(struct pcie_port *pp)

@@ -487,15 +483,25 @@ static struct pcie_host_ops imx6_pcie_host_ops = {
	.host_init = imx6_pcie_host_init,
};

static int imx6_add_pcie_port(struct pcie_port *pp,
static int __init imx6_add_pcie_port(struct pcie_port *pp,
			      struct platform_device *pdev)
{
	int ret;

	pp->irq = platform_get_irq(pdev, 0);
	if (!pp->irq) {
		dev_err(&pdev->dev, "failed to get irq\n");
		return -ENODEV;
	if (IS_ENABLED(CONFIG_PCI_MSI)) {
		pp->msi_irq = platform_get_irq_byname(pdev, "msi");
		if (pp->msi_irq <= 0) {
			dev_err(&pdev->dev, "failed to get MSI irq\n");
			return -ENODEV;
		}

		ret = devm_request_irq(&pdev->dev, pp->msi_irq,
				       imx6_pcie_msi_handler,
				       IRQF_SHARED, "mx6-pcie-msi", pp);
		if (ret) {
			dev_err(&pdev->dev, "failed to request MSI irq\n");
			return -ENODEV;
		}
	}

	pp->root_bus_nr = -1;

@@ -546,69 +552,26 @@ static int __init imx6_pcie_probe(struct platform_device *pdev)
	}
	}

	imx6_pcie->power_on_gpio = of_get_named_gpio(np, "power-on-gpio", 0);
	if (gpio_is_valid(imx6_pcie->power_on_gpio)) {
		ret = devm_gpio_request_one(&pdev->dev,
					    imx6_pcie->power_on_gpio,
					    GPIOF_OUT_INIT_LOW,
					    "PCIe power enable");
		if (ret) {
			dev_err(&pdev->dev, "unable to get power-on gpio\n");
			return ret;
		}
	}

	imx6_pcie->wake_up_gpio = of_get_named_gpio(np, "wake-up-gpio", 0);
	if (gpio_is_valid(imx6_pcie->wake_up_gpio)) {
		ret = devm_gpio_request_one(&pdev->dev,
					    imx6_pcie->wake_up_gpio,
					    GPIOF_IN,
					    "PCIe wake up");
		if (ret) {
			dev_err(&pdev->dev, "unable to get wake-up gpio\n");
			return ret;
		}
	}

	imx6_pcie->disable_gpio = of_get_named_gpio(np, "disable-gpio", 0);
	if (gpio_is_valid(imx6_pcie->disable_gpio)) {
		ret = devm_gpio_request_one(&pdev->dev,
					    imx6_pcie->disable_gpio,
					    GPIOF_OUT_INIT_HIGH,
					    "PCIe disable endpoint");
		if (ret) {
			dev_err(&pdev->dev, "unable to get disable-ep gpio\n");
			return ret;
		}
	}

	/* Fetch clocks */
	imx6_pcie->lvds_gate = devm_clk_get(&pdev->dev, "lvds_gate");
	if (IS_ERR(imx6_pcie->lvds_gate)) {
	imx6_pcie->pcie_phy = devm_clk_get(&pdev->dev, "pcie_phy");
	if (IS_ERR(imx6_pcie->pcie_phy)) {
		dev_err(&pdev->dev,
			"lvds_gate clock select missing or invalid\n");
		return PTR_ERR(imx6_pcie->lvds_gate);
			"pcie_phy clock source missing or invalid\n");
		return PTR_ERR(imx6_pcie->pcie_phy);
	}

	imx6_pcie->sata_ref_100m = devm_clk_get(&pdev->dev, "sata_ref_100m");
	if (IS_ERR(imx6_pcie->sata_ref_100m)) {
	imx6_pcie->pcie_bus = devm_clk_get(&pdev->dev, "pcie_bus");
	if (IS_ERR(imx6_pcie->pcie_bus)) {
		dev_err(&pdev->dev,
			"sata_ref_100m clock source missing or invalid\n");
		return PTR_ERR(imx6_pcie->sata_ref_100m);
			"pcie_bus clock source missing or invalid\n");
		return PTR_ERR(imx6_pcie->pcie_bus);
	}

	imx6_pcie->pcie_ref_125m = devm_clk_get(&pdev->dev, "pcie_ref_125m");
	if (IS_ERR(imx6_pcie->pcie_ref_125m)) {
	imx6_pcie->pcie = devm_clk_get(&pdev->dev, "pcie");
	if (IS_ERR(imx6_pcie->pcie)) {
		dev_err(&pdev->dev,
			"pcie_ref_125m clock source missing or invalid\n");
		return PTR_ERR(imx6_pcie->pcie_ref_125m);
	}

	imx6_pcie->pcie_axi = devm_clk_get(&pdev->dev, "pcie_axi");
	if (IS_ERR(imx6_pcie->pcie_axi)) {
		dev_err(&pdev->dev,
			"pcie_axi clock source missing or invalid\n");
		return PTR_ERR(imx6_pcie->pcie_axi);
			"pcie clock source missing or invalid\n");
		return PTR_ERR(imx6_pcie->pcie);
	}

	/* Grab GPR config register range */

@@ -99,6 +99,7 @@ struct rcar_pci_priv {
	struct resource io_res;
	struct resource mem_res;
	struct resource *cfg_res;
	unsigned busnr;
	int irq;
	unsigned long window_size;
};

@@ -318,8 +319,8 @@ static int rcar_pci_setup(int nr, struct pci_sys_data *sys)
	pci_add_resource(&sys->resources, &priv->io_res);
	pci_add_resource(&sys->resources, &priv->mem_res);

	/* Setup bus number based on platform device id */
	sys->busnr = to_platform_device(priv->dev)->id;
	/* Setup bus number based on platform device id / of bus-range */
	sys->busnr = priv->busnr;
	return 1;
}

@@ -372,6 +373,23 @@ static int rcar_pci_probe(struct platform_device *pdev)

	priv->window_size = SZ_1G;

	if (pdev->dev.of_node) {
		struct resource busnr;
		int ret;

		ret = of_pci_parse_bus_range(pdev->dev.of_node, &busnr);
		if (ret < 0) {
			dev_err(&pdev->dev, "failed to parse bus-range\n");
			return ret;
		}

		priv->busnr = busnr.start;
		if (busnr.end != busnr.start)
			dev_warn(&pdev->dev, "only one bus number supported\n");
	} else {
		priv->busnr = pdev->id;
	}

	hw_private[0] = priv;
	memset(&hw, 0, sizeof(hw));
	hw.nr_controllers = ARRAY_SIZE(hw_private);

@@ -383,11 +401,20 @@ static int rcar_pci_probe(struct platform_device *pdev)
	return 0;
}

static struct of_device_id rcar_pci_of_match[] = {
	{ .compatible = "renesas,pci-r8a7790", },
	{ .compatible = "renesas,pci-r8a7791", },
	{ },
};

MODULE_DEVICE_TABLE(of, rcar_pci_of_match);

static struct platform_driver rcar_pci_driver = {
	.driver = {
		.name = "pci-rcar-gen2",
		.owner = THIS_MODULE,
		.suppress_bind_attrs = true,
		.of_match_table = rcar_pci_of_match,
	},
	.probe = rcar_pci_probe,
};

@@ -156,15 +156,17 @@ static struct irq_chip dw_msi_irq_chip = {
};

/* MSI int handler */
void dw_handle_msi_irq(struct pcie_port *pp)
irqreturn_t dw_handle_msi_irq(struct pcie_port *pp)
{
	unsigned long val;
	int i, pos, irq;
	irqreturn_t ret = IRQ_NONE;

	for (i = 0; i < MAX_MSI_CTRLS; i++) {
		dw_pcie_rd_own_conf(pp, PCIE_MSI_INTR0_STATUS + i * 12, 4,
				(u32 *)&val);
		if (val) {
			ret = IRQ_HANDLED;
			pos = 0;
			while ((pos = find_next_bit(&val, 32, pos)) != 32) {
				irq = irq_find_mapping(pp->irq_domain,

@@ -177,6 +179,8 @@ void dw_handle_msi_irq(struct pcie_port *pp)
			}
		}
	}

	return ret;
}

void dw_pcie_msi_init(struct pcie_port *pp)

@@ -68,7 +68,7 @@ struct pcie_host_ops {

int dw_pcie_cfg_read(void __iomem *addr, int where, int size, u32 *val);
int dw_pcie_cfg_write(void __iomem *addr, int where, int size, u32 val);
void dw_handle_msi_irq(struct pcie_port *pp);
irqreturn_t dw_handle_msi_irq(struct pcie_port *pp);
void dw_pcie_msi_init(struct pcie_port *pp);
int dw_pcie_link_up(struct pcie_port *pp);
void dw_pcie_setup_rc(struct pcie_port *pp);

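With dw_handle_msi_irq() returning irqreturn_t, a host driver can hand the value straight back from a shared interrupt line, and the IRQ core can then detect spurious interrupts on that line. A hedged usage sketch (hypothetical driver names, mirroring the pci-imx6.c hunk above):

static irqreturn_t my_pcie_msi_handler(int irq, void *arg)
{
	struct pcie_port *pp = arg;

	/* IRQ_NONE when no MSI status bit was set, IRQ_HANDLED otherwise */
	return dw_handle_msi_irq(pp);
}

	/* requested with IRQF_SHARED so the line may be shared: */
	ret = devm_request_irq(dev, pp->msi_irq, my_pcie_msi_handler,
			       IRQF_SHARED, "my-pcie-msi", pp);
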
[one file diff suppressed by the viewer because it is too large]

@@ -4,7 +4,7 @@
 #include <linux/export.h>
 #include "pci.h"

int __ref pci_hp_add_bridge(struct pci_dev *dev)
int pci_hp_add_bridge(struct pci_dev *dev)
{
	struct pci_bus *parent = dev->bus;
	int pass, busnr, start = parent->busn_res.start;

@@ -41,7 +41,6 @@

 #define pr_fmt(fmt) "acpiphp_glue: " fmt

 #include <linux/init.h>
 #include <linux/module.h>

 #include <linux/kernel.h>

@@ -501,7 +500,7 @@ static int acpiphp_rescan_slot(struct acpiphp_slot *slot)
 * This function should be called per *physical slot*,
 * not per each slot object in ACPI namespace.
 */
static void __ref enable_slot(struct acpiphp_slot *slot)
static void enable_slot(struct acpiphp_slot *slot)
{
	struct pci_dev *dev;
	struct pci_bus *bus = slot->bus;

@@ -516,8 +515,7 @@ static void __ref enable_slot(struct acpiphp_slot *slot)
		if (PCI_SLOT(dev->devfn) != slot->device)
			continue;

		if (dev->hdr_type == PCI_HEADER_TYPE_BRIDGE ||
		    dev->hdr_type == PCI_HEADER_TYPE_CARDBUS) {
		if (pci_is_bridge(dev)) {
			max = pci_scan_bridge(bus, dev, max, pass);
			if (pass && dev->subordinate) {
				check_hotplug_bridge(slot, dev);

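This hunk and several below collapse the open-coded header-type checks into the new pci_is_bridge() helper added to include/linux/pci.h earlier in this series. Judging purely by the conversions shown here, it is presumably the same two-header test wrapped in an inline, along these lines:

static inline bool pci_is_bridge(struct pci_dev *dev)
{
	return dev->hdr_type == PCI_HEADER_TYPE_BRIDGE ||
	       dev->hdr_type == PCI_HEADER_TYPE_CARDBUS;
}
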
@@ -250,7 +250,7 @@ int cpci_led_off(struct slot* slot)
 * Device configuration functions
 */

int __ref cpci_configure_slot(struct slot *slot)
int cpci_configure_slot(struct slot *slot)
{
	struct pci_dev *dev;
	struct pci_bus *parent;

@@ -289,8 +289,7 @@ int __ref cpci_configure_slot(struct slot *slot)
	list_for_each_entry(dev, &parent->devices, bus_list)
		if (PCI_SLOT(dev->devfn) != PCI_SLOT(slot->devfn))
			continue;
		if ((dev->hdr_type == PCI_HEADER_TYPE_BRIDGE) ||
		    (dev->hdr_type == PCI_HEADER_TYPE_CARDBUS))
		if (pci_is_bridge(dev))
			pci_hp_add_bridge(dev);

@@ -709,7 +709,8 @@ static struct pci_resource *get_max_resource(struct pci_resource **head, u32 siz
		temp = temp->next;
	}

	temp->next = max->next;
	if (temp)
		temp->next = max->next;
	}

	max->next = NULL;

@@ -34,7 +34,6 @@
 #include <linux/workqueue.h>
 #include <linux/pci.h>
 #include <linux/pci_hotplug.h>
 #include <linux/init.h>
 #include <asm/uaccess.h>
 #include "cpqphp.h"
 #include "cpqphp_nvram.h"

@@ -127,7 +127,7 @@ struct controller {
#define HP_SUPR_RM(ctrl)	((ctrl)->slot_cap & PCI_EXP_SLTCAP_HPS)
#define EMI(ctrl)		((ctrl)->slot_cap & PCI_EXP_SLTCAP_EIP)
#define NO_CMD_CMPL(ctrl)	((ctrl)->slot_cap & PCI_EXP_SLTCAP_NCCS)
#define PSN(ctrl)		((ctrl)->slot_cap >> 19)
#define PSN(ctrl)		(((ctrl)->slot_cap & PCI_EXP_SLTCAP_PSN) >> 19)

int pciehp_sysfs_enable_slot(struct slot *slot);
int pciehp_sysfs_disable_slot(struct slot *slot);

@@ -159,6 +159,8 @@ static void pcie_write_cmd(struct controller *ctrl, u16 cmd, u16 mask)

	pcie_capability_read_word(pdev, PCI_EXP_SLTSTA, &slot_status);
	if (slot_status & PCI_EXP_SLTSTA_CC) {
		pcie_capability_write_word(pdev, PCI_EXP_SLTSTA,
					   PCI_EXP_SLTSTA_CC);
		if (!ctrl->no_cmd_complete) {
			/*
			 * After 1 sec and CMD_COMPLETED still not set, just

@@ -62,8 +62,7 @@ int pciehp_configure_device(struct slot *p_slot)
	}

	list_for_each_entry(dev, &parent->devices, bus_list)
		if ((dev->hdr_type == PCI_HEADER_TYPE_BRIDGE) ||
		    (dev->hdr_type == PCI_HEADER_TYPE_CARDBUS))
		if (pci_is_bridge(dev))
			pci_hp_add_bridge(dev);

	pci_assign_unassigned_bridge_resources(bridge);

@@ -160,8 +160,7 @@ void pci_configure_slot(struct pci_dev *dev)
	    (dev->class >> 8) == PCI_CLASS_BRIDGE_PCI)))
		return;

	if (dev->bus)
		pcie_bus_configure_settings(dev->bus);
	pcie_bus_configure_settings(dev->bus);

	memset(&hpp, 0, sizeof(hpp));
	ret = pci_get_hp_params(dev, &hpp);

@@ -157,8 +157,7 @@ static void dlpar_pci_add_bus(struct device_node *dn)
	}

	/* Scan below the new bridge */
	if (dev->hdr_type == PCI_HEADER_TYPE_BRIDGE ||
	    dev->hdr_type == PCI_HEADER_TYPE_CARDBUS)
	if (pci_is_bridge(dev))
		of_scan_pci_bridge(dev);

	/* Map IO space for child bus, which may or may not succeed */

@@ -223,16 +223,16 @@ int rpaphp_get_drc_props(struct device_node *dn, int *drc_index,
	type_tmp = (char *) &types[1];

	/* Iterate through parent properties, looking for my-drc-index */
	for (i = 0; i < indexes[0]; i++) {
	for (i = 0; i < be32_to_cpu(indexes[0]); i++) {
		if ((unsigned int) indexes[i + 1] == *my_index) {
			if (drc_name)
				*drc_name = name_tmp;
			if (drc_type)
				*drc_type = type_tmp;
			if (drc_index)
				*drc_index = *my_index;
				*drc_index = be32_to_cpu(*my_index);
			if (drc_power_domain)
				*drc_power_domain = domains[i+1];
				*drc_power_domain = be32_to_cpu(domains[i+1]);
			return 0;
		}
		name_tmp += (strlen(name_tmp) + 1);

@@ -321,16 +321,19 @@ int rpaphp_add_slot(struct device_node *dn)
	/* register PCI devices */
	name = (char *) &names[1];
	type = (char *) &types[1];
	for (i = 0; i < indexes[0]; i++) {
	for (i = 0; i < be32_to_cpu(indexes[0]); i++) {
		int index;

		slot = alloc_slot_struct(dn, indexes[i + 1], name, power_domains[i + 1]);
		index = be32_to_cpu(indexes[i + 1]);
		slot = alloc_slot_struct(dn, index, name,
					 be32_to_cpu(power_domains[i + 1]));
		if (!slot)
			return -ENOMEM;

		slot->type = simple_strtoul(type, NULL, 10);

		dbg("Found drc-index:0x%x drc-name:%s drc-type:%s\n",
		    indexes[i + 1], name, type);
		    index, name, type);

		retval = rpaphp_enable_slot(slot);
		if (!retval)

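The be32_to_cpu() conversions matter because device-tree property cells are stored big-endian regardless of host endianness; the old raw reads only happened to work on big-endian PowerPC. A small hedged illustration (assumed little-endian host, property value 4):

	const __be32 *indexes = of_get_property(dn, "ibm,drc-indexes", NULL);

	u32 wrong = *(const u32 *)indexes;	/* reads 0x04000000 on LE */
	u32 right = be32_to_cpu(indexes[0]);	/* 4 on any endianness */
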
@@ -15,7 +15,6 @@
 #include <linux/slab.h>
 #include <linux/pci.h>
 #include <linux/pci_hotplug.h>
 #include <linux/init.h>
 #include <asm/pci_debug.h>
 #include <asm/sclp.h>

@@ -34,7 +34,7 @@
 #include "../pci.h"
 #include "shpchp.h"

int __ref shpchp_configure_device(struct slot *p_slot)
int shpchp_configure_device(struct slot *p_slot)
{
	struct pci_dev *dev;
	struct controller *ctrl = p_slot->ctrl;

@@ -64,8 +64,7 @@ int __ref shpchp_configure_device(struct slot *p_slot)
	list_for_each_entry(dev, &parent->devices, bus_list) {
		if (PCI_SLOT(dev->devfn) != p_slot->device)
			continue;
		if ((dev->hdr_type == PCI_HEADER_TYPE_BRIDGE) ||
		    (dev->hdr_type == PCI_HEADER_TYPE_CARDBUS))
		if (pci_is_bridge(dev))
			pci_hp_add_bridge(dev);
	}

@@ -106,7 +106,7 @@ static int virtfn_add(struct pci_dev *dev, int id, int reset)
	pci_device_add(virtfn, virtfn->bus);
	mutex_unlock(&iov->dev->sriov->lock);

	rc = pci_bus_add_device(virtfn);
	pci_bus_add_device(virtfn);
	sprintf(buf, "virtfn%u", id);
	rc = sysfs_create_link(&dev->dev.kobj, &virtfn->dev.kobj, buf);
	if (rc)

@@ -10,7 +10,6 @@
 #include <linux/mm.h>
 #include <linux/irq.h>
 #include <linux/interrupt.h>
 #include <linux/init.h>
 #include <linux/export.h>
 #include <linux/ioport.h>
 #include <linux/pci.h>

@@ -544,22 +543,18 @@ static int populate_msi_sysfs(struct pci_dev *pdev)
	if (!msi_attrs)
		return -ENOMEM;
	list_for_each_entry(entry, &pdev->msi_list, list) {
		char *name = kmalloc(20, GFP_KERNEL);
		if (!name)
			goto error_attrs;

		msi_dev_attr = kzalloc(sizeof(*msi_dev_attr), GFP_KERNEL);
		if (!msi_dev_attr) {
			kfree(name);
		if (!msi_dev_attr)
			goto error_attrs;
		}
		msi_attrs[count] = &msi_dev_attr->attr;

		sprintf(name, "%d", entry->irq);
		sysfs_attr_init(&msi_dev_attr->attr);
		msi_dev_attr->attr.name = name;
		msi_dev_attr->attr.name = kasprintf(GFP_KERNEL, "%d",
						    entry->irq);
		if (!msi_dev_attr->attr.name)
			goto error_attrs;
		msi_dev_attr->attr.mode = S_IRUGO;
		msi_dev_attr->show = msi_mode_show;
		msi_attrs[count] = &msi_dev_attr->attr;
		++count;
	}

@@ -883,50 +878,6 @@ int pci_msi_vec_count(struct pci_dev *dev)
}
EXPORT_SYMBOL(pci_msi_vec_count);

/**
 * pci_enable_msi_block - configure device's MSI capability structure
 * @dev: device to configure
 * @nvec: number of interrupts to configure
 *
 * Allocate IRQs for a device with the MSI capability.
 * This function returns a negative errno if an error occurs.  If it
 * is unable to allocate the number of interrupts requested, it returns
 * the number of interrupts it might be able to allocate.  If it successfully
 * allocates at least the number of interrupts requested, it returns 0 and
 * updates the @dev's irq member to the lowest new interrupt number; the
 * other interrupt numbers allocated to this device are consecutive.
 */
int pci_enable_msi_block(struct pci_dev *dev, int nvec)
{
	int status, maxvec;

	if (dev->current_state != PCI_D0)
		return -EINVAL;

	maxvec = pci_msi_vec_count(dev);
	if (maxvec < 0)
		return maxvec;
	if (nvec > maxvec)
		return maxvec;

	status = pci_msi_check_device(dev, nvec, PCI_CAP_ID_MSI);
	if (status)
		return status;

	WARN_ON(!!dev->msi_enabled);

	/* Check whether driver already requested MSI-X irqs */
	if (dev->msix_enabled) {
		dev_info(&dev->dev, "can't enable MSI "
			 "(MSI-X already enabled)\n");
		return -EINVAL;
	}

	status = msi_capability_init(dev, nvec);
	return status;
}
EXPORT_SYMBOL(pci_enable_msi_block);

void pci_msi_shutdown(struct pci_dev *dev)
{
	struct msi_desc *desc;

@@ -1132,14 +1083,45 @@ void pci_msi_init_pci_dev(struct pci_dev *dev)
 **/
int pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec)
{
	int nvec = maxvec;
	int nvec;
	int rc;

	if (dev->current_state != PCI_D0)
		return -EINVAL;

	WARN_ON(!!dev->msi_enabled);

	/* Check whether driver already requested MSI-X irqs */
	if (dev->msix_enabled) {
		dev_info(&dev->dev,
			 "can't enable MSI (MSI-X already enabled)\n");
		return -EINVAL;
	}

	if (maxvec < minvec)
		return -ERANGE;

	nvec = pci_msi_vec_count(dev);
	if (nvec < 0)
		return nvec;
	else if (nvec < minvec)
		return -EINVAL;
	else if (nvec > maxvec)
		nvec = maxvec;

	do {
		rc = pci_enable_msi_block(dev, nvec);
		rc = pci_msi_check_device(dev, nvec, PCI_CAP_ID_MSI);
		if (rc < 0) {
			return rc;
		} else if (rc > 0) {
			if (rc < minvec)
				return -ENOSPC;
			nvec = rc;
		}
	} while (rc);

	do {
		rc = msi_capability_init(dev, nvec);
		if (rc < 0) {
			return rc;
		} else if (rc > 0) {

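For drivers, the upshot of the MSI rework is that pci_enable_msi_block() is gone and vector allocation is expressed as a range; pci_enable_msi_exact(dev, n) is simply the range call with minvec == maxvec. A hedged usage sketch (hypothetical device driver):

static int mydev_enable_msi(struct pci_dev *pdev)
{
	int nvec;

	/* accept anywhere from 1 to 4 vectors; the core negotiates down */
	nvec = pci_enable_msi_range(pdev, 1, 4);
	if (nvec < 0)
		return nvec;	/* no MSI at all; caller may fall back to INTx */

	return nvec;		/* number of vectors actually granted */
}
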
@@ -309,13 +309,7 @@ static struct acpi_device *acpi_pci_find_companion(struct device *dev)
	bool check_children;
	u64 addr;

	/*
	 * pci_is_bridge() is not suitable here, because pci_dev->subordinate
	 * is set only after acpi_pci_find_device() has been called for the
	 * given device.
	 */
	check_children = pci_dev->hdr_type == PCI_HEADER_TYPE_BRIDGE
			|| pci_dev->hdr_type == PCI_HEADER_TYPE_CARDBUS;
	check_children = pci_is_bridge(pci_dev);
	/* Please ref to ACPI spec for the syntax of _ADR */
	addr = (PCI_SLOT(pci_dev->devfn) << 16) | PCI_FUNC(pci_dev->devfn);
	return acpi_find_child_device(ACPI_COMPANION(dev->parent), addr,

@@ -107,7 +107,7 @@ store_new_id(struct device_driver *driver, const char *buf, size_t count)
		subdevice=PCI_ANY_ID, class=0, class_mask=0;
	unsigned long driver_data=0;
	int fields=0;
	int retval;
	int retval = 0;

	fields = sscanf(buf, "%x %x %x %x %x %x %lx",
			&vendor, &device, &subvendor, &subdevice,

@@ -115,6 +115,26 @@ store_new_id(struct device_driver *driver, const char *buf, size_t count)
	if (fields < 2)
		return -EINVAL;

	if (fields != 7) {
		struct pci_dev *pdev = kzalloc(sizeof(*pdev), GFP_KERNEL);
		if (!pdev)
			return -ENOMEM;

		pdev->vendor = vendor;
		pdev->device = device;
		pdev->subsystem_vendor = subvendor;
		pdev->subsystem_device = subdevice;
		pdev->class = class;

		if (pci_match_id(pdrv->id_table, pdev))
			retval = -EEXIST;

		kfree(pdev);

		if (retval)
			return retval;
	}

	/* Only accept driver_data values that match an existing id_table
	   entry */
	if (ids) {

@@ -216,6 +236,13 @@ const struct pci_device_id *pci_match_id(const struct pci_device_id *ids,
	return NULL;
}

static const struct pci_device_id pci_device_id_any = {
	.vendor = PCI_ANY_ID,
	.device = PCI_ANY_ID,
	.subvendor = PCI_ANY_ID,
	.subdevice = PCI_ANY_ID,
};

/**
 * pci_match_device - Tell if a PCI device structure has a matching PCI device id structure
 * @drv: the PCI driver to match against

@@ -229,18 +256,30 @@ static const struct pci_device_id *pci_match_device(struct pci_driver *drv,
						    struct pci_dev *dev)
{
	struct pci_dynid *dynid;
	const struct pci_device_id *found_id = NULL;

	/* When driver_override is set, only bind to the matching driver */
	if (dev->driver_override && strcmp(dev->driver_override, drv->name))
		return NULL;

	/* Look at the dynamic ids first, before the static ones */
	spin_lock(&drv->dynids.lock);
	list_for_each_entry(dynid, &drv->dynids.list, node) {
		if (pci_match_one_device(&dynid->id, dev)) {
			spin_unlock(&drv->dynids.lock);
			return &dynid->id;
			found_id = &dynid->id;
			break;
		}
	}
	spin_unlock(&drv->dynids.lock);

	return pci_match_id(drv->id_table, dev);
	if (!found_id)
		found_id = pci_match_id(drv->id_table, dev);

	/* driver_override will always match, send a dummy id */
	if (!found_id && dev->driver_override)
		found_id = &pci_device_id_any;

	return found_id;
}

struct drv_dev_and_id {

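The dummy pci_device_id_any return is what makes driver_override a force-bind mechanism: once the override names this driver, a match is reported even when neither the dynamic ids nor the static id_table cover the device, so generic stub drivers such as vfio-pci or pci-stub can be bound to arbitrary hardware without first feeding them device IDs through new_id.
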
@@ -580,14 +619,14 @@ static void pci_pm_default_resume(struct pci_dev *pci_dev)
{
	pci_fixup_device(pci_fixup_resume, pci_dev);

	if (!pci_is_bridge(pci_dev))
	if (!pci_has_subordinate(pci_dev))
		pci_enable_wake(pci_dev, PCI_D0, false);
}

static void pci_pm_default_suspend(struct pci_dev *pci_dev)
{
	/* Disable non-bridge devices without PM support */
	if (!pci_is_bridge(pci_dev))
	if (!pci_has_subordinate(pci_dev))
		pci_disable_enabled_device(pci_dev);
}

@@ -717,7 +756,7 @@ static int pci_pm_suspend_noirq(struct device *dev)

	if (!pci_dev->state_saved) {
		pci_save_state(pci_dev);
		if (!pci_is_bridge(pci_dev))
		if (!pci_has_subordinate(pci_dev))
			pci_prepare_to_sleep(pci_dev);
	}

@@ -971,7 +1010,7 @@ static int pci_pm_poweroff_noirq(struct device *dev)
		return error;
	}

	if (!pci_dev->state_saved && !pci_is_bridge(pci_dev))
	if (!pci_dev->state_saved && !pci_has_subordinate(pci_dev))
		pci_prepare_to_sleep(pci_dev);

	/*

@@ -1325,8 +1364,6 @@ static int pci_uevent(struct device *dev, struct kobj_uevent_env *env)
		return -ENODEV;

	pdev = to_pci_dev(dev);
	if (!pdev)
		return -ENODEV;

	if (add_uevent_var(env, "PCI_CLASS=%04X", pdev->class))
		return -ENOMEM;

@@ -1347,6 +1384,7 @@ static int pci_uevent(struct device *dev, struct kobj_uevent_env *env)
			   (u8)(pdev->class >> 16), (u8)(pdev->class >> 8),
			   (u8)(pdev->class)))
		return -ENOMEM;

	return 0;
}

@@ -29,6 +29,7 @@
 #include <linux/slab.h>
 #include <linux/vgaarb.h>
 #include <linux/pm_runtime.h>
 #include <linux/of.h>
 #include "pci.h"

static int sysfs_initialized;	/* = 0 */

@@ -416,6 +417,20 @@ static ssize_t d3cold_allowed_show(struct device *dev,
static DEVICE_ATTR_RW(d3cold_allowed);
#endif

#ifdef CONFIG_OF
static ssize_t devspec_show(struct device *dev,
			    struct device_attribute *attr, char *buf)
{
	struct pci_dev *pdev = to_pci_dev(dev);
	struct device_node *np = pci_device_to_OF_node(pdev);

	if (np == NULL || np->full_name == NULL)
		return 0;
	return sprintf(buf, "%s", np->full_name);
}
static DEVICE_ATTR_RO(devspec);
#endif

#ifdef CONFIG_PCI_IOV
static ssize_t sriov_totalvfs_show(struct device *dev,
				   struct device_attribute *attr,

@@ -499,6 +514,45 @@ static struct device_attribute sriov_numvfs_attr =
		       sriov_numvfs_show, sriov_numvfs_store);
#endif /* CONFIG_PCI_IOV */

static ssize_t driver_override_store(struct device *dev,
				     struct device_attribute *attr,
				     const char *buf, size_t count)
{
	struct pci_dev *pdev = to_pci_dev(dev);
	char *driver_override, *old = pdev->driver_override, *cp;

	if (count > PATH_MAX)
		return -EINVAL;

	driver_override = kstrndup(buf, count, GFP_KERNEL);
	if (!driver_override)
		return -ENOMEM;

	cp = strchr(driver_override, '\n');
	if (cp)
		*cp = '\0';

	if (strlen(driver_override)) {
		pdev->driver_override = driver_override;
	} else {
		kfree(driver_override);
		pdev->driver_override = NULL;
	}

	kfree(old);

	return count;
}

static ssize_t driver_override_show(struct device *dev,
				    struct device_attribute *attr, char *buf)
{
	struct pci_dev *pdev = to_pci_dev(dev);

	return sprintf(buf, "%s\n", pdev->driver_override);
}
static DEVICE_ATTR_RW(driver_override);

static struct attribute *pci_dev_attrs[] = {
	&dev_attr_resource.attr,
	&dev_attr_vendor.attr,

@@ -521,6 +575,10 @@ static struct attribute *pci_dev_attrs[] = {
#if defined(CONFIG_PM_RUNTIME) && defined(CONFIG_ACPI)
	&dev_attr_d3cold_allowed.attr,
#endif
#ifdef CONFIG_OF
	&dev_attr_devspec.attr,
#endif
	&dev_attr_driver_override.attr,
	NULL,
};

@@ -1255,11 +1313,6 @@ static struct bin_attribute pcie_config_attr = {
	.write = pci_write_config,
};

int __weak pcibios_add_platform_entries(struct pci_dev *dev)
{
	return 0;
}

static ssize_t reset_store(struct device *dev,
			   struct device_attribute *attr, const char *buf,
			   size_t count)

@@ -1375,11 +1428,6 @@ int __must_check pci_create_sysfs_dev_files (struct pci_dev *pdev)
		pdev->rom_attr = attr;
	}

	/* add platform-specific attributes */
	retval = pcibios_add_platform_entries(pdev);
	if (retval)
		goto err_rom_file;

	/* add sysfs entries for various capabilities */
	retval = pci_create_capabilities_sysfs(pdev);
	if (retval)

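Taken together with the pci-driver.c changes above, the new attribute gives userspace a race-free way to steer a device to a specific driver: write the driver name, e.g. "vfio-pci", to /sys/bus/pci/devices/0000:01:00.0/driver_override before triggering a (re)probe, and write an empty string to clear the override again.
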
@@ -1468,6 +1468,17 @@ void __weak pcibios_release_device(struct pci_dev *dev) {}
 */
void __weak pcibios_disable_device (struct pci_dev *dev) {}

/**
 * pcibios_penalize_isa_irq - penalize an ISA IRQ
 * @irq: ISA IRQ to penalize
 * @active: IRQ active or not
 *
 * Permits the platform to provide architecture-specific functionality when
 * penalizing ISA IRQs. This is the default implementation. Architecture
 * implementations can override this.
 */
void __weak pcibios_penalize_isa_irq(int irq, int active) {}

static void do_pci_disable_device(struct pci_dev *dev)
{
	u16 pci_command;

@@ -3306,8 +3317,27 @@ static void pci_dev_unlock(struct pci_dev *dev)
	pci_cfg_access_unlock(dev);
}

/**
 * pci_reset_notify - notify device driver of reset
 * @dev: device to be notified of reset
 * @prepare: 'true' if device is about to be reset; 'false' if reset attempt
 *           completed
 *
 * Must be called prior to device access being disabled and after device
 * access is restored.
 */
static void pci_reset_notify(struct pci_dev *dev, bool prepare)
{
	const struct pci_error_handlers *err_handler =
		dev->driver ? dev->driver->err_handler : NULL;

	if (err_handler && err_handler->reset_notify)
		err_handler->reset_notify(dev, prepare);
}

static void pci_dev_save_and_disable(struct pci_dev *dev)
{
	pci_reset_notify(dev, true);

	/*
	 * Wake-up device prior to save. PM registers default to D0 after
	 * reset and a simple register restore doesn't reliably return

@@ -3329,6 +3359,7 @@ static void pci_dev_save_and_disable(struct pci_dev *dev)
static void pci_dev_restore(struct pci_dev *dev)
{
	pci_restore_state(dev);
	pci_reset_notify(dev, false);
}

static int pci_dev_reset(struct pci_dev *dev, int probe)

@@ -3345,6 +3376,7 @@ static int pci_dev_reset(struct pci_dev *dev, int probe)

	return rc;
}

/**
 * __pci_reset_function - reset a PCI device function
 * @dev: PCI device to reset

@@ -4126,7 +4158,7 @@ int pci_set_vga_state(struct pci_dev *dev, bool decode,
	u16 cmd;
	int rc;

	WARN_ON((flags & PCI_VGA_STATE_CHANGE_DECODES) & (command_bits & ~(PCI_COMMAND_IO|PCI_COMMAND_MEMORY)));
	WARN_ON((flags & PCI_VGA_STATE_CHANGE_DECODES) && (command_bits & ~(PCI_COMMAND_IO|PCI_COMMAND_MEMORY)));

	/* ARCH specific VGA enables */
	rc = pci_set_vga_state_arch(dev, decode, command_bits, flags);

|
|||
pm_wakeup_event(&dev->dev, 100);
|
||||
}
|
||||
|
||||
static inline bool pci_is_bridge(struct pci_dev *pci_dev)
|
||||
static inline bool pci_has_subordinate(struct pci_dev *pci_dev)
|
||||
{
|
||||
return !!(pci_dev->subordinate);
|
||||
}
|
||||
|
@ -201,11 +201,11 @@ int __pci_read_base(struct pci_dev *dev, enum pci_bar_type type,
|
|||
struct resource *res, unsigned int reg);
|
||||
int pci_resource_bar(struct pci_dev *dev, int resno, enum pci_bar_type *type);
|
||||
void pci_configure_ari(struct pci_dev *dev);
|
||||
void __ref __pci_bus_size_bridges(struct pci_bus *bus,
|
||||
void __pci_bus_size_bridges(struct pci_bus *bus,
|
||||
struct list_head *realloc_head);
|
||||
void __ref __pci_bus_assign_resources(const struct pci_bus *bus,
|
||||
struct list_head *realloc_head,
|
||||
struct list_head *fail_head);
|
||||
void __pci_bus_assign_resources(const struct pci_bus *bus,
|
||||
struct list_head *realloc_head,
|
||||
struct list_head *fail_head);
|
||||
|
||||
/**
|
||||
* pci_ari_enabled - query ARI forwarding status
|
||||
|
|
|
@@ -99,7 +99,7 @@ static int pcie_port_enable_msix(struct pci_dev *dev, int *vectors, int mask)
	for (i = 0; i < nr_entries; i++)
		msix_entries[i].entry = i;

	status = pci_enable_msix(dev, msix_entries, nr_entries);
	status = pci_enable_msix_exact(dev, msix_entries, nr_entries);
	if (status)
		goto Exit;

@@ -171,7 +171,7 @@ static int pcie_port_enable_msix(struct pci_dev *dev, int *vectors, int mask)
	pci_disable_msix(dev);

	/* Now allocate the MSI-X vectors for real */
	status = pci_enable_msix(dev, msix_entries, nvec);
	status = pci_enable_msix_exact(dev, msix_entries, nvec);
	if (status)
		goto Exit;
	}

@@ -379,10 +379,13 @@ int pcie_port_device_register(struct pci_dev *dev)
	/*
	 * Initialize service irqs. Don't use service devices that
	 * require interrupts if there is no way to generate them.
	 * However, some drivers may have a polling mode (e.g. pciehp_poll_mode)
	 * that can be used in the absence of irqs.  Allow them to determine
	 * if that is to be used.
	 */
	status = init_service_irqs(dev, irqs, capabilities);
	if (status) {
		capabilities &= PCIE_PORT_SERVICE_VC;
		capabilities &= PCIE_PORT_SERVICE_VC | PCIE_PORT_SERVICE_HP;
		if (!capabilities)
			goto error_disable;
	}

@@ -171,9 +171,10 @@ int __pci_read_base(struct pci_dev *dev, enum pci_bar_type type,
		    struct resource *res, unsigned int pos)
{
	u32 l, sz, mask;
	u64 l64, sz64, mask64;
	u16 orig_cmd;
	struct pci_bus_region region, inverted_region;
	bool bar_too_big = false, bar_disabled = false;
	bool bar_too_big = false, bar_too_high = false, bar_invalid = false;

	mask = type ? PCI_ROM_ADDRESS_MASK : ~0;

@@ -226,9 +227,9 @@ int __pci_read_base(struct pci_dev *dev, enum pci_bar_type type,
	}

	if (res->flags & IORESOURCE_MEM_64) {
		u64 l64 = l;
		u64 sz64 = sz;
		u64 mask64 = mask | (u64)~0 << 32;
		l64 = l;
		sz64 = sz;
		mask64 = mask | (u64)~0 << 32;

		pci_read_config_dword(dev, pos + 4, &l);
		pci_write_config_dword(dev, pos + 4, ~0);

@@ -243,19 +244,22 @@ int __pci_read_base(struct pci_dev *dev, enum pci_bar_type type,
	if (!sz64)
		goto fail;

	if ((sizeof(resource_size_t) < 8) && (sz64 > 0x100000000ULL)) {
	if ((sizeof(dma_addr_t) < 8 || sizeof(resource_size_t) < 8) &&
	    sz64 > 0x100000000ULL) {
		res->flags |= IORESOURCE_UNSET | IORESOURCE_DISABLED;
		res->start = 0;
		res->end = 0;
		bar_too_big = true;
		goto fail;
		goto out;
	}

	if ((sizeof(resource_size_t) < 8) && l) {
		/* Address above 32-bit boundary; disable the BAR */
		pci_write_config_dword(dev, pos, 0);
		pci_write_config_dword(dev, pos + 4, 0);
	if ((sizeof(dma_addr_t) < 8) && l) {
		/* Above 32-bit boundary; try to reallocate */
		res->flags |= IORESOURCE_UNSET;
		region.start = 0;
		region.end = sz64;
		bar_disabled = true;
		res->start = 0;
		res->end = sz64;
		bar_too_high = true;
		goto out;
	} else {
		region.start = l64;
		region.end = l64 + sz64;

@@ -285,11 +289,10 @@ int __pci_read_base(struct pci_dev *dev, enum pci_bar_type type,
	 * be claimed by the device.
	 */
	if (inverted_region.start != region.start) {
		dev_info(&dev->dev, "reg 0x%x: initial BAR value %pa invalid; forcing reassignment\n",
			 pos, &region.start);
		res->flags |= IORESOURCE_UNSET;
		res->end -= res->start;
		res->start = 0;
		res->end = region.end - region.start;
		bar_invalid = true;
	}

	goto out;

@@ -303,8 +306,15 @@ out:
	pci_write_config_word(dev, PCI_COMMAND, orig_cmd);

	if (bar_too_big)
		dev_err(&dev->dev, "reg 0x%x: can't handle 64-bit BAR\n", pos);
	if (res->flags && !bar_disabled)
		dev_err(&dev->dev, "reg 0x%x: can't handle BAR larger than 4GB (size %#010llx)\n",
			pos, (unsigned long long) sz64);
	if (bar_too_high)
		dev_info(&dev->dev, "reg 0x%x: can't handle BAR above 4G (bus address %#010llx)\n",
			 pos, (unsigned long long) l64);
	if (bar_invalid)
		dev_info(&dev->dev, "reg 0x%x: initial BAR value %#010llx invalid\n",
			 pos, (unsigned long long) region.start);
	if (res->flags)
		dev_printk(KERN_DEBUG, &dev->dev, "reg 0x%x: %pR\n", pos, res);

	return (res->flags & IORESOURCE_MEM_64) ? 1 : 0;

@@ -465,7 +475,7 @@ void pci_read_bridge_bases(struct pci_bus *child)

	if (dev->transparent) {
		pci_bus_for_each_resource(child->parent, res, i) {
			if (res) {
			if (res && res->flags) {
				pci_bus_add_resource(child, res,
						     PCI_SUBTRACTIVE_DECODE);
				dev_printk(KERN_DEBUG, &dev->dev,

@@ -719,7 +729,7 @@ add_dev:
	return child;
}

struct pci_bus *__ref pci_add_new_bus(struct pci_bus *parent, struct pci_dev *dev, int busnr)
struct pci_bus *pci_add_new_bus(struct pci_bus *parent, struct pci_dev *dev, int busnr)
{
	struct pci_bus *child;

@@ -983,6 +993,43 @@ void set_pcie_hotplug_bridge(struct pci_dev *pdev)
}

/**
 * pci_ext_cfg_is_aliased - is ext config space just an alias of std config?
 * @dev: PCI device
 *
 * PCI Express to PCI/PCI-X Bridge Specification, rev 1.0, 4.1.4 says that
 * when forwarding a type1 configuration request the bridge must check that
 * the extended register address field is zero.  The bridge is not permitted
 * to forward the transactions and must handle it as an Unsupported Request.
 * Some bridges do not follow this rule and simply drop the extended register
 * bits, resulting in the standard config space being aliased, every 256
 * bytes across the entire configuration space.  Test for this condition by
 * comparing the first dword of each potential alias to the vendor/device ID.
 * Known offenders:
 *   ASM1083/1085 PCIe-to-PCI Reversible Bridge (1b21:1080, rev 01 & 03)
 *   AMD/ATI SBx00 PCI to PCI Bridge (1002:4384, rev 40)
 */
static bool pci_ext_cfg_is_aliased(struct pci_dev *dev)
{
#ifdef CONFIG_PCI_QUIRKS
	int pos;
	u32 header, tmp;

	pci_read_config_dword(dev, PCI_VENDOR_ID, &header);

	for (pos = PCI_CFG_SPACE_SIZE;
	     pos < PCI_CFG_SPACE_EXP_SIZE; pos += PCI_CFG_SPACE_SIZE) {
		if (pci_read_config_dword(dev, pos, &tmp) != PCIBIOS_SUCCESSFUL
		    || header != tmp)
			return false;
	}

	return true;
#else
	return false;
#endif
}

/**
 * pci_cfg_space_size - get the configuration space size of the PCI device.
 * @dev: PCI device

@@ -1001,7 +1048,7 @@ static int pci_cfg_space_size_ext(struct pci_dev *dev)

	if (pci_read_config_dword(dev, pos, &status) != PCIBIOS_SUCCESSFUL)
		goto fail;
	if (status == 0xffffffff)
	if (status == 0xffffffff || pci_ext_cfg_is_aliased(dev))
		goto fail;

	return PCI_CFG_SPACE_EXP_SIZE;

@@ -1215,6 +1262,7 @@ static void pci_release_dev(struct device *dev)
	pci_release_of_node(pci_dev);
	pcibios_release_device(pci_dev);
	pci_bus_put(pci_dev->bus);
	kfree(pci_dev->driver_override);
	kfree(pci_dev);
}

@@ -1369,7 +1417,7 @@ void pci_device_add(struct pci_dev *dev, struct pci_bus *bus)
	WARN_ON(ret < 0);
}

struct pci_dev *__ref pci_scan_single_device(struct pci_bus *bus, int devfn)
struct pci_dev *pci_scan_single_device(struct pci_bus *bus, int devfn)
{
	struct pci_dev *dev;

@@ -1617,7 +1665,7 @@ static int pcie_bus_configure_set(struct pci_dev *dev, void *data)
 */
void pcie_bus_configure_settings(struct pci_bus *bus)
{
	u8 smpss;
	u8 smpss = 0;

	if (!bus->self)
		return;

@@ -1670,8 +1718,7 @@ unsigned int pci_scan_child_bus(struct pci_bus *bus)

	for (pass=0; pass < 2; pass++)
		list_for_each_entry(dev, &bus->devices, bus_list) {
			if (dev->hdr_type == PCI_HEADER_TYPE_BRIDGE ||
			    dev->hdr_type == PCI_HEADER_TYPE_CARDBUS)
			if (pci_is_bridge(dev))
				max = pci_scan_bridge(bus, dev, max, pass);
		}

@@ -1958,7 +2005,7 @@ EXPORT_SYMBOL(pci_scan_bus);
 *
 * Returns the max number of subordinate bus discovered.
 */
unsigned int __ref pci_rescan_bus_bridge_resize(struct pci_dev *bridge)
unsigned int pci_rescan_bus_bridge_resize(struct pci_dev *bridge)
{
	unsigned int max;
	struct pci_bus *bus = bridge->subordinate;

@@ -1981,7 +2028,7 @@ unsigned int __ref pci_rescan_bus_bridge_resize(struct pci_dev *bridge)
 *
 * Returns the max number of subordinate bus discovered.
 */
unsigned int __ref pci_rescan_bus(struct pci_bus *bus)
unsigned int pci_rescan_bus(struct pci_bus *bus)
{
	unsigned int max;

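The sizing path above uses the standard BAR probe: write all 1s, read the value back, mask the flag bits, and the size is the two's complement of what remains. A worked example as a sketch:

	/* a 32-bit memory BAR reads back 0xfffc0000 after writing ~0 */
	u32 sz   = 0xfffc0000 & PCI_BASE_ADDRESS_MEM_MASK;	/* 0xfffc0000 */
	u32 size = ~sz + 1;					/* 0x00040000 = 256 KB */
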
@@ -2954,6 +2954,7 @@ static void disable_igfx_irq(struct pci_dev *dev)
}
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0102, disable_igfx_irq);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x010a, disable_igfx_irq);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0152, disable_igfx_irq);

/*
 * PCI devices which are on Intel chips can skip the 10ms delay

@@ -2991,6 +2992,14 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_CHELSIO, 0x0030,
			 quirk_broken_intx_masking);
DECLARE_PCI_FIXUP_HEADER(0x1814, 0x0601, /* Ralink RT2800 802.11n PCI */
			 quirk_broken_intx_masking);
/*
 * Realtek RTL8169 PCI Gigabit Ethernet Controller (rev 10)
 * Subsystem: Realtek RTL8169/8110 Family PCI Gigabit Ethernet NIC
 *
 * RTL8110SC - Fails under PCI device assignment using DisINTx masking.
 */
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_REALTEK, 0x8169,
			 quirk_broken_intx_masking);

static void pci_do_fixups(struct pci_dev *dev, struct pci_fixup *f,
			  struct pci_fixup *end)

@@ -3453,6 +3462,8 @@ static const u16 pci_quirk_intel_pch_acs_ids[] = {
	/* Wildcat PCH */
	0x9c90, 0x9c91, 0x9c92, 0x9c93, 0x9c94, 0x9c95, 0x9c96, 0x9c97,
	0x9c98, 0x9c99, 0x9c9a, 0x9c9b,
	/* Patsburg (X79) PCH */
	0x1d10, 0x1d12, 0x1d14, 0x1d16, 0x1d18, 0x1d1a, 0x1d1c, 0x1d1e,
};

static bool pci_quirk_intel_pch_acs_match(struct pci_dev *dev)

@@ -7,7 +7,6 @@
 *	Copyright (C) 2003 -- 2004 Greg Kroah-Hartman <greg@kroah.com>
 */

 #include <linux/init.h>
 #include <linux/pci.h>
 #include <linux/slab.h>
 #include <linux/module.h>

@ -713,12 +713,11 @@ static void pci_bridge_check_ranges(struct pci_bus *bus)
|
|||
bus resource of a given type. Note: we intentionally skip
|
||||
the bus resources which have already been assigned (that is,
|
||||
have non-NULL parent resource). */
|
||||
static struct resource *find_free_bus_resource(struct pci_bus *bus, unsigned long type)
|
||||
static struct resource *find_free_bus_resource(struct pci_bus *bus,
|
||||
unsigned long type_mask, unsigned long type)
|
||||
{
|
||||
int i;
|
||||
struct resource *r;
|
||||
unsigned long type_mask = IORESOURCE_IO | IORESOURCE_MEM |
|
||||
IORESOURCE_PREFETCH;
|
||||
|
||||
pci_bus_for_each_resource(bus, r, i) {
|
||||
if (r == &ioport_resource || r == &iomem_resource)
|
||||
|
@ -815,7 +814,8 @@ static void pbus_size_io(struct pci_bus *bus, resource_size_t min_size,
|
|||
resource_size_t add_size, struct list_head *realloc_head)
|
||||
{
|
||||
struct pci_dev *dev;
|
||||
struct resource *b_res = find_free_bus_resource(bus, IORESOURCE_IO);
|
||||
struct resource *b_res = find_free_bus_resource(bus, IORESOURCE_IO,
|
||||
IORESOURCE_IO);
|
||||
resource_size_t size = 0, size0 = 0, size1 = 0;
|
||||
resource_size_t children_add_size = 0;
|
||||
resource_size_t min_align, align;
|
||||
|
@ -907,36 +907,40 @@ static inline resource_size_t calculate_mem_align(resource_size_t *aligns,
|
|||
* @bus : the bus
|
||||
* @mask: mask the resource flag, then compare it with type
|
||||
* @type: the type of free resource from bridge
|
||||
* @type2: second match type
|
||||
* @type3: third match type
|
||||
* @min_size : the minimum memory window that must to be allocated
|
||||
* @add_size : additional optional memory window
|
||||
* @realloc_head : track the additional memory window on this list
|
||||
*
|
||||
* Calculate the size of the bus and minimal alignment which
|
||||
* guarantees that all child resources fit in this size.
|
||||
*
|
||||
* Returns -ENOSPC if there's no available bus resource of the desired type.
|
||||
* Otherwise, sets the bus resource start/end to indicate the required
|
||||
* size, adds things to realloc_head (if supplied), and returns 0.
|
||||
*/
|
||||
static int pbus_size_mem(struct pci_bus *bus, unsigned long mask,
|
||||
unsigned long type, resource_size_t min_size,
|
||||
resource_size_t add_size,
|
||||
struct list_head *realloc_head)
|
||||
unsigned long type, unsigned long type2,
|
||||
unsigned long type3,
|
||||
resource_size_t min_size, resource_size_t add_size,
|
||||
struct list_head *realloc_head)
|
||||
{
|
||||
struct pci_dev *dev;
|
||||
resource_size_t min_align, align, size, size0, size1;
|
||||
resource_size_t aligns[12]; /* Alignments from 1Mb to 2Gb */
|
||||
resource_size_t aligns[14]; /* Alignments from 1Mb to 8Gb */
|
||||
int order, max_order;
|
||||
struct resource *b_res = find_free_bus_resource(bus, type);
|
||||
unsigned int mem64_mask = 0;
|
||||
struct resource *b_res = find_free_bus_resource(bus,
|
||||
mask | IORESOURCE_PREFETCH, type);
|
||||
resource_size_t children_add_size = 0;
|
||||
|
||||
if (!b_res)
|
||||
return 0;
|
||||
return -ENOSPC;
|
||||
|
||||
memset(aligns, 0, sizeof(aligns));
|
||||
max_order = 0;
|
||||
size = 0;
|
||||
|
||||
mem64_mask = b_res->flags & IORESOURCE_MEM_64;
|
||||
b_res->flags &= ~IORESOURCE_MEM_64;
|
||||
|
||||
list_for_each_entry(dev, &bus->devices, bus_list) {
|
||||
int i;
|
||||
|
||||
|
@ -944,7 +948,9 @@ static int pbus_size_mem(struct pci_bus *bus, unsigned long mask,
|
|||
struct resource *r = &dev->resource[i];
|
||||
resource_size_t r_size;
|
||||
|
||||
if (r->parent || (r->flags & mask) != type)
|
||||
if (r->parent || ((r->flags & mask) != type &&
|
||||
(r->flags & mask) != type2 &&
|
||||
(r->flags & mask) != type3))
|
||||
continue;
|
||||
r_size = resource_size(r);
|
||||
#ifdef CONFIG_PCI_IOV
|
||||
|
@ -957,10 +963,17 @@ static int pbus_size_mem(struct pci_bus *bus, unsigned long mask,
|
|||
continue;
|
||||
}
|
||||
#endif
|
||||
/* For bridges size != alignment */
|
||||
/*
|
||||
* aligns[0] is for 1MB (since bridge memory
|
||||
* windows are always at least 1MB aligned), so
|
||||
* keep "order" from being negative for smaller
|
||||
* resources.
|
||||
*/
|
||||
align = pci_resource_alignment(dev, r);
|
||||
order = __ffs(align) - 20;
|
||||
if (order > 11) {
|
||||
if (order < 0)
|
||||
order = 0;
|
||||
if (order >= ARRAY_SIZE(aligns)) {
|
||||
dev_warn(&dev->dev, "disabling BAR %d: %pR "
|
||||
"(bad alignment %#llx)\n", i, r,
|
||||
(unsigned long long) align);
|
||||
|
@ -968,15 +981,12 @@ static int pbus_size_mem(struct pci_bus *bus, unsigned long mask,
|
|||
continue;
|
||||
}
|
||||
size += r_size;
|
||||
if (order < 0)
|
||||
order = 0;
|
||||
/* Exclude ranges with size > align from
|
||||
calculation of the alignment. */
|
||||
if (r_size == align)
|
||||
aligns[order] += align;
|
||||
if (order > max_order)
|
||||
max_order = order;
|
||||
mem64_mask &= r->flags & IORESOURCE_MEM_64;
|
||||
|
||||
if (realloc_head)
|
||||
children_add_size += get_res_add_size(realloc_head, r);
|
||||
|
@ -997,18 +1007,18 @@ static int pbus_size_mem(struct pci_bus *bus, unsigned long mask,
|
|||
"%pR to %pR (unused)\n", b_res,
|
||||
&bus->busn_res);
|
||||
b_res->flags = 0;
|
||||
return 1;
|
||||
return 0;
|
||||
}
|
||||
b_res->start = min_align;
|
||||
b_res->end = size0 + min_align - 1;
|
||||
b_res->flags |= IORESOURCE_STARTALIGN | mem64_mask;
|
||||
b_res->flags |= IORESOURCE_STARTALIGN;
|
||||
if (size1 > size0 && realloc_head) {
|
||||
add_to_list(realloc_head, bus->self, b_res, size1-size0, min_align);
|
||||
dev_printk(KERN_DEBUG, &bus->self->dev, "bridge window "
|
||||
"%pR to %pR add_size %llx\n", b_res,
|
||||
&bus->busn_res, (unsigned long long)size1-size0);
|
||||
}
|
||||
return 1;
|
||||
return 0;
|
||||
}
|
||||
|
||||
unsigned long pci_cardbus_resource_alignment(struct resource *res)
|
||||
|
@ -1113,12 +1123,13 @@ handle_done:
|
|||
;
|
||||
}
|
||||
|
||||
void __ref __pci_bus_size_bridges(struct pci_bus *bus,
|
||||
struct list_head *realloc_head)
|
||||
void __pci_bus_size_bridges(struct pci_bus *bus, struct list_head *realloc_head)
|
||||
{
|
||||
struct pci_dev *dev;
|
||||
unsigned long mask, prefmask;
|
||||
unsigned long mask, prefmask, type2 = 0, type3 = 0;
|
||||
resource_size_t additional_mem_size = 0, additional_io_size = 0;
|
||||
struct resource *b_res;
|
||||
int ret;
|
||||
|
||||
list_for_each_entry(dev, &bus->devices, bus_list) {
|
||||
struct pci_bus *b = dev->subordinate;
|
||||
|
@@ -1152,41 +1163,93 @@ void __ref __pci_bus_size_bridges(struct pci_bus *bus,
 			additional_io_size  = pci_hotplug_io_size;
 			additional_mem_size = pci_hotplug_mem_size;
 		}
-		/*
-		 * Follow thru
-		 */
+		/* Fall through */
 	default:
 		pbus_size_io(bus, realloc_head ? 0 : additional_io_size,
 			     additional_io_size, realloc_head);
-		/* If the bridge supports prefetchable range, size it
-		   separately. If it doesn't, or its prefetchable window
-		   has already been allocated by arch code, try
-		   non-prefetchable range for both types of PCI memory
-		   resources. */
+
+		/*
+		 * If there's a 64-bit prefetchable MMIO window, compute
+		 * the size required to put all 64-bit prefetchable
+		 * resources in it.
+		 */
+		b_res = &bus->self->resource[PCI_BRIDGE_RESOURCES];
 		mask = IORESOURCE_MEM;
 		prefmask = IORESOURCE_MEM | IORESOURCE_PREFETCH;
-		if (pbus_size_mem(bus, prefmask, prefmask,
-				  realloc_head ? 0 : additional_mem_size,
-				  additional_mem_size, realloc_head))
-			mask = prefmask; /* Success, size non-prefetch only. */
-		else
-			additional_mem_size += additional_mem_size;
-		pbus_size_mem(bus, mask, IORESOURCE_MEM,
-			      additional_mem_size, realloc_head);
+		if (b_res[2].flags & IORESOURCE_MEM_64) {
+			prefmask |= IORESOURCE_MEM_64;
+			ret = pbus_size_mem(bus, prefmask, prefmask,
+				  prefmask, prefmask,
+				  realloc_head ? 0 : additional_mem_size,
+				  additional_mem_size, realloc_head);
+
+			/*
+			 * If successful, all non-prefetchable resources
+			 * and any 32-bit prefetchable resources will go in
+			 * the non-prefetchable window.
+			 */
+			if (ret == 0) {
+				mask = prefmask;
+				type2 = prefmask & ~IORESOURCE_MEM_64;
+				type3 = prefmask & ~IORESOURCE_PREFETCH;
+			}
+		}
+
+		/*
+		 * If there is no 64-bit prefetchable window, compute the
+		 * size required to put all prefetchable resources in the
+		 * 32-bit prefetchable window (if there is one).
+		 */
+		if (!type2) {
+			prefmask &= ~IORESOURCE_MEM_64;
+			ret = pbus_size_mem(bus, prefmask, prefmask,
+					 prefmask, prefmask,
+					 realloc_head ? 0 : additional_mem_size,
+					 additional_mem_size, realloc_head);
+
+			/*
+			 * If successful, only non-prefetchable resources
+			 * will go in the non-prefetchable window.
+			 */
+			if (ret == 0)
+				mask = prefmask;
+			else
+				additional_mem_size += additional_mem_size;
+
+			type2 = type3 = IORESOURCE_MEM;
+		}
+
+		/*
+		 * Compute the size required to put everything else in the
+		 * non-prefetchable window. This includes:
+		 *
+		 *   - all non-prefetchable resources
+		 *   - 32-bit prefetchable resources if there's a 64-bit
+		 *     prefetchable window or no prefetchable window at all
+		 *   - 64-bit prefetchable resources if there's no
+		 *     prefetchable window at all
+		 *
+		 * Note that the strategy in __pci_assign_resource() must
+		 * match that used here. Specifically, we cannot put a
+		 * 32-bit prefetchable resource in a 64-bit prefetchable
+		 * window.
+		 */
+		pbus_size_mem(bus, mask, IORESOURCE_MEM, type2, type3,
+				realloc_head ? 0 : additional_mem_size,
+				additional_mem_size, realloc_head);
 		break;
 	}
 }
 
-void __ref pci_bus_size_bridges(struct pci_bus *bus)
+void pci_bus_size_bridges(struct pci_bus *bus)
 {
 	__pci_bus_size_bridges(bus, NULL);
 }
 EXPORT_SYMBOL(pci_bus_size_bridges);
 
-void __ref __pci_bus_assign_resources(const struct pci_bus *bus,
-				      struct list_head *realloc_head,
-				      struct list_head *fail_head)
+void __pci_bus_assign_resources(const struct pci_bus *bus,
+				struct list_head *realloc_head,
+				struct list_head *fail_head)
 {
 	struct pci_bus *b;
 	struct pci_dev *dev;
 
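For illustration only (my annotation, not part of the patch), the sizing passes above boil down to three cases, depending on what kind of prefetchable window the bridge has:

	/*
	 * Illustrative summary of the sizing strategy above:
	 *
	 * 1) 64-bit prefetchable window present:
	 *      pass 1 sizes it with type = PREF | MEM_64 (64-bit pref
	 *      resources only); on success, pass 2 sizes the
	 *      non-prefetchable window with type2/type3 set so that
	 *      non-pref and 32-bit pref resources land there.
	 *
	 * 2) only a 32-bit prefetchable window:
	 *      MEM_64 is cleared from prefmask, so pass 1 sizes that
	 *      window for all prefetchable resources; pass 2 handles
	 *      the rest as plain IORESOURCE_MEM.
	 *
	 * 3) no prefetchable window at all:
	 *      a single pass with type2 = type3 = IORESOURCE_MEM puts
	 *      every memory BAR in the non-prefetchable window.
	 */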
@@ -1218,15 +1281,15 @@ void __ref __pci_bus_assign_resources(const struct pci_bus *bus,
 	}
 }
 
-void __ref pci_bus_assign_resources(const struct pci_bus *bus)
+void pci_bus_assign_resources(const struct pci_bus *bus)
 {
 	__pci_bus_assign_resources(bus, NULL, NULL);
 }
 EXPORT_SYMBOL(pci_bus_assign_resources);
 
-static void __ref __pci_bridge_assign_resources(const struct pci_dev *bridge,
-						struct list_head *add_head,
-						struct list_head *fail_head)
+static void __pci_bridge_assign_resources(const struct pci_dev *bridge,
+					  struct list_head *add_head,
+					  struct list_head *fail_head)
 {
 	struct pci_bus *b;
 
@@ -1257,42 +1320,66 @@ static void __ref __pci_bridge_assign_resources(const struct pci_dev *bridge,
 
 static void pci_bridge_release_resources(struct pci_bus *bus,
 					  unsigned long type)
 {
-	int idx;
-	bool changed = false;
-	struct pci_dev *dev;
+	struct pci_dev *dev = bus->self;
 	struct resource *r;
 	unsigned long type_mask = IORESOURCE_IO | IORESOURCE_MEM |
-				  IORESOURCE_PREFETCH;
+				  IORESOURCE_PREFETCH | IORESOURCE_MEM_64;
+	unsigned old_flags = 0;
+	struct resource *b_res;
+	int idx = 1;
 
-	dev = bus->self;
-	for (idx = PCI_BRIDGE_RESOURCES; idx <= PCI_BRIDGE_RESOURCE_END;
-	     idx++) {
-		r = &dev->resource[idx];
-		if ((r->flags & type_mask) != type)
-			continue;
-		if (!r->parent)
-			continue;
-		/*
-		 * if there are children under that, we should release them
-		 * all
-		 */
-		release_child_resources(r);
-		if (!release_resource(r)) {
-			dev_printk(KERN_DEBUG, &dev->dev,
-				 "resource %d %pR released\n", idx, r);
-			/* keep the old size */
-			r->end = resource_size(r) - 1;
-			r->start = 0;
-			r->flags = 0;
-			changed = true;
-		}
-	}
+	b_res = &dev->resource[PCI_BRIDGE_RESOURCES];
 
-	if (changed) {
+	/*
+	 *     1. if there is io port assign fail, will release bridge
+	 *	  io port.
+	 *     2. if there is non pref mmio assign fail, release bridge
+	 *	  nonpref mmio.
+	 *     3. if there is 64bit pref mmio assign fail, and bridge pref
+	 *	  is 64bit, release bridge pref mmio.
+	 *     4. if there is pref mmio assign fail, and bridge pref is
+	 *	  32bit mmio, release bridge pref mmio
+	 *     5. if there is pref mmio assign fail, and bridge pref is not
+	 *	  assigned, release bridge nonpref mmio.
+	 */
+	if (type & IORESOURCE_IO)
+		idx = 0;
+	else if (!(type & IORESOURCE_PREFETCH))
+		idx = 1;
+	else if ((type & IORESOURCE_MEM_64) &&
+		 (b_res[2].flags & IORESOURCE_MEM_64))
+		idx = 2;
+	else if (!(b_res[2].flags & IORESOURCE_MEM_64) &&
+		 (b_res[2].flags & IORESOURCE_PREFETCH))
+		idx = 2;
+	else
+		idx = 1;
+
+	r = &b_res[idx];
+
+	if (!r->parent)
+		return;
+
+	/*
+	 * if there are children under that, we should release them
+	 * all
+	 */
+	release_child_resources(r);
+	if (!release_resource(r)) {
+		type = old_flags = r->flags & type_mask;
+		dev_printk(KERN_DEBUG, &dev->dev, "resource %d %pR released\n",
+					PCI_BRIDGE_RESOURCES + idx, r);
+		/* keep the old size */
+		r->end = resource_size(r) - 1;
+		r->start = 0;
+		r->flags = 0;
 
 		/* avoiding touch the one without PREF */
 		if (type & IORESOURCE_PREFETCH)
 			type = IORESOURCE_PREFETCH;
 		__pci_setup_bridge(bus, type);
+		/* for next child res under same bridge */
+		r->flags = old_flags;
 	}
 }
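For reference (my annotation, not the patch): the idx values chosen above index the fixed bridge-window slots, so the five cases in the comment map onto three resources:

	/*
	 * b_res[0] (PCI_BRIDGE_RESOURCES + 0) - I/O window
	 * b_res[1] (PCI_BRIDGE_RESOURCES + 1) - non-prefetchable memory
	 * b_res[2] (PCI_BRIDGE_RESOURCES + 2) - prefetchable memory
	 *                                       (32- or 64-bit)
	 */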
@@ -1304,9 +1391,9 @@ enum release_type {
  * try to release pci bridge resources that is from leaf bridge,
  * so we can allocate big new one later
  */
-static void __ref pci_bus_release_bridge_resources(struct pci_bus *bus,
-						   unsigned long type,
-						   enum release_type rel_type)
+static void pci_bus_release_bridge_resources(struct pci_bus *bus,
+					     unsigned long type,
+					     enum release_type rel_type)
 {
 	struct pci_dev *dev;
 	bool is_leaf_bridge = true;
@@ -1471,7 +1558,7 @@ void pci_assign_unassigned_root_bus_resources(struct pci_bus *bus)
 	LIST_HEAD(fail_head);
 	struct pci_dev_resource *fail_res;
 	unsigned long type_mask = IORESOURCE_IO | IORESOURCE_MEM |
-				  IORESOURCE_PREFETCH;
+				  IORESOURCE_PREFETCH | IORESOURCE_MEM_64;
 	int pci_try_num = 1;
 	enum enable_type enable_local;
 
@@ -1629,9 +1716,7 @@ void pci_assign_unassigned_bus_resources(struct pci_bus *bus)
 
 	down_read(&pci_bus_sem);
 	list_for_each_entry(dev, &bus->devices, bus_list)
-		if (dev->hdr_type == PCI_HEADER_TYPE_BRIDGE ||
-		    dev->hdr_type == PCI_HEADER_TYPE_CARDBUS)
-			if (dev->subordinate)
+		if (pci_is_bridge(dev) && pci_has_subordinate(dev))
 			__pci_bus_size_bridges(dev->subordinate,
 						 &add_list);
 	up_read(&pci_bus_sem);
@@ -10,7 +10,6 @@
  */
 
-#include <linux/init.h>
 #include <linux/kernel.h>
 #include <linux/pci.h>
 #include <linux/errno.h>
 
@@ -16,7 +16,6 @@
  * Resource sorting
  */
 
-#include <linux/init.h>
 #include <linux/kernel.h>
 #include <linux/export.h>
 #include <linux/pci.h>
@@ -209,21 +208,42 @@ static int __pci_assign_resource(struct pci_bus *bus, struct pci_dev *dev,
 
 	min = (res->flags & IORESOURCE_IO) ? PCIBIOS_MIN_IO : PCIBIOS_MIN_MEM;
 
-	/* First, try exact prefetching match.. */
+	/*
+	 * First, try exact prefetching match.  Even if a 64-bit
+	 * prefetchable bridge window is below 4GB, we can't put a 32-bit
+	 * prefetchable resource in it because pbus_size_mem() assumes a
+	 * 64-bit window will contain no 32-bit resources.  If we assign
+	 * things differently than they were sized, not everything will fit.
+	 */
 	ret = pci_bus_alloc_resource(bus, res, size, align, min,
-				     IORESOURCE_PREFETCH,
+				     IORESOURCE_PREFETCH | IORESOURCE_MEM_64,
 				     pcibios_align_resource, dev);
+	if (ret == 0)
+		return 0;
 
-	if (ret < 0 && (res->flags & IORESOURCE_PREFETCH)) {
-		/*
-		 * That failed.
-		 *
-		 * But a prefetching area can handle a non-prefetching
-		 * window (it will just not perform as well).
-		 */
+	/*
+	 * If the prefetchable window is only 32 bits wide, we can put
+	 * 64-bit prefetchable resources in it.
+	 */
+	if ((res->flags & (IORESOURCE_PREFETCH | IORESOURCE_MEM_64)) ==
+	     (IORESOURCE_PREFETCH | IORESOURCE_MEM_64)) {
+		ret = pci_bus_alloc_resource(bus, res, size, align, min,
+					     IORESOURCE_PREFETCH,
+					     pcibios_align_resource, dev);
+		if (ret == 0)
+			return 0;
+	}
+
+	/*
+	 * If we didn't find a better match, we can put any memory resource
+	 * in a non-prefetchable window.  If this resource is 32 bits and
+	 * non-prefetchable, the first call already tried the only possibility
+	 * so we don't need to try again.
+	 */
+	if (res->flags & (IORESOURCE_PREFETCH | IORESOURCE_MEM_64))
 		ret = pci_bus_alloc_resource(bus, res, size, align, min, 0,
 					     pcibios_align_resource, dev);
-	}
 
 	return ret;
 }
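A compact way to read the new fallback logic (illustrative comment, not from the patch): the allocation attempts must mirror how pbus_size_mem() sized the windows, so each resource type only tries windows it could have been sized into:

	/*
	 * resource flags             windows tried, in order
	 * ------------------------   -----------------------------------
	 * PREF | MEM_64              64-bit pref, then 32-bit pref,
	 *                            then non-prefetchable
	 * PREF (32-bit only)         32-bit pref, then non-prefetchable
	 * plain MEM                  non-prefetchable only (the first
	 *                            call already tried it)
	 */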
@@ -78,8 +78,7 @@ int __ref cb_alloc(struct pcmcia_socket *s)
 	max = bus->busn_res.start;
 	for (pass = 0; pass < 2; pass++)
 		list_for_each_entry(dev, &bus->devices, bus_list)
-			if (dev->hdr_type == PCI_HEADER_TYPE_BRIDGE ||
-			    dev->hdr_type == PCI_HEADER_TYPE_CARDBUS)
+			if (pci_is_bridge(dev))
 				max = pci_scan_bridge(bus, dev, max, pass);
 
 	/*
@@ -642,8 +642,7 @@ static void asus_rfkill_hotplug(struct asus_wmi *asus)
 			dev = pci_scan_single_device(bus, 0);
 			if (dev) {
 				pci_bus_assign_resources(bus);
-				if (pci_bus_add_device(dev))
-					pr_err("Unable to hotplug wifi\n");
+				pci_bus_add_device(dev);
 			}
 		} else {
 			dev = pci_get_slot(bus, 0);
@@ -633,8 +633,7 @@ static void eeepc_rfkill_hotplug(struct eeepc_laptop *eeepc, acpi_handle handle)
 			dev = pci_scan_single_device(bus, 0);
 			if (dev) {
 				pci_bus_assign_resources(bus);
-				if (pci_bus_add_device(dev))
-					pr_err("Unable to hotplug wifi\n");
+				pci_bus_add_device(dev);
 			}
 		} else {
 			dev = pci_get_slot(bus, 0);
@@ -16,16 +16,13 @@ int dma_mmap_from_coherent(struct device *dev, struct vm_area_struct *vma,
  * Standard interface
  */
 #define ARCH_HAS_DMA_DECLARE_COHERENT_MEMORY
-extern int
-dma_declare_coherent_memory(struct device *dev, dma_addr_t bus_addr,
-			    dma_addr_t device_addr, size_t size, int flags);
+int dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
+				dma_addr_t device_addr, size_t size, int flags);
 
-extern void
-dma_release_declared_memory(struct device *dev);
+void dma_release_declared_memory(struct device *dev);
 
-extern void *
-dma_mark_declared_memory_occupied(struct device *dev,
-				  dma_addr_t device_addr, size_t size);
+void *dma_mark_declared_memory_occupied(struct device *dev,
+					dma_addr_t device_addr, size_t size);
 #else
 #define dma_alloc_from_coherent(dev, size, handle, ret) (0)
 #define dma_release_from_coherent(dev, order, vaddr) (0)
@@ -8,6 +8,12 @@
 #include <linux/dma-direction.h>
 #include <linux/scatterlist.h>
 
+/*
+ * A dma_addr_t can hold any valid DMA or bus address for the platform.
+ * It can be given to a device to use as a DMA source or target.  A CPU cannot
+ * reference a dma_addr_t directly because there may be translation between
+ * its physical address space and the bus address space.
+ */
 struct dma_map_ops {
 	void* (*alloc)(struct device *dev, size_t size,
 		       dma_addr_t *dma_handle, gfp_t gfp,
@@ -186,7 +192,7 @@ static inline int dma_get_cache_alignment(void)
 
 #ifndef ARCH_HAS_DMA_DECLARE_COHERENT_MEMORY
 static inline int
-dma_declare_coherent_memory(struct device *dev, dma_addr_t bus_addr,
+dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
 			    dma_addr_t device_addr, size_t size, int flags)
 {
 	return 0;
@@ -217,13 +223,14 @@ extern void *dmam_alloc_noncoherent(struct device *dev, size_t size,
 extern void dmam_free_noncoherent(struct device *dev, size_t size, void *vaddr,
 				  dma_addr_t dma_handle);
 #ifdef ARCH_HAS_DMA_DECLARE_COHERENT_MEMORY
-extern int dmam_declare_coherent_memory(struct device *dev, dma_addr_t bus_addr,
+extern int dmam_declare_coherent_memory(struct device *dev,
+					phys_addr_t phys_addr,
 					dma_addr_t device_addr, size_t size,
 					int flags);
 extern void dmam_release_declared_memory(struct device *dev);
 #else /* ARCH_HAS_DMA_DECLARE_COHERENT_MEMORY */
 static inline int dmam_declare_coherent_memory(struct device *dev,
-				dma_addr_t bus_addr, dma_addr_t device_addr,
+				phys_addr_t phys_addr, dma_addr_t device_addr,
 				size_t size, gfp_t gfp)
 {
 	return 0;
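The hunks above change dma_declare_coherent_memory() and friends to take a CPU physical address (phys_addr_t) separately from the device-visible address. A minimal sketch of a caller under the new signature; the addresses, size, and error handling here are illustrative, not from the patch:

	#include <linux/dma-mapping.h>
	#include <linux/sizes.h>

	static int example_declare(struct device *dev)
	{
		phys_addr_t phys = 0x90000000;		/* CPU physical address (made up) */
		dma_addr_t devaddr = 0x80000000;	/* address the device sees (made up) */

		/* Returns nonzero (DMA_MEMORY_MAP) on success, 0 on failure. */
		if (!dma_declare_coherent_memory(dev, phys, devaddr, SZ_1M,
						 DMA_MEMORY_MAP))
			return -ENOMEM;
		return 0;
	}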
@@ -365,6 +365,7 @@ struct pci_dev {
 #endif
 	phys_addr_t rom; /* Physical address of ROM if it's not from the BAR */
 	size_t romlen; /* Length of ROM if it's not from the BAR */
+	char *driver_override; /* Driver name to force a match */
 };
 
 static inline struct pci_dev *pci_physfn(struct pci_dev *dev)
@@ -477,6 +478,19 @@ static inline bool pci_is_root_bus(struct pci_bus *pbus)
 	return !(pbus->parent);
 }
 
+/**
+ * pci_is_bridge - check if the PCI device is a bridge
+ * @dev: PCI device
+ *
+ * Return true if the PCI device is bridge whether it has subordinate
+ * or not.
+ */
+static inline bool pci_is_bridge(struct pci_dev *dev)
+{
+	return dev->hdr_type == PCI_HEADER_TYPE_BRIDGE ||
+		dev->hdr_type == PCI_HEADER_TYPE_CARDBUS;
+}
+
 static inline struct pci_dev *pci_upstream_bridge(struct pci_dev *dev)
 {
 	dev = pci_physfn(dev);
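With pci_is_bridge() available, the open-coded checks removed elsewhere in this series collapse to a single call. A sketch (handle_bridge() is a placeholder, not a kernel function):

	list_for_each_entry(dev, &bus->devices, bus_list) {
		/* was: dev->hdr_type == PCI_HEADER_TYPE_BRIDGE ||
		 *      dev->hdr_type == PCI_HEADER_TYPE_CARDBUS */
		if (pci_is_bridge(dev))
			handle_bridge(dev);
	}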
@@ -518,7 +532,7 @@ static inline int pcibios_err_to_errno(int err)
 	case PCIBIOS_FUNC_NOT_SUPPORTED:
 		return -ENOENT;
 	case PCIBIOS_BAD_VENDOR_ID:
-		return -EINVAL;
+		return -ENOTTY;
 	case PCIBIOS_DEVICE_NOT_FOUND:
 		return -ENODEV;
 	case PCIBIOS_BAD_REGISTER_NUMBER:
@@ -529,7 +543,7 @@ static inline int pcibios_err_to_errno(int err)
 		return -ENOSPC;
 	}
 
-	return -ENOTTY;
+	return -ERANGE;
 }
 
 /* Low-level architecture-dependent routines */
@@ -603,6 +617,9 @@ struct pci_error_handlers {
 	/* PCI slot has been reset */
 	pci_ers_result_t (*slot_reset)(struct pci_dev *dev);
 
+	/* PCI function reset prepare or completed */
+	void (*reset_notify)(struct pci_dev *dev, bool prepare);
+
 	/* Device driver may resume normal operations */
 	void (*resume)(struct pci_dev *dev);
 };
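This is the hook behind the reset-notification work in the changelog. A hedged sketch of how a driver might wire it up; the mydrv names and helpers are hypothetical:

	static void mydrv_reset_notify(struct pci_dev *pdev, bool prepare)
	{
		struct mydrv *md = pci_get_drvdata(pdev);

		if (prepare)
			mydrv_quiesce(md);	/* stop I/O before the reset */
		else
			mydrv_restart(md);	/* reinitialize afterwards */
	}

	static const struct pci_error_handlers mydrv_err_handlers = {
		.reset_notify	= mydrv_reset_notify,
	};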
@@ -680,8 +697,8 @@ struct pci_driver {
 
 /**
  * PCI_VDEVICE - macro used to describe a specific pci device in short form
- * @vendor: the vendor name
- * @device: the 16 bit PCI Device ID
+ * @vend: the vendor name
+ * @dev: the 16 bit PCI Device ID
  *
  * This macro is used to create a struct pci_device_id that matches a
  * specific PCI device.  The subvendor, and subdevice fields will be set
@@ -689,9 +706,9 @@ struct pci_driver {
  * private data.
  */
 
-#define PCI_VDEVICE(vendor, device)		\
-	PCI_VENDOR_ID_##vendor, (device),	\
-	PCI_ANY_ID, PCI_ANY_ID, 0, 0
+#define PCI_VDEVICE(vend, dev) \
+	.vendor = PCI_VENDOR_ID_##vend, .device = (dev), \
+	.subvendor = PCI_ANY_ID, .subdevice = PCI_ANY_ID, 0, 0
 
 /* these external functions are only available when PCI support is enabled */
 #ifdef CONFIG_PCI
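Usage is unchanged; the designated initializers just make the expansion robust against field ordering in struct pci_device_id. An illustrative device table (the device ID is made up):

	static const struct pci_device_id example_ids[] = {
		{ PCI_VDEVICE(REDHAT, 0x0001) },
		{ 0, }
	};
	MODULE_DEVICE_TABLE(pci, example_ids);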
@@ -764,7 +781,7 @@ int pci_scan_slot(struct pci_bus *bus, int devfn);
 struct pci_dev *pci_scan_single_device(struct pci_bus *bus, int devfn);
 void pci_device_add(struct pci_dev *dev, struct pci_bus *bus);
 unsigned int pci_scan_child_bus(struct pci_bus *bus);
-int __must_check pci_bus_add_device(struct pci_dev *dev);
+void pci_bus_add_device(struct pci_dev *dev);
 void pci_read_bridge_bases(struct pci_bus *child);
 struct resource *pci_find_parent_resource(const struct pci_dev *dev,
 					  struct resource *res);
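pci_bus_add_device() no longer returns an error, which is what let the asus-wmi and eeepc-laptop hunks earlier drop their error paths. The hotplug pattern is now simply (sketch):

	dev = pci_scan_single_device(bus, 0);
	if (dev) {
		pci_bus_assign_resources(bus);
		pci_bus_add_device(dev);	/* void; nothing to check */
	}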
@@ -1158,7 +1175,6 @@ struct msix_entry {
 
 #ifdef CONFIG_PCI_MSI
 int pci_msi_vec_count(struct pci_dev *dev);
-int pci_enable_msi_block(struct pci_dev *dev, int nvec);
 void pci_msi_shutdown(struct pci_dev *dev);
 void pci_disable_msi(struct pci_dev *dev);
 int pci_msix_vec_count(struct pci_dev *dev);
@@ -1188,8 +1204,6 @@ static inline int pci_enable_msix_exact(struct pci_dev *dev,
 }
 #else
 static inline int pci_msi_vec_count(struct pci_dev *dev) { return -ENOSYS; }
-static inline int pci_enable_msi_block(struct pci_dev *dev, int nvec)
-{ return -ENOSYS; }
 static inline void pci_msi_shutdown(struct pci_dev *dev) { }
 static inline void pci_disable_msi(struct pci_dev *dev) { }
 static inline int pci_msix_vec_count(struct pci_dev *dev) { return -ENOSYS; }
@@ -1244,7 +1258,7 @@ static inline void pcie_set_ecrc_checking(struct pci_dev *dev) { }
 static inline void pcie_ecrc_get_policy(char *str) { }
 #endif
 
-#define pci_enable_msi(pdev)	pci_enable_msi_block(pdev, 1)
+#define pci_enable_msi(pdev)	pci_enable_msi_exact(pdev, 1)
 
 #ifdef CONFIG_HT_IRQ
 /* The functions a driver should call */
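Callers that need a fixed vector count use the _exact helpers, which either allocate precisely nvec vectors or fail. A minimal sketch:

	static int example_setup_irqs(struct pci_dev *pdev)
	{
		/* exactly four MSI vectors, or a negative errno */
		return pci_enable_msi_exact(pdev, 4);
	}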
@@ -1572,13 +1586,13 @@ extern unsigned long pci_hotplug_io_size;
 extern unsigned long pci_hotplug_mem_size;
 
 /* Architecture-specific versions may override these (weak) */
-int pcibios_add_platform_entries(struct pci_dev *dev);
 void pcibios_disable_device(struct pci_dev *dev);
 void pcibios_set_master(struct pci_dev *dev);
 int pcibios_set_pcie_reset_state(struct pci_dev *dev,
 				 enum pcie_reset_state state);
 int pcibios_add_device(struct pci_dev *dev);
 void pcibios_release_device(struct pci_dev *dev);
+void pcibios_penalize_isa_irq(int irq, int active);
 
 #ifdef CONFIG_HIBERNATE_CALLBACKS
 extern struct dev_pm_ops pcibios_pm_ops;
@@ -1631,8 +1631,6 @@
 #define PCI_DEVICE_ID_ATT_VENUS_MODEM	0x480
 
 #define PCI_VENDOR_ID_SPECIALIX		0x11cb
-#define PCI_DEVICE_ID_SPECIALIX_IO8	0x2000
-#define PCI_DEVICE_ID_SPECIALIX_RIO	0x8000
 #define PCI_SUBDEVICE_ID_SPECIALIX_SPEED4 0xa004
 
 #define PCI_VENDOR_ID_ANALOG_DEVICES	0x11d4
@@ -2874,7 +2872,6 @@
 #define PCI_DEVICE_ID_SCALEMP_VSMP_CTL	0x1010
 
 #define PCI_VENDOR_ID_COMPUTONE		0x8e0e
-#define PCI_DEVICE_ID_COMPUTONE_IP2EX	0x0291
 #define PCI_DEVICE_ID_COMPUTONE_PG	0x0302
 #define PCI_SUBVENDOR_ID_COMPUTONE	0x8e0e
 #define PCI_SUBDEVICE_ID_COMPUTONE_PG4	0x0001
@@ -142,6 +142,7 @@ typedef unsigned long blkcnt_t;
 #define pgoff_t unsigned long
 #endif
 
+/* A dma_addr_t can hold any valid DMA or bus address for the platform */
 #ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
 typedef u64 dma_addr_t;
 #else
@@ -1288,13 +1288,10 @@ int iomem_map_sanity_check(resource_size_t addr, unsigned long size)
 		if (p->flags & IORESOURCE_BUSY)
 			continue;
 
-		printk(KERN_WARNING "resource map sanity check conflict: "
-		       "0x%llx 0x%llx 0x%llx 0x%llx %s\n",
+		printk(KERN_WARNING "resource sanity check: requesting [mem %#010llx-%#010llx], which spans more than %s %pR\n",
 		       (unsigned long long)addr,
 		       (unsigned long long)(addr + size - 1),
-		       (unsigned long long)p->start,
-		       (unsigned long long)p->end,
-		       p->name);
+		       p->name, p);
 		err = -1;
 		break;
 	}