PCI changes for the v3.12 merge window:

Merge tag 'pci-v3.12-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci

Pull PCI changes from Bjorn Helgaas:

 PCI device hotplug:
  - Use PCIe native hotplug, not ACPI hotplug, when possible (Neil Horman)
  - Assign resources on per-host bridge basis (Yinghai Lu)

 MPS (Max Payload Size):
  - Allow larger MPS settings below hotplug-capable Root Port (Yijing Wang)
  - Add warnings about unsafe MPS settings (Yijing Wang)
  - Simplify interface and messages (Bjorn Helgaas)

 SR-IOV:
  - Return -ENOSYS on non-SR-IOV devices (Stefan Assmann)
  - Update NumVFs register when disabling SR-IOV (Yijing Wang)

 Virtualization:
  - Add bus and slot reset support (Alex Williamson)
  - Fix ACS (Access Control Services) issues (Alex Williamson)

 Miscellaneous:
  - Simplify PCIe Capability accessors (Bjorn Helgaas)
  - Add pcibios_pm_ops for arch-specific hibernate stuff (Sebastian Ott)
  - Disable decoding during BAR sizing only when necessary (Zoltan Kiss)
  - Delay enabling bridges until they're needed (Yinghai Lu)
  - Split Designware support into Synopsys and Exynos parts (Jingoo Han)
  - Convert class code to use dev_groups (Greg Kroah-Hartman)
  - Cleanup Designware and Exynos I/O access wrappers (Seungwon Jeon)
  - Fix bridge I/O window alignment (Bjorn Helgaas)
  - Add pci_wait_for_pending_transaction() (Casey Leedom)
  - Use devm_ioremap_resource() in Marvell driver (Tushar Behera)

* tag 'pci-v3.12-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (63 commits)
  PCI/ACPI: Fix _OSC ordering to allow PCIe hotplug use when available
  PCI: exynos: Add I/O access wrappers
  PCI: designware: Drop "addr" arg from dw_pcie_readl_rc()/dw_pcie_writel_rc()
  PCI: Remove pcie_cap_has_devctl()
  PCI: Support PCIe Capability Slot registers only for ports with slots
  PCI: Remove PCIe Capability version checks
  PCI: Allow PCIe Capability link-related register access for switches
  PCI: Add offsets of PCIe capability registers
  PCI: Tidy bitmasks and spacing of PCIe capability definitions
  PCI: Remove obsolete comment reference to pci_pcie_cap2()
  PCI: Clarify PCI_EXP_TYPE_PCI_BRIDGE comment
  PCI: Rename PCIe capability definitions to follow convention
  PCI: Warn if unsafe MPS settings detected
  PCI: Fix MPS peer-to-peer DMA comment syntax
  PCI: Disable decoding for BAR sizing only when it was actually enabled
  PCI: Add comment about needing pci_msi_off() even when CONFIG_PCI_MSI=n
  PCI: Add pcibios_pm_ops for optional arch-specific hibernate functionality
  PCI: Don't restrict MPS for slots below Root Ports
  PCI: Simplify MPS test for Downstream Port
  PCI: Remove unnecessary check for pcie_get_mps() failure
  ...
commit a923874198
@@ -18,6 +18,7 @@ Required properties:
 - interrupt-map-mask and interrupt-map: standard PCI properties
 	to define the mapping of the PCIe interface to interrupt
 	numbers.
+- num-lanes: number of lanes to use
 - reset-gpio: gpio pin number of power good signal
 
 Example:
@@ -41,6 +42,7 @@ SoC specific DT Entry:
 		#interrupt-cells = <1>;
 		interrupt-map-mask = <0 0 0 0>;
 		interrupt-map = <0x0 0 &gic 53>;
+		num-lanes = <4>;
 	};
 
 	pcie@2a0000 {
@@ -60,6 +62,7 @@ SoC specific DT Entry:
 		#interrupt-cells = <1>;
 		interrupt-map-mask = <0 0 0 0>;
 		interrupt-map = <0x0 0 &gic 56>;
+		num-lanes = <4>;
 	};
 
 Board specific DT Entry:
@@ -248,6 +248,7 @@
 			#interrupt-cells = <1>;
 			interrupt-map-mask = <0 0 0 0>;
 			interrupt-map = <0x0 0 &gic 53>;
+			num-lanes = <4>;
 		};
 
 		pcie@2a0000 {
@@ -267,5 +268,6 @@
 			#interrupt-cells = <1>;
 			interrupt-map-mask = <0 0 0 0>;
 			interrupt-map = <0x0 0 &gic 56>;
+			num-lanes = <4>;
 		};
 	};
@@ -525,11 +525,6 @@ void pci_common_init_dev(struct device *parent, struct hw_pci *hw)
 		 * Assign resources.
 		 */
 		pci_bus_assign_resources(bus);
-
-		/*
-		 * Enable bridges
-		 */
-		pci_enable_bridges(bus);
 	}
 
 	/*
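This hunk, like several below, deletes an eager pci_enable_bridges() sweep: with "delay enabling bridges until they're needed", enabling a device brings up its chain of parent bridges on demand instead. A toy standalone model of that ordering; the struct, field names, and toy_enable() are invented for illustration and are not the kernel's types:

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-in for a device with a chain of parent bridges above it. */
struct toy_dev {
	struct toy_dev *parent_bridge;	/* NULL at the root */
	int enabled;
	int enable_order;		/* records when it was enabled */
};

static int next_order;

/* Enable a device; its parent bridges come up first, outermost to
 * innermost, and only when something below them actually needs them. */
static void toy_enable(struct toy_dev *dev)
{
	if (dev->enabled)
		return;
	if (dev->parent_bridge)
		toy_enable(dev->parent_bridge);
	dev->enabled = 1;
	dev->enable_order = ++next_order;
}
```

Enabling one leaf device is enough to bring up the whole path above it, which is why the tree-wide enable pass becomes redundant.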
@@ -320,7 +320,7 @@ DECLARE_PCI_FIXUP_HEADER(PCI_ANY_ID, PCI_ANY_ID, pci_fixup_ide_bases);
  * are examined.
  */
 
-void __init pcibios_fixup_bus(struct pci_bus *bus)
+void pcibios_fixup_bus(struct pci_bus *bus)
 {
 #if 0
 	printk("### PCIBIOS_FIXUP_BUS(%d)\n",bus->number);
@@ -319,7 +319,6 @@ static int __init mcf_pci_init(void)
 	pci_fixup_irqs(pci_common_swizzle, mcf_pci_map_irq);
 	pci_bus_size_bridges(rootbus);
 	pci_bus_assign_resources(rootbus);
-	pci_enable_bridges(rootbus);
 	return 0;
 }
 
@@ -113,7 +113,6 @@ static void pcibios_scanbus(struct pci_controller *hose)
 		if (!pci_has_flag(PCI_PROBE_ONLY)) {
 			pci_bus_size_bridges(bus);
 			pci_bus_assign_resources(bus);
-			pci_enable_bridges(bus);
 		}
 	}
 }
@@ -1674,12 +1674,8 @@ void pcibios_scan_phb(struct pci_controller *hose)
 	/* Configure PCI Express settings */
 	if (bus && !pci_has_flag(PCI_PROBE_ONLY)) {
 		struct pci_bus *child;
-		list_for_each_entry(child, &bus->children, node) {
-			struct pci_dev *self = child->self;
-			if (!self)
-				continue;
-			pcie_bus_configure_settings(child, self->pcie_mpss);
-		}
+		list_for_each_entry(child, &bus->children, node)
+			pcie_bus_configure_settings(child);
 	}
 }
 
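This hunk and the similar ones below all shrink the same call-site pattern: pcie_bus_configure_settings() now takes only the bus, and the "bridge may be NULL" guard moves into the callee. A minimal standalone sketch of that interface change, using made-up stand-in structs rather than the kernel's pci_bus/pci_dev types:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins for a bus and the bridge device leading to it. */
struct fake_dev { int mpss; };
struct fake_bus { struct fake_dev *self; int configured; };

/* New-style callee: it looks up the bridge itself and bails out on NULL,
 * so every caller collapses to a plain loop body with no guard. */
static void configure_settings(struct fake_bus *bus)
{
	if (bus == NULL || bus->self == NULL)
		return;	/* virtual or partially built bus: nothing to do */
	bus->configured = bus->self->mpss;
}
```

Moving the check into one place means the several arch callers seen in these hunks no longer each repeat the `self == NULL` test.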
@@ -69,7 +69,6 @@ static void pcibios_scanbus(struct pci_channel *hose)
 
 		pci_bus_size_bridges(bus);
 		pci_bus_assign_resources(bus);
-		pci_enable_bridges(bus);
 	} else {
 		pci_free_resource_list(&resources);
 	}
@@ -508,13 +508,8 @@ static void fixup_read_and_payload_sizes(struct pci_controller *controller)
 			     rc_dev_cap.word);
 
 	/* Configure PCI Express MPS setting. */
-	list_for_each_entry(child, &root_bus->children, node) {
-		struct pci_dev *self = child->self;
-		if (!self)
-			continue;
-
-		pcie_bus_configure_settings(child, self->pcie_mpss);
-	}
+	list_for_each_entry(child, &root_bus->children, node)
+		pcie_bus_configure_settings(child);
 
 	/*
 	 * Set the mac_config register in trio based on the MPS/MRS of the link.
@@ -568,13 +568,8 @@ struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root)
 	 */
 	if (bus) {
 		struct pci_bus *child;
-		list_for_each_entry(child, &bus->children, node) {
-			struct pci_dev *self = child->self;
-			if (!self)
-				continue;
-
-			pcie_bus_configure_settings(child, self->pcie_mpss);
-		}
+		list_for_each_entry(child, &bus->children, node)
+			pcie_bus_configure_settings(child);
 	}
 
 	if (bus && node != -1) {
@@ -700,7 +700,7 @@ int pci_mmconfig_insert(struct device *dev, u16 seg, u8 start, u8 end,
 	if (!(pci_probe & PCI_PROBE_MMCONF) || pci_mmcfg_arch_init_failed)
 		return -ENODEV;
 
-	if (start > end)
+	if (start > end || !addr)
 		return -EINVAL;
 
 	mutex_lock(&pci_mmcfg_lock);
@@ -716,11 +716,6 @@ int pci_mmconfig_insert(struct device *dev, u16 seg, u8 start, u8 end,
 		return -EEXIST;
 	}
 
-	if (!addr) {
-		mutex_unlock(&pci_mmcfg_lock);
-		return -EINVAL;
-	}
-
 	rc = -EBUSY;
 	cfg = pci_mmconfig_alloc(seg, start, end, addr);
 	if (cfg == NULL) {
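The two mmconfig hunks above fold the `!addr` test into the early parameter validation, so trivially invalid input is rejected before pci_mmcfg_lock is taken and the unlock-and-return failure path disappears. A toy userspace sketch of the same validate-before-lock shape; insert_region() and its parameters are illustrative, not the kernel API:

```c
#include <assert.h>
#include <errno.h>
#include <pthread.h>
#include <stdint.h>

static pthread_mutex_t cfg_lock = PTHREAD_MUTEX_INITIALIZER;

/* Reject bad arguments up front; only valid requests ever touch the
 * lock, so there is exactly one unlock site. */
static int insert_region(uint8_t start, uint8_t end, uint64_t addr)
{
	if (start > end || !addr)
		return -EINVAL;

	pthread_mutex_lock(&cfg_lock);
	/* ... real code would allocate and link the region here ... */
	pthread_mutex_unlock(&cfg_lock);
	return 0;
}
```

The design point is purely about error-path simplicity: checks that need no shared state belong before the critical section.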
@@ -23,11 +23,11 @@
 #include <linux/ioport.h>
 #include <linux/init.h>
 #include <linux/dmi.h>
+#include <linux/acpi.h>
+#include <linux/io.h>
+#include <linux/smp.h>
 
-#include <asm/acpi.h>
 #include <asm/segment.h>
-#include <asm/io.h>
-#include <asm/smp.h>
 #include <asm/pci_x86.h>
 #include <asm/hw_irq.h>
 #include <asm/io_apic.h>
@@ -43,7 +43,7 @@
 #define PCI_FIXED_BAR_4_SIZE	0x14
 #define PCI_FIXED_BAR_5_SIZE	0x1c
 
-static int pci_soc_mode = 0;
+static int pci_soc_mode;
 
 /**
  * fixed_bar_cap - return the offset of the fixed BAR cap if found
@@ -141,7 +141,8 @@ static int pci_device_update_fixed(struct pci_bus *bus, unsigned int devfn,
  */
 static bool type1_access_ok(unsigned int bus, unsigned int devfn, int reg)
 {
-	/* This is a workaround for A0 LNC bug where PCI status register does
+	/*
+	 * This is a workaround for A0 LNC bug where PCI status register does
 	 * not have new CAP bit set. can not be written by SW either.
 	 *
 	 * PCI header type in real LNC indicates a single function device, this
@@ -154,7 +155,7 @@ static bool type1_access_ok(unsigned int bus, unsigned int devfn, int reg)
 			|| devfn == PCI_DEVFN(0, 0)
 			|| devfn == PCI_DEVFN(3, 0)))
 		return 1;
-	return 0; /* langwell on others */
+	return 0; /* Langwell on others */
 }
 
 static int pci_read(struct pci_bus *bus, unsigned int devfn, int where,
@@ -172,7 +173,8 @@ static int pci_write(struct pci_bus *bus, unsigned int devfn, int where,
 {
 	int offset;
 
-	/* On MRST, there is no PCI ROM BAR, this will cause a subsequent read
+	/*
+	 * On MRST, there is no PCI ROM BAR, this will cause a subsequent read
 	 * to ROM BAR return 0 then being ignored.
 	 */
 	if (where == PCI_ROM_ADDRESS)
@@ -210,7 +212,8 @@ static int mrst_pci_irq_enable(struct pci_dev *dev)
 
 	pci_read_config_byte(dev, PCI_INTERRUPT_PIN, &pin);
 
-	/* MRST only have IOAPIC, the PCI irq lines are 1:1 mapped to
+	/*
+	 * MRST only have IOAPIC, the PCI irq lines are 1:1 mapped to
 	 * IOAPIC RTE entries, so we just enable RTE for the device.
 	 */
 	irq_attr.ioapic = mp_find_ioapic(dev->irq);
@@ -235,7 +238,7 @@ struct pci_ops pci_mrst_ops = {
  */
 int __init pci_mrst_init(void)
 {
-	printk(KERN_INFO "Intel MID platform detected, using MID PCI ops\n");
+	pr_info("Intel MID platform detected, using MID PCI ops\n");
 	pci_mmcfg_late_init();
 	pcibios_enable_irq = mrst_pci_irq_enable;
 	pci_root_ops = pci_mrst_ops;
@@ -244,17 +247,21 @@ int __init pci_mrst_init(void)
 	return 1;
 }
 
-/* Langwell devices are not true pci devices, they are not subject to 10 ms
- * d3 to d0 delay required by pci spec.
+/*
+ * Langwell devices are not true PCI devices; they are not subject to 10 ms
+ * d3 to d0 delay required by PCI spec.
  */
 static void pci_d3delay_fixup(struct pci_dev *dev)
 {
-	/* PCI fixups are effectively decided compile time. If we have a dual
-	   SoC/non-SoC kernel we don't want to mangle d3 on non SoC devices */
-	if (!pci_soc_mode)
-		return;
-	/* true pci devices in lincroft should allow type 1 access, the rest
-	 * are langwell fake pci devices.
+	/*
+	 * PCI fixups are effectively decided compile time. If we have a dual
+	 * SoC/non-SoC kernel we don't want to mangle d3 on non-SoC devices.
+	 */
+	if (!pci_soc_mode)
+		return;
+	/*
+	 * True PCI devices in Lincroft should allow type 1 access, the rest
+	 * are Langwell fake PCI devices.
 	 */
 	if (type1_access_ok(dev->bus->number, dev->devfn, PCI_DEVICE_ID))
 		return;
@@ -378,6 +378,7 @@ static int acpi_pci_root_add(struct acpi_device *device,
 	struct acpi_pci_root *root;
 	u32 flags, base_flags;
 	acpi_handle handle = device->handle;
+	bool no_aspm = false, clear_aspm = false;
 
 	root = kzalloc(sizeof(struct acpi_pci_root), GFP_KERNEL);
 	if (!root)
@@ -437,27 +438,6 @@ static int acpi_pci_root_add(struct acpi_device *device,
 	flags = base_flags = OSC_PCI_SEGMENT_GROUPS_SUPPORT;
 	acpi_pci_osc_support(root, flags);
-
-	/*
-	 * TBD: Need PCI interface for enumeration/configuration of roots.
-	 */
-
-	/*
-	 * Scan the Root Bridge
-	 * --------------------
-	 * Must do this prior to any attempt to bind the root device, as the
-	 * PCI namespace does not get created until this call is made (and
-	 * thus the root bridge's pci_dev does not exist).
-	 */
-	root->bus = pci_acpi_scan_root(root);
-	if (!root->bus) {
-		dev_err(&device->dev,
-			"Bus %04x:%02x not present in PCI namespace\n",
-			root->segment, (unsigned int)root->secondary.start);
-		result = -ENODEV;
-		goto end;
-	}
-
 	/* Indicate support for various _OSC capabilities. */
 	if (pci_ext_cfg_avail())
 		flags |= OSC_EXT_PCI_CONFIG_SUPPORT;
 	if (pcie_aspm_support_enabled()) {
@@ -471,7 +451,7 @@ static int acpi_pci_root_add(struct acpi_device *device,
 		if (ACPI_FAILURE(status)) {
 			dev_info(&device->dev, "ACPI _OSC support "
 				"notification failed, disabling PCIe ASPM\n");
-			pcie_no_aspm();
+			no_aspm = true;
 			flags = base_flags;
 		}
 	}
@@ -503,7 +483,7 @@ static int acpi_pci_root_add(struct acpi_device *device,
 				 * We have ASPM control, but the FADT indicates
 				 * that it's unsupported. Clear it.
 				 */
-				pcie_clear_aspm(root->bus);
+				clear_aspm = true;
 			}
 		} else {
 			dev_info(&device->dev,
@@ -512,7 +492,14 @@ static int acpi_pci_root_add(struct acpi_device *device,
 				acpi_format_exception(status), flags);
 			dev_info(&device->dev,
 				 "ACPI _OSC control for PCIe not granted, disabling ASPM\n");
-			pcie_no_aspm();
+			/*
+			 * We want to disable ASPM here, but aspm_disabled
+			 * needs to remain in its state from boot so that we
+			 * properly handle PCIe 1.1 devices. So we set this
+			 * flag here, to defer the action until after the ACPI
+			 * root scan.
+			 */
+			no_aspm = true;
 		}
 	} else {
 		dev_info(&device->dev,
@@ -520,16 +507,40 @@ static int acpi_pci_root_add(struct acpi_device *device,
 			 "(_OSC support mask: 0x%02x)\n", flags);
 	}
 
+	/*
+	 * TBD: Need PCI interface for enumeration/configuration of roots.
+	 */
+
+	/*
+	 * Scan the Root Bridge
+	 * --------------------
+	 * Must do this prior to any attempt to bind the root device, as the
+	 * PCI namespace does not get created until this call is made (and
+	 * thus the root bridge's pci_dev does not exist).
+	 */
+	root->bus = pci_acpi_scan_root(root);
+	if (!root->bus) {
+		dev_err(&device->dev,
+			"Bus %04x:%02x not present in PCI namespace\n",
+			root->segment, (unsigned int)root->secondary.start);
+		result = -ENODEV;
+		goto end;
+	}
+
+	if (clear_aspm) {
+		dev_info(&device->dev, "Disabling ASPM (FADT indicates it is unsupported)\n");
+		pcie_clear_aspm(root->bus);
+	}
+	if (no_aspm)
+		pcie_no_aspm();
+
 	pci_acpi_add_bus_pm_notifier(device, root->bus);
 	if (device->wakeup.flags.run_wake)
 		device_set_run_wake(root->bus->bridge, true);
 
 	if (system_state != SYSTEM_BOOTING) {
 		pcibios_resource_survey_bus(root->bus);
-		pci_assign_unassigned_bus_resources(root->bus);
-
-		/* need to after hot-added ioapic is registered */
-		pci_enable_bridges(root->bus);
+		pci_assign_unassigned_root_bus_resources(root->bus);
 	}
 
 	pci_bus_add_devices(root->bus);
|
@@ -44,7 +44,7 @@ static int rts5227_extra_init_hw(struct rtsx_pcr *pcr)
 	rtsx_pci_add_cmd(pcr, WRITE_REG_CMD, OLT_LED_CTL, 0x0F, 0x02);
 	/* Configure LTR */
 	pcie_capability_read_word(pcr->pci, PCI_EXP_DEVCTL2, &cap);
-	if (cap & PCI_EXP_LTR_EN)
+	if (cap & PCI_EXP_DEVCTL2_LTR_EN)
 		rtsx_pci_add_cmd(pcr, WRITE_REG_CMD, LTR_CTL, 0xFF, 0xA3);
 	/* Configure OBFF */
 	rtsx_pci_add_cmd(pcr, WRITE_REG_CMD, OBFF_CFG, 0x03, 0x03);
@@ -9960,8 +9960,6 @@ static int bnx2x_prev_mark_path(struct bnx2x *bp, bool after_undi)
 
 static int bnx2x_do_flr(struct bnx2x *bp)
 {
-	int i;
-	u16 status;
 	struct pci_dev *dev = bp->pdev;
 
 	if (CHIP_IS_E1x(bp)) {
@@ -9976,20 +9974,8 @@ static int bnx2x_do_flr(struct bnx2x *bp)
 		return -EINVAL;
 	}
 
-	/* Wait for Transaction Pending bit clean */
-	for (i = 0; i < 4; i++) {
-		if (i)
-			msleep((1 << (i - 1)) * 100);
-
-		pcie_capability_read_word(dev, PCI_EXP_DEVSTA, &status);
-		if (!(status & PCI_EXP_DEVSTA_TRPND))
-			goto clear;
-	}
-
-	dev_err(&dev->dev,
-		"transaction is not cleared; proceeding with reset anyway\n");
-
-clear:
+	if (!pci_wait_for_pending_transaction(dev))
+		dev_err(&dev->dev, "transaction is not cleared; proceeding with reset anyway\n");
 
 	BNX2X_DEV_INFO("Initiating FLR\n");
 	bnx2x_fw_command(bp, DRV_MSG_CODE_INITIATE_FLR, 0);
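The bnx2x hunk above replaces an open-coded wait loop with the new pci_wait_for_pending_transaction() helper. A rough standalone sketch of the loop being factored out: poll a "transaction pending" status up to four times with exponential backoff (100, 200, 400 ms). The integer-based device model and accumulated-sleep counter here are test stand-ins, not the kernel's PCI_EXP_DEVSTA access:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * *polls_until_clear simulates how many status reads will still report
 * the pending bit; *slept_ms accumulates the would-be backoff delays
 * instead of actually sleeping.
 */
static bool wait_for_pending_transaction(int *polls_until_clear, int *slept_ms)
{
	int i;

	for (i = 0; i < 4; i++) {
		if (i)
			*slept_ms += (1 << (i - 1)) * 100;	/* 100, 200, 400 */

		if (*polls_until_clear <= 0)
			return true;	/* pending bit reads clear */
		(*polls_until_clear)--;	/* still pending on this read */
	}
	return false;	/* still pending: the caller may reset anyway */
}
```

Centralizing this in the PCI core means each driver keeps only the one-line failure report, as the diff shows.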
@@ -1590,7 +1590,6 @@ lba_driver_probe(struct parisc_device *dev)
 		lba_dump_res(&lba_dev->hba.lmmio_space, 2);
 #endif
 	}
-	pci_enable_bridges(lba_bus);
 
 	/*
 	** Once PCI register ops has walked the bus, access to config
@@ -475,37 +475,33 @@ static inline int pcie_cap_version(const struct pci_dev *dev)
 	return pcie_caps_reg(dev) & PCI_EXP_FLAGS_VERS;
 }
 
-static inline bool pcie_cap_has_devctl(const struct pci_dev *dev)
-{
-	return true;
-}
-
 static inline bool pcie_cap_has_lnkctl(const struct pci_dev *dev)
 {
 	int type = pci_pcie_type(dev);
 
-	return pcie_cap_version(dev) > 1 ||
+	return type == PCI_EXP_TYPE_ENDPOINT ||
+	       type == PCI_EXP_TYPE_LEG_END ||
 	       type == PCI_EXP_TYPE_ROOT_PORT ||
-	       type == PCI_EXP_TYPE_ENDPOINT ||
-	       type == PCI_EXP_TYPE_LEG_END;
+	       type == PCI_EXP_TYPE_UPSTREAM ||
+	       type == PCI_EXP_TYPE_DOWNSTREAM ||
+	       type == PCI_EXP_TYPE_PCI_BRIDGE ||
+	       type == PCI_EXP_TYPE_PCIE_BRIDGE;
 }
 
 static inline bool pcie_cap_has_sltctl(const struct pci_dev *dev)
 {
 	int type = pci_pcie_type(dev);
 
-	return pcie_cap_version(dev) > 1 ||
-	       type == PCI_EXP_TYPE_ROOT_PORT ||
-	       (type == PCI_EXP_TYPE_DOWNSTREAM &&
-		pcie_caps_reg(dev) & PCI_EXP_FLAGS_SLOT);
+	return (type == PCI_EXP_TYPE_ROOT_PORT ||
+		type == PCI_EXP_TYPE_DOWNSTREAM) &&
+	       pcie_caps_reg(dev) & PCI_EXP_FLAGS_SLOT;
 }
 
 static inline bool pcie_cap_has_rtctl(const struct pci_dev *dev)
 {
 	int type = pci_pcie_type(dev);
 
-	return pcie_cap_version(dev) > 1 ||
-	       type == PCI_EXP_TYPE_ROOT_PORT ||
+	return type == PCI_EXP_TYPE_ROOT_PORT ||
 	       type == PCI_EXP_TYPE_RC_EC;
 }
 
@@ -520,7 +516,7 @@ static bool pcie_capability_reg_implemented(struct pci_dev *dev, int pos)
 	case PCI_EXP_DEVCAP:
 	case PCI_EXP_DEVCTL:
 	case PCI_EXP_DEVSTA:
-		return pcie_cap_has_devctl(dev);
+		return true;
 	case PCI_EXP_LNKCAP:
 	case PCI_EXP_LNKCTL:
 	case PCI_EXP_LNKSTA:
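After this change, whether the Slot and Root registers apply depends only on the port type (plus, for slots, the slot-implemented flag); the capability-version test is gone. A standalone sketch of the resulting predicates, using local constants that mirror the PCI_EXP_TYPE_* encodings (has_sltctl()/has_rtctl() are illustrative names, not the kernel's static inlines):

```c
#include <assert.h>
#include <stdbool.h>

/* Port/device type codes, mirroring the PCI_EXP_TYPE_* values. */
enum {
	TYPE_ENDPOINT	= 0x0,
	TYPE_LEG_END	= 0x1,
	TYPE_ROOT_PORT	= 0x4,
	TYPE_UPSTREAM	= 0x5,
	TYPE_DOWNSTREAM	= 0x6,
	TYPE_RC_EC	= 0xa,
};

/* Slot registers: only ports that can have a slot, and only when the
 * Slot Implemented flag is set. */
static bool has_sltctl(int type, bool slot_implemented)
{
	return (type == TYPE_ROOT_PORT || type == TYPE_DOWNSTREAM) &&
	       slot_implemented;
}

/* Root registers: Root Ports and Root Complex Event Collectors. */
static bool has_rtctl(int type)
{
	return type == TYPE_ROOT_PORT || type == TYPE_RC_EC;
}
```

Keying on the port type alone avoids trusting the capability version field, which some v1 devices filled in while still implementing the registers.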
@@ -216,24 +216,6 @@ void pci_bus_add_devices(const struct pci_bus *bus)
 	}
 }
 
-void pci_enable_bridges(struct pci_bus *bus)
-{
-	struct pci_dev *dev;
-	int retval;
-
-	list_for_each_entry(dev, &bus->devices, bus_list) {
-		if (dev->subordinate) {
-			if (!pci_is_enabled(dev)) {
-				retval = pci_enable_device(dev);
-				if (retval)
-					dev_err(&dev->dev, "Error enabling bridge (%d), continuing\n", retval);
-				pci_set_master(dev);
-			}
-			pci_enable_bridges(dev->subordinate);
-		}
-	}
-}
-
 /** pci_walk_bus - walk devices on/under bus, calling callback.
  *  @top      bus whose devices should be walked
  *  @cb       callback to be called for each device found
@@ -301,4 +283,3 @@ EXPORT_SYMBOL(pci_bus_put);
 EXPORT_SYMBOL(pci_bus_alloc_resource);
 EXPORT_SYMBOL_GPL(pci_bus_add_device);
 EXPORT_SYMBOL(pci_bus_add_devices);
-EXPORT_SYMBOL(pci_enable_bridges);
@@ -4,6 +4,7 @@ menu "PCI host controller drivers"
 config PCI_MVEBU
 	bool "Marvell EBU PCIe controller"
 	depends on ARCH_MVEBU || ARCH_KIRKWOOD
+	depends on OF
 
 config PCIE_DW
 	bool
@@ -1,2 +1,3 @@
-obj-$(CONFIG_PCI_MVEBU) += pci-mvebu.o
 obj-$(CONFIG_PCIE_DW) += pcie-designware.o
+obj-$(CONFIG_PCI_EXYNOS) += pci-exynos.o
+obj-$(CONFIG_PCI_MVEBU) += pci-mvebu.o
@ -0,0 +1,552 @@
|
|||
/*
|
||||
* PCIe host controller driver for Samsung EXYNOS SoCs
|
||||
*
|
||||
* Copyright (C) 2013 Samsung Electronics Co., Ltd.
|
||||
* http://www.samsung.com
|
||||
*
|
||||
* Author: Jingoo Han <jg1.han@samsung.com>
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2 as
|
||||
* published by the Free Software Foundation.
|
||||
*/
|
||||
|
||||
#include <linux/clk.h>
|
||||
#include <linux/delay.h>
|
||||
#include <linux/gpio.h>
|
||||
#include <linux/interrupt.h>
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/of_gpio.h>
|
||||
#include <linux/pci.h>
|
||||
#include <linux/platform_device.h>
|
||||
#include <linux/resource.h>
|
||||
#include <linux/signal.h>
|
||||
#include <linux/types.h>
|
||||
|
||||
#include "pcie-designware.h"
|
||||
|
||||
#define to_exynos_pcie(x) container_of(x, struct exynos_pcie, pp)
|
||||
|
||||
struct exynos_pcie {
|
||||
void __iomem *elbi_base;
|
||||
void __iomem *phy_base;
|
||||
void __iomem *block_base;
|
||||
int reset_gpio;
|
||||
struct clk *clk;
|
||||
struct clk *bus_clk;
|
||||
struct pcie_port pp;
|
||||
};
|
||||
|
||||
/* PCIe ELBI registers */
|
||||
#define PCIE_IRQ_PULSE 0x000
|
||||
#define IRQ_INTA_ASSERT (0x1 << 0)
|
||||
#define IRQ_INTB_ASSERT (0x1 << 2)
|
||||
#define IRQ_INTC_ASSERT (0x1 << 4)
|
||||
#define IRQ_INTD_ASSERT (0x1 << 6)
|
||||
#define PCIE_IRQ_LEVEL 0x004
|
||||
#define PCIE_IRQ_SPECIAL 0x008
|
||||
#define PCIE_IRQ_EN_PULSE 0x00c
|
||||
#define PCIE_IRQ_EN_LEVEL 0x010
|
||||
#define PCIE_IRQ_EN_SPECIAL 0x014
|
||||
#define PCIE_PWR_RESET 0x018
|
||||
#define PCIE_CORE_RESET 0x01c
|
||||
#define PCIE_CORE_RESET_ENABLE (0x1 << 0)
|
||||
#define PCIE_STICKY_RESET 0x020
|
||||
#define PCIE_NONSTICKY_RESET 0x024
|
||||
#define PCIE_APP_INIT_RESET 0x028
|
||||
#define PCIE_APP_LTSSM_ENABLE 0x02c
|
||||
#define PCIE_ELBI_RDLH_LINKUP 0x064
|
||||
#define PCIE_ELBI_LTSSM_ENABLE 0x1
|
||||
#define PCIE_ELBI_SLV_AWMISC 0x11c
|
||||
#define PCIE_ELBI_SLV_ARMISC 0x120
|
||||
#define PCIE_ELBI_SLV_DBI_ENABLE (0x1 << 21)
|
||||
|
||||
/* PCIe Purple registers */
|
||||
#define PCIE_PHY_GLOBAL_RESET 0x000
|
||||
#define PCIE_PHY_COMMON_RESET 0x004
|
||||
#define PCIE_PHY_CMN_REG 0x008
|
||||
#define PCIE_PHY_MAC_RESET 0x00c
|
||||
#define PCIE_PHY_PLL_LOCKED 0x010
|
||||
#define PCIE_PHY_TRSVREG_RESET 0x020
|
||||
#define PCIE_PHY_TRSV_RESET 0x024
|
||||
|
||||
/* PCIe PHY registers */
|
||||
#define PCIE_PHY_IMPEDANCE 0x004
|
||||
#define PCIE_PHY_PLL_DIV_0 0x008
|
||||
#define PCIE_PHY_PLL_BIAS 0x00c
|
||||
#define PCIE_PHY_DCC_FEEDBACK 0x014
|
||||
#define PCIE_PHY_PLL_DIV_1 0x05c
|
||||
#define PCIE_PHY_TRSV0_EMP_LVL 0x084
|
||||
#define PCIE_PHY_TRSV0_DRV_LVL 0x088
|
||||
#define PCIE_PHY_TRSV0_RXCDR 0x0ac
|
||||
#define PCIE_PHY_TRSV0_LVCC 0x0dc
|
||||
#define PCIE_PHY_TRSV1_EMP_LVL 0x144
|
||||
#define PCIE_PHY_TRSV1_RXCDR 0x16c
|
||||
#define PCIE_PHY_TRSV1_LVCC 0x19c
|
||||
#define PCIE_PHY_TRSV2_EMP_LVL 0x204
|
||||
#define PCIE_PHY_TRSV2_RXCDR 0x22c
|
||||
#define PCIE_PHY_TRSV2_LVCC 0x25c
|
||||
#define PCIE_PHY_TRSV3_EMP_LVL 0x2c4
|
||||
#define PCIE_PHY_TRSV3_RXCDR 0x2ec
|
||||
#define PCIE_PHY_TRSV3_LVCC 0x31c
|
||||
|
||||
static inline void exynos_elb_writel(struct exynos_pcie *pcie, u32 val, u32 reg)
|
||||
{
|
||||
writel(val, pcie->elbi_base + reg);
|
||||
}
|
||||
|
||||
static inline u32 exynos_elb_readl(struct exynos_pcie *pcie, u32 reg)
|
||||
{
|
||||
return readl(pcie->elbi_base + reg);
|
||||
}
|
||||
|
||||
static inline void exynos_phy_writel(struct exynos_pcie *pcie, u32 val, u32 reg)
|
||||
{
|
||||
writel(val, pcie->phy_base + reg);
|
||||
}
|
||||
|
||||
static inline u32 exynos_phy_readl(struct exynos_pcie *pcie, u32 reg)
|
||||
{
|
||||
return readl(pcie->phy_base + reg);
|
||||
}
|
||||
|
||||
static inline void exynos_blk_writel(struct exynos_pcie *pcie, u32 val, u32 reg)
|
||||
{
|
||||
writel(val, pcie->block_base + reg);
|
||||
}
|
||||
|
||||
static inline u32 exynos_blk_readl(struct exynos_pcie *pcie, u32 reg)
|
||||
{
|
||||
return readl(pcie->block_base + reg);
|
||||
}
|
||||
|
||||
static void exynos_pcie_sideband_dbi_w_mode(struct pcie_port *pp, bool on)
|
||||
{
|
||||
u32 val;
|
||||
struct exynos_pcie *exynos_pcie = to_exynos_pcie(pp);
|
||||
|
||||
if (on) {
|
||||
val = exynos_elb_readl(exynos_pcie, PCIE_ELBI_SLV_AWMISC);
|
||||
val |= PCIE_ELBI_SLV_DBI_ENABLE;
|
||||
exynos_elb_writel(exynos_pcie, val, PCIE_ELBI_SLV_AWMISC);
|
||||
} else {
|
||||
val = exynos_elb_readl(exynos_pcie, PCIE_ELBI_SLV_AWMISC);
|
||||
val &= ~PCIE_ELBI_SLV_DBI_ENABLE;
|
||||
exynos_elb_writel(exynos_pcie, val, PCIE_ELBI_SLV_AWMISC);
|
||||
}
|
||||
}
|
||||
|
||||
static void exynos_pcie_sideband_dbi_r_mode(struct pcie_port *pp, bool on)
|
||||
{
|
||||
u32 val;
|
||||
struct exynos_pcie *exynos_pcie = to_exynos_pcie(pp);
|
||||
|
||||
if (on) {
|
||||
val = exynos_elb_readl(exynos_pcie, PCIE_ELBI_SLV_ARMISC);
|
||||
val |= PCIE_ELBI_SLV_DBI_ENABLE;
|
||||
exynos_elb_writel(exynos_pcie, val, PCIE_ELBI_SLV_ARMISC);
|
||||
} else {
|
||||
val = exynos_elb_readl(exynos_pcie, PCIE_ELBI_SLV_ARMISC);
|
||||
val &= ~PCIE_ELBI_SLV_DBI_ENABLE;
|
||||
exynos_elb_writel(exynos_pcie, val, PCIE_ELBI_SLV_ARMISC);
|
||||
}
|
||||
}
|
||||
|
||||
static void exynos_pcie_assert_core_reset(struct pcie_port *pp)
|
||||
{
|
||||
u32 val;
|
||||
struct exynos_pcie *exynos_pcie = to_exynos_pcie(pp);
|
||||
|
||||
val = exynos_elb_readl(exynos_pcie, PCIE_CORE_RESET);
|
||||
val &= ~PCIE_CORE_RESET_ENABLE;
|
||||
exynos_elb_writel(exynos_pcie, val, PCIE_CORE_RESET);
|
||||
exynos_elb_writel(exynos_pcie, 0, PCIE_PWR_RESET);
|
||||
exynos_elb_writel(exynos_pcie, 0, PCIE_STICKY_RESET);
|
||||
exynos_elb_writel(exynos_pcie, 0, PCIE_NONSTICKY_RESET);
|
||||
}
|
||||
|
||||
static void exynos_pcie_deassert_core_reset(struct pcie_port *pp)
|
||||
{
|
||||
u32 val;
|
||||
struct exynos_pcie *exynos_pcie = to_exynos_pcie(pp);
|
||||
|
||||
val = exynos_elb_readl(exynos_pcie, PCIE_CORE_RESET);
|
||||
val |= PCIE_CORE_RESET_ENABLE;
|
||||
|
||||
exynos_elb_writel(exynos_pcie, val, PCIE_CORE_RESET);
|
||||
exynos_elb_writel(exynos_pcie, 1, PCIE_STICKY_RESET);
|
||||
exynos_elb_writel(exynos_pcie, 1, PCIE_NONSTICKY_RESET);
|
||||
exynos_elb_writel(exynos_pcie, 1, PCIE_APP_INIT_RESET);
|
||||
exynos_elb_writel(exynos_pcie, 0, PCIE_APP_INIT_RESET);
|
||||
exynos_blk_writel(exynos_pcie, 1, PCIE_PHY_MAC_RESET);
|
||||
}
|
||||
|
||||
static void exynos_pcie_assert_phy_reset(struct pcie_port *pp)
|
||||
{
|
||||
struct exynos_pcie *exynos_pcie = to_exynos_pcie(pp);
|
||||
|
||||
exynos_blk_writel(exynos_pcie, 0, PCIE_PHY_MAC_RESET);
|
||||
exynos_blk_writel(exynos_pcie, 1, PCIE_PHY_GLOBAL_RESET);
|
||||
}
|
||||
|
||||
static void exynos_pcie_deassert_phy_reset(struct pcie_port *pp)
|
||||
{
|
||||
struct exynos_pcie *exynos_pcie = to_exynos_pcie(pp);
|
||||
|
||||
exynos_blk_writel(exynos_pcie, 0, PCIE_PHY_GLOBAL_RESET);
|
||||
	exynos_elb_writel(exynos_pcie, 1, PCIE_PWR_RESET);
	exynos_blk_writel(exynos_pcie, 0, PCIE_PHY_COMMON_RESET);
	exynos_blk_writel(exynos_pcie, 0, PCIE_PHY_CMN_REG);
	exynos_blk_writel(exynos_pcie, 0, PCIE_PHY_TRSVREG_RESET);
	exynos_blk_writel(exynos_pcie, 0, PCIE_PHY_TRSV_RESET);
}

static void exynos_pcie_init_phy(struct pcie_port *pp)
{
	struct exynos_pcie *exynos_pcie = to_exynos_pcie(pp);

	/* DCC feedback control off */
	exynos_phy_writel(exynos_pcie, 0x29, PCIE_PHY_DCC_FEEDBACK);

	/* set TX/RX impedance */
	exynos_phy_writel(exynos_pcie, 0xd5, PCIE_PHY_IMPEDANCE);

	/* set 50Mhz PHY clock */
	exynos_phy_writel(exynos_pcie, 0x14, PCIE_PHY_PLL_DIV_0);
	exynos_phy_writel(exynos_pcie, 0x12, PCIE_PHY_PLL_DIV_1);

	/* set TX Differential output for lane 0 */
	exynos_phy_writel(exynos_pcie, 0x7f, PCIE_PHY_TRSV0_DRV_LVL);

	/* set TX Pre-emphasis Level Control for lane 0 to minimum */
	exynos_phy_writel(exynos_pcie, 0x0, PCIE_PHY_TRSV0_EMP_LVL);

	/* set RX clock and data recovery bandwidth */
	exynos_phy_writel(exynos_pcie, 0xe7, PCIE_PHY_PLL_BIAS);
	exynos_phy_writel(exynos_pcie, 0x82, PCIE_PHY_TRSV0_RXCDR);
	exynos_phy_writel(exynos_pcie, 0x82, PCIE_PHY_TRSV1_RXCDR);
	exynos_phy_writel(exynos_pcie, 0x82, PCIE_PHY_TRSV2_RXCDR);
	exynos_phy_writel(exynos_pcie, 0x82, PCIE_PHY_TRSV3_RXCDR);

	/* change TX Pre-emphasis Level Control for lanes */
	exynos_phy_writel(exynos_pcie, 0x39, PCIE_PHY_TRSV0_EMP_LVL);
	exynos_phy_writel(exynos_pcie, 0x39, PCIE_PHY_TRSV1_EMP_LVL);
	exynos_phy_writel(exynos_pcie, 0x39, PCIE_PHY_TRSV2_EMP_LVL);
	exynos_phy_writel(exynos_pcie, 0x39, PCIE_PHY_TRSV3_EMP_LVL);

	/* set LVCC */
	exynos_phy_writel(exynos_pcie, 0x20, PCIE_PHY_TRSV0_LVCC);
	exynos_phy_writel(exynos_pcie, 0xa0, PCIE_PHY_TRSV1_LVCC);
	exynos_phy_writel(exynos_pcie, 0xa0, PCIE_PHY_TRSV2_LVCC);
	exynos_phy_writel(exynos_pcie, 0xa0, PCIE_PHY_TRSV3_LVCC);
}

static void exynos_pcie_assert_reset(struct pcie_port *pp)
{
	struct exynos_pcie *exynos_pcie = to_exynos_pcie(pp);

	if (exynos_pcie->reset_gpio >= 0)
		devm_gpio_request_one(pp->dev, exynos_pcie->reset_gpio,
				      GPIOF_OUT_INIT_HIGH, "RESET");
}

static int exynos_pcie_establish_link(struct pcie_port *pp)
{
	u32 val;
	int count = 0;
	struct exynos_pcie *exynos_pcie = to_exynos_pcie(pp);

	if (dw_pcie_link_up(pp)) {
		dev_err(pp->dev, "Link already up\n");
		return 0;
	}

	/* assert reset signals */
	exynos_pcie_assert_core_reset(pp);
	exynos_pcie_assert_phy_reset(pp);

	/* de-assert phy reset */
	exynos_pcie_deassert_phy_reset(pp);

	/* initialize phy */
	exynos_pcie_init_phy(pp);

	/* pulse for common reset */
	exynos_blk_writel(exynos_pcie, 1, PCIE_PHY_COMMON_RESET);
	udelay(500);
	exynos_blk_writel(exynos_pcie, 0, PCIE_PHY_COMMON_RESET);

	/* de-assert core reset */
	exynos_pcie_deassert_core_reset(pp);

	/* setup root complex */
	dw_pcie_setup_rc(pp);

	/* assert reset signal */
	exynos_pcie_assert_reset(pp);

	/* assert LTSSM enable */
	exynos_elb_writel(exynos_pcie, PCIE_ELBI_LTSSM_ENABLE,
			  PCIE_APP_LTSSM_ENABLE);

	/* check if the link is up or not */
	while (!dw_pcie_link_up(pp)) {
		mdelay(100);
		count++;
		if (count == 10) {
			while (exynos_phy_readl(exynos_pcie,
						PCIE_PHY_PLL_LOCKED) == 0) {
				val = exynos_blk_readl(exynos_pcie,
						       PCIE_PHY_PLL_LOCKED);
				dev_info(pp->dev, "PLL Locked: 0x%x\n", val);
			}
			dev_err(pp->dev, "PCIe Link Fail\n");
			return -EINVAL;
		}
	}

	dev_info(pp->dev, "Link up\n");

	return 0;
}

static void exynos_pcie_clear_irq_pulse(struct pcie_port *pp)
{
	u32 val;
	struct exynos_pcie *exynos_pcie = to_exynos_pcie(pp);

	val = exynos_elb_readl(exynos_pcie, PCIE_IRQ_PULSE);
	exynos_elb_writel(exynos_pcie, val, PCIE_IRQ_PULSE);
}

static void exynos_pcie_enable_irq_pulse(struct pcie_port *pp)
{
	u32 val;
	struct exynos_pcie *exynos_pcie = to_exynos_pcie(pp);

	/* enable INTX interrupt */
	val = IRQ_INTA_ASSERT | IRQ_INTB_ASSERT |
		IRQ_INTC_ASSERT | IRQ_INTD_ASSERT;
	exynos_elb_writel(exynos_pcie, val, PCIE_IRQ_EN_PULSE);
}

static irqreturn_t exynos_pcie_irq_handler(int irq, void *arg)
{
	struct pcie_port *pp = arg;

	exynos_pcie_clear_irq_pulse(pp);
	return IRQ_HANDLED;
}

static void exynos_pcie_enable_interrupts(struct pcie_port *pp)
{
	exynos_pcie_enable_irq_pulse(pp);
}

static inline void exynos_pcie_readl_rc(struct pcie_port *pp,
					void __iomem *dbi_base, u32 *val)
{
	exynos_pcie_sideband_dbi_r_mode(pp, true);
	*val = readl(dbi_base);
	exynos_pcie_sideband_dbi_r_mode(pp, false);
}

static inline void exynos_pcie_writel_rc(struct pcie_port *pp,
					 u32 val, void __iomem *dbi_base)
{
	exynos_pcie_sideband_dbi_w_mode(pp, true);
	writel(val, dbi_base);
	exynos_pcie_sideband_dbi_w_mode(pp, false);
}

static int exynos_pcie_rd_own_conf(struct pcie_port *pp, int where, int size,
				   u32 *val)
{
	int ret;

	exynos_pcie_sideband_dbi_r_mode(pp, true);
	ret = cfg_read(pp->dbi_base + (where & ~0x3), where, size, val);
	exynos_pcie_sideband_dbi_r_mode(pp, false);
	return ret;
}

static int exynos_pcie_wr_own_conf(struct pcie_port *pp, int where, int size,
				   u32 val)
{
	int ret;

	exynos_pcie_sideband_dbi_w_mode(pp, true);
	ret = cfg_write(pp->dbi_base + (where & ~0x3), where, size, val);
	exynos_pcie_sideband_dbi_w_mode(pp, false);
	return ret;
}

static int exynos_pcie_link_up(struct pcie_port *pp)
{
	struct exynos_pcie *exynos_pcie = to_exynos_pcie(pp);
	u32 val = exynos_elb_readl(exynos_pcie, PCIE_ELBI_RDLH_LINKUP);

	if (val == PCIE_ELBI_LTSSM_ENABLE)
		return 1;

	return 0;
}

static void exynos_pcie_host_init(struct pcie_port *pp)
{
	exynos_pcie_establish_link(pp);
	exynos_pcie_enable_interrupts(pp);
}

static struct pcie_host_ops exynos_pcie_host_ops = {
	.readl_rc = exynos_pcie_readl_rc,
	.writel_rc = exynos_pcie_writel_rc,
	.rd_own_conf = exynos_pcie_rd_own_conf,
	.wr_own_conf = exynos_pcie_wr_own_conf,
	.link_up = exynos_pcie_link_up,
	.host_init = exynos_pcie_host_init,
};

static int add_pcie_port(struct pcie_port *pp, struct platform_device *pdev)
{
	int ret;

	/* platform_get_irq() returns a negative errno on failure */
	pp->irq = platform_get_irq(pdev, 1);
	if (pp->irq < 0) {
		dev_err(&pdev->dev, "failed to get irq\n");
		return pp->irq;
	}
	ret = devm_request_irq(&pdev->dev, pp->irq, exynos_pcie_irq_handler,
			       IRQF_SHARED, "exynos-pcie", pp);
	if (ret) {
		dev_err(&pdev->dev, "failed to request irq\n");
		return ret;
	}

	pp->root_bus_nr = -1;
	pp->ops = &exynos_pcie_host_ops;

	spin_lock_init(&pp->conf_lock);
	ret = dw_pcie_host_init(pp);
	if (ret) {
		dev_err(&pdev->dev, "failed to initialize host\n");
		return ret;
	}

	return 0;
}

static int __init exynos_pcie_probe(struct platform_device *pdev)
{
	struct exynos_pcie *exynos_pcie;
	struct pcie_port *pp;
	struct device_node *np = pdev->dev.of_node;
	struct resource *elbi_base;
	struct resource *phy_base;
	struct resource *block_base;
	int ret;

	exynos_pcie = devm_kzalloc(&pdev->dev, sizeof(*exynos_pcie),
				   GFP_KERNEL);
	if (!exynos_pcie) {
		dev_err(&pdev->dev, "no memory for exynos pcie\n");
		return -ENOMEM;
	}

	pp = &exynos_pcie->pp;

	pp->dev = &pdev->dev;

	exynos_pcie->reset_gpio = of_get_named_gpio(np, "reset-gpio", 0);

	exynos_pcie->clk = devm_clk_get(&pdev->dev, "pcie");
	if (IS_ERR(exynos_pcie->clk)) {
		dev_err(&pdev->dev, "Failed to get pcie rc clock\n");
		return PTR_ERR(exynos_pcie->clk);
	}
	ret = clk_prepare_enable(exynos_pcie->clk);
	if (ret)
		return ret;

	exynos_pcie->bus_clk = devm_clk_get(&pdev->dev, "pcie_bus");
	if (IS_ERR(exynos_pcie->bus_clk)) {
		dev_err(&pdev->dev, "Failed to get pcie bus clock\n");
		ret = PTR_ERR(exynos_pcie->bus_clk);
		goto fail_clk;
	}
	ret = clk_prepare_enable(exynos_pcie->bus_clk);
	if (ret)
		goto fail_clk;

	/* disable the clocks again if any of the register maps fail */
	elbi_base = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	exynos_pcie->elbi_base = devm_ioremap_resource(&pdev->dev, elbi_base);
	if (IS_ERR(exynos_pcie->elbi_base)) {
		ret = PTR_ERR(exynos_pcie->elbi_base);
		goto fail_bus_clk;
	}

	phy_base = platform_get_resource(pdev, IORESOURCE_MEM, 1);
	exynos_pcie->phy_base = devm_ioremap_resource(&pdev->dev, phy_base);
	if (IS_ERR(exynos_pcie->phy_base)) {
		ret = PTR_ERR(exynos_pcie->phy_base);
		goto fail_bus_clk;
	}

	block_base = platform_get_resource(pdev, IORESOURCE_MEM, 2);
	exynos_pcie->block_base = devm_ioremap_resource(&pdev->dev, block_base);
	if (IS_ERR(exynos_pcie->block_base)) {
		ret = PTR_ERR(exynos_pcie->block_base);
		goto fail_bus_clk;
	}

	ret = add_pcie_port(pp, pdev);
	if (ret < 0)
		goto fail_bus_clk;

	platform_set_drvdata(pdev, exynos_pcie);
	return 0;

fail_bus_clk:
	clk_disable_unprepare(exynos_pcie->bus_clk);
fail_clk:
	clk_disable_unprepare(exynos_pcie->clk);
	return ret;
}

static int __exit exynos_pcie_remove(struct platform_device *pdev)
{
	struct exynos_pcie *exynos_pcie = platform_get_drvdata(pdev);

	clk_disable_unprepare(exynos_pcie->bus_clk);
	clk_disable_unprepare(exynos_pcie->clk);

	return 0;
}

static const struct of_device_id exynos_pcie_of_match[] = {
	{ .compatible = "samsung,exynos5440-pcie", },
	{},
};
MODULE_DEVICE_TABLE(of, exynos_pcie_of_match);

static struct platform_driver exynos_pcie_driver = {
	.remove		= __exit_p(exynos_pcie_remove),
	.driver = {
		.name	= "exynos-pcie",
		.owner	= THIS_MODULE,
		.of_match_table = of_match_ptr(exynos_pcie_of_match),
	},
};

/* Exynos PCIe driver does not allow module unload */

static int __init pcie_init(void)
{
	return platform_driver_probe(&exynos_pcie_driver, exynos_pcie_probe);
}
subsys_initcall(pcie_init);

MODULE_AUTHOR("Jingoo Han <jg1.han@samsung.com>");
MODULE_DESCRIPTION("Samsung PCIe host controller driver");
MODULE_LICENSE("GPL v2");

@@ -725,9 +725,9 @@ mvebu_pcie_map_registers(struct platform_device *pdev,
 
 	ret = of_address_to_resource(np, 0, &regs);
 	if (ret)
-		return NULL;
+		return ERR_PTR(ret);
 
-	return devm_request_and_ioremap(&pdev->dev, &regs);
+	return devm_ioremap_resource(&pdev->dev, &regs);
 }
 
 static int __init mvebu_pcie_probe(struct platform_device *pdev)

@@ -817,9 +817,10 @@ static int __init mvebu_pcie_probe(struct platform_device *pdev)
 			continue;
 
 		port->base = mvebu_pcie_map_registers(pdev, child, port);
-		if (!port->base) {
+		if (IS_ERR(port->base)) {
 			dev_err(&pdev->dev, "PCIe%d.%d: cannot map registers\n",
 				port->port, port->lane);
+			port->base = NULL;
 			continue;
 		}

File diff suppressed because it is too large
@@ -0,0 +1,65 @@
/*
 * Synopsys Designware PCIe host controller driver
 *
 * Copyright (C) 2013 Samsung Electronics Co., Ltd.
 *		http://www.samsung.com
 *
 * Author: Jingoo Han <jg1.han@samsung.com>
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 */

struct pcie_port_info {
	u32		cfg0_size;
	u32		cfg1_size;
	u32		io_size;
	u32		mem_size;
	phys_addr_t	io_bus_addr;
	phys_addr_t	mem_bus_addr;
};

struct pcie_port {
	struct device		*dev;
	u8			root_bus_nr;
	void __iomem		*dbi_base;
	u64			cfg0_base;
	void __iomem		*va_cfg0_base;
	u64			cfg1_base;
	void __iomem		*va_cfg1_base;
	u64			io_base;
	u64			mem_base;
	spinlock_t		conf_lock;
	struct resource		cfg;
	struct resource		io;
	struct resource		mem;
	struct pcie_port_info	config;
	int			irq;
	u32			lanes;
	struct pcie_host_ops	*ops;
};

struct pcie_host_ops {
	void (*readl_rc)(struct pcie_port *pp,
			void __iomem *dbi_base, u32 *val);
	void (*writel_rc)(struct pcie_port *pp,
			u32 val, void __iomem *dbi_base);
	int (*rd_own_conf)(struct pcie_port *pp, int where, int size, u32 *val);
	int (*wr_own_conf)(struct pcie_port *pp, int where, int size, u32 val);
	int (*link_up)(struct pcie_port *pp);
	void (*host_init)(struct pcie_port *pp);
};

extern unsigned long global_io_offset;

int cfg_read(void __iomem *addr, int where, int size, u32 *val);
int cfg_write(void __iomem *addr, int where, int size, u32 val);
int dw_pcie_wr_own_conf(struct pcie_port *pp, int where, int size, u32 val);
int dw_pcie_rd_own_conf(struct pcie_port *pp, int where, int size, u32 *val);
int dw_pcie_link_up(struct pcie_port *pp);
void dw_pcie_setup_rc(struct pcie_port *pp);
int dw_pcie_host_init(struct pcie_port *pp);
int dw_pcie_setup(int nr, struct pci_sys_data *sys);
struct pci_bus *dw_pcie_scan_bus(int nr, struct pci_sys_data *sys);
int dw_pcie_map_irq(const struct pci_dev *dev, u8 slot, u8 pin);

@@ -572,7 +572,6 @@ static void __ref enable_slot(struct acpiphp_slot *slot)
 	acpiphp_sanitize_bus(bus);
 	acpiphp_set_hpp_values(bus);
 	acpiphp_set_acpi_region(slot);
-	pci_enable_bridges(bus);
 
 	list_for_each_entry(dev, &bus->devices, bus_list) {
 		/* Assume that newly added devices are powered on already. */

@@ -155,6 +155,7 @@ void pciehp_green_led_off(struct slot *slot);
 void pciehp_green_led_blink(struct slot *slot);
 int pciehp_check_link_status(struct controller *ctrl);
 void pciehp_release_ctrl(struct controller *ctrl);
+int pciehp_reset_slot(struct slot *slot, int probe);
 
 static inline const char *slot_name(struct slot *slot)
 {

@@ -69,6 +69,7 @@ static int get_power_status (struct hotplug_slot *slot, u8 *value);
 static int get_attention_status (struct hotplug_slot *slot, u8 *value);
 static int get_latch_status (struct hotplug_slot *slot, u8 *value);
 static int get_adapter_status (struct hotplug_slot *slot, u8 *value);
+static int reset_slot (struct hotplug_slot *slot, int probe);
 
 /**
  * release_slot - free up the memory used by a slot

@@ -111,6 +112,7 @@ static int init_slot(struct controller *ctrl)
 	ops->disable_slot = disable_slot;
 	ops->get_power_status = get_power_status;
 	ops->get_adapter_status = get_adapter_status;
+	ops->reset_slot = reset_slot;
 	if (MRL_SENS(ctrl))
 		ops->get_latch_status = get_latch_status;
 	if (ATTN_LED(ctrl)) {

@@ -223,6 +225,16 @@ static int get_adapter_status(struct hotplug_slot *hotplug_slot, u8 *value)
 	return pciehp_get_adapter_status(slot, value);
 }
 
+static int reset_slot(struct hotplug_slot *hotplug_slot, int probe)
+{
+	struct slot *slot = hotplug_slot->private;
+
+	ctrl_dbg(slot->ctrl, "%s: physical_slot = %s\n",
+		 __func__, slot_name(slot));
+
+	return pciehp_reset_slot(slot, probe);
+}
+
 static int pciehp_probe(struct pcie_device *dev)
 {
 	int rc;

@@ -749,6 +749,37 @@ static void pcie_disable_notification(struct controller *ctrl)
 		ctrl_warn(ctrl, "Cannot disable software notification\n");
 }
 
+/*
+ * pciehp has a 1:1 bus:slot relationship so we ultimately want a secondary
+ * bus reset of the bridge, but if the slot supports surprise removal we need
+ * to disable presence detection around the bus reset and clear any spurious
+ * events after.
+ */
+int pciehp_reset_slot(struct slot *slot, int probe)
+{
+	struct controller *ctrl = slot->ctrl;
+
+	if (probe)
+		return 0;
+
+	if (HP_SUPR_RM(ctrl)) {
+		pcie_write_cmd(ctrl, 0, PCI_EXP_SLTCTL_PDCE);
+		if (pciehp_poll_mode)
+			del_timer_sync(&ctrl->poll_timer);
+	}
+
+	pci_reset_bridge_secondary_bus(ctrl->pcie->port);
+
+	if (HP_SUPR_RM(ctrl)) {
+		pciehp_writew(ctrl, PCI_EXP_SLTSTA, PCI_EXP_SLTSTA_PDC);
+		pcie_write_cmd(ctrl, PCI_EXP_SLTCTL_PDCE, PCI_EXP_SLTCTL_PDCE);
+		if (pciehp_poll_mode)
+			int_poll_timeout(ctrl->poll_timer.data);
+	}
+
+	return 0;
+}
+
 int pcie_init_notification(struct controller *ctrl)
 {
 	if (pciehp_request_irq(ctrl))

@@ -160,9 +160,8 @@ void pci_configure_slot(struct pci_dev *dev)
 	     (dev->class >> 8) == PCI_CLASS_BRIDGE_PCI)))
 		return;
 
-	if (dev->bus && dev->bus->self)
-		pcie_bus_configure_settings(dev->bus,
-					    dev->bus->self->pcie_mpss);
+	if (dev->bus)
+		pcie_bus_configure_settings(dev->bus);
 
 	memset(&hpp, 0, sizeof(hpp));
 	ret = pci_get_hp_params(dev, &hpp);

@@ -286,7 +286,6 @@ static int sriov_enable(struct pci_dev *dev, int nr_virtfn)
 	    (!(iov->cap & PCI_SRIOV_CAP_VFM) && (nr_virtfn > initial)))
 		return -EINVAL;
 
-	pci_write_config_word(dev, iov->pos + PCI_SRIOV_NUM_VF, nr_virtfn);
 	pci_read_config_word(dev, iov->pos + PCI_SRIOV_VF_OFFSET, &offset);
 	pci_read_config_word(dev, iov->pos + PCI_SRIOV_VF_STRIDE, &stride);
 	if (!offset || (nr_virtfn > 1 && !stride))

@@ -324,7 +323,7 @@ static int sriov_enable(struct pci_dev *dev, int nr_virtfn)
 
 		if (!pdev->is_physfn) {
 			pci_dev_put(pdev);
-			return -ENODEV;
+			return -ENOSYS;
 		}
 
 		rc = sysfs_create_link(&dev->dev.kobj,

@@ -334,6 +333,7 @@ static int sriov_enable(struct pci_dev *dev, int nr_virtfn)
 			return rc;
 	}
 
+	pci_write_config_word(dev, iov->pos + PCI_SRIOV_NUM_VF, nr_virtfn);
 	iov->ctrl |= PCI_SRIOV_CTRL_VFE | PCI_SRIOV_CTRL_MSE;
 	pci_cfg_access_lock(dev);
 	pci_write_config_word(dev, iov->pos + PCI_SRIOV_CTRL, iov->ctrl);

@@ -368,6 +368,7 @@ failed:
 	iov->ctrl &= ~(PCI_SRIOV_CTRL_VFE | PCI_SRIOV_CTRL_MSE);
 	pci_cfg_access_lock(dev);
 	pci_write_config_word(dev, iov->pos + PCI_SRIOV_CTRL, iov->ctrl);
+	pci_write_config_word(dev, iov->pos + PCI_SRIOV_NUM_VF, 0);
 	ssleep(1);
 	pci_cfg_access_unlock(dev);
 
@@ -401,6 +402,7 @@ static void sriov_disable(struct pci_dev *dev)
 		sysfs_remove_link(&dev->dev.kobj, "dep_link");
 
 	iov->num_VFs = 0;
+	pci_write_config_word(dev, iov->pos + PCI_SRIOV_NUM_VF, 0);
 }
 
 static int sriov_init(struct pci_dev *dev, int pos)

@@ -662,7 +664,7 @@ int pci_enable_sriov(struct pci_dev *dev, int nr_virtfn)
 	might_sleep();
 
 	if (!dev->is_physfn)
-		return -ENODEV;
+		return -ENOSYS;
 
 	return sriov_enable(dev, nr_virtfn);
 }

@@ -722,7 +724,7 @@ EXPORT_SYMBOL_GPL(pci_num_vf);
  * @dev: the PCI device
  *
  * Returns number of VFs belonging to this device that are assigned to a guest.
- * If device is not a physical function returns -ENODEV.
+ * If device is not a physical function returns 0.
  */
 int pci_vfs_assigned(struct pci_dev *dev)
 {

@@ -767,12 +769,15 @@ EXPORT_SYMBOL_GPL(pci_vfs_assigned);
  * device's mutex held.
  *
  * Returns 0 if PF is an SRIOV-capable device and
- * value of numvfs valid. If not a PF with VFS, return -EINVAL;
+ * value of numvfs valid. If not a PF return -ENOSYS;
+ * if numvfs is invalid return -EINVAL;
  * if VFs already enabled, return -EBUSY.
  */
 int pci_sriov_set_totalvfs(struct pci_dev *dev, u16 numvfs)
 {
-	if (!dev->is_physfn || (numvfs > dev->sriov->total_VFs))
+	if (!dev->is_physfn)
+		return -ENOSYS;
+	if (numvfs > dev->sriov->total_VFs)
 		return -EINVAL;
 
 	/* Shouldn't change if VFs already enabled */

@@ -786,17 +791,17 @@ int pci_sriov_set_totalvfs(struct pci_dev *dev, u16 numvfs)
 EXPORT_SYMBOL_GPL(pci_sriov_set_totalvfs);
 
 /**
- * pci_sriov_get_totalvfs -- get total VFs supported on this devic3
+ * pci_sriov_get_totalvfs -- get total VFs supported on this device
  * @dev: the PCI PF device
  *
  * For a PCIe device with SRIOV support, return the PCIe
  * SRIOV capability value of TotalVFs or the value of driver_max_VFs
- * if the driver reduced it. Otherwise, -EINVAL.
+ * if the driver reduced it. Otherwise 0.
  */
 int pci_sriov_get_totalvfs(struct pci_dev *dev)
 {
 	if (!dev->is_physfn)
-		return -EINVAL;
+		return 0;
 
 	if (dev->sriov->driver_max_VFs)
 		return dev->sriov->driver_max_VFs;

@@ -763,6 +763,13 @@ static int pci_pm_resume(struct device *dev)
 
 #ifdef CONFIG_HIBERNATE_CALLBACKS
 
+
+/*
+ * pcibios_pm_ops - provide arch-specific hooks when a PCI device is doing
+ * a hibernate transition
+ */
+struct dev_pm_ops __weak pcibios_pm_ops;
+
 static int pci_pm_freeze(struct device *dev)
 {
 	struct pci_dev *pci_dev = to_pci_dev(dev);

@@ -786,6 +793,9 @@ static int pci_pm_freeze(struct device *dev)
 			return error;
 	}
 
+	if (pcibios_pm_ops.freeze)
+		return pcibios_pm_ops.freeze(dev);
+
 	return 0;
 }
 
@@ -811,6 +821,9 @@ static int pci_pm_freeze_noirq(struct device *dev)
 
 	pci_pm_set_unknown_state(pci_dev);
 
+	if (pcibios_pm_ops.freeze_noirq)
+		return pcibios_pm_ops.freeze_noirq(dev);
+
 	return 0;
 }
 
@@ -820,6 +833,12 @@ static int pci_pm_thaw_noirq(struct device *dev)
 	struct device_driver *drv = dev->driver;
 	int error = 0;
 
+	if (pcibios_pm_ops.thaw_noirq) {
+		error = pcibios_pm_ops.thaw_noirq(dev);
+		if (error)
+			return error;
+	}
+
 	if (pci_has_legacy_pm_support(pci_dev))
 		return pci_legacy_resume_early(dev);
 
@@ -837,6 +856,12 @@ static int pci_pm_thaw(struct device *dev)
 	const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
 	int error = 0;
 
+	if (pcibios_pm_ops.thaw) {
+		error = pcibios_pm_ops.thaw(dev);
+		if (error)
+			return error;
+	}
+
 	if (pci_has_legacy_pm_support(pci_dev))
 		return pci_legacy_resume(dev);
 
@@ -878,6 +903,9 @@ static int pci_pm_poweroff(struct device *dev)
 Fixup:
 	pci_fixup_device(pci_fixup_suspend, pci_dev);
 
+	if (pcibios_pm_ops.poweroff)
+		return pcibios_pm_ops.poweroff(dev);
+
 	return 0;
 }
 
@@ -911,6 +939,9 @@ static int pci_pm_poweroff_noirq(struct device *dev)
 	if (pci_dev->class == PCI_CLASS_SERIAL_USB_EHCI)
 		pci_write_config_word(pci_dev, PCI_COMMAND, 0);
 
+	if (pcibios_pm_ops.poweroff_noirq)
+		return pcibios_pm_ops.poweroff_noirq(dev);
+
 	return 0;
 }
 
@@ -920,6 +951,12 @@ static int pci_pm_restore_noirq(struct device *dev)
 	struct device_driver *drv = dev->driver;
 	int error = 0;
 
+	if (pcibios_pm_ops.restore_noirq) {
+		error = pcibios_pm_ops.restore_noirq(dev);
+		if (error)
+			return error;
+	}
+
 	pci_pm_default_resume_early(pci_dev);
 
 	if (pci_has_legacy_pm_support(pci_dev))

@@ -937,6 +974,12 @@ static int pci_pm_restore(struct device *dev)
 	const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
 	int error = 0;
 
+	if (pcibios_pm_ops.restore) {
+		error = pcibios_pm_ops.restore(dev);
+		if (error)
+			return error;
+	}
+
 	/*
 	 * This is necessary for the hibernation error path in which restore is
 	 * called without restoring the standard config registers of the device.

@@ -131,19 +131,19 @@ static ssize_t pci_bus_show_cpuaffinity(struct device *dev,
 	return ret;
 }
 
-static inline ssize_t pci_bus_show_cpumaskaffinity(struct device *dev,
-					struct device_attribute *attr,
-					char *buf)
+static ssize_t cpuaffinity_show(struct device *dev,
+				struct device_attribute *attr, char *buf)
 {
 	return pci_bus_show_cpuaffinity(dev, 0, attr, buf);
 }
+static DEVICE_ATTR_RO(cpuaffinity);
 
-static inline ssize_t pci_bus_show_cpulistaffinity(struct device *dev,
-					struct device_attribute *attr,
-					char *buf)
+static ssize_t cpulistaffinity_show(struct device *dev,
+				    struct device_attribute *attr, char *buf)
 {
 	return pci_bus_show_cpuaffinity(dev, 1, attr, buf);
 }
+static DEVICE_ATTR_RO(cpulistaffinity);
 
 /* show resources */
 static ssize_t

@@ -379,6 +379,7 @@ dev_bus_rescan_store(struct device *dev, struct device_attribute *attr,
 	}
 	return count;
 }
+static DEVICE_ATTR(rescan, (S_IWUSR|S_IWGRP), NULL, dev_bus_rescan_store);
 
 #if defined(CONFIG_PM_RUNTIME) && defined(CONFIG_ACPI)
 static ssize_t d3cold_allowed_store(struct device *dev,

@@ -514,11 +515,20 @@ struct device_attribute pci_dev_attrs[] = {
 	__ATTR_NULL,
 };
 
-struct device_attribute pcibus_dev_attrs[] = {
-	__ATTR(rescan, (S_IWUSR|S_IWGRP), NULL, dev_bus_rescan_store),
-	__ATTR(cpuaffinity, S_IRUGO, pci_bus_show_cpumaskaffinity, NULL),
-	__ATTR(cpulistaffinity, S_IRUGO, pci_bus_show_cpulistaffinity, NULL),
-	__ATTR_NULL,
+static struct attribute *pcibus_attrs[] = {
+	&dev_attr_rescan.attr,
+	&dev_attr_cpuaffinity.attr,
+	&dev_attr_cpulistaffinity.attr,
+	NULL,
+};
+
+static const struct attribute_group pcibus_group = {
+	.attrs = pcibus_attrs,
+};
+
+const struct attribute_group *pcibus_groups[] = {
+	&pcibus_group,
+	NULL,
 };
 
 static ssize_t

@ -22,6 +22,7 @@
|
|||
#include <linux/interrupt.h>
|
||||
#include <linux/device.h>
|
||||
#include <linux/pm_runtime.h>
|
||||
#include <linux/pci_hotplug.h>
|
||||
#include <asm-generic/pci-bridge.h>
|
||||
#include <asm/setup.h>
|
||||
#include "pci.h"
|
||||
|
@ -1145,6 +1146,24 @@ int pci_reenable_device(struct pci_dev *dev)
|
|||
return 0;
|
||||
}
|
||||
|
||||
static void pci_enable_bridge(struct pci_dev *dev)
|
||||
{
|
||||
int retval;
|
||||
|
||||
if (!dev)
|
||||
return;
|
||||
|
||||
pci_enable_bridge(dev->bus->self);
|
||||
|
||||
if (pci_is_enabled(dev))
|
||||
return;
|
||||
retval = pci_enable_device(dev);
|
||||
if (retval)
|
||||
dev_err(&dev->dev, "Error enabling bridge (%d), continuing\n",
|
||||
retval);
|
||||
pci_set_master(dev);
|
||||
}
|
||||
|
||||
static int pci_enable_device_flags(struct pci_dev *dev, unsigned long flags)
|
||||
{
|
||||
int err;
|
||||
|
@ -1165,6 +1184,8 @@ static int pci_enable_device_flags(struct pci_dev *dev, unsigned long flags)
|
|||
if (atomic_inc_return(&dev->enable_cnt) > 1)
|
||||
return 0; /* already enabled */
|
||||
|
||||
pci_enable_bridge(dev->bus->self);
|
||||
|
||||
/* only skip sriov related */
|
||||
for (i = 0; i <= PCI_ROM_RESOURCE; i++)
|
||||
if (dev->resource[i].flags & flags)
|
||||
|
@ -1992,7 +2013,7 @@ static void pci_add_saved_cap(struct pci_dev *pci_dev,
|
|||
}
|
||||
|
||||
/**
|
||||
* pci_add_save_buffer - allocate buffer for saving given capability registers
|
||||
* pci_add_cap_save_buffer - allocate buffer for saving given capability registers
|
||||
* @dev: the PCI device
|
||||
* @cap: the capability to allocate the buffer for
|
||||
* @size: requested size of the buffer
|
||||
|
@ -2095,9 +2116,9 @@ void pci_enable_ido(struct pci_dev *dev, unsigned long type)
|
|||
u16 ctrl = 0;
|
||||
|
||||
if (type & PCI_EXP_IDO_REQUEST)
|
||||
ctrl |= PCI_EXP_IDO_REQ_EN;
|
||||
ctrl |= PCI_EXP_DEVCTL2_IDO_REQ_EN;
|
||||
if (type & PCI_EXP_IDO_COMPLETION)
|
||||
ctrl |= PCI_EXP_IDO_CMP_EN;
|
||||
ctrl |= PCI_EXP_DEVCTL2_IDO_CMP_EN;
|
||||
if (ctrl)
|
||||
pcie_capability_set_word(dev, PCI_EXP_DEVCTL2, ctrl);
|
||||
}
|
||||
|
@ -2113,9 +2134,9 @@ void pci_disable_ido(struct pci_dev *dev, unsigned long type)
|
|||
u16 ctrl = 0;
|
||||
|
||||
if (type & PCI_EXP_IDO_REQUEST)
|
||||
ctrl |= PCI_EXP_IDO_REQ_EN;
|
||||
ctrl |= PCI_EXP_DEVCTL2_IDO_REQ_EN;
|
||||
if (type & PCI_EXP_IDO_COMPLETION)
|
||||
ctrl |= PCI_EXP_IDO_CMP_EN;
|
||||
ctrl |= PCI_EXP_DEVCTL2_IDO_CMP_EN;
|
||||
if (ctrl)
|
||||
pcie_capability_clear_word(dev, PCI_EXP_DEVCTL2, ctrl);
|
||||
}
|
||||
|
@ -2147,7 +2168,7 @@ int pci_enable_obff(struct pci_dev *dev, enum pci_obff_signal_type type)
|
|||
int ret;
|
||||
|
||||
pcie_capability_read_dword(dev, PCI_EXP_DEVCAP2, &cap);
|
||||
if (!(cap & PCI_EXP_OBFF_MASK))
|
||||
if (!(cap & PCI_EXP_DEVCAP2_OBFF_MASK))
|
||||
return -ENOTSUPP; /* no OBFF support at all */
|
||||
|
||||
/* Make sure the topology supports OBFF as well */
|
||||
|
@ -2158,17 +2179,17 @@ int pci_enable_obff(struct pci_dev *dev, enum pci_obff_signal_type type)
|
|||
}
|
||||
|
||||
pcie_capability_read_word(dev, PCI_EXP_DEVCTL2, &ctrl);
|
||||
if (cap & PCI_EXP_OBFF_WAKE)
|
||||
ctrl |= PCI_EXP_OBFF_WAKE_EN;
|
||||
if (cap & PCI_EXP_DEVCAP2_OBFF_WAKE)
|
||||
ctrl |= PCI_EXP_DEVCTL2_OBFF_WAKE_EN;
|
||||
else {
|
||||
switch (type) {
|
||||
case PCI_EXP_OBFF_SIGNAL_L0:
|
||||
if (!(ctrl & PCI_EXP_OBFF_WAKE_EN))
|
||||
ctrl |= PCI_EXP_OBFF_MSGA_EN;
|
||||
if (!(ctrl & PCI_EXP_DEVCTL2_OBFF_WAKE_EN))
|
||||
ctrl |= PCI_EXP_DEVCTL2_OBFF_MSGA_EN;
|
||||
break;
|
||||
case PCI_EXP_OBFF_SIGNAL_ALWAYS:
|
||||
ctrl &= ~PCI_EXP_OBFF_WAKE_EN;
|
||||
ctrl |= PCI_EXP_OBFF_MSGB_EN;
|
||||
ctrl &= ~PCI_EXP_DEVCTL2_OBFF_WAKE_EN;
|
||||
ctrl |= PCI_EXP_DEVCTL2_OBFF_MSGB_EN;
|
||||
break;
|
||||
default:
|
||||
WARN(1, "bad OBFF signal type\n");
|
||||
|
@ -2189,7 +2210,8 @@ EXPORT_SYMBOL(pci_enable_obff);
|
|||
*/
|
||||
void pci_disable_obff(struct pci_dev *dev)
|
||||
{
|
||||
pcie_capability_clear_word(dev, PCI_EXP_DEVCTL2, PCI_EXP_OBFF_WAKE_EN);
|
||||
pcie_capability_clear_word(dev, PCI_EXP_DEVCTL2,
|
||||
PCI_EXP_DEVCTL2_OBFF_WAKE_EN);
|
||||
}
|
||||
EXPORT_SYMBOL(pci_disable_obff);
|
||||
|
||||
|
@ -2237,7 +2259,8 @@ int pci_enable_ltr(struct pci_dev *dev)
|
|||
return ret;
|
||||
}
|
||||
|
||||
return pcie_capability_set_word(dev, PCI_EXP_DEVCTL2, PCI_EXP_LTR_EN);
|
||||
return pcie_capability_set_word(dev, PCI_EXP_DEVCTL2,
|
||||
PCI_EXP_DEVCTL2_LTR_EN);
|
||||
}
|
||||
EXPORT_SYMBOL(pci_enable_ltr);
|
||||
|
||||
|
@ -2254,7 +2277,8 @@ void pci_disable_ltr(struct pci_dev *dev)
|
|||
if (!pci_ltr_supported(dev))
|
||||
return;
|
||||
|
||||
pcie_capability_clear_word(dev, PCI_EXP_DEVCTL2, PCI_EXP_LTR_EN);
|
||||
pcie_capability_clear_word(dev, PCI_EXP_DEVCTL2,
|
||||
PCI_EXP_DEVCTL2_LTR_EN);
|
||||
}
|
||||
EXPORT_SYMBOL(pci_disable_ltr);
|
||||
|
||||
|
@@ -2359,6 +2383,27 @@ void pci_enable_acs(struct pci_dev *dev)
	pci_write_config_word(dev, pos + PCI_ACS_CTRL, ctrl);
}

static bool pci_acs_flags_enabled(struct pci_dev *pdev, u16 acs_flags)
{
	int pos;
	u16 cap, ctrl;

	pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_ACS);
	if (!pos)
		return false;

	/*
	 * Except for egress control, capabilities are either required
	 * or only required if controllable. Features missing from the
	 * capability field can therefore be assumed as hard-wired enabled.
	 */
	pci_read_config_word(pdev, pos + PCI_ACS_CAP, &cap);
	acs_flags &= (cap | PCI_ACS_EC);

	pci_read_config_word(pdev, pos + PCI_ACS_CTRL, &ctrl);
	return (ctrl & acs_flags) == acs_flags;
}

/**
 * pci_acs_enabled - test ACS against required flags for a given device
 * @pdev: device to test

@@ -2366,36 +2411,76 @@ void pci_enable_acs(struct pci_dev *dev)
 *
 * Return true if the device supports the provided flags. Automatically
 * filters out flags that are not implemented on multifunction devices.
 *
 * Note that this interface checks the effective ACS capabilities of the
 * device rather than the actual capabilities. For instance, most single
 * function endpoints are not required to support ACS because they have no
 * opportunity for peer-to-peer access. We therefore return 'true'
 * regardless of whether the device exposes an ACS capability. This makes
 * it much easier for callers of this function to ignore the actual type
 * or topology of the device when testing ACS support.
 */
bool pci_acs_enabled(struct pci_dev *pdev, u16 acs_flags)
{
	int pos, ret;
	u16 ctrl;
	int ret;

	ret = pci_dev_specific_acs_enabled(pdev, acs_flags);
	if (ret >= 0)
		return ret > 0;

	/*
	 * Conventional PCI and PCI-X devices never support ACS, either
	 * effectively or actually. The shared bus topology implies that
	 * any device on the bus can receive or snoop DMA.
	 */
	if (!pci_is_pcie(pdev))
		return false;

	/* Filter out flags not applicable to multifunction */
	if (pdev->multifunction)
		acs_flags &= (PCI_ACS_RR | PCI_ACS_CR |
			      PCI_ACS_EC | PCI_ACS_DT);

	switch (pci_pcie_type(pdev)) {
	/*
	 * PCI/X-to-PCIe bridges are not specifically mentioned by the spec,
	 * but since their primary interface is PCI/X, we conservatively
	 * handle them as we would a non-PCIe device.
	 */
	case PCI_EXP_TYPE_PCIE_BRIDGE:
	/*
	 * PCIe 3.0, 6.12.1 excludes ACS on these devices. "ACS is never
	 * applicable... must never implement an ACS Extended Capability...".
	 * This seems arbitrary, but we take a conservative interpretation
	 * of this statement.
	 */
	case PCI_EXP_TYPE_PCI_BRIDGE:
	case PCI_EXP_TYPE_RC_EC:
		return false;
	/*
	 * PCIe 3.0, 6.12.1.1 specifies that downstream and root ports should
	 * implement ACS in order to indicate their peer-to-peer capabilities,
	 * regardless of whether they are single- or multi-function devices.
	 */
	case PCI_EXP_TYPE_DOWNSTREAM:
	case PCI_EXP_TYPE_ROOT_PORT:
		return pci_acs_flags_enabled(pdev, acs_flags);
	/*
	 * PCIe 3.0, 6.12.1.2 specifies ACS capabilities that should be
	 * implemented by the remaining PCIe types to indicate peer-to-peer
	 * capabilities, but only when they are part of a multifunction
	 * device. The footnote for section 6.12 indicates the specific
	 * PCIe types included here.
	 */
	case PCI_EXP_TYPE_ENDPOINT:
	case PCI_EXP_TYPE_UPSTREAM:
	case PCI_EXP_TYPE_LEG_END:
	case PCI_EXP_TYPE_RC_END:
		if (!pdev->multifunction)
			break;

	if (pci_pcie_type(pdev) == PCI_EXP_TYPE_DOWNSTREAM ||
	    pci_pcie_type(pdev) == PCI_EXP_TYPE_ROOT_PORT ||
	    pdev->multifunction) {
		pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_ACS);
		if (!pos)
			return false;

		pci_read_config_word(pdev, pos + PCI_ACS_CTRL, &ctrl);
		if ((ctrl & acs_flags) != acs_flags)
			return false;
		return pci_acs_flags_enabled(pdev, acs_flags);
	}

	/*
	 * PCIe 3.0, 6.12.1.3 specifies no ACS capabilities are applicable
	 * to single function devices with the exception of downstream ports.
	 */
	return true;
}
@@ -3059,18 +3144,23 @@ bool pci_check_and_unmask_intx(struct pci_dev *dev)
EXPORT_SYMBOL_GPL(pci_check_and_unmask_intx);

/**
 * pci_msi_off - disables any msi or msix capabilities
 * pci_msi_off - disables any MSI or MSI-X capabilities
 * @dev: the PCI device to operate on
 *
 * If you want to use msi see pci_enable_msi and friends.
 * This is a lower level primitive that allows us to disable
 * msi operation at the device level.
 * If you want to use MSI, see pci_enable_msi() and friends.
 * This is a lower-level primitive that allows us to disable
 * MSI operation at the device level.
 */
void pci_msi_off(struct pci_dev *dev)
{
	int pos;
	u16 control;

	/*
	 * This looks like it could go in msi.c, but we need it even when
	 * CONFIG_PCI_MSI=n. For the same reason, we can't use
	 * dev->msi_cap or dev->msix_cap here.
	 */
	pos = pci_find_capability(dev, PCI_CAP_ID_MSI);
	if (pos) {
		pci_read_config_word(dev, pos + PCI_MSI_FLAGS, &control);

@@ -3098,19 +3188,17 @@ int pci_set_dma_seg_boundary(struct pci_dev *dev, unsigned long mask)
}
EXPORT_SYMBOL(pci_set_dma_seg_boundary);
static int pcie_flr(struct pci_dev *dev, int probe)
/**
 * pci_wait_for_pending_transaction - waits for pending transaction
 * @dev: the PCI device to operate on
 *
 * Return 0 if transaction is pending, 1 otherwise.
 */
int pci_wait_for_pending_transaction(struct pci_dev *dev)
{
	int i;
	u32 cap;
	u16 status;

	pcie_capability_read_dword(dev, PCI_EXP_DEVCAP, &cap);
	if (!(cap & PCI_EXP_DEVCAP_FLR))
		return -ENOTTY;

	if (probe)
		return 0;

	/* Wait for Transaction Pending bit clean */
	for (i = 0; i < 4; i++) {
		if (i)

@@ -3118,13 +3206,27 @@ static int pcie_flr(struct pci_dev *dev, int probe)

		pcie_capability_read_word(dev, PCI_EXP_DEVSTA, &status);
		if (!(status & PCI_EXP_DEVSTA_TRPND))
			goto clear;
			return 1;
	}

	dev_err(&dev->dev, "transaction is not cleared; "
		"proceeding with reset anyway\n");
	return 0;
}
EXPORT_SYMBOL(pci_wait_for_pending_transaction);

static int pcie_flr(struct pci_dev *dev, int probe)
{
	u32 cap;

	pcie_capability_read_dword(dev, PCI_EXP_DEVCAP, &cap);
	if (!(cap & PCI_EXP_DEVCAP_FLR))
		return -ENOTTY;

	if (probe)
		return 0;

	if (!pci_wait_for_pending_transaction(dev))
		dev_err(&dev->dev, "transaction is not cleared; proceeding with reset anyway\n");

clear:
	pcie_capability_set_word(dev, PCI_EXP_DEVCTL, PCI_EXP_DEVCTL_BCR_FLR);

	msleep(100);
@@ -3215,9 +3317,42 @@ static int pci_pm_reset(struct pci_dev *dev, int probe)
	return 0;
}

static int pci_parent_bus_reset(struct pci_dev *dev, int probe)
/**
 * pci_reset_bridge_secondary_bus - Reset the secondary bus on a PCI bridge.
 * @dev: Bridge device
 *
 * Use the bridge control register to assert reset on the secondary bus.
 * Devices on the secondary bus are left in power-on state.
 */
void pci_reset_bridge_secondary_bus(struct pci_dev *dev)
{
	u16 ctrl;

	pci_read_config_word(dev, PCI_BRIDGE_CONTROL, &ctrl);
	ctrl |= PCI_BRIDGE_CTL_BUS_RESET;
	pci_write_config_word(dev, PCI_BRIDGE_CONTROL, ctrl);
	/*
	 * PCI spec v3.0 7.6.4.2 requires minimum Trst of 1ms. Double
	 * this to 2ms to ensure that we meet the minimum requirement.
	 */
	msleep(2);

	ctrl &= ~PCI_BRIDGE_CTL_BUS_RESET;
	pci_write_config_word(dev, PCI_BRIDGE_CONTROL, ctrl);

	/*
	 * Trhfa for conventional PCI is 2^25 clock cycles.
	 * Assuming a minimum 33MHz clock this results in a 1s
	 * delay before we can consider subordinate devices to
	 * be re-initialized. PCIe has some ways to shorten this,
	 * but we don't make use of them yet.
	 */
	ssleep(1);
}
EXPORT_SYMBOL_GPL(pci_reset_bridge_secondary_bus);

static int pci_parent_bus_reset(struct pci_dev *dev, int probe)
{
	struct pci_dev *pdev;

	if (pci_is_root_bus(dev->bus) || dev->subordinate || !dev->bus->self)
@@ -3230,18 +3365,40 @@ static int pci_parent_bus_reset(struct pci_dev *dev, int probe)
	if (probe)
		return 0;

	pci_read_config_word(dev->bus->self, PCI_BRIDGE_CONTROL, &ctrl);
	ctrl |= PCI_BRIDGE_CTL_BUS_RESET;
	pci_write_config_word(dev->bus->self, PCI_BRIDGE_CONTROL, ctrl);
	msleep(100);

	ctrl &= ~PCI_BRIDGE_CTL_BUS_RESET;
	pci_write_config_word(dev->bus->self, PCI_BRIDGE_CONTROL, ctrl);
	msleep(100);
	pci_reset_bridge_secondary_bus(dev->bus->self);

	return 0;
}

static int pci_reset_hotplug_slot(struct hotplug_slot *hotplug, int probe)
{
	int rc = -ENOTTY;

	if (!hotplug || !try_module_get(hotplug->ops->owner))
		return rc;

	if (hotplug->ops->reset_slot)
		rc = hotplug->ops->reset_slot(hotplug, probe);

	module_put(hotplug->ops->owner);

	return rc;
}

static int pci_dev_reset_slot_function(struct pci_dev *dev, int probe)
{
	struct pci_dev *pdev;

	if (dev->subordinate || !dev->slot)
		return -ENOTTY;

	list_for_each_entry(pdev, &dev->bus->devices, bus_list)
		if (pdev != dev && pdev->slot == dev->slot)
			return -ENOTTY;

	return pci_reset_hotplug_slot(dev->slot->hotplug, probe);
}

static int __pci_dev_reset(struct pci_dev *dev, int probe)
{
	int rc;

@@ -3264,27 +3421,65 @@ static int __pci_dev_reset(struct pci_dev *dev, int probe)
	if (rc != -ENOTTY)
		goto done;

	rc = pci_dev_reset_slot_function(dev, probe);
	if (rc != -ENOTTY)
		goto done;

	rc = pci_parent_bus_reset(dev, probe);
done:
	return rc;
}

static void pci_dev_lock(struct pci_dev *dev)
{
	pci_cfg_access_lock(dev);
	/* block PM suspend, driver probe, etc. */
	device_lock(&dev->dev);
}

static void pci_dev_unlock(struct pci_dev *dev)
{
	device_unlock(&dev->dev);
	pci_cfg_access_unlock(dev);
}

static void pci_dev_save_and_disable(struct pci_dev *dev)
{
	/*
	 * Wake-up device prior to save. PM registers default to D0 after
	 * reset and a simple register restore doesn't reliably return
	 * to a non-D0 state anyway.
	 */
	pci_set_power_state(dev, PCI_D0);

	pci_save_state(dev);
	/*
	 * Disable the device by clearing the Command register, except for
	 * INTx-disable which is set. This not only disables MMIO and I/O port
	 * BARs, but also prevents the device from being Bus Master, preventing
	 * DMA from the device including MSI/MSI-X interrupts. For PCI 2.3
	 * compliant devices, INTx-disable prevents legacy interrupts.
	 */
	pci_write_config_word(dev, PCI_COMMAND, PCI_COMMAND_INTX_DISABLE);
}

static void pci_dev_restore(struct pci_dev *dev)
{
	pci_restore_state(dev);
}

static int pci_dev_reset(struct pci_dev *dev, int probe)
{
	int rc;

	if (!probe) {
		pci_cfg_access_lock(dev);
		/* block PM suspend, driver probe, etc. */
		device_lock(&dev->dev);
	}
	if (!probe)
		pci_dev_lock(dev);

	rc = __pci_dev_reset(dev, probe);

	if (!probe) {
		device_unlock(&dev->dev);
		pci_cfg_access_unlock(dev);
	}
	if (!probe)
		pci_dev_unlock(dev);

	return rc;
}
/**
@ -3375,22 +3570,249 @@ int pci_reset_function(struct pci_dev *dev)
	if (rc)
		return rc;

	pci_save_state(dev);

	/*
	 * both INTx and MSI are disabled after the Interrupt Disable bit
	 * is set and the Bus Master bit is cleared.
	 */
	pci_write_config_word(dev, PCI_COMMAND, PCI_COMMAND_INTX_DISABLE);
	pci_dev_save_and_disable(dev);

	rc = pci_dev_reset(dev, 0);

	pci_restore_state(dev);
	pci_dev_restore(dev);

	return rc;
}
EXPORT_SYMBOL_GPL(pci_reset_function);

/* Lock devices from the top of the tree down */
static void pci_bus_lock(struct pci_bus *bus)
{
	struct pci_dev *dev;

	list_for_each_entry(dev, &bus->devices, bus_list) {
		pci_dev_lock(dev);
		if (dev->subordinate)
			pci_bus_lock(dev->subordinate);
	}
}

/* Unlock devices from the bottom of the tree up */
static void pci_bus_unlock(struct pci_bus *bus)
{
	struct pci_dev *dev;

	list_for_each_entry(dev, &bus->devices, bus_list) {
		if (dev->subordinate)
			pci_bus_unlock(dev->subordinate);
		pci_dev_unlock(dev);
	}
}

/* Lock devices from the top of the tree down */
static void pci_slot_lock(struct pci_slot *slot)
{
	struct pci_dev *dev;

	list_for_each_entry(dev, &slot->bus->devices, bus_list) {
		if (!dev->slot || dev->slot != slot)
			continue;
		pci_dev_lock(dev);
		if (dev->subordinate)
			pci_bus_lock(dev->subordinate);
	}
}

/* Unlock devices from the bottom of the tree up */
static void pci_slot_unlock(struct pci_slot *slot)
{
	struct pci_dev *dev;

	list_for_each_entry(dev, &slot->bus->devices, bus_list) {
		if (!dev->slot || dev->slot != slot)
			continue;
		if (dev->subordinate)
			pci_bus_unlock(dev->subordinate);
		pci_dev_unlock(dev);
	}
}

/* Save and disable devices from the top of the tree down */
static void pci_bus_save_and_disable(struct pci_bus *bus)
{
	struct pci_dev *dev;

	list_for_each_entry(dev, &bus->devices, bus_list) {
		pci_dev_save_and_disable(dev);
		if (dev->subordinate)
			pci_bus_save_and_disable(dev->subordinate);
	}
}

/*
 * Restore devices from top of the tree down - parent bridges need to be
 * restored before we can get to subordinate devices.
 */
static void pci_bus_restore(struct pci_bus *bus)
{
	struct pci_dev *dev;

	list_for_each_entry(dev, &bus->devices, bus_list) {
		pci_dev_restore(dev);
		if (dev->subordinate)
			pci_bus_restore(dev->subordinate);
	}
}

/* Save and disable devices from the top of the tree down */
static void pci_slot_save_and_disable(struct pci_slot *slot)
{
	struct pci_dev *dev;

	list_for_each_entry(dev, &slot->bus->devices, bus_list) {
		if (!dev->slot || dev->slot != slot)
			continue;
		pci_dev_save_and_disable(dev);
		if (dev->subordinate)
			pci_bus_save_and_disable(dev->subordinate);
	}
}

/*
 * Restore devices from top of the tree down - parent bridges need to be
 * restored before we can get to subordinate devices.
 */
static void pci_slot_restore(struct pci_slot *slot)
{
	struct pci_dev *dev;

	list_for_each_entry(dev, &slot->bus->devices, bus_list) {
		if (!dev->slot || dev->slot != slot)
			continue;
		pci_dev_restore(dev);
		if (dev->subordinate)
			pci_bus_restore(dev->subordinate);
	}
}

static int pci_slot_reset(struct pci_slot *slot, int probe)
{
	int rc;

	if (!slot)
		return -ENOTTY;

	if (!probe)
		pci_slot_lock(slot);

	might_sleep();

	rc = pci_reset_hotplug_slot(slot->hotplug, probe);

	if (!probe)
		pci_slot_unlock(slot);

	return rc;
}

/**
 * pci_probe_reset_slot - probe whether a PCI slot can be reset
 * @slot: PCI slot to probe
 *
 * Return 0 if slot can be reset, negative if a slot reset is not supported.
 */
int pci_probe_reset_slot(struct pci_slot *slot)
{
	return pci_slot_reset(slot, 1);
}
EXPORT_SYMBOL_GPL(pci_probe_reset_slot);

/**
 * pci_reset_slot - reset a PCI slot
 * @slot: PCI slot to reset
 *
 * A PCI bus may host multiple slots, and each slot may support a reset
 * mechanism independent of other slots. For instance, some slots may support
 * slot power control. In the case of a 1:1 bus to slot architecture, this
 * function may wrap the bus reset to avoid spurious slot related events such
 * as hotplug. Generally a slot reset should be attempted before a bus reset.
 * All of the functions of the slot and any subordinate buses behind the slot
 * are reset through this function. PCI config space of all devices in the
 * slot and behind the slot is saved before and restored after reset.
 *
 * Return 0 on success, non-zero on error.
 */
int pci_reset_slot(struct pci_slot *slot)
{
	int rc;

	rc = pci_slot_reset(slot, 1);
	if (rc)
		return rc;

	pci_slot_save_and_disable(slot);

	rc = pci_slot_reset(slot, 0);

	pci_slot_restore(slot);

	return rc;
}
EXPORT_SYMBOL_GPL(pci_reset_slot);
static int pci_bus_reset(struct pci_bus *bus, int probe)
{
	if (!bus->self)
		return -ENOTTY;

	if (probe)
		return 0;

	pci_bus_lock(bus);

	might_sleep();

	pci_reset_bridge_secondary_bus(bus->self);

	pci_bus_unlock(bus);

	return 0;
}

/**
 * pci_probe_reset_bus - probe whether a PCI bus can be reset
 * @bus: PCI bus to probe
 *
 * Return 0 if bus can be reset, negative if a bus reset is not supported.
 */
int pci_probe_reset_bus(struct pci_bus *bus)
{
	return pci_bus_reset(bus, 1);
}
EXPORT_SYMBOL_GPL(pci_probe_reset_bus);

/**
 * pci_reset_bus - reset a PCI bus
 * @bus: top level PCI bus to reset
 *
 * Do a bus reset on the given bus and any subordinate buses, saving
 * and restoring state of all devices.
 *
 * Return 0 on success, non-zero on error.
 */
int pci_reset_bus(struct pci_bus *bus)
{
	int rc;

	rc = pci_bus_reset(bus, 1);
	if (rc)
		return rc;

	pci_bus_save_and_disable(bus);

	rc = pci_bus_reset(bus, 0);

	pci_bus_restore(bus);

	return rc;
}
EXPORT_SYMBOL_GPL(pci_reset_bus);

/**
 * pcix_get_max_mmrbc - get PCI-X maximum designed memory read byte count
 * @dev: PCI device to query
@@ -3525,8 +3947,6 @@ int pcie_set_readrq(struct pci_dev *dev, int rq)
	if (pcie_bus_config == PCIE_BUS_PERFORMANCE) {
		int mps = pcie_get_mps(dev);

		if (mps < 0)
			return mps;
		if (mps < rq)
			rq = mps;
	}

@@ -3543,7 +3963,6 @@ EXPORT_SYMBOL(pcie_set_readrq);
 * @dev: PCI device to query
 *
 * Returns maximum payload size in bytes
 * or appropriate error value.
 */
int pcie_get_mps(struct pci_dev *dev)
{
@@ -151,7 +151,7 @@ static inline int pci_no_d1d2(struct pci_dev *dev)
}
extern struct device_attribute pci_dev_attrs[];
extern struct device_attribute pcibus_dev_attrs[];
extern const struct attribute_group *pcibus_groups[];
extern struct device_type pci_dev_type;
extern struct bus_attribute pci_bus_attrs[];

@@ -2,7 +2,7 @@
# PCI Express Port Bus Configuration
#
config PCIEPORTBUS
	bool "PCI Express support"
	bool "PCI Express Port Bus support"
	depends on PCI
	help
	  This automatically enables PCI Express Port Bus support. Users can
@@ -352,7 +352,7 @@ static pci_ers_result_t aer_root_reset(struct pci_dev *dev)
	reg32 &= ~ROOT_PORT_INTR_ON_MESG_MASK;
	pci_write_config_dword(dev, pos + PCI_ERR_ROOT_COMMAND, reg32);

	aer_do_secondary_bus_reset(dev);
	pci_reset_bridge_secondary_bus(dev);
	dev_printk(KERN_DEBUG, &dev->dev, "Root Port link has been reset\n");

	/* Clear Root Error Status */

@@ -106,7 +106,6 @@ static inline pci_ers_result_t merge_result(enum pci_ers_result orig,
}

extern struct bus_type pcie_port_bus_type;
void aer_do_secondary_bus_reset(struct pci_dev *dev);
int aer_init(struct pcie_device *dev);
void aer_isr(struct work_struct *work);
void aer_print_error(struct pci_dev *dev, struct aer_err_info *info);

@@ -366,39 +366,6 @@ static pci_ers_result_t broadcast_error_message(struct pci_dev *dev,
	return result_data.result;
}

/**
 * aer_do_secondary_bus_reset - perform secondary bus reset
 * @dev: pointer to bridge's pci_dev data structure
 *
 * Invoked when performing link reset at Root Port or Downstream Port.
 */
void aer_do_secondary_bus_reset(struct pci_dev *dev)
{
	u16 p2p_ctrl;

	/* Assert Secondary Bus Reset */
	pci_read_config_word(dev, PCI_BRIDGE_CONTROL, &p2p_ctrl);
	p2p_ctrl |= PCI_BRIDGE_CTL_BUS_RESET;
	pci_write_config_word(dev, PCI_BRIDGE_CONTROL, p2p_ctrl);

	/*
	 * we should send hot reset message for 2ms to allow it time to
	 * propagate to all downstream ports
	 */
	msleep(2);

	/* De-assert Secondary Bus Reset */
	p2p_ctrl &= ~PCI_BRIDGE_CTL_BUS_RESET;
	pci_write_config_word(dev, PCI_BRIDGE_CONTROL, p2p_ctrl);

	/*
	 * System software must wait for at least 100ms from the end
	 * of a reset of one or more device before it is permitted
	 * to issue Configuration Requests to those devices.
	 */
	msleep(200);
}

/**
 * default_reset_link - default reset function
 * @dev: pointer to pci_dev data structure

@@ -408,7 +375,7 @@ void aer_do_secondary_bus_reset(struct pci_dev *dev)
 */
static pci_ers_result_t default_reset_link(struct pci_dev *dev)
{
	aer_do_secondary_bus_reset(dev);
	pci_reset_bridge_secondary_bus(dev);
	dev_printk(KERN_DEBUG, &dev->dev, "downstream link has been reset\n");
	return PCI_ERS_RESULT_RECOVERED;
}
@@ -96,7 +96,7 @@ static void release_pcibus_dev(struct device *dev)
static struct class pcibus_class = {
	.name		= "pci_bus",
	.dev_release	= &release_pcibus_dev,
	.dev_attrs	= pcibus_dev_attrs,
	.dev_groups	= pcibus_groups,
};

static int __init pcibus_class_init(void)

@@ -156,6 +156,8 @@ static inline unsigned long decode_bar(struct pci_dev *dev, u32 bar)
	return flags;
}

#define PCI_COMMAND_DECODE_ENABLE	(PCI_COMMAND_MEMORY | PCI_COMMAND_IO)

/**
 * pci_read_base - read a PCI BAR
 * @dev: the PCI device

@@ -178,8 +180,10 @@ int __pci_read_base(struct pci_dev *dev, enum pci_bar_type type,
	/* No printks while decoding is disabled! */
	if (!dev->mmio_always_on) {
		pci_read_config_word(dev, PCI_COMMAND, &orig_cmd);
		pci_write_config_word(dev, PCI_COMMAND,
			orig_cmd & ~(PCI_COMMAND_MEMORY | PCI_COMMAND_IO));
		if (orig_cmd & PCI_COMMAND_DECODE_ENABLE) {
			pci_write_config_word(dev, PCI_COMMAND,
				orig_cmd & ~PCI_COMMAND_DECODE_ENABLE);
		}
	}

	res->name = pci_name(dev);

@@ -293,7 +297,8 @@ int __pci_read_base(struct pci_dev *dev, enum pci_bar_type type,
fail:
	res->flags = 0;
out:
	if (!dev->mmio_always_on)
	if (!dev->mmio_always_on &&
	    (orig_cmd & PCI_COMMAND_DECODE_ENABLE))
		pci_write_config_word(dev, PCI_COMMAND, orig_cmd);

	if (bar_too_big)
@@ -1491,24 +1496,23 @@ static int pcie_find_smpss(struct pci_dev *dev, void *data)
	if (!pci_is_pcie(dev))
		return 0;

	/* For PCIE hotplug enabled slots not connected directly to a
	 * PCI-E root port, there can be problems when hotplugging
	 * devices. This is due to the possibility of hotplugging a
	 * device into the fabric with a smaller MPS that the devices
	 * currently running have configured. Modifying the MPS on the
	 * running devices could cause a fatal bus error due to an
	 * incoming frame being larger than the newly configured MPS.
	 * To work around this, the MPS for the entire fabric must be
	 * set to the minimum size. Any devices hotplugged into this
	 * fabric will have the minimum MPS set. If the PCI hotplug
	 * slot is directly connected to the root port and there are not
	 * other devices on the fabric (which seems to be the most
	 * common case), then this is not an issue and MPS discovery
	 * will occur as normal.
	/*
	 * We don't have a way to change MPS settings on devices that have
	 * drivers attached. A hot-added device might support only the minimum
	 * MPS setting (MPS=128). Therefore, if the fabric contains a bridge
	 * where devices may be hot-added, we limit the fabric MPS to 128 so
	 * hot-added devices will work correctly.
	 *
	 * However, if we hot-add a device to a slot directly below a Root
	 * Port, it's impossible for there to be other existing devices below
	 * the port. We don't limit the MPS in this case because we can
	 * reconfigure MPS on both the Root Port and the hot-added device,
	 * and there are no other devices involved.
	 *
	 * Note that this PCIE_BUS_SAFE path assumes no peer-to-peer DMA.
	 */
	if (dev->is_hotplug_bridge && (!list_is_singular(&dev->bus->devices) ||
	     (dev->bus->self &&
	      pci_pcie_type(dev->bus->self) != PCI_EXP_TYPE_ROOT_PORT)))
	if (dev->is_hotplug_bridge &&
	    pci_pcie_type(dev) != PCI_EXP_TYPE_ROOT_PORT)
		*smpss = 0;

	if (*smpss > dev->pcie_mpss)

@@ -1583,6 +1587,22 @@ static void pcie_write_mrrs(struct pci_dev *dev)
		 "with pci=pcie_bus_safe.\n");
}

static void pcie_bus_detect_mps(struct pci_dev *dev)
{
	struct pci_dev *bridge = dev->bus->self;
	int mps, p_mps;

	if (!bridge)
		return;

	mps = pcie_get_mps(dev);
	p_mps = pcie_get_mps(bridge);

	if (mps != p_mps)
		dev_warn(&dev->dev, "Max Payload Size %d, but upstream %s set to %d; if necessary, use \"pci=pcie_bus_safe\" and report a bug\n",
			 mps, pci_name(bridge), p_mps);
}

static int pcie_bus_configure_set(struct pci_dev *dev, void *data)
{
	int mps, orig_mps;

@@ -1590,13 +1610,18 @@ static int pcie_bus_configure_set(struct pci_dev *dev, void *data)
	if (!pci_is_pcie(dev))
		return 0;

	if (pcie_bus_config == PCIE_BUS_TUNE_OFF) {
		pcie_bus_detect_mps(dev);
		return 0;
	}

	mps = 128 << *(u8 *)data;
	orig_mps = pcie_get_mps(dev);

	pcie_write_mps(dev, mps);
	pcie_write_mrrs(dev);

	dev_info(&dev->dev, "PCI-E Max Payload Size set to %4d/%4d (was %4d), "
	dev_info(&dev->dev, "Max Payload Size set to %4d/%4d (was %4d), "
		 "Max Read Rq %4d\n", pcie_get_mps(dev), 128 << dev->pcie_mpss,
		 orig_mps, pcie_get_readrq(dev));

@@ -1607,25 +1632,25 @@ static int pcie_bus_configure_set(struct pci_dev *dev, void *data)
 * parents then children fashion. If this changes, then this code will not
 * work as designed.
 */
void pcie_bus_configure_settings(struct pci_bus *bus, u8 mpss)
void pcie_bus_configure_settings(struct pci_bus *bus)
{
	u8 smpss;

	if (!bus->self)
		return;

	if (!pci_is_pcie(bus->self))
		return;

	if (pcie_bus_config == PCIE_BUS_TUNE_OFF)
		return;

	/* FIXME - Peer to peer DMA is possible, though the endpoint would need
	 * to be aware to the MPS of the destination. To work around this,
	 * to be aware of the MPS of the destination. To work around this,
	 * simply force the MPS of the entire system to the smallest possible.
	 */
	if (pcie_bus_config == PCIE_BUS_PEER2PEER)
		smpss = 0;

	if (pcie_bus_config == PCIE_BUS_SAFE) {
		smpss = mpss;
		smpss = bus->self->pcie_mpss;

		pcie_find_smpss(bus->self, &smpss);
		pci_walk_bus(bus, pcie_find_smpss, &smpss);

@@ -1979,7 +2004,6 @@ unsigned int __ref pci_rescan_bus(struct pci_bus *bus)

	max = pci_scan_child_bus(bus);
	pci_assign_unassigned_bus_resources(bus);
	pci_enable_bridges(bus);
	pci_bus_add_devices(bus);

	return max;
@@ -3126,9 +3126,6 @@ static int reset_intel_generic_dev(struct pci_dev *dev, int probe)

static int reset_intel_82599_sfp_virtfn(struct pci_dev *dev, int probe)
{
	int i;
	u16 status;

	/*
	 * http://www.intel.com/content/dam/doc/datasheet/82599-10-gbe-controller-datasheet.pdf
	 *

@@ -3140,20 +3137,9 @@ static int reset_intel_82599_sfp_virtfn(struct pci_dev *dev, int probe)
	if (probe)
		return 0;

	/* Wait for Transaction Pending bit clean */
	for (i = 0; i < 4; i++) {
		if (i)
			msleep((1 << (i - 1)) * 100);
	if (!pci_wait_for_pending_transaction(dev))
		dev_err(&dev->dev, "transaction is not cleared; proceeding with reset anyway\n");

		pcie_capability_read_word(dev, PCI_EXP_DEVSTA, &status);
		if (!(status & PCI_EXP_DEVSTA_TRPND))
			goto clear;
	}

	dev_err(&dev->dev, "transaction is not cleared; "
		"proceeding with reset anyway\n");

clear:
	pcie_capability_set_word(dev, PCI_EXP_DEVCTL, PCI_EXP_DEVCTL_BCR_FLR);

	msleep(100);
@@ -3208,6 +3194,83 @@ reset_complete:
	return 0;
}

/*
 * Device-specific reset method for Chelsio T4-based adapters.
 */
static int reset_chelsio_generic_dev(struct pci_dev *dev, int probe)
{
	u16 old_command;
	u16 msix_flags;

	/*
	 * If this isn't a Chelsio T4-based device, return -ENOTTY indicating
	 * that we have no device-specific reset method.
	 */
	if ((dev->device & 0xf000) != 0x4000)
		return -ENOTTY;

	/*
	 * If this is the "probe" phase, return 0 indicating that we can
	 * reset this device.
	 */
	if (probe)
		return 0;

	/*
	 * T4 can wedge if there are DMAs in flight within the chip and Bus
	 * Master has been disabled. We need to have it on till the Function
	 * Level Reset completes. (BUS_MASTER is disabled in
	 * pci_reset_function()).
	 */
	pci_read_config_word(dev, PCI_COMMAND, &old_command);
	pci_write_config_word(dev, PCI_COMMAND,
			      old_command | PCI_COMMAND_MASTER);

	/*
	 * Perform the actual device function reset, saving and restoring
	 * configuration information around the reset.
	 */
	pci_save_state(dev);

	/*
	 * T4 also suffers a Head-Of-Line blocking problem if MSI-X interrupts
	 * are disabled when an MSI-X interrupt message needs to be delivered.
	 * So we briefly re-enable MSI-X interrupts for the duration of the
	 * FLR. The pci_restore_state() below will restore the original
	 * MSI-X state.
	 */
	pci_read_config_word(dev, dev->msix_cap+PCI_MSIX_FLAGS, &msix_flags);
	if ((msix_flags & PCI_MSIX_FLAGS_ENABLE) == 0)
		pci_write_config_word(dev, dev->msix_cap+PCI_MSIX_FLAGS,
				      msix_flags |
				      PCI_MSIX_FLAGS_ENABLE |
				      PCI_MSIX_FLAGS_MASKALL);

	/*
	 * Start of pcie_flr() code sequence. This reset code is a copy of
	 * the guts of pcie_flr() because that's not an exported function.
	 */
	if (!pci_wait_for_pending_transaction(dev))
		dev_err(&dev->dev, "transaction is not cleared; proceeding with reset anyway\n");

	pcie_capability_set_word(dev, PCI_EXP_DEVCTL, PCI_EXP_DEVCTL_BCR_FLR);
	msleep(100);

	/*
	 * End of pcie_flr() code sequence.
	 */

	/*
	 * Restore the configuration information (BAR values, etc.) including
	 * the original PCI Configuration Space Command word, and return
	 * success.
	 */
	pci_restore_state(dev);
	pci_write_config_word(dev, PCI_COMMAND, old_command);
	return 0;
}

#define PCI_DEVICE_ID_INTEL_82599_SFP_VF	0x10ed
#define PCI_DEVICE_ID_INTEL_IVB_M_VGA		0x0156
#define PCI_DEVICE_ID_INTEL_IVB_M2_VGA		0x0166

@@ -3221,6 +3284,8 @@ static const struct pci_dev_reset_methods pci_dev_reset_methods[] = {
		 reset_ivb_igd },
	{ PCI_VENDOR_ID_INTEL, PCI_ANY_ID,
		reset_intel_generic_dev },
	{ PCI_VENDOR_ID_CHELSIO, PCI_ANY_ID,
		reset_chelsio_generic_dev },
	{ 0 }
};
@ -3295,11 +3360,61 @@ struct pci_dev *pci_get_dma_source(struct pci_dev *dev)
|
|||
return pci_dev_get(dev);
|
||||
}
|
||||
|
||||
/*
|
||||
* AMD has indicated that the devices below do not support peer-to-peer
|
||||
* in any system where they are found in the southbridge with an AMD
|
||||
* IOMMU in the system. Multifunction devices that do not support
|
||||
* peer-to-peer between functions can claim to support a subset of ACS.
|
||||
* Such devices effectively enable request redirect (RR) and completion
|
||||
* redirect (CR) since all transactions are redirected to the upstream
|
||||
* root complex.
|
||||
*
|
||||
* http://permalink.gmane.org/gmane.comp.emulators.kvm.devel/94086
|
||||
* http://permalink.gmane.org/gmane.comp.emulators.kvm.devel/94102
|
||||
* http://permalink.gmane.org/gmane.comp.emulators.kvm.devel/99402
|
||||
*
|
||||
* 1002:4385 SBx00 SMBus Controller
|
||||
* 1002:439c SB7x0/SB8x0/SB9x0 IDE Controller
|
||||
* 1002:4383 SBx00 Azalia (Intel HDA)
|
||||
* 1002:439d SB7x0/SB8x0/SB9x0 LPC host controller
|
||||
* 1002:4384 SBx00 PCI to PCI Bridge
|
||||
* 1002:4399 SB7x0/SB8x0/SB9x0 USB OHCI2 Controller
|
||||
*/
|
||||
static int pci_quirk_amd_sb_acs(struct pci_dev *dev, u16 acs_flags)
|
||||
{
|
||||
#ifdef CONFIG_ACPI
|
||||
struct acpi_table_header *header = NULL;
|
||||
acpi_status status;
|
||||
|
||||
/* Targeting multifunction devices on the SB (appears on root bus) */
|
||||
if (!dev->multifunction || !pci_is_root_bus(dev->bus))
|
||||
return -ENODEV;
|
||||
|
||||
/* The IVRS table describes the AMD IOMMU */
|
||||
status = acpi_get_table("IVRS", 0, &header);
|
||||
if (ACPI_FAILURE(status))
|
||||
return -ENODEV;
|
||||
|
||||
/* Filter out flags not applicable to multifunction */
|
||||
acs_flags &= (PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_EC | PCI_ACS_DT);
|
||||
|
||||
return acs_flags & ~(PCI_ACS_RR | PCI_ACS_CR) ? 0 : 1;
|
||||
#else
|
||||
return -ENODEV;
|
||||
#endif
|
||||
}
|
||||
|
||||
static const struct pci_dev_acs_enabled {
|
||||
u16 vendor;
|
||||
u16 device;
|
||||
int (*acs_enabled)(struct pci_dev *dev, u16 acs_flags);
|
||||
} pci_dev_acs_enabled[] = {
|
||||
{ PCI_VENDOR_ID_ATI, 0x4385, pci_quirk_amd_sb_acs },
|
||||
{ PCI_VENDOR_ID_ATI, 0x439c, pci_quirk_amd_sb_acs },
|
||||
{ PCI_VENDOR_ID_ATI, 0x4383, pci_quirk_amd_sb_acs },
|
||||
{ PCI_VENDOR_ID_ATI, 0x439d, pci_quirk_amd_sb_acs },
|
||||
{ PCI_VENDOR_ID_ATI, 0x4384, pci_quirk_amd_sb_acs },
|
||||
{ PCI_VENDOR_ID_ATI, 0x4399, pci_quirk_amd_sb_acs },
|
||||
{ 0 }
|
||||
};
|
||||
|
||||
|
|
|
@@ -814,14 +814,14 @@ static void pbus_size_io(struct pci_bus *bus, resource_size_t min_size,
 {
 	struct pci_dev *dev;
 	struct resource *b_res = find_free_bus_resource(bus, IORESOURCE_IO);
-	unsigned long size = 0, size0 = 0, size1 = 0;
+	resource_size_t size = 0, size0 = 0, size1 = 0;
 	resource_size_t children_add_size = 0;
-	resource_size_t min_align, io_align, align;
+	resource_size_t min_align, align;

 	if (!b_res)
 		return;

-	io_align = min_align = window_alignment(bus, IORESOURCE_IO);
+	min_align = window_alignment(bus, IORESOURCE_IO);
 	list_for_each_entry(dev, &bus->devices, bus_list) {
 		int i;

@@ -848,9 +848,6 @@ static void pbus_size_io(struct pci_bus *bus, resource_size_t min_size,
 		}
 	}

-	if (min_align > io_align)
-		min_align = io_align;
-
 	size0 = calculate_iosize(size, min_size, size1,
 			resource_size(b_res), min_align);
 	if (children_add_size > add_size)

@@ -874,8 +871,9 @@ static void pbus_size_io(struct pci_bus *bus, resource_size_t min_size,
 		add_to_list(realloc_head, bus->self, b_res, size1-size0,
 			    min_align);
 		dev_printk(KERN_DEBUG, &bus->self->dev, "bridge window "
-			   "%pR to %pR add_size %lx\n", b_res,
-			   &bus->busn_res, size1-size0);
+			   "%pR to %pR add_size %llx\n", b_res,
+			   &bus->busn_res,
+			   (unsigned long long)size1-size0);
 	}
 }

@@ -905,6 +903,8 @@ static inline resource_size_t calculate_mem_align(resource_size_t *aligns,
  * pbus_size_mem() - size the memory window of a given bus
  *
  * @bus : the bus
+ * @mask: mask the resource flag, then compare it with type
+ * @type: the type of free resource from bridge
  * @min_size : the minimum memory window that must be allocated
  * @add_size : additional optional memory window
  * @realloc_head : track the additional memory window on this list

@@ -1364,39 +1364,21 @@ static void pci_bus_dump_resources(struct pci_bus *bus)
 	}
 }

-static int __init pci_bus_get_depth(struct pci_bus *bus)
+static int pci_bus_get_depth(struct pci_bus *bus)
 {
 	int depth = 0;
-	struct pci_dev *dev;
+	struct pci_bus *child_bus;

-	list_for_each_entry(dev, &bus->devices, bus_list) {
+	list_for_each_entry(child_bus, &bus->children, node) {
 		int ret;
-		struct pci_bus *b = dev->subordinate;
-		if (!b)
-			continue;

-		ret = pci_bus_get_depth(b);
+		ret = pci_bus_get_depth(child_bus);
 		if (ret + 1 > depth)
 			depth = ret + 1;
 	}

 	return depth;
 }
-static int __init pci_get_max_depth(void)
-{
-	int depth = 0;
-	struct pci_bus *bus;
-
-	list_for_each_entry(bus, &pci_root_buses, node) {
-		int ret;
-
-		ret = pci_bus_get_depth(bus);
-		if (ret > depth)
-			depth = ret;
-	}
-
-	return depth;
-}

 /*
  * -1: undefined, will auto detect later

@@ -1413,7 +1395,7 @@ enum enable_type {
 	auto_enabled,
 };

-static enum enable_type pci_realloc_enable __initdata = undefined;
+static enum enable_type pci_realloc_enable = undefined;
 void __init pci_realloc_get_opt(char *str)
 {
 	if (!strncmp(str, "off", 3))

@@ -1421,45 +1403,64 @@ void __init pci_realloc_get_opt(char *str)
 	else if (!strncmp(str, "on", 2))
 		pci_realloc_enable = user_enabled;
 }
-static bool __init pci_realloc_enabled(void)
+static bool pci_realloc_enabled(enum enable_type enable)
 {
-	return pci_realloc_enable >= user_enabled;
+	return enable >= user_enabled;
 }

-static void __init pci_realloc_detect(void)
-{
 #if defined(CONFIG_PCI_IOV) && defined(CONFIG_PCI_REALLOC_ENABLE_AUTO)
-	struct pci_dev *dev = NULL;
-
-	if (pci_realloc_enable != undefined)
-		return;
-
-	for_each_pci_dev(dev) {
-		int i;
-
-		for (i = PCI_IOV_RESOURCES; i <= PCI_IOV_RESOURCE_END; i++) {
-			struct resource *r = &dev->resource[i];
-
-			/* Not assigned, or rejected by kernel ? */
-			if (r->flags && !r->start) {
-				pci_realloc_enable = auto_enabled;
-
-				return;
-			}
-		}
-	}
-#endif
+static int iov_resources_unassigned(struct pci_dev *dev, void *data)
+{
+	int i;
+	bool *unassigned = data;
+
+	for (i = PCI_IOV_RESOURCES; i <= PCI_IOV_RESOURCE_END; i++) {
+		struct resource *r = &dev->resource[i];
+		struct pci_bus_region region;
+
+		/* Not assigned or rejected by kernel? */
+		if (!r->flags)
+			continue;
+
+		pcibios_resource_to_bus(dev, &region, r);
+		if (!region.start) {
+			*unassigned = true;
+			return 1; /* return early from pci_walk_bus() */
+		}
+	}
+
+	return 0;
+}
+
+static enum enable_type pci_realloc_detect(struct pci_bus *bus,
+					   enum enable_type enable_local)
+{
+	bool unassigned = false;
+
+	if (enable_local != undefined)
+		return enable_local;
+
+	pci_walk_bus(bus, iov_resources_unassigned, &unassigned);
+	if (unassigned)
+		return auto_enabled;
+
+	return enable_local;
+}
+#else
+static enum enable_type pci_realloc_detect(struct pci_bus *bus,
+					   enum enable_type enable_local)
+{
+	return enable_local;
 }
+#endif

 /*
  * first try will not touch pci bridge res
  * second and later try will clear small leaf bridge res
  * will stop at the max depth if it can not find a good one
  */
-void __init
-pci_assign_unassigned_resources(void)
+void pci_assign_unassigned_root_bus_resources(struct pci_bus *bus)
 {
-	struct pci_bus *bus;
 	LIST_HEAD(realloc_head); /* list of resources that
 					want additional resources */
 	struct list_head *add_list = NULL;

@@ -1470,15 +1471,17 @@ pci_assign_unassigned_resources(void)
 	unsigned long type_mask = IORESOURCE_IO | IORESOURCE_MEM |
 				  IORESOURCE_PREFETCH;
 	int pci_try_num = 1;
+	enum enable_type enable_local;

 	/* don't realloc if asked to do so */
-	pci_realloc_detect();
-	if (pci_realloc_enabled()) {
-		int max_depth = pci_get_max_depth();
+	enable_local = pci_realloc_detect(bus, pci_realloc_enable);
+	if (pci_realloc_enabled(enable_local)) {
+		int max_depth = pci_bus_get_depth(bus);

 		pci_try_num = max_depth + 1;
-		printk(KERN_DEBUG "PCI: max bus depth: %d pci_try_num: %d\n",
-		       max_depth, pci_try_num);
+		dev_printk(KERN_DEBUG, &bus->dev,
+			   "max bus depth: %d pci_try_num: %d\n",
+			   max_depth, pci_try_num);
 	}

 again:

@@ -1490,32 +1493,30 @@ again:
 		add_list = &realloc_head;
 	/* Depth first, calculate sizes and alignments of all
 	   subordinate buses. */
-	list_for_each_entry(bus, &pci_root_buses, node)
-		__pci_bus_size_bridges(bus, add_list);
+	__pci_bus_size_bridges(bus, add_list);

 	/* Depth last, allocate resources and update the hardware. */
-	list_for_each_entry(bus, &pci_root_buses, node)
-		__pci_bus_assign_resources(bus, add_list, &fail_head);
+	__pci_bus_assign_resources(bus, add_list, &fail_head);
 	if (add_list)
 		BUG_ON(!list_empty(add_list));
 	tried_times++;

 	/* any device complain? */
 	if (list_empty(&fail_head))
-		goto enable_and_dump;
+		goto dump;

 	if (tried_times >= pci_try_num) {
-		if (pci_realloc_enable == undefined)
-			printk(KERN_INFO "Some PCI device resources are unassigned, try booting with pci=realloc\n");
-		else if (pci_realloc_enable == auto_enabled)
-			printk(KERN_INFO "Automatically enabled pci realloc, if you have problem, try booting with pci=realloc=off\n");
+		if (enable_local == undefined)
+			dev_info(&bus->dev, "Some PCI device resources are unassigned, try booting with pci=realloc\n");
+		else if (enable_local == auto_enabled)
+			dev_info(&bus->dev, "Automatically enabled pci realloc, if you have problem, try booting with pci=realloc=off\n");

 		free_list(&fail_head);
-		goto enable_and_dump;
+		goto dump;
 	}

-	printk(KERN_DEBUG "PCI: No. %d try to assign unassigned res\n",
-	       tried_times + 1);
+	dev_printk(KERN_DEBUG, &bus->dev,
+		   "No. %d try to assign unassigned res\n", tried_times + 1);

 	/* third times and later will not check if it is leaf */
 	if ((tried_times + 1) > 2)

@@ -1525,12 +1526,11 @@ again:
 	 * Try to release leaf bridge's resources that don't fit the resources
 	 * of child devices under that bridge
 	 */
-	list_for_each_entry(fail_res, &fail_head, list) {
-		bus = fail_res->dev->bus;
-		pci_bus_release_bridge_resources(bus,
+	list_for_each_entry(fail_res, &fail_head, list)
+		pci_bus_release_bridge_resources(fail_res->dev->bus,
 						 fail_res->flags & type_mask,
 						 rel_type);
-	}

 	/* restore size and flags */
 	list_for_each_entry(fail_res, &fail_head, list) {
 		struct resource *res = fail_res->res;

@@ -1545,14 +1545,17 @@ again:

 	goto again;

-enable_and_dump:
-	/* Depth last, update the hardware. */
-	list_for_each_entry(bus, &pci_root_buses, node)
-		pci_enable_bridges(bus);
-
+dump:
 	/* dump the resource on buses */
-	list_for_each_entry(bus, &pci_root_buses, node)
-		pci_bus_dump_resources(bus);
+	pci_bus_dump_resources(bus);
+}
+
+void __init pci_assign_unassigned_resources(void)
+{
+	struct pci_bus *root_bus;
+
+	list_for_each_entry(root_bus, &pci_root_buses, node)
+		pci_assign_unassigned_root_bus_resources(root_bus);
 }

 void pci_assign_unassigned_bridge_resources(struct pci_dev *bridge)

@@ -1589,13 +1592,11 @@ again:
 	 * Try to release leaf bridge's resources that don't fit the resources
 	 * of child devices under that bridge
 	 */
-	list_for_each_entry(fail_res, &fail_head, list) {
-		struct pci_bus *bus = fail_res->dev->bus;
-		unsigned long flags = fail_res->flags;
-
-		pci_bus_release_bridge_resources(bus, flags & type_mask,
+	list_for_each_entry(fail_res, &fail_head, list)
+		pci_bus_release_bridge_resources(fail_res->dev->bus,
+						 fail_res->flags & type_mask,
 						 whole_subtree);
-	}

 	/* restore size and flags */
 	list_for_each_entry(fail_res, &fail_head, list) {
 		struct resource *res = fail_res->res;

@@ -1615,7 +1616,6 @@ enable_all:
 	if (retval)
 		dev_err(&bridge->dev, "Error reenabling bridge (%d)\n", retval);
 	pci_set_master(bridge);
-	pci_enable_bridges(parent);
 }
 EXPORT_SYMBOL_GPL(pci_assign_unassigned_bridge_resources);
@@ -91,7 +91,6 @@ int __ref cb_alloc(struct pcmcia_socket *s)
 	if (s->tune_bridge)
 		s->tune_bridge(s, bus);

-	pci_enable_bridges(bus);
 	pci_bus_add_devices(bus);

 	return 0;
@@ -675,7 +675,7 @@ struct pci_driver {
 /* these external functions are only available when PCI support is enabled */
 #ifdef CONFIG_PCI

-void pcie_bus_configure_settings(struct pci_bus *bus, u8 smpss);
+void pcie_bus_configure_settings(struct pci_bus *bus);

 enum pcie_bus_config_types {
 	PCIE_BUS_TUNE_OFF,

@@ -914,6 +914,7 @@ bool pci_check_and_unmask_intx(struct pci_dev *dev);
 void pci_msi_off(struct pci_dev *dev);
 int pci_set_dma_max_seg_size(struct pci_dev *dev, unsigned int size);
 int pci_set_dma_seg_boundary(struct pci_dev *dev, unsigned long mask);
+int pci_wait_for_pending_transaction(struct pci_dev *dev);
 int pcix_get_max_mmrbc(struct pci_dev *dev);
 int pcix_get_mmrbc(struct pci_dev *dev);
 int pcix_set_mmrbc(struct pci_dev *dev, int mmrbc);

@@ -924,6 +925,11 @@ int pcie_set_mps(struct pci_dev *dev, int mps);
 int __pci_reset_function(struct pci_dev *dev);
 int __pci_reset_function_locked(struct pci_dev *dev);
 int pci_reset_function(struct pci_dev *dev);
+int pci_probe_reset_slot(struct pci_slot *slot);
+int pci_reset_slot(struct pci_slot *slot);
+int pci_probe_reset_bus(struct pci_bus *bus);
+int pci_reset_bus(struct pci_bus *bus);
+void pci_reset_bridge_secondary_bus(struct pci_dev *dev);
 void pci_update_resource(struct pci_dev *dev, int resno);
 int __must_check pci_assign_resource(struct pci_dev *dev, int i);
 int __must_check pci_reassign_resource(struct pci_dev *dev, int i, resource_size_t add_size, resource_size_t align);

@@ -1003,6 +1009,7 @@ int pci_claim_resource(struct pci_dev *, int);
 void pci_assign_unassigned_resources(void);
 void pci_assign_unassigned_bridge_resources(struct pci_dev *bridge);
 void pci_assign_unassigned_bus_resources(struct pci_bus *bus);
+void pci_assign_unassigned_root_bus_resources(struct pci_bus *bus);
 void pdev_enable_device(struct pci_dev *);
 int pci_enable_resources(struct pci_dev *, int mask);
 void pci_fixup_irqs(u8 (*)(struct pci_dev *, u8 *),

@@ -1043,7 +1050,6 @@ int __must_check pci_bus_alloc_resource(struct pci_bus *bus,
 						  resource_size_t,
 						  resource_size_t),
 			void *alignf_data);
-void pci_enable_bridges(struct pci_bus *bus);

 /* Proper probing supporting hot-pluggable devices */
 int __must_check __pci_register_driver(struct pci_driver *, struct module *,

@@ -1648,6 +1654,10 @@ int pcibios_set_pcie_reset_state(struct pci_dev *dev,
 int pcibios_add_device(struct pci_dev *dev);
 void pcibios_release_device(struct pci_dev *dev);

+#ifdef CONFIG_HIBERNATE_CALLBACKS
+extern struct dev_pm_ops pcibios_pm_ops;
+#endif
+
 #ifdef CONFIG_PCI_MMCONFIG
 void __init pci_mmcfg_early_init(void);
 void __init pci_mmcfg_late_init(void);
@@ -63,6 +63,9 @@ enum pcie_link_width {
  * @get_adapter_status: Called to see if an adapter is present in the slot or not.
  *	If this field is NULL, the value passed in the struct hotplug_slot_info
  *	will be used when this value is requested by a user.
+ * @reset_slot: Optional interface to allow override of a bus reset for the
+ *	slot for cases where a secondary bus reset can result in spurious
+ *	hotplug events or where a slot can be reset independent of the bus.
  *
  * The table of function pointers that is passed to the hotplug pci core by a
  * hotplug pci driver. These functions are called by the hotplug pci core when

@@ -80,6 +83,7 @@ struct hotplug_slot_ops {
 	int (*get_attention_status)	(struct hotplug_slot *slot, u8 *value);
 	int (*get_latch_status)		(struct hotplug_slot *slot, u8 *value);
 	int (*get_adapter_status)	(struct hotplug_slot *slot, u8 *value);
+	int (*reset_slot)		(struct hotplug_slot *slot, int probe);
 };

 /**
@@ -421,24 +421,24 @@
 #define PCI_EXP_TYPE_ROOT_PORT	0x4	/* Root Port */
 #define PCI_EXP_TYPE_UPSTREAM	0x5	/* Upstream Port */
 #define PCI_EXP_TYPE_DOWNSTREAM	0x6	/* Downstream Port */
-#define PCI_EXP_TYPE_PCI_BRIDGE	0x7	/* PCI/PCI-X Bridge */
-#define PCI_EXP_TYPE_PCIE_BRIDGE 0x8	/* PCI/PCI-X to PCIE Bridge */
+#define PCI_EXP_TYPE_PCI_BRIDGE	0x7	/* PCIe to PCI/PCI-X Bridge */
+#define PCI_EXP_TYPE_PCIE_BRIDGE 0x8	/* PCI/PCI-X to PCIe Bridge */
 #define PCI_EXP_TYPE_RC_END	0x9	/* Root Complex Integrated Endpoint */
 #define PCI_EXP_TYPE_RC_EC	0xa	/* Root Complex Event Collector */
 #define PCI_EXP_FLAGS_SLOT	0x0100	/* Slot implemented */
 #define PCI_EXP_FLAGS_IRQ	0x3e00	/* Interrupt message number */
 #define PCI_EXP_DEVCAP		4	/* Device capabilities */
-#define PCI_EXP_DEVCAP_PAYLOAD	0x07	/* Max_Payload_Size */
-#define PCI_EXP_DEVCAP_PHANTOM	0x18	/* Phantom functions */
-#define PCI_EXP_DEVCAP_EXT_TAG	0x20	/* Extended tags */
-#define PCI_EXP_DEVCAP_L0S	0x1c0	/* L0s Acceptable Latency */
-#define PCI_EXP_DEVCAP_L1	0xe00	/* L1 Acceptable Latency */
-#define PCI_EXP_DEVCAP_ATN_BUT	0x1000	/* Attention Button Present */
-#define PCI_EXP_DEVCAP_ATN_IND	0x2000	/* Attention Indicator Present */
-#define PCI_EXP_DEVCAP_PWR_IND	0x4000	/* Power Indicator Present */
-#define PCI_EXP_DEVCAP_RBER	0x8000	/* Role-Based Error Reporting */
-#define PCI_EXP_DEVCAP_PWR_VAL	0x3fc0000 /* Slot Power Limit Value */
-#define PCI_EXP_DEVCAP_PWR_SCL	0xc000000 /* Slot Power Limit Scale */
+#define PCI_EXP_DEVCAP_PAYLOAD	0x00000007 /* Max_Payload_Size */
+#define PCI_EXP_DEVCAP_PHANTOM	0x00000018 /* Phantom functions */
+#define PCI_EXP_DEVCAP_EXT_TAG	0x00000020 /* Extended tags */
+#define PCI_EXP_DEVCAP_L0S	0x000001c0 /* L0s Acceptable Latency */
+#define PCI_EXP_DEVCAP_L1	0x00000e00 /* L1 Acceptable Latency */
+#define PCI_EXP_DEVCAP_ATN_BUT	0x00001000 /* Attention Button Present */
+#define PCI_EXP_DEVCAP_ATN_IND	0x00002000 /* Attention Indicator Present */
+#define PCI_EXP_DEVCAP_PWR_IND	0x00004000 /* Power Indicator Present */
+#define PCI_EXP_DEVCAP_RBER	0x00008000 /* Role-Based Error Reporting */
+#define PCI_EXP_DEVCAP_PWR_VAL	0x03fc0000 /* Slot Power Limit Value */
+#define PCI_EXP_DEVCAP_PWR_SCL	0x0c000000 /* Slot Power Limit Scale */
 #define PCI_EXP_DEVCAP_FLR	0x10000000 /* Function Level Reset */
 #define PCI_EXP_DEVCTL		8	/* Device Control */
 #define PCI_EXP_DEVCTL_CERE	0x0001	/* Correctable Error Reporting En. */

@@ -454,16 +454,16 @@
 #define PCI_EXP_DEVCTL_READRQ	0x7000	/* Max_Read_Request_Size */
 #define PCI_EXP_DEVCTL_BCR_FLR	0x8000	/* Bridge Configuration Retry / FLR */
 #define PCI_EXP_DEVSTA		10	/* Device Status */
-#define PCI_EXP_DEVSTA_CED	0x01	/* Correctable Error Detected */
-#define PCI_EXP_DEVSTA_NFED	0x02	/* Non-Fatal Error Detected */
-#define PCI_EXP_DEVSTA_FED	0x04	/* Fatal Error Detected */
-#define PCI_EXP_DEVSTA_URD	0x08	/* Unsupported Request Detected */
-#define PCI_EXP_DEVSTA_AUXPD	0x10	/* AUX Power Detected */
-#define PCI_EXP_DEVSTA_TRPND	0x20	/* Transactions Pending */
+#define PCI_EXP_DEVSTA_CED	0x0001	/* Correctable Error Detected */
+#define PCI_EXP_DEVSTA_NFED	0x0002	/* Non-Fatal Error Detected */
+#define PCI_EXP_DEVSTA_FED	0x0004	/* Fatal Error Detected */
+#define PCI_EXP_DEVSTA_URD	0x0008	/* Unsupported Request Detected */
+#define PCI_EXP_DEVSTA_AUXPD	0x0010	/* AUX Power Detected */
+#define PCI_EXP_DEVSTA_TRPND	0x0020	/* Transactions Pending */
 #define PCI_EXP_LNKCAP		12	/* Link Capabilities */
 #define PCI_EXP_LNKCAP_SLS	0x0000000f /* Supported Link Speeds */
-#define PCI_EXP_LNKCAP_SLS_2_5GB 0x1	/* LNKCAP2 SLS Vector bit 0 (2.5GT/s) */
-#define PCI_EXP_LNKCAP_SLS_5_0GB 0x2	/* LNKCAP2 SLS Vector bit 1 (5.0GT/s) */
+#define PCI_EXP_LNKCAP_SLS_2_5GB 0x00000001 /* LNKCAP2 SLS Vector bit 0 */
+#define PCI_EXP_LNKCAP_SLS_5_0GB 0x00000002 /* LNKCAP2 SLS Vector bit 1 */
 #define PCI_EXP_LNKCAP_MLW	0x000003f0 /* Maximum Link Width */
 #define PCI_EXP_LNKCAP_ASPMS	0x00000c00 /* ASPM Support */
 #define PCI_EXP_LNKCAP_L0SEL	0x00007000 /* L0s Exit Latency */

@@ -475,21 +475,21 @@
 #define PCI_EXP_LNKCAP_PN	0xff000000 /* Port Number */
 #define PCI_EXP_LNKCTL		16	/* Link Control */
 #define PCI_EXP_LNKCTL_ASPMC	0x0003	/* ASPM Control */
-#define PCI_EXP_LNKCTL_ASPM_L0S	0x01	/* L0s Enable */
-#define PCI_EXP_LNKCTL_ASPM_L1	0x02	/* L1 Enable */
+#define PCI_EXP_LNKCTL_ASPM_L0S	0x0001	/* L0s Enable */
+#define PCI_EXP_LNKCTL_ASPM_L1	0x0002	/* L1 Enable */
 #define PCI_EXP_LNKCTL_RCB	0x0008	/* Read Completion Boundary */
 #define PCI_EXP_LNKCTL_LD	0x0010	/* Link Disable */
 #define PCI_EXP_LNKCTL_RL	0x0020	/* Retrain Link */
 #define PCI_EXP_LNKCTL_CCC	0x0040	/* Common Clock Configuration */
 #define PCI_EXP_LNKCTL_ES	0x0080	/* Extended Synch */
-#define PCI_EXP_LNKCTL_CLKREQ_EN 0x100	/* Enable clkreq */
+#define PCI_EXP_LNKCTL_CLKREQ_EN 0x0100	/* Enable clkreq */
 #define PCI_EXP_LNKCTL_HAWD	0x0200	/* Hardware Autonomous Width Disable */
 #define PCI_EXP_LNKCTL_LBMIE	0x0400	/* Link Bandwidth Management Interrupt Enable */
 #define PCI_EXP_LNKCTL_LABIE	0x0800	/* Link Autonomous Bandwidth Interrupt Enable */
 #define PCI_EXP_LNKSTA		18	/* Link Status */
 #define PCI_EXP_LNKSTA_CLS	0x000f	/* Current Link Speed */
-#define PCI_EXP_LNKSTA_CLS_2_5GB 0x01	/* Current Link Speed 2.5GT/s */
-#define PCI_EXP_LNKSTA_CLS_5_0GB 0x02	/* Current Link Speed 5.0GT/s */
+#define PCI_EXP_LNKSTA_CLS_2_5GB 0x0001	/* Current Link Speed 2.5GT/s */
+#define PCI_EXP_LNKSTA_CLS_5_0GB 0x0002	/* Current Link Speed 5.0GT/s */
 #define PCI_EXP_LNKSTA_NLW	0x03f0	/* Negotiated Link Width */
 #define PCI_EXP_LNKSTA_NLW_SHIFT 4	/* start of NLW mask in link status */
 #define PCI_EXP_LNKSTA_LT	0x0800	/* Link Training */

@@ -534,44 +534,49 @@
 #define PCI_EXP_SLTSTA_EIS	0x0080	/* Electromechanical Interlock Status */
 #define PCI_EXP_SLTSTA_DLLSC	0x0100	/* Data Link Layer State Changed */
 #define PCI_EXP_RTCTL		28	/* Root Control */
-#define PCI_EXP_RTCTL_SECEE	0x01	/* System Error on Correctable Error */
-#define PCI_EXP_RTCTL_SENFEE	0x02	/* System Error on Non-Fatal Error */
-#define PCI_EXP_RTCTL_SEFEE	0x04	/* System Error on Fatal Error */
-#define PCI_EXP_RTCTL_PMEIE	0x08	/* PME Interrupt Enable */
-#define PCI_EXP_RTCTL_CRSSVE	0x10	/* CRS Software Visibility Enable */
+#define PCI_EXP_RTCTL_SECEE	0x0001	/* System Error on Correctable Error */
+#define PCI_EXP_RTCTL_SENFEE	0x0002	/* System Error on Non-Fatal Error */
+#define PCI_EXP_RTCTL_SEFEE	0x0004	/* System Error on Fatal Error */
+#define PCI_EXP_RTCTL_PMEIE	0x0008	/* PME Interrupt Enable */
+#define PCI_EXP_RTCTL_CRSSVE	0x0010	/* CRS Software Visibility Enable */
 #define PCI_EXP_RTCAP		30	/* Root Capabilities */
 #define PCI_EXP_RTSTA		32	/* Root Status */
-#define PCI_EXP_RTSTA_PME	0x10000	/* PME status */
-#define PCI_EXP_RTSTA_PENDING	0x20000	/* PME pending */
+#define PCI_EXP_RTSTA_PME	0x00010000 /* PME status */
+#define PCI_EXP_RTSTA_PENDING	0x00020000 /* PME pending */
 /*
- * Note that the following PCI Express 'Capability Structure' registers
- * were introduced with 'Capability Version' 0x2 (v2). These registers
- * do not exist on devices with Capability Version 1. Use pci_pcie_cap2()
- * to use these fields safely.
+ * The Device Capabilities 2, Device Status 2, Device Control 2,
+ * Link Capabilities 2, Link Status 2, Link Control 2,
+ * Slot Capabilities 2, Slot Status 2, and Slot Control 2 registers
+ * are only present on devices with PCIe Capability version 2.
+ * Use pcie_capability_read_word() and similar interfaces to use them
+ * safely.
 */
 #define PCI_EXP_DEVCAP2		36	/* Device Capabilities 2 */
-#define PCI_EXP_DEVCAP2_ARI	0x20	/* Alternative Routing-ID */
-#define PCI_EXP_DEVCAP2_LTR	0x800	/* Latency tolerance reporting */
-#define PCI_EXP_OBFF_MASK	0xc0000	/* OBFF support mechanism */
-#define PCI_EXP_OBFF_MSG	0x40000	/* New message signaling */
-#define PCI_EXP_OBFF_WAKE	0x80000	/* Re-use WAKE# for OBFF */
+#define PCI_EXP_DEVCAP2_ARI		0x00000020 /* Alternative Routing-ID */
+#define PCI_EXP_DEVCAP2_LTR		0x00000800 /* Latency tolerance reporting */
+#define PCI_EXP_DEVCAP2_OBFF_MASK	0x000c0000 /* OBFF support mechanism */
+#define PCI_EXP_DEVCAP2_OBFF_MSG	0x00040000 /* New message signaling */
+#define PCI_EXP_DEVCAP2_OBFF_WAKE	0x00080000 /* Re-use WAKE# for OBFF */
 #define PCI_EXP_DEVCTL2		40	/* Device Control 2 */
-#define PCI_EXP_DEVCTL2_ARI	0x20	/* Alternative Routing-ID */
-#define PCI_EXP_IDO_REQ_EN	0x100	/* ID-based ordering request enable */
-#define PCI_EXP_IDO_CMP_EN	0x200	/* ID-based ordering completion enable */
-#define PCI_EXP_LTR_EN		0x400	/* Latency tolerance reporting */
-#define PCI_EXP_OBFF_MSGA_EN	0x2000	/* OBFF enable with Message type A */
-#define PCI_EXP_OBFF_MSGB_EN	0x4000	/* OBFF enable with Message type B */
-#define PCI_EXP_OBFF_WAKE_EN	0x6000	/* OBFF using WAKE# signaling */
+#define PCI_EXP_DEVCTL2_ARI		0x20	/* Alternative Routing-ID */
+#define PCI_EXP_DEVCTL2_IDO_REQ_EN	0x0100	/* Allow IDO for requests */
+#define PCI_EXP_DEVCTL2_IDO_CMP_EN	0x0200	/* Allow IDO for completions */
+#define PCI_EXP_DEVCTL2_LTR_EN		0x0400	/* Enable LTR mechanism */
+#define PCI_EXP_DEVCTL2_OBFF_MSGA_EN	0x2000	/* Enable OBFF Message type A */
+#define PCI_EXP_DEVCTL2_OBFF_MSGB_EN	0x4000	/* Enable OBFF Message type B */
+#define PCI_EXP_DEVCTL2_OBFF_WAKE_EN	0x6000	/* OBFF using WAKE# signaling */
 #define PCI_EXP_DEVSTA2		42	/* Device Status 2 */
 #define PCI_CAP_EXP_ENDPOINT_SIZEOF_V2	44	/* v2 endpoints end here */
-#define PCI_EXP_LNKCAP2		44	/* Link Capability 2 */
-#define PCI_EXP_LNKCAP2_SLS_2_5GB 0x02	/* Supported Link Speed 2.5GT/s */
-#define PCI_EXP_LNKCAP2_SLS_5_0GB 0x04	/* Supported Link Speed 5.0GT/s */
-#define PCI_EXP_LNKCAP2_SLS_8_0GB 0x08	/* Supported Link Speed 8.0GT/s */
-#define PCI_EXP_LNKCAP2_CROSSLINK 0x100 /* Crosslink supported */
+#define PCI_EXP_LNKCAP2		44	/* Link Capabilities 2 */
+#define PCI_EXP_LNKCAP2_SLS_2_5GB	0x00000002 /* Supported Speed 2.5GT/s */
+#define PCI_EXP_LNKCAP2_SLS_5_0GB	0x00000004 /* Supported Speed 5.0GT/s */
+#define PCI_EXP_LNKCAP2_SLS_8_0GB	0x00000008 /* Supported Speed 8.0GT/s */
+#define PCI_EXP_LNKCAP2_CROSSLINK	0x00000100 /* Crosslink supported */
 #define PCI_EXP_LNKCTL2		48	/* Link Control 2 */
 #define PCI_EXP_LNKSTA2		50	/* Link Status 2 */
 #define PCI_EXP_SLTCAP2		52	/* Slot Capabilities 2 */
 #define PCI_EXP_SLTCTL2		56	/* Slot Control 2 */
 #define PCI_EXP_SLTSTA2		58	/* Slot Status 2 */

 /* Extended Capabilities (PCI-X 2.0 and Express) */
 #define PCI_EXT_CAP_ID(header)		(header & 0x0000ffff)