pci-v6.2-changes
Merge tag 'pci-v6.2-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci

Pull PCI updates from Bjorn Helgaas:

 "Enumeration:

   - Squash portdrv_{core,pci}.c into portdrv.c to ease maintenance and
     make more things static.

   - Make portdrv bind to Switch Ports that have AER. Previously, if
     these Ports lacked MSI/MSI-X, portdrv failed to bind, which meant
     the Ports couldn't be suspended to low-power states. AER on these
     Ports doesn't use interrupts, and the AER driver doesn't need to
     claim them.

   - Assign PCI domain IDs using ida_alloc(), which makes host bridge
     add/remove work better.

  Resource management:

   - To work better with recent BIOSes that use EfiMemoryMappedIO for
     PCI host bridge apertures, remove those regions from the E820 map
     (E820 entries normally prevent us from allocating BARs). In v5.19,
     we added some quirks to disable E820 checking, but that's not very
     maintainable. EfiMemoryMappedIO means the OS needs to map the
     region for use by EFI runtime services; it shouldn't prevent the
     OS from using it.

  PCIe native device hotplug:

   - Build pciehp by default if USB4 is enabled, since Thunderbolt/USB4
     PCIe tunneling depends on native PCIe hotplug.

   - Enable Command Completed Interrupt only if supported to avoid user
     confusion from lspci output that says this is enabled but not
     supported.

   - Prevent pciehp from binding to Switch Upstream Ports; this
     happened because of interaction with acpiphp and caused devices
     below the Upstream Port to disappear.

  Power management:

   - Convert AGP drivers to generic power management. We hope to remove
     legacy power management from the PCI core eventually.

  Virtualization:

   - Fix pci_device_is_present(), which previously always returned
     "false" for VFs, causing virtio hangs when unbinding the driver.

  Miscellaneous:

   - Convert drivers to gpiod API to prepare for dropping some legacy
     code.

   - Fix DOE fencepost error for the maximum data object length.

  Baikal-T1 PCIe controller driver:

   - Add driver and DT bindings.

  Broadcom STB PCIe controller driver:

   - Enable Multi-MSI.

   - Delay 100ms after PERST# deassert to allow power and clocks to
     stabilize.

   - Configure Read Completion Boundary to 64 bytes.

  Freescale i.MX6 PCIe controller driver:

   - Initialize PHY before deasserting core reset to fix a regression
     in v6.0 on boards where the PHY provides the reference.

   - Fix imx6sx and imx8mq clock names in DT schema.

  Intel VMD host bridge driver:

   - Fix Secondary Bus Reset on VMD bridges, which allows reset of NVMe
     SSDs in VT-d pass-through scenarios.

   - Disable MSI remapping, which gets re-enabled by firmware during
     suspend/resume.

  MediaTek PCIe Gen3 controller driver:

   - Add MT7986 and MT8195 support.

  Qualcomm PCIe controller driver:

   - Add SC8280XP/SA8540P basic interconnect support.

  Rockchip DesignWare PCIe controller driver:

   - Base DT schema on common Synopsys schema.

  Synopsys DesignWare PCIe core:

   - Collect DT items shared between Root Port and Endpoint (PERST
     GPIO, PHY info, clocks, resets, link speed, number of lanes,
     number of iATU windows, interrupt info, etc) to
     snps,dw-pcie-common.yaml.

   - Add dma-ranges support for Root Ports and Endpoints.

   - Consolidate DT resource retrieval for "dbi", "dbi2", "atu", etc.
     to reduce code duplication.

   - Add generic names for clocks and resets to encourage more
     consistent naming across drivers using DesignWare IP.

   - Stop advertising PTM Responder role for Endpoints, which aren't
     allowed to be responders.

  TI J721E PCIe driver:

   - Add j721s2 host mode ID to DT schema.

   - Add interrupt properties to DT schema.

  Toshiba Visconti PCIe controller driver:

   - Fix interrupts array max constraints in DT schema"

* tag 'pci-v6.2-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (95 commits)
  x86/PCI: Use pr_info() when possible
  x86/PCI: Fix log message typo
  x86/PCI: Tidy E820 removal messages
  PCI: Skip allocate_resource() if too little space available
  efi/x86: Remove EfiMemoryMappedIO from E820 map
  PCI/portdrv: Allow AER service only for Root Ports & RCECs
  PCI: xilinx-nwl: Fix coding style violations
  PCI: mvebu: Switch to using gpiod API
  PCI: pciehp: Enable Command Completed Interrupt only if supported
  PCI: aardvark: Switch to using devm_gpiod_get_optional()
  dt-bindings: PCI: mediatek-gen3: add support for mt7986
  dt-bindings: PCI: mediatek-gen3: add SoC based clock config
  dt-bindings: PCI: qcom: Allow 'dma-coherent' property
  PCI: mt7621: Add sentinel to quirks table
  PCI: vmd: Fix secondary bus reset for Intel bridges
  PCI: endpoint: pci-epf-vntb: Fix sparse ntb->reg build warning
  PCI: endpoint: pci-epf-vntb: Fix sparse build warning for epf_db
  PCI: endpoint: pci-epf-vntb: Replace hardcoded 4 with sizeof(u32)
  PCI: endpoint: pci-epf-vntb: Remove unused epf_db_phy struct member
  PCI: endpoint: pci-epf-vntb: Fix call pci_epc_mem_free_addr() in error path
  ...
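One enumeration item above, assigning PCI domain IDs with ida_alloc(), is easy to picture with a small sketch. The wrapper names below (example_*) are hypothetical and this is not the code pulled here; it only illustrates, under that assumption, how an IDA hands out and reclaims small integer IDs so that host bridge add/remove cycles reuse domain numbers instead of leaking them.

```c
/*
 * Minimal sketch of IDA-based ID allocation (illustrative only, not the
 * code merged by this pull request). ida_alloc() returns the lowest free
 * ID and ida_free() releases it, so repeatedly removing and re-adding a
 * host bridge reuses domain numbers rather than growing them forever.
 */
#include <linux/idr.h>
#include <linux/gfp.h>

static DEFINE_IDA(example_domain_ida);

static int example_alloc_domain_nr(void)
{
        /* Returns a non-negative ID, or a negative errno on failure. */
        return ida_alloc(&example_domain_ida, GFP_KERNEL);
}

static void example_release_domain_nr(int domain_nr)
{
        if (domain_nr >= 0)
                ida_free(&example_domain_ida, domain_nr);
}
```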
commit c7020e1b34
@@ -0,0 +1,168 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/baikal,bt1-pcie.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Baikal-T1 PCIe Root Port Controller

maintainers:
  - Serge Semin <fancer.lancer@gmail.com>

description:
  Embedded into Baikal-T1 SoC Root Complex controller with a single port
  activated. It's based on the DWC RC PCIe v4.60a IP-core, which is configured
  to have just a single Root Port function and is capable of establishing the
  link up to Gen.3 speed on x4 lanes. It doesn't have embedded clock and reset
  control module, so the proper interface initialization is supposed to be
  performed by software. There are four in- and four outbound iATU regions
  which can be used to emit all required TLP types on the PCIe bus.

allOf:
  - $ref: /schemas/pci/snps,dw-pcie.yaml#

properties:
  compatible:
    const: baikal,bt1-pcie

  reg:
    description:
      DBI, DBI2 and at least 4KB outbound iATU-capable region for the
      peripheral devices CFG-space access.
    maxItems: 3

  reg-names:
    items:
      - const: dbi
      - const: dbi2
      - const: config

  interrupts:
    description:
      MSI, AER, PME, Hot-plug, Link Bandwidth Management, Link Equalization
      request and eight Read/Write eDMA IRQ lines are available.
    maxItems: 14

  interrupt-names:
    items:
      - const: dma0
      - const: dma1
      - const: dma2
      - const: dma3
      - const: dma4
      - const: dma5
      - const: dma6
      - const: dma7
      - const: msi
      - const: aer
      - const: pme
      - const: hp
      - const: bw_mg
      - const: l_eq

  clocks:
    description:
      DBI (attached to the APB bus), AXI-bus master and slave interfaces
      are fed up by the dedicated application clocks. A common reference
      clock signal is supposed to be attached to the corresponding Ref-pad
      of the SoC. It will be redistributed amongst the controller core
      sub-modules (pipe, core, aux, etc).
    maxItems: 4

  clock-names:
    items:
      - const: dbi
      - const: mstr
      - const: slv
      - const: ref

  resets:
    description:
      A comprehensive controller reset logic is supposed to be implemented
      by software, so almost all the possible application and core reset
      signals are exposed via the system CCU module.
    maxItems: 9

  reset-names:
    items:
      - const: mstr
      - const: slv
      - const: pwr
      - const: hot
      - const: phy
      - const: core
      - const: pipe
      - const: sticky
      - const: non-sticky

  baikal,bt1-syscon:
    $ref: /schemas/types.yaml#/definitions/phandle
    description:
      Phandle to the Baikal-T1 System Controller DT node. It's required to
      access some additional PM, Reset-related and LTSSM signals.

  num-lanes:
    maximum: 4

  max-link-speed:
    maximum: 3

required:
  - compatible
  - reg
  - reg-names
  - interrupts
  - interrupt-names

unevaluatedProperties: false

examples:
  - |
    #include <dt-bindings/interrupt-controller/mips-gic.h>
    #include <dt-bindings/gpio/gpio.h>

    pcie@1f052000 {
        compatible = "baikal,bt1-pcie";
        device_type = "pci";
        reg = <0x1f052000 0x1000>, <0x1f053000 0x1000>, <0x1bdbf000 0x1000>;
        reg-names = "dbi", "dbi2", "config";
        #address-cells = <3>;
        #size-cells = <2>;
        ranges = <0x81000000 0 0x00000000 0x1bdb0000 0 0x00008000>,
                 <0x82000000 0 0x20000000 0x08000000 0 0x13db0000>;
        bus-range = <0x0 0xff>;

        interrupts = <GIC_SHARED 80 IRQ_TYPE_LEVEL_HIGH>,
                     <GIC_SHARED 81 IRQ_TYPE_LEVEL_HIGH>,
                     <GIC_SHARED 82 IRQ_TYPE_LEVEL_HIGH>,
                     <GIC_SHARED 83 IRQ_TYPE_LEVEL_HIGH>,
                     <GIC_SHARED 84 IRQ_TYPE_LEVEL_HIGH>,
                     <GIC_SHARED 85 IRQ_TYPE_LEVEL_HIGH>,
                     <GIC_SHARED 86 IRQ_TYPE_LEVEL_HIGH>,
                     <GIC_SHARED 87 IRQ_TYPE_LEVEL_HIGH>,
                     <GIC_SHARED 88 IRQ_TYPE_LEVEL_HIGH>,
                     <GIC_SHARED 89 IRQ_TYPE_LEVEL_HIGH>,
                     <GIC_SHARED 90 IRQ_TYPE_LEVEL_HIGH>,
                     <GIC_SHARED 91 IRQ_TYPE_LEVEL_HIGH>,
                     <GIC_SHARED 92 IRQ_TYPE_LEVEL_HIGH>,
                     <GIC_SHARED 93 IRQ_TYPE_LEVEL_HIGH>;
        interrupt-names = "dma0", "dma1", "dma2", "dma3",
                          "dma4", "dma5", "dma6", "dma7",
                          "msi", "aer", "pme", "hp", "bw_mg",
                          "l_eq";

        clocks = <&ccu_sys 1>, <&ccu_axi 6>, <&ccu_axi 7>, <&clk_pcie>;
        clock-names = "dbi", "mstr", "slv", "ref";

        resets = <&ccu_axi 6>, <&ccu_axi 7>, <&ccu_sys 7>, <&ccu_sys 10>,
                 <&ccu_sys 4>, <&ccu_sys 6>, <&ccu_sys 5>, <&ccu_sys 8>,
                 <&ccu_sys 9>;
        reset-names = "mstr", "slv", "pwr", "hot", "phy", "core", "pipe",
                      "sticky", "non-sticky";

        reset-gpios = <&port0 0 GPIO_ACTIVE_LOW>;

        num-lanes = <4>;
        max-link-speed = <3>;
    };
...
@@ -14,9 +14,6 @@ description: |+
  This PCIe host controller is based on the Synopsys DesignWare PCIe IP
  and thus inherits all the common properties defined in snps,dw-pcie.yaml.

allOf:
  - $ref: /schemas/pci/snps,dw-pcie.yaml#

properties:
  compatible:
    enum:

@@ -61,7 +58,7 @@ properties:
      - const: pcie
      - const: pcie_bus
      - const: pcie_phy
      - const: pcie_inbound_axi for imx6sx-pcie, pcie_aux for imx8mq-pcie
      - enum: [ pcie_inbound_axi, pcie_aux ]

  num-lanes:
    const: 1

@@ -175,6 +172,47 @@ required:
  - clocks
  - clock-names

allOf:
  - $ref: /schemas/pci/snps,dw-pcie.yaml#
  - if:
      properties:
        compatible:
          contains:
            const: fsl,imx6sx-pcie
    then:
      properties:
        clock-names:
          items:
            - {}
            - {}
            - {}
            - const: pcie_inbound_axi
  - if:
      properties:
        compatible:
          contains:
            const: fsl,imx8mq-pcie
    then:
      properties:
        clock-names:
          items:
            - {}
            - {}
            - {}
            - const: pcie_aux
  - if:
      properties:
        compatible:
          not:
            contains:
              enum:
                - fsl,imx6sx-pcie
                - fsl,imx8mq-pcie
    then:
      properties:
        clock-names:
          maxItems: 3

unevaluatedProperties: false

examples:
@@ -43,14 +43,12 @@ description: |+
  each set has its own address for MSI message, and supports 32 MSI vectors
  to generate interrupt.

allOf:
  - $ref: /schemas/pci/pci-bus.yaml#

properties:
  compatible:
    oneOf:
      - items:
          - enum:
              - mediatek,mt7986-pcie
              - mediatek,mt8188-pcie
              - mediatek,mt8195-pcie
          - const: mediatek,mt8192-pcie

@@ -70,29 +68,29 @@ properties:
    minItems: 1
    maxItems: 8

  iommu-map:
    maxItems: 1

  iommu-map-mask:
    const: 0

  resets:
    minItems: 1
    maxItems: 2

  reset-names:
    minItems: 1
    maxItems: 2
    items:
      - const: phy
      - const: mac
      enum: [ phy, mac ]

  clocks:
    minItems: 4
    maxItems: 6

  clock-names:
    items:
      - const: pl_250m
      - const: tl_26m
      - const: tl_96m
      - const: tl_32k
      - const: peri_26m
      - enum:
          - top_133m  # for MT8192
          - peri_mem  # for MT8188/MT8195
    minItems: 4
    maxItems: 6

  assigned-clocks:
    maxItems: 1

@@ -107,6 +105,9 @@ properties:
    items:
      - const: pcie-phy

  power-domains:
    maxItems: 1

  '#interrupt-cells':
    const: 1

@@ -138,6 +139,54 @@ required:
  - '#interrupt-cells'
  - interrupt-controller

allOf:
  - $ref: /schemas/pci/pci-bus.yaml#
  - if:
      properties:
        compatible:
          const: mediatek,mt8192-pcie
    then:
      properties:
        clock-names:
          items:
            - const: pl_250m
            - const: tl_26m
            - const: tl_96m
            - const: tl_32k
            - const: peri_26m
            - const: top_133m
  - if:
      properties:
        compatible:
          contains:
            enum:
              - mediatek,mt8188-pcie
              - mediatek,mt8195-pcie
    then:
      properties:
        clock-names:
          items:
            - const: pl_250m
            - const: tl_26m
            - const: tl_96m
            - const: tl_32k
            - const: peri_26m
            - const: peri_mem
  - if:
      properties:
        compatible:
          contains:
            enum:
              - mediatek,mt7986-pcie
    then:
      properties:
        clock-names:
          items:
            - const: pl_250m
            - const: tl_26m
            - const: peri_26m
            - const: top_133m

unevaluatedProperties: false

examples:
@@ -62,6 +62,16 @@ properties:
    minItems: 3
    maxItems: 13

  dma-coherent: true

  interconnects:
    maxItems: 2

  interconnect-names:
    items:
      - const: pcie-mem
      - const: cpu-pcie

  resets:
    minItems: 1
    maxItems: 12

@@ -631,6 +641,18 @@ allOf:
        items:
          - const: pci # PCIe core reset

  - if:
      properties:
        compatible:
          contains:
            enum:
              - qcom,pcie-sa8540p
              - qcom,pcie-sc8280xp
    then:
      required:
        - interconnects
        - interconnect-names

  - if:
      not:
        properties:
@@ -14,10 +14,10 @@ maintainers:
description: |+
  RK3568 SoC PCIe host controller is based on the Synopsys DesignWare
  PCIe IP and thus inherits all the common properties defined in
  designware-pcie.txt.
  snps,dw-pcie.yaml.

allOf:
  - $ref: /schemas/pci/pci-bus.yaml#
  - $ref: /schemas/pci/snps,dw-pcie.yaml#

properties:
  compatible:
@@ -0,0 +1,266 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/snps,dw-pcie-common.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Synopsys DWC PCIe RP/EP controller

maintainers:
  - Jingoo Han <jingoohan1@gmail.com>
  - Gustavo Pimentel <gustavo.pimentel@synopsys.com>

description:
  Generic Synopsys DesignWare PCIe Root Port and Endpoint controller
  properties.

select: false

properties:
  reg:
    description:
      DWC PCIe CSR space is normally accessed over the dedicated Data Bus
      Interface - DBI. In accordance with the reference manual the register
      configuration space belongs to the Configuration-Dependent Module (CDM)
      and is split up into several sub-parts Standard PCIe configuration
      space, Port Logic Registers (PL), Shadow Config-space Registers,
      iATU/eDMA registers. The particular sub-space is selected by the
      CDM/ELBI (dbi_cs) and CS2 (dbi_cs2) signals (selector bits). Such
      configuration provides a flexible interface for the system engineers to
      either map the particular space at a desired MMIO address or just leave
      them in a contiguous memory space if pure Native or AXI Bridge DBI access
      is selected. Note the PCIe CFG-space, PL and Shadow registers are
      specific for each activated function, while the rest of the sub-spaces
      are common for all of them (if there are more than one).
    minItems: 2
    maxItems: 6

  reg-names:
    minItems: 2
    maxItems: 6

  interrupts:
    description:
      There are two main sub-blocks which are normally capable of
      generating interrupts. It's System Information Interface and MSI
      interface. While the former one has some common for the Host and
      Endpoint controllers IRQ-signals, the latter interface is obviously
      Root Complex specific since it's responsible for the incoming MSI
      messages signalling. The System Information IRQ signals are mainly
      responsible for reporting the generic PCIe hierarchy and Root
      Complex events like VPD IO request, general AER, PME, Hot-plug, link
      bandwidth change, link equalization request, INTx asserted/deasserted
      Message detection, embedded DMA Tx/Rx/Error.
    minItems: 1
    maxItems: 26

  interrupt-names:
    minItems: 1
    maxItems: 26

  clocks:
    description:
      DWC PCIe reference manual explicitly defines a set of the clocks required
      to get the controller working correctly. In general all of them can
      be divided into two groups':' application and core clocks. Note the
      platforms may have some of the clock sources unspecified in case if the
      corresponding domains are fed up from a common clock source.
    minItems: 1
    maxItems: 7

  clock-names:
    minItems: 1
    maxItems: 7
    items:
      oneOf:
        - description:
            Data Bus Interface (DBI) clock. Clock signal for the AXI-bus
            interface of the Configuration-Dependent Module, which is
            basically the set of the controller CSRs.
          const: dbi
        - description:
            Application AXI-bus Master interface clock. Basically this is
            a clock for the controller DMA interface (PCI-to-CPU).
          const: mstr
        - description:
            Application AXI-bus Slave interface clock. This is a clock for
            the CPU-to-PCI memory IO interface.
          const: slv
        - description:
            Controller Core-PCS PIPE interface clock. It's normally
            supplied by an external PCS-PHY.
          const: pipe
        - description:
            Controller Primary clock. It's assumed that all controller input
            signals (except resets) are synchronous to this clock.
          const: core
        - description:
            Auxiliary clock for the controller PMC domain. The controller
            partitioning implies having some parts to operate with this
            clock in some power management states.
          const: aux
        - description:
            Generic reference clock. In case if there are several
            interfaces fed up with a common clock source it's advisable to
            define it with this name (for instance pipe, core and aux can
            be connected to a single source of the periodic signal).
          const: ref
        - description:
            Clock for the PHY registers interface. Originally this is
            a PHY-viewport-based interface, but some platform may have
            specifically designed one.
          const: phy_reg
        - description:
            Vendor-specific clock names. Consider using the generic names
            above for new bindings.
          oneOf:
            - description: See native 'dbi' clock for details
              enum: [ pcie, pcie_apb_sys, aclk_dbi ]
            - description: See native 'mstr/slv' clock for details
              enum: [ pcie_bus, pcie_inbound_axi, pcie_aclk, aclk_mst, aclk_slv ]
            - description: See native 'pipe' clock for details
              enum: [ pcie_phy, pcie_phy_ref, link ]
            - description: See native 'aux' clock for details
              enum: [ pcie_aux ]
            - description: See native 'ref' clock for details.
              enum: [ gio ]
            - description: See native 'phy_reg' clock for details
              enum: [ pcie_apb_phy, pclk ]

  resets:
    description:
      DWC PCIe reference manual explicitly defines a set of the reset
      signals required to be de-asserted to properly activate the controller
      sub-parts. All of these signals can be divided into two sub-groups':'
      application and core resets with respect to the main sub-domains they
      are supposed to reset. Note the platforms may have some of these signals
      unspecified in case if they are automatically handled or aggregated into
      a comprehensive control module.
    minItems: 1
    maxItems: 10

  reset-names:
    minItems: 1
    maxItems: 10
    items:
      oneOf:
        - description: Data Bus Interface (DBI) domain reset
          const: dbi
        - description: AXI-bus Master interface reset
          const: mstr
        - description: AXI-bus Slave interface reset
          const: slv
        - description: Application-dependent interface reset
          const: app
        - description: Controller Non-sticky CSR flags reset
          const: non-sticky
        - description: Controller sticky CSR flags reset
          const: sticky
        - description: PIPE-interface (Core-PCS) logic reset
          const: pipe
        - description:
            Controller primary reset (resets everything except PMC module)
          const: core
        - description: PCS/PHY block reset
          const: phy
        - description: PMC hot reset signal
          const: hot
        - description: Cold reset signal
          const: pwr
        - description:
            Vendor-specific reset names. Consider using the generic names
            above for new bindings.
          oneOf:
            - description: See native 'app' reset for details
              enum: [ apps, gio, apb ]
            - description: See native 'phy' reset for details
              enum: [ pciephy, link ]
            - description: See native 'pwr' reset for details
              enum: [ turnoff ]

  phys:
    description:
      There can be up to the number of possible lanes PHYs specified placed in
      the phandle array in the line-based order. Obviously each of the specified
      PHYs are supposed to be able to work in the PCIe mode with a speed
      implied by the DWC PCIe controller they are attached to.
    minItems: 1
    maxItems: 16

  phy-names:
    minItems: 1
    maxItems: 16
    oneOf:
      - description: Generic PHY names
        items:
          pattern: '^pcie[0-9]+$'
      - description:
          Vendor-specific PHY names. Consider using the generic
          names above for new bindings.
        items:
          oneOf:
            - pattern: '^pcie(-?phy[0-9]*)?$'
            - pattern: '^p2u-[0-7]$'

  reset-gpio:
    deprecated: true
    description:
      Reference to the GPIO-controlled PERST# signal. It is used to reset all
      the peripheral devices available on the PCIe bus.
    maxItems: 1

  reset-gpios:
    description:
      Reference to the GPIO-controlled PERST# signal. It is used to reset all
      the peripheral devices available on the PCIe bus.
    maxItems: 1

  max-link-speed:
    maximum: 5

  num-lanes:
    description:
      Number of PCIe link lanes to use. Can be omitted if the already brought
      up link is supposed to be preserved.
    maximum: 16

  num-ob-windows:
    $ref: /schemas/types.yaml#/definitions/uint32
    deprecated: true
    description:
      Number of outbound address translation windows. This parameter can be
      auto-detected based on the iATU memory writability. So there is no
      point in having a dedicated DT-property for it.
    maximum: 256

  num-ib-windows:
    $ref: /schemas/types.yaml#/definitions/uint32
    deprecated: true
    description:
      Number of inbound address translation windows. In the same way as
      for the outbound AT windows, this parameter can be auto-detected based
      on the iATU memory writability. There is no point having a dedicated
      DT-property for it either.
    maximum: 256

  num-viewport:
    $ref: /schemas/types.yaml#/definitions/uint32
    deprecated: true
    description:
      Number of outbound view ports configured in hardware. It's the same as
      the number of outbound AT windows.
    maximum: 256

  snps,enable-cdm-check:
    $ref: /schemas/types.yaml#/definitions/flag
    description:
      Enable automatic checking of CDM (Configuration Dependent Module)
      registers for data corruption. CDM registers include standard PCIe
      configuration space registers, Port Logic registers, DMA and iATU
      registers. This feature has been available since DWC PCIe v4.80a.

  dma-coherent: true

additionalProperties: true

...
@@ -13,76 +13,182 @@ maintainers:
description: |
  Synopsys DesignWare PCIe host controller endpoint

# Please create a separate DT-schema for your DWC PCIe Endpoint controller
# and make sure it's assigned with the vendor-specific compatible string.
select:
  properties:
    compatible:
      const: snps,dw-pcie-ep
  required:
    - compatible

allOf:
  - $ref: /schemas/pci/pci-ep.yaml#
  - $ref: /schemas/pci/snps,dw-pcie-common.yaml#

properties:
  compatible:
    anyOf:
      - {}
      - const: snps,dw-pcie-ep

  reg:
    description: |
      It should contain Data Bus Interface (dbi) and config registers for all
      versions.
      For designware core version >= 4.80, it may contain ATU address space.
    description:
      DBI, DBI2 reg-spaces and outbound memory window are required for the
      normal controller functioning. iATU memory IO region is also required
      if the space is unrolled (IP-core version >= 4.80a).
    minItems: 2
    maxItems: 4
    maxItems: 5

  reg-names:
    minItems: 2
    maxItems: 4
    maxItems: 5
    items:
      enum: [dbi, dbi2, config, atu, addr_space, link, atu_dma, appl]
      oneOf:
        - description:
            Basic DWC PCIe controller configuration-space accessible over
            the DBI interface. This memory space is either activated with
            CDM/ELBI = 0 and CS2 = 0 or is a contiguous memory region
            with all spaces. Note iATU/eDMA CSRs are indirectly accessible
            via the PL viewports on the DWC PCIe controllers older than
            v4.80a.
          const: dbi
        - description:
            Shadow DWC PCIe config-space registers. This space is selected
            by setting CDM/ELBI = 0 and CS2 = 1. This is an intermix of
            the PCI-SIG PCIe CFG-space with the shadow registers for some
            PCI Header space, PCI Standard and Extended Structures. It's
            mainly relevant for the end-point controller configuration,
            but still there are some shadow registers available for the
            Root Port mode too.
          const: dbi2
        - description:
            External Local Bus registers. It's an application-dependent
            registers normally defined by the platform engineers. The space
            can be selected by setting CDM/ELBI = 1 and CS2 = 0 wires or can
            be accessed over some platform-specific means (for instance
            as a part of a system controller).
          enum: [ elbi, app ]
        - description:
            iATU/eDMA registers common for all device functions. It's an
            unrolled memory space with the internal Address Translation
            Unit and Enhanced DMA, which is selected by setting CDM/ELBI = 1
            and CS2 = 1. For IP-core releases prior v4.80a, these registers
            have been programmed via an indirect addressing scheme using a
            set of viewport CSRs mapped into the PL space. Note iATU is
            normally mapped to the 0x0 address of this region, while eDMA
            is available at 0x80000 base address.
          const: atu
        - description:
            Platform-specific eDMA registers. Some platforms may have eDMA
            CSRs mapped in a non-standard base address. The registers offset
            can be changed or the MS/LS-bits of the address can be attached
            in an additional RTL block before the MEM-IO transactions reach
            the DW PCIe slave interface.
          const: dma
        - description:
            PHY/PCS configuration registers. Some platforms can have the
            PCS and PHY CSRs accessible over a dedicated memory mapped
            region, but mainly these registers are indirectly accessible
            either by means of the embedded PHY viewport schema or by some
            platform-specific method.
          const: phy
        - description:
            Outbound iATU-capable memory-region which will be used to
            generate various application-specific traffic on the PCIe bus
            hierarchy. It's usage scenario depends on the endpoint
            functionality, for instance it can be used to create MSI(X)
            messages.
          const: addr_space
        - description:
            Vendor-specific CSR names. Consider using the generic names above
            for new bindings.
          oneOf:
            - description: See native 'elbi/app' CSR region for details.
              enum: [ link, appl ]
            - description: See native 'atu' CSR region for details.
              enum: [ atu_dma ]
    allOf:
      - contains:
          const: dbi
      - contains:
          const: addr_space

  reset-gpio:
    description: GPIO pin number of PERST# signal
    maxItems: 1
    deprecated: true
  interrupts:
    description:
      There is no mandatory IRQ signals for the normal controller functioning,
      but in addition to the native set the platforms may have a link- or
      PM-related IRQs specified.
    minItems: 1
    maxItems: 20

  reset-gpios:
    description: GPIO controlled connection to PERST# signal
    maxItems: 1
  interrupt-names:
    minItems: 1
    maxItems: 20
    items:
      oneOf:
        - description:
            Controller request to read or write virtual product data
            from/to the VPD capability registers.
          const: vpd
        - description:
            Link Equalization Request flag is set in the Link Status 2
            register (applicable if the corresponding IRQ is enabled in
            the Link Control 3 register).
          const: l_eq
        - description:
            Indicates that the eDMA Tx/Rx transfer is complete or that an
            error has occurred on the corresponding channel. eDMA can have
            eight Tx (Write) and Rx (Read) eDMA channels thus supporting up
            to 16 IRQ signals all together. Write eDMA channels shall go
            first in the ordered row as per default edma_int[*] bus setup.
          pattern: '^dma([0-9]|1[0-5])?$'
        - description:
            PCIe protocol correctable error or a Data Path protection
            correctable error is detected by the automotive/safety
            feature.
          const: sft_ce
        - description:
            Indicates that the internal safety mechanism has detected an
            uncorrectable error.
          const: sft_ue
        - description:
            Application-specific IRQ raised depending on the vendor-specific
            events basis.
          const: app
        - description:
            Vendor-specific IRQ names. Consider using the generic names above
            for new bindings.
          oneOf:
            - description: See native "app" IRQ for details
              enum: [ intr ]

  snps,enable-cdm-check:
    type: boolean
    description: |
      This is a boolean property and if present enables
      automatic checking of CDM (Configuration Dependent Module) registers
      for data corruption. CDM registers include standard PCIe configuration
      space registers, Port Logic registers, DMA and iATU (internal Address
      Translation Unit) registers.

  num-ib-windows:
    $ref: /schemas/types.yaml#/definitions/uint32
    maximum: 256
    description: number of inbound address translation windows
    deprecated: true

  num-ob-windows:
    $ref: /schemas/types.yaml#/definitions/uint32
    maximum: 256
    description: number of outbound address translation windows
    deprecated: true
  max-functions:
    maximum: 32

required:
  - compatible
  - reg
  - reg-names
  - compatible

additionalProperties: true

examples:
  - |
    bus {
      #address-cells = <1>;
      #size-cells = <1>;
      pcie-ep@dfd00000 {
        compatible = "snps,dw-pcie-ep";
        reg = <0xdfc00000 0x0001000>, /* IP registers 1 */
              <0xdfc01000 0x0001000>, /* IP registers 2 */
              <0xd0000000 0x2000000>; /* Configuration space */
        reg-names = "dbi", "dbi2", "addr_space";
      };
      pcie-ep@dfd00000 {
        compatible = "snps,dw-pcie-ep";
        reg = <0xdfc00000 0x0001000>, /* IP registers 1 */
              <0xdfc01000 0x0001000>, /* IP registers 2 */
              <0xd0000000 0x2000000>; /* Configuration space */
        reg-names = "dbi", "dbi2", "addr_space";

        interrupts = <23>, <24>;
        interrupt-names = "dma0", "dma1";

        clocks = <&sys_clk 12>, <&sys_clk 24>;
        clock-names = "dbi", "ref";

        resets = <&sys_rst 12>, <&sys_rst 24>;
        reset-names = "dbi", "phy";

        phys = <&pcie_phy0>, <&pcie_phy1>, <&pcie_phy2>, <&pcie_phy3>;
        phy-names = "pcie0", "pcie1", "pcie2", "pcie3";

        max-link-speed = <3>;
        max-functions = /bits/ 8 <4>;
      };
@@ -13,20 +13,25 @@ maintainers:
description: |
  Synopsys DesignWare PCIe host controller

# Please create a separate DT-schema for your DWC PCIe Root Port controller
# and make sure it's assigned with the vendor-specific compatible string.
select:
  properties:
    compatible:
      const: snps,dw-pcie
  required:
    - compatible

allOf:
  - $ref: /schemas/pci/pci-bus.yaml#
  - $ref: /schemas/pci/snps,dw-pcie-common.yaml#

properties:
  compatible:
    anyOf:
      - {}
      - const: snps,dw-pcie

  reg:
    description: |
      It should contain Data Bus Interface (dbi) and config registers for all
      versions.
      For designware core version >= 4.80, it may contain ATU address space.
    description:
      At least DBI reg-space and peripheral devices CFG-space outbound window
      are required for the normal controller work. iATU memory IO region is
      also required if the space is unrolled (IP-core version >= 4.80a).
    minItems: 2
    maxItems: 5

@@ -34,71 +39,194 @@ properties:
    minItems: 2
    maxItems: 5
    items:
      enum: [ dbi, dbi2, config, atu, atu_dma, app, appl, elbi, mgmt, ctrl,
              parf, cfg, link, ulreg, smu, mpu, apb, phy, ecam ]
      oneOf:
        - description:
            Basic DWC PCIe controller configuration-space accessible over
            the DBI interface. This memory space is either activated with
            CDM/ELBI = 0 and CS2 = 0 or is a contiguous memory region
            with all spaces. Note iATU/eDMA CSRs are indirectly accessible
            via the PL viewports on the DWC PCIe controllers older than
            v4.80a.
          const: dbi
        - description:
            Shadow DWC PCIe config-space registers. This space is selected
            by setting CDM/ELBI = 0 and CS2 = 1. This is an intermix of
            the PCI-SIG PCIe CFG-space with the shadow registers for some
            PCI Header space, PCI Standard and Extended Structures. It's
            mainly relevant for the end-point controller configuration,
            but still there are some shadow registers available for the
            Root Port mode too.
          const: dbi2
        - description:
            External Local Bus registers. It's an application-dependent
            registers normally defined by the platform engineers. The space
            can be selected by setting CDM/ELBI = 1 and CS2 = 0 wires or can
            be accessed over some platform-specific means (for instance
            as a part of a system controller).
          enum: [ elbi, app ]
        - description:
            iATU/eDMA registers common for all device functions. It's an
            unrolled memory space with the internal Address Translation
            Unit and Enhanced DMA, which is selected by setting CDM/ELBI = 1
            and CS2 = 1. For IP-core releases prior v4.80a, these registers
            have been programmed via an indirect addressing scheme using a
            set of viewport CSRs mapped into the PL space. Note iATU is
            normally mapped to the 0x0 address of this region, while eDMA
            is available at 0x80000 base address.
          const: atu
        - description:
            Platform-specific eDMA registers. Some platforms may have eDMA
            CSRs mapped in a non-standard base address. The registers offset
            can be changed or the MS/LS-bits of the address can be attached
            in an additional RTL block before the MEM-IO transactions reach
            the DW PCIe slave interface.
          const: dma
        - description:
            PHY/PCS configuration registers. Some platforms can have the
            PCS and PHY CSRs accessible over a dedicated memory mapped
            region, but mainly these registers are indirectly accessible
            either by means of the embedded PHY viewport schema or by some
            platform-specific method.
          const: phy
        - description:
            Outbound iATU-capable memory-region which will be used to access
            the peripheral PCIe devices configuration space.
          const: config
        - description:
            Vendor-specific CSR names. Consider using the generic names above
            for new bindings.
          oneOf:
            - description: See native 'elbi/app' CSR region for details.
              enum: [ apb, mgmt, link, ulreg, appl ]
            - description: See native 'atu' CSR region for details.
              enum: [ atu_dma ]
            - description: Syscon-related CSR regions.
              enum: [ smu, mpu ]
            - description: Tegra234 aperture
              enum: [ ecam ]
    allOf:
      - contains:
          const: dbi
      - contains:
          const: config

  num-lanes:
    description: |
      number of lanes to use (this property should be specified unless
      the link is brought already up in firmware)
    maximum: 16
  interrupts:
    description:
      DWC PCIe Root Port/Complex specific IRQ signals. At least MSI interrupt
      signal is supposed to be specified for the host controller.
    minItems: 1
    maxItems: 26

  reset-gpio:
    description: GPIO pin number of PERST# signal
    maxItems: 1
    deprecated: true

  reset-gpios:
    description: GPIO controlled connection to PERST# signal
    maxItems: 1

  interrupts: true

  interrupt-names: true

  clocks: true

  snps,enable-cdm-check:
    type: boolean
    description: |
      This is a boolean property and if present enables
      automatic checking of CDM (Configuration Dependent Module) registers
      for data corruption. CDM registers include standard PCIe configuration
      space registers, Port Logic registers, DMA and iATU (internal Address
      Translation Unit) registers.

  num-viewport:
    $ref: /schemas/types.yaml#/definitions/uint32
    maximum: 256
    description: |
      number of view ports configured in hardware. If a platform
      does not specify it, the driver autodetects it.
    deprecated: true
  interrupt-names:
    minItems: 1
    maxItems: 26
    items:
      oneOf:
        - description:
            Controller request to read or write virtual product data
            from/to the VPD capability registers.
          const: vpd
        - description:
            Link Equalization Request flag is set in the Link Status 2
            register (applicable if the corresponding IRQ is enabled in
            the Link Control 3 register).
          const: l_eq
        - description:
            Indicates that the eDMA Tx/Rx transfer is complete or that an
            error has occurred on the corresponding channel. eDMA can have
            eight Tx (Write) and Rx (Read) eDMA channels thus supporting up
            to 16 IRQ signals all together. Write eDMA channels shall go
            first in the ordered row as per default edma_int[*] bus setup.
          pattern: '^dma([0-9]|1[0-5])?$'
        - description:
            PCIe protocol correctable error or a Data Path protection
            correctable error is detected by the automotive/safety
            feature.
          const: sft_ce
        - description:
            Indicates that the internal safety mechanism has detected an
            uncorrectable error.
          const: sft_ue
        - description:
            Application-specific IRQ raised depending on the vendor-specific
            events basis.
          const: app
        - description:
            DSP AXI MSI Interrupt detected. It gets de-asserted when there is
            no more MSI interrupt pending. The interrupt is relevant to the
            iMSI-RX - Integrated MSI Receiver (AXI bridge).
          const: msi
        - description:
            Legacy A/B/C/D interrupt signal. Basically it's triggered by
            receiving an Assert_INT{A,B,C,D}/Deassert_INT{A,B,C,D} message
            from the downstream device.
          pattern: "^int(a|b|c|d)$"
        - description:
            Error condition detected and a flag is set in the Root Error Status
            register of the AER capability. It's asserted when the RC
            internally generated an error or an error message is received by
            the RC.
          const: aer
        - description:
            PME message is received by the port. That means having the PME
            status bit set in the Root Status register (the event is
            supposed to be unmasked in the Root Control register).
          const: pme
        - description:
            Hot-plug event is detected. That is a bit has been set in the
            Slot Status register and the corresponding event is enabled in
            the Slot Control register.
          const: hp
        - description:
            Link Autonomous Bandwidth Status flag has been set in the Link
            Status register (the event is supposed to be unmasked in the
            Link Control register).
          const: bw_au
        - description:
            Bandwidth Management Status flag has been set in the Link
            Status register (the event is supposed to be unmasked in the
            Link Control register).
          const: bw_mg
        - description:
            Vendor-specific IRQ names. Consider using the generic names above
            for new bindings.
          oneOf:
            - description: See native "app" IRQ for details
              enum: [ intr ]
    allOf:
      - contains:
          const: msi

additionalProperties: true

required:
  - compatible
  - reg
  - reg-names
  - compatible

examples:
  - |
    bus {
      #address-cells = <1>;
      #size-cells = <1>;
      pcie@dfc00000 {
        device_type = "pci";
        compatible = "snps,dw-pcie";
        reg = <0xdfc00000 0x0001000>, /* IP registers */
              <0xd0000000 0x0002000>; /* Configuration space */
        reg-names = "dbi", "config";
        #address-cells = <3>;
        #size-cells = <2>;
        ranges = <0x81000000 0 0x00000000 0xde000000 0 0x00010000>,
                 <0x82000000 0 0xd0400000 0xd0400000 0 0x0d000000>;
        interrupts = <25>, <24>;
        #interrupt-cells = <1>;
        num-lanes = <1>;
      };
      pcie@dfc00000 {
        compatible = "snps,dw-pcie";
        device_type = "pci";
        reg = <0xdfc00000 0x0001000>, /* IP registers */
              <0xd0000000 0x0002000>; /* Configuration space */
        reg-names = "dbi", "config";
        #address-cells = <3>;
        #size-cells = <2>;
        ranges = <0x81000000 0 0x00000000 0xde000000 0 0x00010000>,
                 <0x82000000 0 0xd0400000 0xd0400000 0 0x0d000000>;
        bus-range = <0x0 0xff>;

        interrupts = <25>, <24>;
        interrupt-names = "msi", "hp";
        #interrupt-cells = <1>;

        reset-gpios = <&port0 0 1>;

        phys = <&pcie_phy>;
        phy-names = "pcie";

        num-lanes = <1>;
        max-link-speed = <3>;
      };
@@ -58,6 +58,13 @@ properties:
  dma-coherent:
    description: Indicates that the PCIe IP block can ensure the coherency

  interrupts:
    maxItems: 1

  interrupt-names:
    items:
      - const: link_state

required:
  - compatible
  - reg
@@ -73,9 +73,31 @@ properties:
          - const: 0xb00f
      - items:
          - const: 0xb010
      - items:
          - const: 0xb013

  msi-map: true

  interrupts:
    maxItems: 1

  interrupt-names:
    items:
      - const: link_state

  interrupt-controller:
    type: object
    additionalProperties: false

    properties:
      interrupt-controller: true

      '#interrupt-cells':
        const: 1

      interrupts:
        maxItems: 1

required:
  - compatible
  - reg
@@ -36,7 +36,7 @@ properties:
      - const: mpu

  interrupts:
    maxItems: 1
    maxItems: 2

  clocks:
    items:

@@ -94,8 +94,9 @@ examples:
            #interrupt-cells = <1>;
            ranges = <0x81000000 0 0x40000000 0 0x40000000 0 0x00010000>,
                     <0x82000000 0 0x50000000 0 0x50000000 0 0x20000000>;
            interrupts = <GIC_SPI 215 IRQ_TYPE_LEVEL_HIGH>;
            interrupt-names = "intr";
            interrupts = <GIC_SPI 211 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 215 IRQ_TYPE_LEVEL_HIGH>;
            interrupt-names = "msi", "intr";
            interrupt-map-mask = <0 0 0 7>;
            interrupt-map =
                <0 0 0 1 &gic GIC_SPI 215 IRQ_TYPE_LEVEL_HIGH
@@ -42,8 +42,16 @@ static void remove_e820_regions(struct resource *avail)

                resource_clip(avail, e820_start, e820_end);
                if (orig.start != avail->start || orig.end != avail->end) {
                        pr_info("clipped %pR to %pR for e820 entry [mem %#010Lx-%#010Lx]\n",
                                &orig, avail, e820_start, e820_end);
                        pr_info("resource: avoiding allocation from e820 entry [mem %#010Lx-%#010Lx]\n",
                                e820_start, e820_end);
                        if (avail->end > avail->start)
                                /*
                                 * Use %pa instead of %pR because "avail"
                                 * is typically IORESOURCE_UNSET, so %pR
                                 * shows the size instead of addresses.
                                 */
                                pr_info("resource: remaining [mem %pa-%pa] available\n",
                                        &avail->start, &avail->end);
                        orig = *avail;
                }
        }
@@ -1,4 +1,7 @@
// SPDX-License-Identifier: GPL-2.0

#define pr_fmt(fmt) "PCI: " fmt

#include <linux/pci.h>
#include <linux/acpi.h>
#include <linux/init.h>

@@ -37,15 +40,15 @@ static int __init set_nouse_crs(const struct dmi_system_id *id)

static int __init set_ignore_seg(const struct dmi_system_id *id)
{
        printk(KERN_INFO "PCI: %s detected: ignoring ACPI _SEG\n", id->ident);
        pr_info("%s detected: ignoring ACPI _SEG\n", id->ident);
        pci_ignore_seg = true;
        return 0;
}

static int __init set_no_e820(const struct dmi_system_id *id)
{
        printk(KERN_INFO "PCI: %s detected: not clipping E820 regions from _CRS\n",
               id->ident);
        pr_info("%s detected: not clipping E820 regions from _CRS\n",
                id->ident);
        pci_use_e820 = false;
        return 0;
}

@@ -231,10 +234,9 @@ void __init pci_acpi_crs_quirks(void)
        else if (pci_probe & PCI_USE__CRS)
                pci_use_crs = true;

        printk(KERN_INFO "PCI: %s host bridge windows from ACPI; "
               "if necessary, use \"pci=%s\" and report a bug\n",
               pci_use_crs ? "Using" : "Ignoring",
               pci_use_crs ? "nocrs" : "use_crs");
        pr_info("%s host bridge windows from ACPI; if necessary, use \"pci=%s\" and report a bug\n",
                pci_use_crs ? "Using" : "Ignoring",
                pci_use_crs ? "nocrs" : "use_crs");

        /* "pci=use_e820"/"pci=no_e820" on the kernel cmdline takes precedence */
        if (pci_probe & PCI_NO_E820)

@@ -242,19 +244,17 @@ void __init pci_acpi_crs_quirks(void)
        else if (pci_probe & PCI_USE_E820)
                pci_use_e820 = true;

        printk(KERN_INFO "PCI: %s E820 reservations for host bridge windows\n",
               pci_use_e820 ? "Using" : "Ignoring");
        pr_info("%s E820 reservations for host bridge windows\n",
                pci_use_e820 ? "Using" : "Ignoring");
        if (pci_probe & (PCI_NO_E820 | PCI_USE_E820))
                printk(KERN_INFO "PCI: Please notify linux-pci@vger.kernel.org so future kernels can this automatically\n");
                pr_info("Please notify linux-pci@vger.kernel.org so future kernels can do this automatically\n");
}

#ifdef CONFIG_PCI_MMCONFIG
static int check_segment(u16 seg, struct device *dev, char *estr)
{
        if (seg) {
                dev_err(dev,
                        "%s can't access PCI configuration "
                        "space under this host bridge.\n",
                dev_err(dev, "%s can't access configuration space under this host bridge\n",
                        estr);
                return -EIO;
        }

@@ -264,9 +264,7 @@ static int check_segment(u16 seg, struct device *dev, char *estr)
         * just can't access extended configuration space of
         * devices under this host bridge.
         */
        dev_warn(dev,
                 "%s can't access extended PCI configuration "
                 "space under this bridge.\n",
        dev_warn(dev, "%s can't access extended configuration space under this bridge\n",
                 estr);

        return 0;

@@ -421,9 +419,8 @@ struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root)
                root->segment = domain = 0;

        if (domain && !pci_domains_supported) {
                printk(KERN_WARNING "pci_bus %04x:%02x: "
                       "ignored (multiple domains not supported)\n",
                       domain, busnum);
                pr_warn("pci_bus %04x:%02x: ignored (multiple domains not supported)\n",
                        domain, busnum);
                return NULL;
        }

@@ -491,7 +488,7 @@ int __init pci_acpi_init(void)
        if (acpi_noirq)
                return -ENODEV;

        printk(KERN_INFO "PCI: Using ACPI for IRQ routing\n");
        pr_info("Using ACPI for IRQ routing\n");
        acpi_irq_penalty_init();
        pcibios_enable_irq = acpi_pci_irq_enable;
        pcibios_disable_irq = acpi_pci_irq_disable;

@@ -503,7 +500,7 @@ int __init pci_acpi_init(void)
         * also do it here in case there are still broken drivers that
         * don't use pci_enable_device().
         */
        printk(KERN_INFO "PCI: Routing PCI interrupts for all devices because \"pci=routeirq\" specified\n");
        pr_info("Routing PCI interrupts for all devices because \"pci=routeirq\" specified\n");
        for_each_pci_dev(dev)
                acpi_pci_irq_enable(dev);
        }
@@ -305,6 +305,50 @@ static void __init efi_clean_memmap(void)
        }
}

/*
 * Firmware can use EfiMemoryMappedIO to request that MMIO regions be
 * mapped by the OS so they can be accessed by EFI runtime services, but
 * should have no other significance to the OS (UEFI r2.10, sec 7.2).
 * However, most bootloaders and EFI stubs convert EfiMemoryMappedIO
 * regions to E820_TYPE_RESERVED entries, which prevent Linux from
 * allocating space from them (see remove_e820_regions()).
 *
 * Some platforms use EfiMemoryMappedIO entries for PCI MMCONFIG space and
 * PCI host bridge windows, which means Linux can't allocate BAR space for
 * hot-added devices.
 *
 * Remove large EfiMemoryMappedIO regions from the E820 map to avoid this
 * problem.
 *
 * Retain small EfiMemoryMappedIO regions because on some platforms, these
 * describe non-window space that's included in host bridge _CRS. If we
 * assign that space to PCI devices, they don't work.
 */
static void __init efi_remove_e820_mmio(void)
{
        efi_memory_desc_t *md;
        u64 size, start, end;
        int i = 0;

        for_each_efi_memory_desc(md) {
                if (md->type == EFI_MEMORY_MAPPED_IO) {
                        size = md->num_pages << EFI_PAGE_SHIFT;
                        start = md->phys_addr;
                        end = start + size - 1;
                        if (size >= 256*1024) {
                                pr_info("Remove mem%02u: MMIO range=[0x%08llx-0x%08llx] (%lluMB) from e820 map\n",
                                        i, start, end, size >> 20);
                                e820__range_remove(start, size,
                                                   E820_TYPE_RESERVED, 1);
                        } else {
                                pr_info("Not removing mem%02u: MMIO range=[0x%08llx-0x%08llx] (%lluKB) from e820 map\n",
                                        i, start, end, size >> 10);
                        }
                }
                i++;
        }
}

void __init efi_print_memmap(void)
{
        efi_memory_desc_t *md;

@@ -476,6 +520,8 @@ void __init efi_init(void)
                set_bit(EFI_RUNTIME_SERVICES, &efi.flags);
        efi_clean_memmap();

        efi_remove_e820_mmio();

        if (efi_enabled(EFI_DBG))
                efi_print_memmap();
}
@@ -488,26 +488,11 @@ static void agp_amdk7_remove(struct pci_dev *pdev)
        agp_put_bridge(bridge);
}

#ifdef CONFIG_PM

static int agp_amdk7_suspend(struct pci_dev *pdev, pm_message_t state)
static int agp_amdk7_resume(struct device *dev)
{
        pci_save_state(pdev);
        pci_set_power_state(pdev, pci_choose_state(pdev, state));

        return 0;
}

static int agp_amdk7_resume(struct pci_dev *pdev)
{
        pci_set_power_state(pdev, PCI_D0);
        pci_restore_state(pdev);

        return amd_irongate_driver.configure();
}

#endif /* CONFIG_PM */

/* must be the same order as name table above */
static const struct pci_device_id agp_amdk7_pci_table[] = {
        {

@@ -539,15 +524,14 @@ static const struct pci_device_id agp_amdk7_pci_table[] = {

MODULE_DEVICE_TABLE(pci, agp_amdk7_pci_table);

static DEFINE_SIMPLE_DEV_PM_OPS(agp_amdk7_pm_ops, NULL, agp_amdk7_resume);

static struct pci_driver agp_amdk7_pci_driver = {
        .name = "agpgart-amdk7",
        .id_table = agp_amdk7_pci_table,
        .probe = agp_amdk7_probe,
        .remove = agp_amdk7_remove,
#ifdef CONFIG_PM
        .suspend = agp_amdk7_suspend,
        .resume = agp_amdk7_resume,
#endif
        .driver.pm = &agp_amdk7_pm_ops,
};

static int __init agp_amdk7_init(void)
@@ -588,9 +588,7 @@ static void agp_amd64_remove(struct pci_dev *pdev)
        agp_bridges_found--;
}

#define agp_amd64_suspend NULL

static int __maybe_unused agp_amd64_resume(struct device *dev)
static int agp_amd64_resume(struct device *dev)
{
        struct pci_dev *pdev = to_pci_dev(dev);

@@ -727,7 +725,7 @@ static const struct pci_device_id agp_amd64_pci_promisc_table[] = {
        { }
};

static SIMPLE_DEV_PM_OPS(agp_amd64_pm_ops, agp_amd64_suspend, agp_amd64_resume);
static DEFINE_SIMPLE_DEV_PM_OPS(agp_amd64_pm_ops, NULL, agp_amd64_resume);

static struct pci_driver agp_amd64_pci_driver = {
        .name = "agpgart-amd64",
@@ -238,23 +238,10 @@ static int ati_configure(void)
}


#ifdef CONFIG_PM
static int agp_ati_suspend(struct pci_dev *dev, pm_message_t state)
static int agp_ati_resume(struct device *dev)
{
	pci_save_state(dev);
	pci_set_power_state(dev, PCI_D3hot);

	return 0;
}

static int agp_ati_resume(struct pci_dev *dev)
{
	pci_set_power_state(dev, PCI_D0);
	pci_restore_state(dev);

	return ati_configure();
}
#endif

/*
 *Since we don't need contiguous memory we just try
@@ -559,15 +546,14 @@ static const struct pci_device_id agp_ati_pci_table[] = {

MODULE_DEVICE_TABLE(pci, agp_ati_pci_table);

static DEFINE_SIMPLE_DEV_PM_OPS(agp_ati_pm_ops, NULL, agp_ati_resume);

static struct pci_driver agp_ati_pci_driver = {
	.name = "agpgart-ati",
	.id_table = agp_ati_pci_table,
	.probe = agp_ati_probe,
	.remove = agp_ati_remove,
#ifdef CONFIG_PM
	.suspend = agp_ati_suspend,
	.resume = agp_ati_resume,
#endif
	.driver.pm = &agp_ati_pm_ops,
};

static int __init agp_ati_init(void)

@@ -412,18 +412,11 @@ static void agp_efficeon_remove(struct pci_dev *pdev)
	agp_put_bridge(bridge);
}

#ifdef CONFIG_PM
static int agp_efficeon_suspend(struct pci_dev *dev, pm_message_t state)
{
	return 0;
}

static int agp_efficeon_resume(struct pci_dev *pdev)
static int agp_efficeon_resume(struct device *dev)
{
	printk(KERN_DEBUG PFX "agp_efficeon_resume()\n");
	return efficeon_configure();
}
#endif

static const struct pci_device_id agp_efficeon_pci_table[] = {
	{
@@ -437,6 +430,8 @@ static const struct pci_device_id agp_efficeon_pci_table[] = {
	{ }
};

static DEFINE_SIMPLE_DEV_PM_OPS(agp_efficeon_pm_ops, NULL, agp_efficeon_resume);

MODULE_DEVICE_TABLE(pci, agp_efficeon_pci_table);

static struct pci_driver agp_efficeon_pci_driver = {
@@ -444,10 +439,7 @@ static struct pci_driver agp_efficeon_pci_driver = {
	.id_table = agp_efficeon_pci_table,
	.probe = agp_efficeon_probe,
	.remove = agp_efficeon_remove,
#ifdef CONFIG_PM
	.suspend = agp_efficeon_suspend,
	.resume = agp_efficeon_resume,
#endif
	.driver.pm = &agp_efficeon_pm_ops,
};

static int __init agp_efficeon_init(void)

@@ -817,16 +817,15 @@ static void agp_intel_remove(struct pci_dev *pdev)
	agp_put_bridge(bridge);
}

#ifdef CONFIG_PM
static int agp_intel_resume(struct pci_dev *pdev)
static int agp_intel_resume(struct device *dev)
{
	struct pci_dev *pdev = to_pci_dev(dev);
	struct agp_bridge_data *bridge = pci_get_drvdata(pdev);

	bridge->driver->configure();

	return 0;
}
#endif

static const struct pci_device_id agp_intel_pci_table[] = {
#define ID(x) \
@@ -895,14 +894,14 @@ static const struct pci_device_id agp_intel_pci_table[] = {

MODULE_DEVICE_TABLE(pci, agp_intel_pci_table);

static DEFINE_SIMPLE_DEV_PM_OPS(agp_intel_pm_ops, NULL, agp_intel_resume);

static struct pci_driver agp_intel_pci_driver = {
	.name = "agpgart-intel",
	.id_table = agp_intel_pci_table,
	.probe = agp_intel_probe,
	.remove = agp_intel_remove,
#ifdef CONFIG_PM
	.resume = agp_intel_resume,
#endif
	.driver.pm = &agp_intel_pm_ops,
};

static int __init agp_intel_init(void)

@@ -404,28 +404,13 @@ static void agp_nvidia_remove(struct pci_dev *pdev)
	agp_put_bridge(bridge);
}

#ifdef CONFIG_PM
static int agp_nvidia_suspend(struct pci_dev *pdev, pm_message_t state)
static int agp_nvidia_resume(struct device *dev)
{
	pci_save_state(pdev);
	pci_set_power_state(pdev, PCI_D3hot);

	return 0;
}

static int agp_nvidia_resume(struct pci_dev *pdev)
{
	/* set power state 0 and restore PCI space */
	pci_set_power_state(pdev, PCI_D0);
	pci_restore_state(pdev);

	/* reconfigure AGP hardware again */
	nvidia_configure();

	return 0;
}
#endif


static const struct pci_device_id agp_nvidia_pci_table[] = {
	{
@@ -449,15 +434,14 @@ static const struct pci_device_id agp_nvidia_pci_table[] = {

MODULE_DEVICE_TABLE(pci, agp_nvidia_pci_table);

static DEFINE_SIMPLE_DEV_PM_OPS(agp_nvidia_pm_ops, NULL, agp_nvidia_resume);

static struct pci_driver agp_nvidia_pci_driver = {
	.name = "agpgart-nvidia",
	.id_table = agp_nvidia_pci_table,
	.probe = agp_nvidia_probe,
	.remove = agp_nvidia_remove,
#ifdef CONFIG_PM
	.suspend = agp_nvidia_suspend,
	.resume = agp_nvidia_resume,
#endif
	.driver.pm = &agp_nvidia_pm_ops,
};

static int __init agp_nvidia_init(void)

@@ -217,10 +217,7 @@ static void agp_sis_remove(struct pci_dev *pdev)
	agp_put_bridge(bridge);
}

#define agp_sis_suspend NULL

static int __maybe_unused agp_sis_resume(
	__attribute__((unused)) struct device *dev)
static int agp_sis_resume(__attribute__((unused)) struct device *dev)
{
	return sis_driver.configure();
}
@@ -407,7 +404,7 @@ static const struct pci_device_id agp_sis_pci_table[] = {

MODULE_DEVICE_TABLE(pci, agp_sis_pci_table);

static SIMPLE_DEV_PM_OPS(agp_sis_pm_ops, agp_sis_suspend, agp_sis_resume);
static DEFINE_SIMPLE_DEV_PM_OPS(agp_sis_pm_ops, NULL, agp_sis_resume);

static struct pci_driver agp_sis_pci_driver = {
	.name = "agpgart-sis",

@@ -489,9 +489,7 @@ static void agp_via_remove(struct pci_dev *pdev)
	agp_put_bridge(bridge);
}

#define agp_via_suspend NULL

static int __maybe_unused agp_via_resume(struct device *dev)
static int agp_via_resume(struct device *dev)
{
	struct agp_bridge_data *bridge = dev_get_drvdata(dev);

@@ -551,7 +549,7 @@ static const struct pci_device_id agp_via_pci_table[] = {

MODULE_DEVICE_TABLE(pci, agp_via_pci_table);

static SIMPLE_DEV_PM_OPS(agp_via_pm_ops, agp_via_suspend, agp_via_resume);
static DEFINE_SIMPLE_DEV_PM_OPS(agp_via_pm_ops, NULL, agp_via_resume);

static struct pci_driver agp_via_pci_driver = {
	.name = "agpgart-via",

@@ -350,6 +350,11 @@ bool pcie_cap_has_lnkctl(const struct pci_dev *dev)
	       type == PCI_EXP_TYPE_PCIE_BRIDGE;
}

bool pcie_cap_has_lnkctl2(const struct pci_dev *dev)
{
	return pcie_cap_has_lnkctl(dev) && pcie_cap_version(dev) > 1;
}

static inline bool pcie_cap_has_sltctl(const struct pci_dev *dev)
{
	return pcie_downstream_port(dev) &&
@@ -390,10 +395,11 @@ static bool pcie_capability_reg_implemented(struct pci_dev *dev, int pos)
		return pcie_cap_has_rtctl(dev);
	case PCI_EXP_DEVCAP2:
	case PCI_EXP_DEVCTL2:
		return pcie_cap_version(dev) > 1;
	case PCI_EXP_LNKCAP2:
	case PCI_EXP_LNKCTL2:
	case PCI_EXP_LNKSTA2:
		return pcie_cap_version(dev) > 1;
		return pcie_cap_has_lnkctl2(dev);
	default:
		return false;
	}

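With the hunk above, LNKCAP2/LNKCTL2/LNKSTA2 accesses are gated on pcie_cap_has_lnkctl2() rather than on the bare capability version. A hedged example of a caller that benefits (hypothetical function, standard pcie_capability_* API):

	/* Returns the configured target link speed, or 0 when the device has
	 * no LNKCTL2; pcie_capability_read_word() then fills in a zeroed
	 * value instead of touching a non-existent register.
	 */
	static u16 example_target_link_speed(struct pci_dev *dev)
	{
		u16 lnkctl2 = 0;

		pcie_capability_read_word(dev, PCI_EXP_LNKCTL2, &lnkctl2);

		return lnkctl2 & PCI_EXP_LNKCTL2_TLS;
	}
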
@@ -197,6 +197,10 @@ static int pci_bus_alloc_from_region(struct pci_bus *bus, struct resource *res,

		max = avail.end;

		/* Don't bother if available space isn't large enough */
		if (size > max - min_used + 1)
			continue;

		/* Ok, try it out.. */
		ret = allocate_resource(r, res, size, min_used, max,
					align, alignf, alignf_data);

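The new early check skips allocate_resource() whenever the clamped window cannot possibly hold the requested size. Equivalently, as a hypothetical predicate (same resource_size_t arithmetic as above):

	/* True when [min_used, max] is at least "size" bytes long. */
	static bool example_window_fits(resource_size_t size,
					resource_size_t min_used,
					resource_size_t max)
	{
		return size <= max - min_used + 1;
	}
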
@ -15,7 +15,6 @@
|
|||
#include <linux/mfd/syscon.h>
|
||||
#include <linux/of.h>
|
||||
#include <linux/of_device.h>
|
||||
#include <linux/of_irq.h>
|
||||
#include <linux/pci.h>
|
||||
#include <linux/pm_runtime.h>
|
||||
#include <linux/regmap.h>
|
||||
|
|
|
@@ -222,6 +222,15 @@ config PCIE_ARTPEC6_EP
	  Enables support for the PCIe controller in the ARTPEC-6 SoC to work in
	  endpoint mode. This uses the DesignWare core.

config PCIE_BT1
	tristate "Baikal-T1 PCIe controller"
	depends on MIPS_BAIKAL_T1 || COMPILE_TEST
	depends on PCI_MSI_IRQ_DOMAIN
	select PCIE_DW_HOST
	help
	  Enables support for the PCIe controller in the Baikal-T1 SoC to work
	  in host mode. It's based on the Synopsys DWC PCIe v4.60a IP-core.

config PCIE_ROCKCHIP_DW_HOST
	bool "Rockchip DesignWare PCIe controller"
	select PCIE_DW

@@ -3,6 +3,7 @@ obj-$(CONFIG_PCIE_DW) += pcie-designware.o
obj-$(CONFIG_PCIE_DW_HOST) += pcie-designware-host.o
obj-$(CONFIG_PCIE_DW_EP) += pcie-designware-ep.o
obj-$(CONFIG_PCIE_DW_PLAT) += pcie-designware-plat.o
obj-$(CONFIG_PCIE_BT1) += pcie-bt1.o
obj-$(CONFIG_PCI_DRA7XX) += pci-dra7xx.o
obj-$(CONFIG_PCI_EXYNOS) += pci-exynos.o
obj-$(CONFIG_PCIE_FU740) += pcie-fu740.o

@@ -952,12 +952,6 @@ static int imx6_pcie_host_init(struct dw_pcie_rp *pp)
		}
	}

	ret = imx6_pcie_deassert_core_reset(imx6_pcie);
	if (ret < 0) {
		dev_err(dev, "pcie deassert core reset failed: %d\n", ret);
		goto err_phy_off;
	}

	if (imx6_pcie->phy) {
		ret = phy_power_on(imx6_pcie->phy);
		if (ret) {
@@ -965,6 +959,13 @@ static int imx6_pcie_host_init(struct dw_pcie_rp *pp)
			goto err_phy_off;
		}
	}

	ret = imx6_pcie_deassert_core_reset(imx6_pcie);
	if (ret < 0) {
		dev_err(dev, "pcie deassert core reset failed: %d\n", ret);
		goto err_phy_off;
	}

	imx6_setup_phy_mpll(imx6_pcie);

	return 0;

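The two hunks above move the core-reset deassertion after the PHY power-on, since on some boards the PHY supplies the reference clock the core needs to leave reset. Roughly, the resulting order is (a sketch with a hypothetical wrapper name; error paths trimmed):

	/* Hypothetical wrapper showing the order after this change. */
	static int example_imx6_bringup(struct imx6_pcie *imx6_pcie)
	{
		int ret;

		ret = phy_power_on(imx6_pcie->phy);		/* reference clock first */
		if (ret)
			return ret;

		ret = imx6_pcie_deassert_core_reset(imx6_pcie);	/* then release the core */
		if (ret < 0)
			return ret;

		imx6_setup_phy_mpll(imx6_pcie);

		return 0;
	}
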
@ -13,7 +13,6 @@
|
|||
#include <linux/init.h>
|
||||
#include <linux/of_pci.h>
|
||||
#include <linux/of_platform.h>
|
||||
#include <linux/of_irq.h>
|
||||
#include <linux/of_address.h>
|
||||
#include <linux/pci.h>
|
||||
#include <linux/platform_device.h>
|
||||
|
|
|
@ -21,7 +21,6 @@
|
|||
#include <linux/platform_device.h>
|
||||
#include <linux/resource.h>
|
||||
#include <linux/of_pci.h>
|
||||
#include <linux/of_irq.h>
|
||||
|
||||
#include "pcie-designware.h"
|
||||
|
||||
|
|
|
@ -0,0 +1,643 @@
|
|||
// SPDX-License-Identifier: GPL-2.0-only
|
||||
/*
|
||||
* Copyright (C) 2021 BAIKAL ELECTRONICS, JSC
|
||||
*
|
||||
* Authors:
|
||||
* Vadim Vlasov <Vadim.Vlasov@baikalelectronics.ru>
|
||||
* Serge Semin <Sergey.Semin@baikalelectronics.ru>
|
||||
*
|
||||
* Baikal-T1 PCIe controller driver
|
||||
*/
|
||||
|
||||
#include <linux/bitfield.h>
|
||||
#include <linux/bits.h>
|
||||
#include <linux/clk.h>
|
||||
#include <linux/delay.h>
|
||||
#include <linux/gpio/consumer.h>
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/mfd/syscon.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/pci.h>
|
||||
#include <linux/platform_device.h>
|
||||
#include <linux/regmap.h>
|
||||
#include <linux/reset.h>
|
||||
#include <linux/types.h>
|
||||
|
||||
#include "pcie-designware.h"
|
||||
|
||||
/* Baikal-T1 System CCU control registers */
|
||||
#define BT1_CCU_PCIE_CLKC 0x140
|
||||
#define BT1_CCU_PCIE_REQ_PCS_CLK BIT(16)
|
||||
#define BT1_CCU_PCIE_REQ_MAC_CLK BIT(17)
|
||||
#define BT1_CCU_PCIE_REQ_PIPE_CLK BIT(18)
|
||||
|
||||
#define BT1_CCU_PCIE_RSTC 0x144
|
||||
#define BT1_CCU_PCIE_REQ_LINK_RST BIT(13)
|
||||
#define BT1_CCU_PCIE_REQ_SMLH_RST BIT(14)
|
||||
#define BT1_CCU_PCIE_REQ_PHY_RST BIT(16)
|
||||
#define BT1_CCU_PCIE_REQ_CORE_RST BIT(24)
|
||||
#define BT1_CCU_PCIE_REQ_STICKY_RST BIT(26)
|
||||
#define BT1_CCU_PCIE_REQ_NSTICKY_RST BIT(27)
|
||||
|
||||
#define BT1_CCU_PCIE_PMSC 0x148
|
||||
#define BT1_CCU_PCIE_LTSSM_STATE_MASK GENMASK(5, 0)
|
||||
#define BT1_CCU_PCIE_LTSSM_DET_QUIET 0x00
|
||||
#define BT1_CCU_PCIE_LTSSM_DET_ACT 0x01
|
||||
#define BT1_CCU_PCIE_LTSSM_POLL_ACT 0x02
|
||||
#define BT1_CCU_PCIE_LTSSM_POLL_COMP 0x03
|
||||
#define BT1_CCU_PCIE_LTSSM_POLL_CONF 0x04
|
||||
#define BT1_CCU_PCIE_LTSSM_PRE_DET_QUIET 0x05
|
||||
#define BT1_CCU_PCIE_LTSSM_DET_WAIT 0x06
|
||||
#define BT1_CCU_PCIE_LTSSM_CFG_LNKWD_START 0x07
|
||||
#define BT1_CCU_PCIE_LTSSM_CFG_LNKWD_ACEPT 0x08
|
||||
#define BT1_CCU_PCIE_LTSSM_CFG_LNNUM_WAIT 0x09
|
||||
#define BT1_CCU_PCIE_LTSSM_CFG_LNNUM_ACEPT 0x0a
|
||||
#define BT1_CCU_PCIE_LTSSM_CFG_COMPLETE 0x0b
|
||||
#define BT1_CCU_PCIE_LTSSM_CFG_IDLE 0x0c
|
||||
#define BT1_CCU_PCIE_LTSSM_RCVR_LOCK 0x0d
|
||||
#define BT1_CCU_PCIE_LTSSM_RCVR_SPEED 0x0e
|
||||
#define BT1_CCU_PCIE_LTSSM_RCVR_RCVRCFG 0x0f
|
||||
#define BT1_CCU_PCIE_LTSSM_RCVR_IDLE 0x10
|
||||
#define BT1_CCU_PCIE_LTSSM_L0 0x11
|
||||
#define BT1_CCU_PCIE_LTSSM_L0S 0x12
|
||||
#define BT1_CCU_PCIE_LTSSM_L123_SEND_IDLE 0x13
|
||||
#define BT1_CCU_PCIE_LTSSM_L1_IDLE 0x14
|
||||
#define BT1_CCU_PCIE_LTSSM_L2_IDLE 0x15
|
||||
#define BT1_CCU_PCIE_LTSSM_L2_WAKE 0x16
|
||||
#define BT1_CCU_PCIE_LTSSM_DIS_ENTRY 0x17
|
||||
#define BT1_CCU_PCIE_LTSSM_DIS_IDLE 0x18
|
||||
#define BT1_CCU_PCIE_LTSSM_DISABLE 0x19
|
||||
#define BT1_CCU_PCIE_LTSSM_LPBK_ENTRY 0x1a
|
||||
#define BT1_CCU_PCIE_LTSSM_LPBK_ACTIVE 0x1b
|
||||
#define BT1_CCU_PCIE_LTSSM_LPBK_EXIT 0x1c
|
||||
#define BT1_CCU_PCIE_LTSSM_LPBK_EXIT_TOUT 0x1d
|
||||
#define BT1_CCU_PCIE_LTSSM_HOT_RST_ENTRY 0x1e
|
||||
#define BT1_CCU_PCIE_LTSSM_HOT_RST 0x1f
|
||||
#define BT1_CCU_PCIE_LTSSM_RCVR_EQ0 0x20
|
||||
#define BT1_CCU_PCIE_LTSSM_RCVR_EQ1 0x21
|
||||
#define BT1_CCU_PCIE_LTSSM_RCVR_EQ2 0x22
|
||||
#define BT1_CCU_PCIE_LTSSM_RCVR_EQ3 0x23
|
||||
#define BT1_CCU_PCIE_SMLH_LINKUP BIT(6)
|
||||
#define BT1_CCU_PCIE_RDLH_LINKUP BIT(7)
|
||||
#define BT1_CCU_PCIE_PM_LINKSTATE_L0S BIT(8)
|
||||
#define BT1_CCU_PCIE_PM_LINKSTATE_L1 BIT(9)
|
||||
#define BT1_CCU_PCIE_PM_LINKSTATE_L2 BIT(10)
|
||||
#define BT1_CCU_PCIE_L1_PENDING BIT(12)
|
||||
#define BT1_CCU_PCIE_REQ_EXIT_L1 BIT(14)
|
||||
#define BT1_CCU_PCIE_LTSSM_RCVR_EQ BIT(15)
|
||||
#define BT1_CCU_PCIE_PM_DSTAT_MASK GENMASK(18, 16)
|
||||
#define BT1_CCU_PCIE_PM_PME_EN BIT(20)
|
||||
#define BT1_CCU_PCIE_PM_PME_STATUS BIT(21)
|
||||
#define BT1_CCU_PCIE_AUX_PM_EN BIT(22)
|
||||
#define BT1_CCU_PCIE_AUX_PWR_DET BIT(23)
|
||||
#define BT1_CCU_PCIE_WAKE_DET BIT(24)
|
||||
#define BT1_CCU_PCIE_TURNOFF_REQ BIT(30)
|
||||
#define BT1_CCU_PCIE_TURNOFF_ACK BIT(31)
|
||||
|
||||
#define BT1_CCU_PCIE_GENC 0x14c
|
||||
#define BT1_CCU_PCIE_LTSSM_EN BIT(1)
|
||||
#define BT1_CCU_PCIE_DBI2_MODE BIT(2)
|
||||
#define BT1_CCU_PCIE_MGMT_EN BIT(3)
|
||||
#define BT1_CCU_PCIE_RXLANE_FLIP_EN BIT(16)
|
||||
#define BT1_CCU_PCIE_TXLANE_FLIP_EN BIT(17)
|
||||
#define BT1_CCU_PCIE_SLV_XFER_PEND BIT(24)
|
||||
#define BT1_CCU_PCIE_RCV_XFER_PEND BIT(25)
|
||||
#define BT1_CCU_PCIE_DBI_XFER_PEND BIT(26)
|
||||
#define BT1_CCU_PCIE_DMA_XFER_PEND BIT(27)
|
||||
|
||||
#define BT1_CCU_PCIE_LTSSM_LINKUP(_pmsc) \
|
||||
({ \
|
||||
int __state = FIELD_GET(BT1_CCU_PCIE_LTSSM_STATE_MASK, _pmsc); \
|
||||
__state >= BT1_CCU_PCIE_LTSSM_L0 && __state <= BT1_CCU_PCIE_LTSSM_L2_WAKE; \
|
||||
})
|
||||
|
||||
/* Baikal-T1 PCIe specific control registers */
|
||||
#define BT1_PCIE_AXI2MGM_LANENUM 0xd04
|
||||
#define BT1_PCIE_AXI2MGM_LANESEL_MASK GENMASK(3, 0)
|
||||
|
||||
#define BT1_PCIE_AXI2MGM_ADDRCTL 0xd08
|
||||
#define BT1_PCIE_AXI2MGM_PHYREG_ADDR_MASK GENMASK(20, 0)
|
||||
#define BT1_PCIE_AXI2MGM_READ_FLAG BIT(29)
|
||||
#define BT1_PCIE_AXI2MGM_DONE BIT(30)
|
||||
#define BT1_PCIE_AXI2MGM_BUSY BIT(31)
|
||||
|
||||
#define BT1_PCIE_AXI2MGM_WRITEDATA 0xd0c
|
||||
#define BT1_PCIE_AXI2MGM_WDATA GENMASK(15, 0)
|
||||
|
||||
#define BT1_PCIE_AXI2MGM_READDATA 0xd10
|
||||
#define BT1_PCIE_AXI2MGM_RDATA GENMASK(15, 0)
|
||||
|
||||
/* Generic Baikal-T1 PCIe interface resources */
|
||||
#define BT1_PCIE_NUM_APP_CLKS ARRAY_SIZE(bt1_pcie_app_clks)
|
||||
#define BT1_PCIE_NUM_CORE_CLKS ARRAY_SIZE(bt1_pcie_core_clks)
|
||||
#define BT1_PCIE_NUM_APP_RSTS ARRAY_SIZE(bt1_pcie_app_rsts)
|
||||
#define BT1_PCIE_NUM_CORE_RSTS ARRAY_SIZE(bt1_pcie_core_rsts)
|
||||
|
||||
/* PCIe bus setup delays and timeouts */
|
||||
#define BT1_PCIE_RST_DELAY_MS 100
|
||||
#define BT1_PCIE_RUN_DELAY_US 100
|
||||
#define BT1_PCIE_REQ_DELAY_US 1
|
||||
#define BT1_PCIE_REQ_TIMEOUT_US 1000
|
||||
#define BT1_PCIE_LNK_DELAY_US 1000
|
||||
#define BT1_PCIE_LNK_TIMEOUT_US 1000000
|
||||
|
||||
static const enum dw_pcie_app_clk bt1_pcie_app_clks[] = {
|
||||
DW_PCIE_DBI_CLK, DW_PCIE_MSTR_CLK, DW_PCIE_SLV_CLK,
|
||||
};
|
||||
|
||||
static const enum dw_pcie_core_clk bt1_pcie_core_clks[] = {
|
||||
DW_PCIE_REF_CLK,
|
||||
};
|
||||
|
||||
static const enum dw_pcie_app_rst bt1_pcie_app_rsts[] = {
|
||||
DW_PCIE_MSTR_RST, DW_PCIE_SLV_RST,
|
||||
};
|
||||
|
||||
static const enum dw_pcie_core_rst bt1_pcie_core_rsts[] = {
|
||||
DW_PCIE_NON_STICKY_RST, DW_PCIE_STICKY_RST, DW_PCIE_CORE_RST,
|
||||
DW_PCIE_PIPE_RST, DW_PCIE_PHY_RST, DW_PCIE_HOT_RST, DW_PCIE_PWR_RST,
|
||||
};
|
||||
|
||||
struct bt1_pcie {
|
||||
struct dw_pcie dw;
|
||||
struct platform_device *pdev;
|
||||
struct regmap *sys_regs;
|
||||
};
|
||||
#define to_bt1_pcie(_dw) container_of(_dw, struct bt1_pcie, dw)
|
||||
|
||||
/*
|
||||
* Baikal-T1 MMIO space must be read/written by the dword-aligned
|
||||
* instructions. Note the methods are optimized to have the dword operations
|
||||
* performed with minimum overhead as the most frequently used ones.
|
||||
*/
|
||||
static int bt1_pcie_read_mmio(void __iomem *addr, int size, u32 *val)
|
||||
{
|
||||
unsigned int ofs = (uintptr_t)addr & 0x3;
|
||||
|
||||
if (!IS_ALIGNED((uintptr_t)addr, size))
|
||||
return -EINVAL;
|
||||
|
||||
*val = readl(addr - ofs) >> ofs * BITS_PER_BYTE;
|
||||
if (size == 4) {
|
||||
return 0;
|
||||
} else if (size == 2) {
|
||||
*val &= 0xffff;
|
||||
return 0;
|
||||
} else if (size == 1) {
|
||||
*val &= 0xff;
|
||||
return 0;
|
||||
}
|
||||
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
static int bt1_pcie_write_mmio(void __iomem *addr, int size, u32 val)
|
||||
{
|
||||
unsigned int ofs = (uintptr_t)addr & 0x3;
|
||||
u32 tmp, mask;
|
||||
|
||||
if (!IS_ALIGNED((uintptr_t)addr, size))
|
||||
return -EINVAL;
|
||||
|
||||
if (size == 4) {
|
||||
writel(val, addr);
|
||||
return 0;
|
||||
} else if (size == 2 || size == 1) {
|
||||
mask = GENMASK(size * BITS_PER_BYTE - 1, 0);
|
||||
tmp = readl(addr - ofs) & ~(mask << ofs * BITS_PER_BYTE);
|
||||
tmp |= (val & mask) << ofs * BITS_PER_BYTE;
|
||||
writel(tmp, addr - ofs);
|
||||
return 0;
|
||||
}
|
||||
|
||||
return -EINVAL;
|
||||
}
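Since the Baikal-T1 DBI space only tolerates dword accesses, the sub-dword writes above are emulated with a read-modify-write of the aligned dword. A worked example with hypothetical values:

	/*
	 * A 16-bit write of 0xBEEF at byte offset 0x106 becomes a RMW of the
	 * dword at 0x104, with ofs = 2 and mask = 0xffff:
	 *
	 *	tmp  = readl(base + 0x104);
	 *	tmp &= ~(0xffff << 16);
	 *	tmp |= 0xBEEF << 16;
	 *	writel(tmp, base + 0x104);
	 */
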
|
||||
|
||||
static u32 bt1_pcie_read_dbi(struct dw_pcie *pci, void __iomem *base, u32 reg,
|
||||
size_t size)
|
||||
{
|
||||
int ret;
|
||||
u32 val;
|
||||
|
||||
ret = bt1_pcie_read_mmio(base + reg, size, &val);
|
||||
if (ret) {
|
||||
dev_err(pci->dev, "Read DBI address failed\n");
|
||||
return ~0U;
|
||||
}
|
||||
|
||||
return val;
|
||||
}
|
||||
|
||||
static void bt1_pcie_write_dbi(struct dw_pcie *pci, void __iomem *base, u32 reg,
|
||||
size_t size, u32 val)
|
||||
{
|
||||
int ret;
|
||||
|
||||
ret = bt1_pcie_write_mmio(base + reg, size, val);
|
||||
if (ret)
|
||||
dev_err(pci->dev, "Write DBI address failed\n");
|
||||
}
|
||||
|
||||
static void bt1_pcie_write_dbi2(struct dw_pcie *pci, void __iomem *base, u32 reg,
|
||||
size_t size, u32 val)
|
||||
{
|
||||
struct bt1_pcie *btpci = to_bt1_pcie(pci);
|
||||
int ret;
|
||||
|
||||
regmap_update_bits(btpci->sys_regs, BT1_CCU_PCIE_GENC,
|
||||
BT1_CCU_PCIE_DBI2_MODE, BT1_CCU_PCIE_DBI2_MODE);
|
||||
|
||||
ret = bt1_pcie_write_mmio(base + reg, size, val);
|
||||
if (ret)
|
||||
dev_err(pci->dev, "Write DBI2 address failed\n");
|
||||
|
||||
regmap_update_bits(btpci->sys_regs, BT1_CCU_PCIE_GENC,
|
||||
BT1_CCU_PCIE_DBI2_MODE, 0);
|
||||
}
|
||||
|
||||
static int bt1_pcie_start_link(struct dw_pcie *pci)
|
||||
{
|
||||
struct bt1_pcie *btpci = to_bt1_pcie(pci);
|
||||
u32 val;
|
||||
int ret;
|
||||
|
||||
/*
|
||||
* Enable LTSSM and make sure it was able to establish both PHY and
|
||||
* data links. This procedure shall work fine to reach 2.5 GT/s speed.
|
||||
*/
|
||||
regmap_update_bits(btpci->sys_regs, BT1_CCU_PCIE_GENC,
|
||||
BT1_CCU_PCIE_LTSSM_EN, BT1_CCU_PCIE_LTSSM_EN);
|
||||
|
||||
ret = regmap_read_poll_timeout(btpci->sys_regs, BT1_CCU_PCIE_PMSC, val,
|
||||
(val & BT1_CCU_PCIE_SMLH_LINKUP),
|
||||
BT1_PCIE_LNK_DELAY_US, BT1_PCIE_LNK_TIMEOUT_US);
|
||||
if (ret) {
|
||||
dev_err(pci->dev, "LTSSM failed to set PHY link up\n");
|
||||
return ret;
|
||||
}
|
||||
|
||||
ret = regmap_read_poll_timeout(btpci->sys_regs, BT1_CCU_PCIE_PMSC, val,
|
||||
(val & BT1_CCU_PCIE_RDLH_LINKUP),
|
||||
BT1_PCIE_LNK_DELAY_US, BT1_PCIE_LNK_TIMEOUT_US);
|
||||
if (ret) {
|
||||
dev_err(pci->dev, "LTSSM failed to set data link up\n");
|
||||
return ret;
|
||||
}
|
||||
|
||||
/*
|
||||
* Activate direct speed change after the link is established in an
|
||||
* attempt to reach a higher bus performance (up to Gen.3 - 8.0 GT/s).
|
||||
* This is required at least to get 8.0 GT/s speed.
|
||||
*/
|
||||
val = dw_pcie_readl_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL);
|
||||
val |= PORT_LOGIC_SPEED_CHANGE;
|
||||
dw_pcie_writel_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL, val);
|
||||
|
||||
ret = regmap_read_poll_timeout(btpci->sys_regs, BT1_CCU_PCIE_PMSC, val,
|
||||
BT1_CCU_PCIE_LTSSM_LINKUP(val),
|
||||
BT1_PCIE_LNK_DELAY_US, BT1_PCIE_LNK_TIMEOUT_US);
|
||||
if (ret)
|
||||
dev_err(pci->dev, "LTSSM failed to get into L0 state\n");
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static void bt1_pcie_stop_link(struct dw_pcie *pci)
|
||||
{
|
||||
struct bt1_pcie *btpci = to_bt1_pcie(pci);
|
||||
|
||||
regmap_update_bits(btpci->sys_regs, BT1_CCU_PCIE_GENC,
|
||||
BT1_CCU_PCIE_LTSSM_EN, 0);
|
||||
}
|
||||
|
||||
static const struct dw_pcie_ops bt1_pcie_ops = {
|
||||
.read_dbi = bt1_pcie_read_dbi,
|
||||
.write_dbi = bt1_pcie_write_dbi,
|
||||
.write_dbi2 = bt1_pcie_write_dbi2,
|
||||
.start_link = bt1_pcie_start_link,
|
||||
.stop_link = bt1_pcie_stop_link,
|
||||
};
|
||||
|
||||
static struct pci_ops bt1_pci_ops = {
|
||||
.map_bus = dw_pcie_own_conf_map_bus,
|
||||
.read = pci_generic_config_read32,
|
||||
.write = pci_generic_config_write32,
|
||||
};
|
||||
|
||||
static int bt1_pcie_get_resources(struct bt1_pcie *btpci)
|
||||
{
|
||||
struct device *dev = btpci->dw.dev;
|
||||
int i;
|
||||
|
||||
/* DBI access is supposed to be performed by the dword-aligned IOs */
|
||||
btpci->dw.pp.bridge->ops = &bt1_pci_ops;
|
||||
|
||||
/* These CSRs are in MMIO so we won't check the regmap-methods status */
|
||||
btpci->sys_regs =
|
||||
syscon_regmap_lookup_by_phandle(dev->of_node, "baikal,bt1-syscon");
|
||||
if (IS_ERR(btpci->sys_regs))
|
||||
return dev_err_probe(dev, PTR_ERR(btpci->sys_regs),
|
||||
"Failed to get syscon\n");
|
||||
|
||||
/* Make sure all the required resources have been specified */
|
||||
for (i = 0; i < BT1_PCIE_NUM_APP_CLKS; i++) {
|
||||
if (!btpci->dw.app_clks[bt1_pcie_app_clks[i]].clk) {
|
||||
dev_err(dev, "App clocks set is incomplete\n");
|
||||
return -ENOENT;
|
||||
}
|
||||
}
|
||||
|
||||
for (i = 0; i < BT1_PCIE_NUM_CORE_CLKS; i++) {
|
||||
if (!btpci->dw.core_clks[bt1_pcie_core_clks[i]].clk) {
|
||||
dev_err(dev, "Core clocks set is incomplete\n");
|
||||
return -ENOENT;
|
||||
}
|
||||
}
|
||||
|
||||
for (i = 0; i < BT1_PCIE_NUM_APP_RSTS; i++) {
|
||||
if (!btpci->dw.app_rsts[bt1_pcie_app_rsts[i]].rstc) {
|
||||
dev_err(dev, "App resets set is incomplete\n");
|
||||
return -ENOENT;
|
||||
}
|
||||
}
|
||||
|
||||
for (i = 0; i < BT1_PCIE_NUM_CORE_RSTS; i++) {
|
||||
if (!btpci->dw.core_rsts[bt1_pcie_core_rsts[i]].rstc) {
|
||||
dev_err(dev, "Core resets set is incomplete\n");
|
||||
return -ENOENT;
|
||||
}
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void bt1_pcie_full_stop_bus(struct bt1_pcie *btpci, bool init)
|
||||
{
|
||||
struct device *dev = btpci->dw.dev;
|
||||
struct dw_pcie *pci = &btpci->dw;
|
||||
int ret;
|
||||
|
||||
/* Disable LTSSM for sure */
|
||||
regmap_update_bits(btpci->sys_regs, BT1_CCU_PCIE_GENC,
|
||||
BT1_CCU_PCIE_LTSSM_EN, 0);
|
||||
|
||||
/*
|
||||
* Application reset controls are trigger-based so assert the core
|
||||
* resets only.
|
||||
*/
|
||||
ret = reset_control_bulk_assert(DW_PCIE_NUM_CORE_RSTS, pci->core_rsts);
|
||||
if (ret)
|
||||
dev_err(dev, "Failed to assert core resets\n");
|
||||
|
||||
/*
|
||||
* Clocks are disabled by default at least in accordance with the clk
|
||||
* enable counter value on init stage.
|
||||
*/
|
||||
if (!init) {
|
||||
clk_bulk_disable_unprepare(DW_PCIE_NUM_CORE_CLKS, pci->core_clks);
|
||||
|
||||
clk_bulk_disable_unprepare(DW_PCIE_NUM_APP_CLKS, pci->app_clks);
|
||||
}
|
||||
|
||||
/* The peripheral devices are unavailable anyway so reset them too */
|
||||
gpiod_set_value_cansleep(pci->pe_rst, 1);
|
||||
|
||||
/* Make sure all the resets are settled */
|
||||
msleep(BT1_PCIE_RST_DELAY_MS);
|
||||
}
|
||||
|
||||
/*
|
||||
* Implements the cold reset procedure in accordance with the reference manual
|
||||
* and available PM signals.
|
||||
*/
|
||||
static int bt1_pcie_cold_start_bus(struct bt1_pcie *btpci)
|
||||
{
|
||||
struct device *dev = btpci->dw.dev;
|
||||
struct dw_pcie *pci = &btpci->dw;
|
||||
u32 val;
|
||||
int ret;
|
||||
|
||||
/* First get out of the Power/Hot reset state */
|
||||
ret = reset_control_deassert(pci->core_rsts[DW_PCIE_PWR_RST].rstc);
|
||||
if (ret) {
|
||||
dev_err(dev, "Failed to deassert PHY reset\n");
|
||||
return ret;
|
||||
}
|
||||
|
||||
ret = reset_control_deassert(pci->core_rsts[DW_PCIE_HOT_RST].rstc);
|
||||
if (ret) {
|
||||
dev_err(dev, "Failed to deassert hot reset\n");
|
||||
goto err_assert_pwr_rst;
|
||||
}
|
||||
|
||||
/* Wait for the PM-core to stop requesting the PHY reset */
|
||||
ret = regmap_read_poll_timeout(btpci->sys_regs, BT1_CCU_PCIE_RSTC, val,
|
||||
!(val & BT1_CCU_PCIE_REQ_PHY_RST),
|
||||
BT1_PCIE_REQ_DELAY_US, BT1_PCIE_REQ_TIMEOUT_US);
|
||||
if (ret) {
|
||||
dev_err(dev, "Timed out waiting for PM to stop PHY resetting\n");
|
||||
goto err_assert_hot_rst;
|
||||
}
|
||||
|
||||
ret = reset_control_deassert(pci->core_rsts[DW_PCIE_PHY_RST].rstc);
|
||||
if (ret) {
|
||||
dev_err(dev, "Failed to deassert PHY reset\n");
|
||||
goto err_assert_hot_rst;
|
||||
}
|
||||
|
||||
/* Clocks can be now enabled, but the ref one is crucial at this stage */
|
||||
ret = clk_bulk_prepare_enable(DW_PCIE_NUM_APP_CLKS, pci->app_clks);
|
||||
if (ret) {
|
||||
dev_err(dev, "Failed to enable app clocks\n");
|
||||
goto err_assert_phy_rst;
|
||||
}
|
||||
|
||||
ret = clk_bulk_prepare_enable(DW_PCIE_NUM_CORE_CLKS, pci->core_clks);
|
||||
if (ret) {
|
||||
dev_err(dev, "Failed to enable ref clocks\n");
|
||||
goto err_disable_app_clk;
|
||||
}
|
||||
|
||||
/* Wait for the PM to stop requesting the controller core reset */
|
||||
ret = regmap_read_poll_timeout(btpci->sys_regs, BT1_CCU_PCIE_RSTC, val,
|
||||
!(val & BT1_CCU_PCIE_REQ_CORE_RST),
|
||||
BT1_PCIE_REQ_DELAY_US, BT1_PCIE_REQ_TIMEOUT_US);
|
||||
if (ret) {
|
||||
dev_err(dev, "Timed out waiting for PM to stop core resetting\n");
|
||||
goto err_disable_core_clk;
|
||||
}
|
||||
|
||||
/* PCS-PIPE interface and controller core can be now activated */
|
||||
ret = reset_control_deassert(pci->core_rsts[DW_PCIE_PIPE_RST].rstc);
|
||||
if (ret) {
|
||||
dev_err(dev, "Failed to deassert PIPE reset\n");
|
||||
goto err_disable_core_clk;
|
||||
}
|
||||
|
||||
ret = reset_control_deassert(pci->core_rsts[DW_PCIE_CORE_RST].rstc);
|
||||
if (ret) {
|
||||
dev_err(dev, "Failed to deassert core reset\n");
|
||||
goto err_assert_pipe_rst;
|
||||
}
|
||||
|
||||
/* It's recommended to reset the core and application logic together */
|
||||
ret = reset_control_bulk_reset(DW_PCIE_NUM_APP_RSTS, pci->app_rsts);
|
||||
if (ret) {
|
||||
dev_err(dev, "Failed to reset app domain\n");
|
||||
goto err_assert_core_rst;
|
||||
}
|
||||
|
||||
/* Sticky/Non-sticky CSR flags can be now unreset too */
|
||||
ret = reset_control_deassert(pci->core_rsts[DW_PCIE_STICKY_RST].rstc);
|
||||
if (ret) {
|
||||
dev_err(dev, "Failed to deassert sticky reset\n");
|
||||
goto err_assert_core_rst;
|
||||
}
|
||||
|
||||
ret = reset_control_deassert(pci->core_rsts[DW_PCIE_NON_STICKY_RST].rstc);
|
||||
if (ret) {
|
||||
dev_err(dev, "Failed to deassert non-sticky reset\n");
|
||||
goto err_assert_sticky_rst;
|
||||
}
|
||||
|
||||
/* Activate the PCIe bus peripheral devices */
|
||||
gpiod_set_value_cansleep(pci->pe_rst, 0);
|
||||
|
||||
/* Make sure the state is settled (LTSSM is still disabled though) */
|
||||
usleep_range(BT1_PCIE_RUN_DELAY_US, BT1_PCIE_RUN_DELAY_US + 100);
|
||||
|
||||
return 0;
|
||||
|
||||
err_assert_sticky_rst:
|
||||
reset_control_assert(pci->core_rsts[DW_PCIE_STICKY_RST].rstc);
|
||||
|
||||
err_assert_core_rst:
|
||||
reset_control_assert(pci->core_rsts[DW_PCIE_CORE_RST].rstc);
|
||||
|
||||
err_assert_pipe_rst:
|
||||
reset_control_assert(pci->core_rsts[DW_PCIE_PIPE_RST].rstc);
|
||||
|
||||
err_disable_core_clk:
|
||||
clk_bulk_disable_unprepare(DW_PCIE_NUM_CORE_CLKS, pci->core_clks);
|
||||
|
||||
err_disable_app_clk:
|
||||
clk_bulk_disable_unprepare(DW_PCIE_NUM_APP_CLKS, pci->app_clks);
|
||||
|
||||
err_assert_phy_rst:
|
||||
reset_control_assert(pci->core_rsts[DW_PCIE_PHY_RST].rstc);
|
||||
|
||||
err_assert_hot_rst:
|
||||
reset_control_assert(pci->core_rsts[DW_PCIE_HOT_RST].rstc);
|
||||
|
||||
err_assert_pwr_rst:
|
||||
reset_control_assert(pci->core_rsts[DW_PCIE_PWR_RST].rstc);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int bt1_pcie_host_init(struct dw_pcie_rp *pp)
|
||||
{
|
||||
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
|
||||
struct bt1_pcie *btpci = to_bt1_pcie(pci);
|
||||
int ret;
|
||||
|
||||
ret = bt1_pcie_get_resources(btpci);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
bt1_pcie_full_stop_bus(btpci, true);
|
||||
|
||||
return bt1_pcie_cold_start_bus(btpci);
|
||||
}
|
||||
|
||||
static void bt1_pcie_host_deinit(struct dw_pcie_rp *pp)
|
||||
{
|
||||
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
|
||||
struct bt1_pcie *btpci = to_bt1_pcie(pci);
|
||||
|
||||
bt1_pcie_full_stop_bus(btpci, false);
|
||||
}
|
||||
|
||||
static const struct dw_pcie_host_ops bt1_pcie_host_ops = {
|
||||
.host_init = bt1_pcie_host_init,
|
||||
.host_deinit = bt1_pcie_host_deinit,
|
||||
};
|
||||
|
||||
static struct bt1_pcie *bt1_pcie_create_data(struct platform_device *pdev)
|
||||
{
|
||||
struct bt1_pcie *btpci;
|
||||
|
||||
btpci = devm_kzalloc(&pdev->dev, sizeof(*btpci), GFP_KERNEL);
|
||||
if (!btpci)
|
||||
return ERR_PTR(-ENOMEM);
|
||||
|
||||
btpci->pdev = pdev;
|
||||
|
||||
platform_set_drvdata(pdev, btpci);
|
||||
|
||||
return btpci;
|
||||
}
|
||||
|
||||
static int bt1_pcie_add_port(struct bt1_pcie *btpci)
|
||||
{
|
||||
struct device *dev = &btpci->pdev->dev;
|
||||
int ret;
|
||||
|
||||
btpci->dw.version = DW_PCIE_VER_460A;
|
||||
btpci->dw.dev = dev;
|
||||
btpci->dw.ops = &bt1_pcie_ops;
|
||||
|
||||
btpci->dw.pp.num_vectors = MAX_MSI_IRQS;
|
||||
btpci->dw.pp.ops = &bt1_pcie_host_ops;
|
||||
|
||||
dw_pcie_cap_set(&btpci->dw, REQ_RES);
|
||||
|
||||
ret = dw_pcie_host_init(&btpci->dw.pp);
|
||||
|
||||
return dev_err_probe(dev, ret, "Failed to initialize DWC PCIe host\n");
|
||||
}
|
||||
|
||||
static void bt1_pcie_del_port(struct bt1_pcie *btpci)
|
||||
{
|
||||
dw_pcie_host_deinit(&btpci->dw.pp);
|
||||
}
|
||||
|
||||
static int bt1_pcie_probe(struct platform_device *pdev)
|
||||
{
|
||||
struct bt1_pcie *btpci;
|
||||
|
||||
btpci = bt1_pcie_create_data(pdev);
|
||||
if (IS_ERR(btpci))
|
||||
return PTR_ERR(btpci);
|
||||
|
||||
return bt1_pcie_add_port(btpci);
|
||||
}
|
||||
|
||||
static int bt1_pcie_remove(struct platform_device *pdev)
|
||||
{
|
||||
struct bt1_pcie *btpci = platform_get_drvdata(pdev);
|
||||
|
||||
bt1_pcie_del_port(btpci);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static const struct of_device_id bt1_pcie_of_match[] = {
|
||||
{ .compatible = "baikal,bt1-pcie" },
|
||||
{},
|
||||
};
|
||||
MODULE_DEVICE_TABLE(of, bt1_pcie_of_match);
|
||||
|
||||
static struct platform_driver bt1_pcie_driver = {
|
||||
.probe = bt1_pcie_probe,
|
||||
.remove = bt1_pcie_remove,
|
||||
.driver = {
|
||||
.name = "bt1-pcie",
|
||||
.of_match_table = bt1_pcie_of_match,
|
||||
},
|
||||
};
|
||||
module_platform_driver(bt1_pcie_driver);
|
||||
|
||||
MODULE_AUTHOR("Serge Semin <Sergey.Semin@baikalelectronics.ru>");
|
||||
MODULE_DESCRIPTION("Baikal-T1 PCIe driver");
|
||||
MODULE_LICENSE("GPL");
|
|
@ -13,8 +13,6 @@
|
|||
#include <linux/pci-epc.h>
|
||||
#include <linux/pci-epf.h>
|
||||
|
||||
#include "../../pci.h"
|
||||
|
||||
void dw_pcie_ep_linkup(struct dw_pcie_ep *ep)
|
||||
{
|
||||
struct pci_epc *epc = ep->epc;
|
||||
|
@ -171,8 +169,8 @@ static int dw_pcie_ep_inbound_atu(struct dw_pcie_ep *ep, u8 func_no, int type,
|
|||
return -EINVAL;
|
||||
}
|
||||
|
||||
ret = dw_pcie_prog_inbound_atu(pci, func_no, free_win, type,
|
||||
cpu_addr, bar);
|
||||
ret = dw_pcie_prog_ep_inbound_atu(pci, func_no, free_win, type,
|
||||
cpu_addr, bar);
|
||||
if (ret < 0) {
|
||||
dev_err(pci->dev, "Failed to program IB window\n");
|
||||
return ret;
|
||||
|
@ -643,7 +641,7 @@ static unsigned int dw_pcie_ep_find_ext_capability(struct dw_pcie *pci, int cap)
|
|||
int dw_pcie_ep_init_complete(struct dw_pcie_ep *ep)
|
||||
{
|
||||
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
|
||||
unsigned int offset;
|
||||
unsigned int offset, ptm_cap_base;
|
||||
unsigned int nbars;
|
||||
u8 hdr_type;
|
||||
u32 reg;
|
||||
|
@ -659,6 +657,7 @@ int dw_pcie_ep_init_complete(struct dw_pcie_ep *ep)
|
|||
}
|
||||
|
||||
offset = dw_pcie_ep_find_ext_capability(pci, PCI_EXT_CAP_ID_REBAR);
|
||||
ptm_cap_base = dw_pcie_ep_find_ext_capability(pci, PCI_EXT_CAP_ID_PTM);
|
||||
|
||||
dw_pcie_dbi_ro_wr_en(pci);
|
||||
|
||||
|
@ -671,6 +670,22 @@ int dw_pcie_ep_init_complete(struct dw_pcie_ep *ep)
|
|||
dw_pcie_writel_dbi(pci, offset + PCI_REBAR_CAP, 0x0);
|
||||
}
|
||||
|
||||
/*
|
||||
* PTM responder capability can be disabled only after disabling
|
||||
* PTM root capability.
|
||||
*/
|
||||
if (ptm_cap_base) {
|
||||
dw_pcie_dbi_ro_wr_en(pci);
|
||||
reg = dw_pcie_readl_dbi(pci, ptm_cap_base + PCI_PTM_CAP);
|
||||
reg &= ~PCI_PTM_CAP_ROOT;
|
||||
dw_pcie_writel_dbi(pci, ptm_cap_base + PCI_PTM_CAP, reg);
|
||||
|
||||
reg = dw_pcie_readl_dbi(pci, ptm_cap_base + PCI_PTM_CAP);
|
||||
reg &= ~(PCI_PTM_CAP_RES | PCI_PTM_GRANULARITY_MASK);
|
||||
dw_pcie_writel_dbi(pci, ptm_cap_base + PCI_PTM_CAP, reg);
|
||||
dw_pcie_dbi_ro_wr_dis(pci);
|
||||
}
|
||||
|
||||
dw_pcie_setup(pci);
|
||||
dw_pcie_dbi_ro_wr_dis(pci);
|
||||
|
||||
|
@ -694,23 +709,9 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep)
|
|||
|
||||
INIT_LIST_HEAD(&ep->func_list);
|
||||
|
||||
if (!pci->dbi_base) {
|
||||
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi");
|
||||
pci->dbi_base = devm_pci_remap_cfg_resource(dev, res);
|
||||
if (IS_ERR(pci->dbi_base))
|
||||
return PTR_ERR(pci->dbi_base);
|
||||
}
|
||||
|
||||
if (!pci->dbi_base2) {
|
||||
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi2");
|
||||
if (!res) {
|
||||
pci->dbi_base2 = pci->dbi_base + SZ_4K;
|
||||
} else {
|
||||
pci->dbi_base2 = devm_pci_remap_cfg_resource(dev, res);
|
||||
if (IS_ERR(pci->dbi_base2))
|
||||
return PTR_ERR(pci->dbi_base2);
|
||||
}
|
||||
}
|
||||
ret = dw_pcie_get_resources(pci);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "addr_space");
|
||||
if (!res)
|
||||
|
@ -739,9 +740,6 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep)
|
|||
return -ENOMEM;
|
||||
ep->outbound_addr = addr;
|
||||
|
||||
if (pci->link_gen < 1)
|
||||
pci->link_gen = of_pci_get_max_link_speed(np);
|
||||
|
||||
epc = devm_pci_epc_create(dev, &epc_ops);
|
||||
if (IS_ERR(epc)) {
|
||||
dev_err(dev, "Failed to create epc device\n");
|
||||
|
|
|
@ -16,7 +16,6 @@
|
|||
#include <linux/pci_regs.h>
|
||||
#include <linux/platform_device.h>
|
||||
|
||||
#include "../../pci.h"
|
||||
#include "pcie-designware.h"
|
||||
|
||||
static struct pci_ops dw_pcie_ops;
|
||||
|
@ -395,6 +394,10 @@ int dw_pcie_host_init(struct dw_pcie_rp *pp)
|
|||
|
||||
raw_spin_lock_init(&pp->lock);
|
||||
|
||||
ret = dw_pcie_get_resources(pci);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "config");
|
||||
if (res) {
|
||||
pp->cfg0_size = resource_size(res);
|
||||
|
@ -408,13 +411,6 @@ int dw_pcie_host_init(struct dw_pcie_rp *pp)
|
|||
return -ENODEV;
|
||||
}
|
||||
|
||||
if (!pci->dbi_base) {
|
||||
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi");
|
||||
pci->dbi_base = devm_pci_remap_cfg_resource(dev, res);
|
||||
if (IS_ERR(pci->dbi_base))
|
||||
return PTR_ERR(pci->dbi_base);
|
||||
}
|
||||
|
||||
bridge = devm_pci_alloc_host_bridge(dev, 0);
|
||||
if (!bridge)
|
||||
return -ENOMEM;
|
||||
|
@ -429,9 +425,6 @@ int dw_pcie_host_init(struct dw_pcie_rp *pp)
|
|||
pp->io_base = pci_pio_to_address(win->res->start);
|
||||
}
|
||||
|
||||
if (pci->link_gen < 1)
|
||||
pci->link_gen = of_pci_get_max_link_speed(np);
|
||||
|
||||
/* Set default bus ops */
|
||||
bridge->ops = &dw_pcie_ops;
|
||||
bridge->child_ops = &dw_child_pcie_ops;
|
||||
|
@ -643,12 +636,15 @@ static int dw_pcie_iatu_setup(struct dw_pcie_rp *pp)
|
|||
}
|
||||
|
||||
/*
|
||||
* Ensure all outbound windows are disabled before proceeding with
|
||||
* the MEM/IO ranges setups.
|
||||
* Ensure all out/inbound windows are disabled before proceeding with
|
||||
* the MEM/IO (dma-)ranges setups.
|
||||
*/
|
||||
for (i = 0; i < pci->num_ob_windows; i++)
|
||||
dw_pcie_disable_atu(pci, PCIE_ATU_REGION_DIR_OB, i);
|
||||
|
||||
for (i = 0; i < pci->num_ib_windows; i++)
|
||||
dw_pcie_disable_atu(pci, PCIE_ATU_REGION_DIR_IB, i);
|
||||
|
||||
i = 0;
|
||||
resource_list_for_each_entry(entry, &pp->bridge->windows) {
|
||||
if (resource_type(entry->res) != IORESOURCE_MEM)
|
||||
|
@ -685,9 +681,32 @@ static int dw_pcie_iatu_setup(struct dw_pcie_rp *pp)
|
|||
}
|
||||
|
||||
if (pci->num_ob_windows <= i)
|
||||
dev_warn(pci->dev, "Resources exceed number of ATU entries (%d)\n",
|
||||
dev_warn(pci->dev, "Ranges exceed outbound iATU size (%d)\n",
|
||||
pci->num_ob_windows);
|
||||
|
||||
i = 0;
|
||||
resource_list_for_each_entry(entry, &pp->bridge->dma_ranges) {
|
||||
if (resource_type(entry->res) != IORESOURCE_MEM)
|
||||
continue;
|
||||
|
||||
if (pci->num_ib_windows <= i)
|
||||
break;
|
||||
|
||||
ret = dw_pcie_prog_inbound_atu(pci, i++, PCIE_ATU_TYPE_MEM,
|
||||
entry->res->start,
|
||||
entry->res->start - entry->offset,
|
||||
resource_size(entry->res));
|
||||
if (ret) {
|
||||
dev_err(pci->dev, "Failed to set DMA range %pr\n",
|
||||
entry->res);
|
||||
return ret;
|
||||
}
|
||||
}
|
||||
|
||||
if (pci->num_ib_windows <= i)
|
||||
dev_warn(pci->dev, "Dma-ranges exceed inbound iATU size (%u)\n",
|
||||
pci->num_ib_windows);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
|
|
@ -10,7 +10,10 @@
|
|||
|
||||
#include <linux/align.h>
|
||||
#include <linux/bitops.h>
|
||||
#include <linux/clk.h>
|
||||
#include <linux/delay.h>
|
||||
#include <linux/gpio/consumer.h>
|
||||
#include <linux/ioport.h>
|
||||
#include <linux/of.h>
|
||||
#include <linux/of_platform.h>
|
||||
#include <linux/sizes.h>
|
||||
|
@ -19,6 +22,148 @@
|
|||
#include "../../pci.h"
|
||||
#include "pcie-designware.h"
|
||||
|
||||
static const char * const dw_pcie_app_clks[DW_PCIE_NUM_APP_CLKS] = {
|
||||
[DW_PCIE_DBI_CLK] = "dbi",
|
||||
[DW_PCIE_MSTR_CLK] = "mstr",
|
||||
[DW_PCIE_SLV_CLK] = "slv",
|
||||
};
|
||||
|
||||
static const char * const dw_pcie_core_clks[DW_PCIE_NUM_CORE_CLKS] = {
|
||||
[DW_PCIE_PIPE_CLK] = "pipe",
|
||||
[DW_PCIE_CORE_CLK] = "core",
|
||||
[DW_PCIE_AUX_CLK] = "aux",
|
||||
[DW_PCIE_REF_CLK] = "ref",
|
||||
};
|
||||
|
||||
static const char * const dw_pcie_app_rsts[DW_PCIE_NUM_APP_RSTS] = {
|
||||
[DW_PCIE_DBI_RST] = "dbi",
|
||||
[DW_PCIE_MSTR_RST] = "mstr",
|
||||
[DW_PCIE_SLV_RST] = "slv",
|
||||
};
|
||||
|
||||
static const char * const dw_pcie_core_rsts[DW_PCIE_NUM_CORE_RSTS] = {
|
||||
[DW_PCIE_NON_STICKY_RST] = "non-sticky",
|
||||
[DW_PCIE_STICKY_RST] = "sticky",
|
||||
[DW_PCIE_CORE_RST] = "core",
|
||||
[DW_PCIE_PIPE_RST] = "pipe",
|
||||
[DW_PCIE_PHY_RST] = "phy",
|
||||
[DW_PCIE_HOT_RST] = "hot",
|
||||
[DW_PCIE_PWR_RST] = "pwr",
|
||||
};
|
||||
|
||||
static int dw_pcie_get_clocks(struct dw_pcie *pci)
|
||||
{
|
||||
int i, ret;
|
||||
|
||||
for (i = 0; i < DW_PCIE_NUM_APP_CLKS; i++)
|
||||
pci->app_clks[i].id = dw_pcie_app_clks[i];
|
||||
|
||||
for (i = 0; i < DW_PCIE_NUM_CORE_CLKS; i++)
|
||||
pci->core_clks[i].id = dw_pcie_core_clks[i];
|
||||
|
||||
ret = devm_clk_bulk_get_optional(pci->dev, DW_PCIE_NUM_APP_CLKS,
|
||||
pci->app_clks);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
return devm_clk_bulk_get_optional(pci->dev, DW_PCIE_NUM_CORE_CLKS,
|
||||
pci->core_clks);
|
||||
}
|
||||
|
||||
static int dw_pcie_get_resets(struct dw_pcie *pci)
|
||||
{
|
||||
int i, ret;
|
||||
|
||||
for (i = 0; i < DW_PCIE_NUM_APP_RSTS; i++)
|
||||
pci->app_rsts[i].id = dw_pcie_app_rsts[i];
|
||||
|
||||
for (i = 0; i < DW_PCIE_NUM_CORE_RSTS; i++)
|
||||
pci->core_rsts[i].id = dw_pcie_core_rsts[i];
|
||||
|
||||
ret = devm_reset_control_bulk_get_optional_shared(pci->dev,
|
||||
DW_PCIE_NUM_APP_RSTS,
|
||||
pci->app_rsts);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
ret = devm_reset_control_bulk_get_optional_exclusive(pci->dev,
|
||||
DW_PCIE_NUM_CORE_RSTS,
|
||||
pci->core_rsts);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
pci->pe_rst = devm_gpiod_get_optional(pci->dev, "reset", GPIOD_OUT_HIGH);
|
||||
if (IS_ERR(pci->pe_rst))
|
||||
return PTR_ERR(pci->pe_rst);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
int dw_pcie_get_resources(struct dw_pcie *pci)
|
||||
{
|
||||
struct platform_device *pdev = to_platform_device(pci->dev);
|
||||
struct device_node *np = dev_of_node(pci->dev);
|
||||
struct resource *res;
|
||||
int ret;
|
||||
|
||||
if (!pci->dbi_base) {
|
||||
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi");
|
||||
pci->dbi_base = devm_pci_remap_cfg_resource(pci->dev, res);
|
||||
if (IS_ERR(pci->dbi_base))
|
||||
return PTR_ERR(pci->dbi_base);
|
||||
}
|
||||
|
||||
/* DBI2 is mainly useful for the endpoint controller */
|
||||
if (!pci->dbi_base2) {
|
||||
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi2");
|
||||
if (res) {
|
||||
pci->dbi_base2 = devm_pci_remap_cfg_resource(pci->dev, res);
|
||||
if (IS_ERR(pci->dbi_base2))
|
||||
return PTR_ERR(pci->dbi_base2);
|
||||
} else {
|
||||
pci->dbi_base2 = pci->dbi_base + SZ_4K;
|
||||
}
|
||||
}
|
||||
|
||||
/* For non-unrolled iATU/eDMA platforms this range will be ignored */
|
||||
if (!pci->atu_base) {
|
||||
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "atu");
|
||||
if (res) {
|
||||
pci->atu_size = resource_size(res);
|
||||
pci->atu_base = devm_ioremap_resource(pci->dev, res);
|
||||
if (IS_ERR(pci->atu_base))
|
||||
return PTR_ERR(pci->atu_base);
|
||||
} else {
|
||||
pci->atu_base = pci->dbi_base + DEFAULT_DBI_ATU_OFFSET;
|
||||
}
|
||||
}
|
||||
|
||||
/* Set a default value suitable for at most 8 in and 8 out windows */
|
||||
if (!pci->atu_size)
|
||||
pci->atu_size = SZ_4K;
|
||||
|
||||
/* LLDD is supposed to manually switch the clocks and resets state */
|
||||
if (dw_pcie_cap_is(pci, REQ_RES)) {
|
||||
ret = dw_pcie_get_clocks(pci);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
ret = dw_pcie_get_resets(pci);
|
||||
if (ret)
|
||||
return ret;
|
||||
}
|
||||
|
||||
if (pci->link_gen < 1)
|
||||
pci->link_gen = of_pci_get_max_link_speed(np);
|
||||
|
||||
of_property_read_u32(np, "num-lanes", &pci->num_lanes);
|
||||
|
||||
if (of_property_read_bool(np, "snps,enable-cdm-check"))
|
||||
dw_pcie_cap_set(pci, CDM_CHECK);
|
||||
|
||||
return 0;
|
||||
}
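dw_pcie_get_resources() only requests the named clocks, resets and PERST# GPIO when the glue driver has opted in via the REQ_RES capability. A minimal sketch of that opt-in (hypothetical function, assuming a populated struct dw_pcie, as the Baikal-T1 glue does before host init):

	static int example_glue_add_port(struct dw_pcie *pci)
	{
		/* Opt in to the shared clock/reset handling, then let
		 * dw_pcie_host_init() run dw_pcie_get_resources().
		 */
		dw_pcie_cap_set(pci, REQ_RES);

		return dw_pcie_host_init(&pci->pp);
	}
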
|
||||
|
||||
void dw_pcie_version_detect(struct dw_pcie *pci)
|
||||
{
|
||||
u32 ver;
|
||||
|
@ -211,7 +356,7 @@ void dw_pcie_write_dbi2(struct dw_pcie *pci, u32 reg, size_t size, u32 val)
|
|||
static inline void __iomem *dw_pcie_select_atu(struct dw_pcie *pci, u32 dir,
|
||||
u32 index)
|
||||
{
|
||||
if (pci->iatu_unroll_enabled)
|
||||
if (dw_pcie_cap_is(pci, IATU_UNROLL))
|
||||
return pci->atu_base + PCIE_ATU_UNROLL_BASE(dir, index);
|
||||
|
||||
dw_pcie_writel_dbi(pci, PCIE_ATU_VIEWPORT, dir | index);
|
||||
|
@ -393,8 +538,60 @@ static inline void dw_pcie_writel_atu_ib(struct dw_pcie *pci, u32 index, u32 reg
|
|||
dw_pcie_writel_atu(pci, PCIE_ATU_REGION_DIR_IB, index, reg, val);
|
||||
}
|
||||
|
||||
int dw_pcie_prog_inbound_atu(struct dw_pcie *pci, u8 func_no, int index,
|
||||
int type, u64 cpu_addr, u8 bar)
|
||||
int dw_pcie_prog_inbound_atu(struct dw_pcie *pci, int index, int type,
|
||||
u64 cpu_addr, u64 pci_addr, u64 size)
|
||||
{
|
||||
u64 limit_addr = pci_addr + size - 1;
|
||||
u32 retries, val;
|
||||
|
||||
if ((limit_addr & ~pci->region_limit) != (pci_addr & ~pci->region_limit) ||
|
||||
!IS_ALIGNED(cpu_addr, pci->region_align) ||
|
||||
!IS_ALIGNED(pci_addr, pci->region_align) || !size) {
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
dw_pcie_writel_atu_ib(pci, index, PCIE_ATU_LOWER_BASE,
|
||||
lower_32_bits(pci_addr));
|
||||
dw_pcie_writel_atu_ib(pci, index, PCIE_ATU_UPPER_BASE,
|
||||
upper_32_bits(pci_addr));
|
||||
|
||||
dw_pcie_writel_atu_ib(pci, index, PCIE_ATU_LIMIT,
|
||||
lower_32_bits(limit_addr));
|
||||
if (dw_pcie_ver_is_ge(pci, 460A))
|
||||
dw_pcie_writel_atu_ib(pci, index, PCIE_ATU_UPPER_LIMIT,
|
||||
upper_32_bits(limit_addr));
|
||||
|
||||
dw_pcie_writel_atu_ib(pci, index, PCIE_ATU_LOWER_TARGET,
|
||||
lower_32_bits(cpu_addr));
|
||||
dw_pcie_writel_atu_ib(pci, index, PCIE_ATU_UPPER_TARGET,
|
||||
upper_32_bits(cpu_addr));
|
||||
|
||||
val = type;
|
||||
if (upper_32_bits(limit_addr) > upper_32_bits(pci_addr) &&
|
||||
dw_pcie_ver_is_ge(pci, 460A))
|
||||
val |= PCIE_ATU_INCREASE_REGION_SIZE;
|
||||
dw_pcie_writel_atu_ib(pci, index, PCIE_ATU_REGION_CTRL1, val);
|
||||
dw_pcie_writel_atu_ib(pci, index, PCIE_ATU_REGION_CTRL2, PCIE_ATU_ENABLE);
|
||||
|
||||
/*
|
||||
* Make sure ATU enable takes effect before any subsequent config
|
||||
* and I/O accesses.
|
||||
*/
|
||||
for (retries = 0; retries < LINK_WAIT_MAX_IATU_RETRIES; retries++) {
|
||||
val = dw_pcie_readl_atu_ib(pci, index, PCIE_ATU_REGION_CTRL2);
|
||||
if (val & PCIE_ATU_ENABLE)
|
||||
return 0;
|
||||
|
||||
mdelay(LINK_WAIT_IATU);
|
||||
}
|
||||
|
||||
dev_err(pci->dev, "Inbound iATU is not being enabled\n");
|
||||
|
||||
return -ETIMEDOUT;
|
||||
}
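The reworked dw_pcie_prog_inbound_atu() now takes an address-range mapping, which is what the Root Port dma-ranges setup uses. A hedged usage example (the addresses are made up and must respect region_align/region_limit):

	static int example_map_dma_range(struct dw_pcie *pci)
	{
		return dw_pcie_prog_inbound_atu(pci, 0, PCIE_ATU_TYPE_MEM,
						0x80000000ULL,	/* CPU address */
						0x00000000ULL,	/* PCI address */
						SZ_1G);		/* window size */
	}
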
|
||||
|
||||
int dw_pcie_prog_ep_inbound_atu(struct dw_pcie *pci, u8 func_no, int index,
|
||||
int type, u64 cpu_addr, u8 bar)
|
||||
{
|
||||
u32 retries, val;
|
||||
|
||||
|
@@ -448,7 +645,7 @@ int dw_pcie_wait_for_link(struct dw_pcie *pci)
	}

	if (retries >= LINK_WAIT_MAX_RETRIES) {
		dev_err(pci->dev, "Phy link never came up\n");
		dev_info(pci->dev, "Phy link never came up\n");
		return -ETIMEDOUT;
	}

@ -522,26 +719,21 @@ static void dw_pcie_link_set_max_speed(struct dw_pcie *pci, u32 link_gen)
|
|||
|
||||
}
|
||||
|
||||
static bool dw_pcie_iatu_unroll_enabled(struct dw_pcie *pci)
|
||||
{
|
||||
u32 val;
|
||||
|
||||
val = dw_pcie_readl_dbi(pci, PCIE_ATU_VIEWPORT);
|
||||
if (val == 0xffffffff)
|
||||
return true;
|
||||
|
||||
return false;
|
||||
}
|
||||
|
||||
static void dw_pcie_iatu_detect_regions(struct dw_pcie *pci)
|
||||
void dw_pcie_iatu_detect(struct dw_pcie *pci)
|
||||
{
|
||||
int max_region, ob, ib;
|
||||
u32 val, min, dir;
|
||||
u64 max;
|
||||
|
||||
if (pci->iatu_unroll_enabled) {
|
||||
val = dw_pcie_readl_dbi(pci, PCIE_ATU_VIEWPORT);
|
||||
if (val == 0xFFFFFFFF) {
|
||||
dw_pcie_cap_set(pci, IATU_UNROLL);
|
||||
|
||||
max_region = min((int)pci->atu_size / 512, 256);
|
||||
} else {
|
||||
pci->atu_base = pci->dbi_base + PCIE_ATU_VIEWPORT_BASE;
|
||||
pci->atu_size = PCIE_ATU_VIEWPORT_SIZE;
|
||||
|
||||
dw_pcie_writel_dbi(pci, PCIE_ATU_VIEWPORT, 0xFF);
|
||||
max_region = dw_pcie_readl_dbi(pci, PCIE_ATU_VIEWPORT) + 1;
|
||||
}
|
||||
|
@ -583,46 +775,15 @@ static void dw_pcie_iatu_detect_regions(struct dw_pcie *pci)
|
|||
pci->num_ib_windows = ib;
|
||||
pci->region_align = 1 << fls(min);
|
||||
pci->region_limit = (max << 32) | (SZ_4G - 1);
|
||||
}
|
||||
|
||||
void dw_pcie_iatu_detect(struct dw_pcie *pci)
|
||||
{
|
||||
struct platform_device *pdev = to_platform_device(pci->dev);
|
||||
|
||||
pci->iatu_unroll_enabled = dw_pcie_iatu_unroll_enabled(pci);
|
||||
if (pci->iatu_unroll_enabled) {
|
||||
if (!pci->atu_base) {
|
||||
struct resource *res =
|
||||
platform_get_resource_byname(pdev, IORESOURCE_MEM, "atu");
|
||||
if (res) {
|
||||
pci->atu_size = resource_size(res);
|
||||
pci->atu_base = devm_ioremap_resource(pci->dev, res);
|
||||
}
|
||||
if (!pci->atu_base || IS_ERR(pci->atu_base))
|
||||
pci->atu_base = pci->dbi_base + DEFAULT_DBI_ATU_OFFSET;
|
||||
}
|
||||
|
||||
if (!pci->atu_size)
|
||||
/* Pick a minimal default, enough for 8 in and 8 out windows */
|
||||
pci->atu_size = SZ_4K;
|
||||
} else {
|
||||
pci->atu_base = pci->dbi_base + PCIE_ATU_VIEWPORT_BASE;
|
||||
pci->atu_size = PCIE_ATU_VIEWPORT_SIZE;
|
||||
}
|
||||
|
||||
dw_pcie_iatu_detect_regions(pci);
|
||||
|
||||
dev_info(pci->dev, "iATU unroll: %s\n", pci->iatu_unroll_enabled ?
|
||||
"enabled" : "disabled");
|
||||
|
||||
dev_info(pci->dev, "iATU regions: %u ob, %u ib, align %uK, limit %lluG\n",
|
||||
dev_info(pci->dev, "iATU: unroll %s, %u ob, %u ib, align %uK, limit %lluG\n",
|
||||
dw_pcie_cap_is(pci, IATU_UNROLL) ? "T" : "F",
|
||||
pci->num_ob_windows, pci->num_ib_windows,
|
||||
pci->region_align / SZ_1K, (pci->region_limit + 1) / SZ_1G);
|
||||
}
|
||||
|
||||
void dw_pcie_setup(struct dw_pcie *pci)
|
||||
{
|
||||
struct device_node *np = pci->dev->of_node;
|
||||
u32 val;
|
||||
|
||||
if (pci->link_gen > 0)
|
||||
|
@ -641,7 +802,7 @@ void dw_pcie_setup(struct dw_pcie *pci)
|
|||
if (pci->n_fts[1]) {
|
||||
val = dw_pcie_readl_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL);
|
||||
val &= ~PORT_LOGIC_N_FTS_MASK;
|
||||
val |= pci->n_fts[pci->link_gen - 1];
|
||||
val |= pci->n_fts[1];
|
||||
dw_pcie_writel_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL, val);
|
||||
}
|
||||
|
||||
|
@ -650,14 +811,13 @@ void dw_pcie_setup(struct dw_pcie *pci)
|
|||
val |= PORT_LINK_DLL_LINK_EN;
|
||||
dw_pcie_writel_dbi(pci, PCIE_PORT_LINK_CONTROL, val);
|
||||
|
||||
if (of_property_read_bool(np, "snps,enable-cdm-check")) {
|
||||
if (dw_pcie_cap_is(pci, CDM_CHECK)) {
|
||||
val = dw_pcie_readl_dbi(pci, PCIE_PL_CHK_REG_CONTROL_STATUS);
|
||||
val |= PCIE_PL_CHK_REG_CHK_REG_CONTINUOUS |
|
||||
PCIE_PL_CHK_REG_CHK_REG_START;
|
||||
dw_pcie_writel_dbi(pci, PCIE_PL_CHK_REG_CONTROL_STATUS, val);
|
||||
}
|
||||
|
||||
of_property_read_u32(np, "num-lanes", &pci->num_lanes);
|
||||
if (!pci->num_lanes) {
|
||||
dev_dbg(pci->dev, "Using h/w default number of lanes\n");
|
||||
return;
|
||||
|
|
|
@ -12,10 +12,14 @@
|
|||
#define _PCIE_DESIGNWARE_H
|
||||
|
||||
#include <linux/bitfield.h>
|
||||
#include <linux/bitops.h>
|
||||
#include <linux/clk.h>
|
||||
#include <linux/dma-mapping.h>
|
||||
#include <linux/gpio/consumer.h>
|
||||
#include <linux/irq.h>
|
||||
#include <linux/msi.h>
|
||||
#include <linux/pci.h>
|
||||
#include <linux/reset.h>
|
||||
|
||||
#include <linux/pci-epc.h>
|
||||
#include <linux/pci-epf.h>
|
||||
|
@ -43,6 +47,17 @@
|
|||
(__dw_pcie_ver_cmp(_pci, _ver, ==) && \
|
||||
__dw_pcie_ver_cmp(_pci, TYPE_ ## _type, >=))
|
||||
|
||||
/* DWC PCIe controller capabilities */
|
||||
#define DW_PCIE_CAP_REQ_RES 0
|
||||
#define DW_PCIE_CAP_IATU_UNROLL 1
|
||||
#define DW_PCIE_CAP_CDM_CHECK 2
|
||||
|
||||
#define dw_pcie_cap_is(_pci, _cap) \
|
||||
test_bit(DW_PCIE_CAP_ ## _cap, &(_pci)->caps)
|
||||
|
||||
#define dw_pcie_cap_set(_pci, _cap) \
|
||||
set_bit(DW_PCIE_CAP_ ## _cap, &(_pci)->caps)
|
||||
|
||||
/* Parameters for the waiting for link up routine */
|
||||
#define LINK_WAIT_MAX_RETRIES 10
|
||||
#define LINK_WAIT_USLEEP_MIN 90000
|
||||
|
@ -222,6 +237,39 @@ enum dw_pcie_device_mode {
|
|||
DW_PCIE_RC_TYPE,
|
||||
};
|
||||
|
||||
enum dw_pcie_app_clk {
|
||||
DW_PCIE_DBI_CLK,
|
||||
DW_PCIE_MSTR_CLK,
|
||||
DW_PCIE_SLV_CLK,
|
||||
DW_PCIE_NUM_APP_CLKS
|
||||
};
|
||||
|
||||
enum dw_pcie_core_clk {
|
||||
DW_PCIE_PIPE_CLK,
|
||||
DW_PCIE_CORE_CLK,
|
||||
DW_PCIE_AUX_CLK,
|
||||
DW_PCIE_REF_CLK,
|
||||
DW_PCIE_NUM_CORE_CLKS
|
||||
};
|
||||
|
||||
enum dw_pcie_app_rst {
|
||||
DW_PCIE_DBI_RST,
|
||||
DW_PCIE_MSTR_RST,
|
||||
DW_PCIE_SLV_RST,
|
||||
DW_PCIE_NUM_APP_RSTS
|
||||
};
|
||||
|
||||
enum dw_pcie_core_rst {
|
||||
DW_PCIE_NON_STICKY_RST,
|
||||
DW_PCIE_STICKY_RST,
|
||||
DW_PCIE_CORE_RST,
|
||||
DW_PCIE_PIPE_RST,
|
||||
DW_PCIE_PHY_RST,
|
||||
DW_PCIE_HOT_RST,
|
||||
DW_PCIE_PWR_RST,
|
||||
DW_PCIE_NUM_CORE_RSTS
|
||||
};
|
||||
|
||||
struct dw_pcie_host_ops {
|
||||
int (*host_init)(struct dw_pcie_rp *pp);
|
||||
void (*host_deinit)(struct dw_pcie_rp *pp);
|
||||
|
@ -317,10 +365,15 @@ struct dw_pcie {
|
|||
const struct dw_pcie_ops *ops;
|
||||
u32 version;
|
||||
u32 type;
|
||||
unsigned long caps;
|
||||
int num_lanes;
|
||||
int link_gen;
|
||||
u8 n_fts[2];
|
||||
bool iatu_unroll_enabled: 1;
|
||||
struct clk_bulk_data app_clks[DW_PCIE_NUM_APP_CLKS];
|
||||
struct clk_bulk_data core_clks[DW_PCIE_NUM_CORE_CLKS];
|
||||
struct reset_control_bulk_data app_rsts[DW_PCIE_NUM_APP_RSTS];
|
||||
struct reset_control_bulk_data core_rsts[DW_PCIE_NUM_CORE_RSTS];
|
||||
struct gpio_desc *pe_rst;
|
||||
};
|
||||
|
||||
#define to_dw_pcie_from_pp(port) container_of((port), struct dw_pcie, pp)
|
||||
|
@ -328,6 +381,8 @@ struct dw_pcie {
|
|||
#define to_dw_pcie_from_ep(endpoint) \
|
||||
container_of((endpoint), struct dw_pcie, ep)
|
||||
|
||||
int dw_pcie_get_resources(struct dw_pcie *pci);
|
||||
|
||||
void dw_pcie_version_detect(struct dw_pcie *pci);
|
||||
|
||||
u8 dw_pcie_find_capability(struct dw_pcie *pci, u8 cap);
|
||||
|
@ -346,8 +401,10 @@ int dw_pcie_prog_outbound_atu(struct dw_pcie *pci, int index, int type,
|
|||
u64 cpu_addr, u64 pci_addr, u64 size);
|
||||
int dw_pcie_prog_ep_outbound_atu(struct dw_pcie *pci, u8 func_no, int index,
|
||||
int type, u64 cpu_addr, u64 pci_addr, u64 size);
|
||||
int dw_pcie_prog_inbound_atu(struct dw_pcie *pci, u8 func_no, int index,
|
||||
int type, u64 cpu_addr, u8 bar);
|
||||
int dw_pcie_prog_inbound_atu(struct dw_pcie *pci, int index, int type,
|
||||
u64 cpu_addr, u64 pci_addr, u64 size);
|
||||
int dw_pcie_prog_ep_inbound_atu(struct dw_pcie *pci, u8 func_no, int index,
|
||||
int type, u64 cpu_addr, u8 bar);
|
||||
void dw_pcie_disable_atu(struct dw_pcie *pci, u32 dir, int index);
|
||||
void dw_pcie_setup(struct dw_pcie *pci);
|
||||
void dw_pcie_iatu_detect(struct dw_pcie *pci);
|
||||
|
|
|
@@ -10,11 +10,11 @@

#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/gpio/consumer.h>
#include <linux/interrupt.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_gpio.h>
#include <linux/pci.h>
#include <linux/phy/phy.h>
#include <linux/platform_device.h>
@@ -60,7 +60,7 @@ struct histb_pcie {
struct reset_control *sys_reset;
struct reset_control *bus_reset;
void __iomem *ctrl;
int reset_gpio;
struct gpio_desc *reset_gpio;
struct regulator *vpcie;
};

@@ -212,8 +212,8 @@ static void histb_pcie_host_disable(struct histb_pcie *hipcie)
clk_disable_unprepare(hipcie->sys_clk);
clk_disable_unprepare(hipcie->bus_clk);

if (gpio_is_valid(hipcie->reset_gpio))
gpio_set_value_cansleep(hipcie->reset_gpio, 0);
if (hipcie->reset_gpio)
gpiod_set_value_cansleep(hipcie->reset_gpio, 1);

if (hipcie->vpcie)
regulator_disable(hipcie->vpcie);
@@ -235,8 +235,8 @@ static int histb_pcie_host_enable(struct dw_pcie_rp *pp)
}
}

if (gpio_is_valid(hipcie->reset_gpio))
gpio_set_value_cansleep(hipcie->reset_gpio, 1);
if (hipcie->reset_gpio)
gpiod_set_value_cansleep(hipcie->reset_gpio, 0);

ret = clk_prepare_enable(hipcie->bus_clk);
if (ret) {
@@ -298,10 +298,7 @@ static int histb_pcie_probe(struct platform_device *pdev)
struct histb_pcie *hipcie;
struct dw_pcie *pci;
struct dw_pcie_rp *pp;
struct device_node *np = pdev->dev.of_node;
struct device *dev = &pdev->dev;
enum of_gpio_flags of_flags;
unsigned long flag = GPIOF_DIR_OUT;
int ret;

hipcie = devm_kzalloc(dev, sizeof(*hipcie), GFP_KERNEL);
@@ -336,17 +333,19 @@ static int histb_pcie_probe(struct platform_device *pdev)
hipcie->vpcie = NULL;
}

hipcie->reset_gpio = of_get_named_gpio_flags(np,
"reset-gpios", 0, &of_flags);
if (of_flags & OF_GPIO_ACTIVE_LOW)
flag |= GPIOF_ACTIVE_LOW;
if (gpio_is_valid(hipcie->reset_gpio)) {
ret = devm_gpio_request_one(dev, hipcie->reset_gpio,
flag, "PCIe device power control");
if (ret) {
dev_err(dev, "unable to request gpio\n");
return ret;
}
hipcie->reset_gpio = devm_gpiod_get_optional(dev, "reset",
GPIOD_OUT_HIGH);
ret = PTR_ERR_OR_ZERO(hipcie->reset_gpio);
if (ret) {
dev_err(dev, "unable to request reset gpio: %d\n", ret);
return ret;
}

ret = gpiod_set_consumer_name(hipcie->reset_gpio,
"PCIe device power control");
if (ret) {
dev_err(dev, "unable to set reset gpio name: %d\n", ret);
return ret;
}

hipcie->aux_clk = devm_clk_get(dev, "aux");

@@ -12,6 +12,7 @@
#include <linux/crc8.h>
#include <linux/delay.h>
#include <linux/gpio/consumer.h>
#include <linux/interconnect.h>
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/iopoll.h>
@@ -223,6 +224,7 @@ struct qcom_pcie {
union qcom_pcie_resources res;
struct phy *phy;
struct gpio_desc *reset;
struct icc_path *icc_mem;
const struct qcom_pcie_cfg *cfg;
};

@@ -1236,7 +1238,7 @@ static int qcom_pcie_init_2_7_0(struct qcom_pcie *pcie)

ret = reset_control_assert(res->pci_reset);
if (ret < 0) {
dev_err(dev, "cannot deassert pci reset\n");
dev_err(dev, "cannot assert pci reset\n");
goto err_disable_clocks;
}

@@ -1639,6 +1641,74 @@ static const struct dw_pcie_ops dw_pcie_ops = {
.start_link = qcom_pcie_start_link,
};

static int qcom_pcie_icc_init(struct qcom_pcie *pcie)
{
struct dw_pcie *pci = pcie->pci;
int ret;

pcie->icc_mem = devm_of_icc_get(pci->dev, "pcie-mem");
if (IS_ERR(pcie->icc_mem))
return PTR_ERR(pcie->icc_mem);

/*
* Some Qualcomm platforms require interconnect bandwidth constraints
* to be set before enabling interconnect clocks.
*
* Set an initial peak bandwidth corresponding to single-lane Gen 1
* for the pcie-mem path.
*/
ret = icc_set_bw(pcie->icc_mem, 0, MBps_to_icc(250));
if (ret) {
dev_err(pci->dev, "failed to set interconnect bandwidth: %d\n",
ret);
return ret;
}

return 0;
}

static void qcom_pcie_icc_update(struct qcom_pcie *pcie)
{
struct dw_pcie *pci = pcie->pci;
u32 offset, status, bw;
int speed, width;
int ret;

if (!pcie->icc_mem)
return;

offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
status = readw(pci->dbi_base + offset + PCI_EXP_LNKSTA);

/* Only update constraints if link is up. */
if (!(status & PCI_EXP_LNKSTA_DLLLA))
return;

speed = FIELD_GET(PCI_EXP_LNKSTA_CLS, status);
width = FIELD_GET(PCI_EXP_LNKSTA_NLW, status);

switch (speed) {
case 1:
bw = MBps_to_icc(250);
break;
case 2:
bw = MBps_to_icc(500);
break;
default:
WARN_ON_ONCE(1);
fallthrough;
case 3:
bw = MBps_to_icc(985);
break;
}

ret = icc_set_bw(pcie->icc_mem, 0, width * bw);
if (ret) {
dev_err(pci->dev, "failed to set interconnect bandwidth: %d\n",
ret);
}
}

static int qcom_pcie_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
@@ -1699,6 +1769,10 @@ static int qcom_pcie_probe(struct platform_device *pdev)
goto err_pm_runtime_put;
}

ret = qcom_pcie_icc_init(pcie);
if (ret)
goto err_pm_runtime_put;

ret = pcie->cfg->ops->get_resources(pcie);
if (ret)
goto err_pm_runtime_put;
@@ -1717,6 +1791,8 @@ static int qcom_pcie_probe(struct platform_device *pdev)
goto err_phy_exit;
}

qcom_pcie_icc_update(pcie);

return 0;

err_phy_exit:

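For reference only (not part of the patch): the bandwidth votes above follow the usual PCIe per-lane peak rates, scaled by the negotiated link width. A minimal standalone sketch of that mapping, with a made-up helper name rather than anything from the driver, could look like:

/* Illustrative sketch: per-lane peak bandwidth in MB/s by link speed
 * (index 1..3 = Gen1..Gen3), multiplied by link width, mirroring the
 * switch statement in qcom_pcie_icc_update() above. */
static unsigned int pcie_peak_mbps(unsigned int speed, unsigned int width)
{
	static const unsigned int per_lane_mbps[] = { 0, 250, 500, 985 };

	/* unknown speeds fall back to the Gen3 figure, as the driver does */
	if (speed < 1 || speed > 3)
		speed = 3;
	return per_lane_mbps[speed] * width;
}
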
@@ -21,7 +21,6 @@
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/of_gpio.h>
#include <linux/of_irq.h>
#include <linux/of_pci.h>
#include <linux/pci.h>
#include <linux/phy/phy.h>

@@ -1859,20 +1859,18 @@ static int advk_pcie_probe(struct platform_device *pdev)
return ret;
}

pcie->reset_gpio = devm_gpiod_get_from_of_node(dev, dev->of_node,
"reset-gpios", 0,
GPIOD_OUT_LOW,
"pcie1-reset");
pcie->reset_gpio = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_LOW);
ret = PTR_ERR_OR_ZERO(pcie->reset_gpio);
if (ret) {
if (ret == -ENOENT) {
pcie->reset_gpio = NULL;
} else {
if (ret != -EPROBE_DEFER)
dev_err(dev, "Failed to get reset-gpio: %i\n",
ret);
return ret;
}
if (ret != -EPROBE_DEFER)
dev_err(dev, "Failed to get reset-gpio: %i\n", ret);
return ret;
}

ret = gpiod_set_consumer_name(pcie->reset_gpio, "pcie1-reset");
if (ret) {
dev_err(dev, "Failed to set reset gpio name: %d\n", ret);
return ret;
}

ret = of_pci_get_max_link_speed(dev->of_node);

@@ -553,7 +553,7 @@ static const struct of_device_id faraday_pci_of_match[] = {
static struct platform_driver faraday_pci_driver = {
.driver = {
.name = "ftpci100",
.of_match_table = of_match_ptr(faraday_pci_of_match),
.of_match_table = faraday_pci_of_match,
.suppress_bind_attrs = true,
},
.probe = faraday_pci_probe,

@@ -11,14 +11,15 @@
#include <linux/bitfield.h>
#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/gpio.h>
#include <linux/gpio/consumer.h>
#include <linux/init.h>
#include <linux/irqchip/chained_irq.h>
#include <linux/irqdomain.h>
#include <linux/mbus.h>
#include <linux/slab.h>
#include <linux/platform_device.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
#include <linux/of_gpio.h>
#include <linux/of_pci.h>
#include <linux/of_platform.h>

@@ -1261,9 +1262,8 @@ static int mvebu_pcie_parse_port(struct mvebu_pcie *pcie,
struct mvebu_pcie_port *port, struct device_node *child)
{
struct device *dev = &pcie->pdev->dev;
enum of_gpio_flags flags;
u32 slot_power_limit;
int reset_gpio, ret;
int ret;
u32 num_lanes;

port->pcie = pcie;
@@ -1327,40 +1327,24 @@ static int mvebu_pcie_parse_port(struct mvebu_pcie *pcie,
port->name, child);
}

reset_gpio = of_get_named_gpio_flags(child, "reset-gpios", 0, &flags);
if (reset_gpio == -EPROBE_DEFER) {
ret = reset_gpio;
port->reset_name = devm_kasprintf(dev, GFP_KERNEL, "%s-reset",
port->name);
if (!port->reset_name) {
ret = -ENOMEM;
goto err;
}

if (gpio_is_valid(reset_gpio)) {
unsigned long gpio_flags;

port->reset_name = devm_kasprintf(dev, GFP_KERNEL, "%s-reset",
port->name);
if (!port->reset_name) {
ret = -ENOMEM;
port->reset_gpio = devm_fwnode_gpiod_get(dev, of_fwnode_handle(child),
"reset", GPIOD_OUT_HIGH,
port->name);
ret = PTR_ERR_OR_ZERO(port->reset_gpio);
if (ret) {
if (ret != -ENOENT)
goto err;
}

if (flags & OF_GPIO_ACTIVE_LOW) {
dev_info(dev, "%pOF: reset gpio is active low\n",
child);
gpio_flags = GPIOF_ACTIVE_LOW |
GPIOF_OUT_INIT_LOW;
} else {
gpio_flags = GPIOF_OUT_INIT_HIGH;
}

ret = devm_gpio_request_one(dev, reset_gpio, gpio_flags,
port->reset_name);
if (ret) {
if (ret == -EPROBE_DEFER)
goto err;
goto skip;
}

port->reset_gpio = gpio_to_desc(reset_gpio);
/* reset gpio is optional */
port->reset_gpio = NULL;
devm_kfree(dev, port->reset_name);
port->reset_name = NULL;
}

slot_power_limit = of_pci_get_slot_power_limit(child,

@@ -2202,10 +2202,11 @@ static int tegra_pcie_parse_dt(struct tegra_pcie *pcie)
* and in this case fall back to using AFI per port register
* to toggle PERST# SFIO line.
*/
rp->reset_gpio = devm_gpiod_get_from_of_node(dev, port,
"reset-gpios", 0,
GPIOD_OUT_LOW,
label);
rp->reset_gpio = devm_fwnode_gpiod_get(dev,
of_fwnode_handle(port),
"reset",
GPIOD_OUT_LOW,
label);
if (IS_ERR(rp->reset_gpio)) {
if (PTR_ERR(rp->reset_gpio) == -ENOENT) {
rp->reset_gpio = NULL;

@@ -22,7 +22,6 @@
#include <linux/kernel.h>
#include <linux/of_address.h>
#include <linux/of_device.h>
#include <linux/of_irq.h>
#include <linux/of_pci.h>
#include <linux/pci.h>
#include <linux/platform_device.h>
@@ -902,7 +901,7 @@ static const struct of_device_id v3_pci_of_match[] = {
static struct platform_driver v3_pci_driver = {
.driver = {
.name = "pci-v3-semi",
.of_match_table = of_match_ptr(v3_pci_of_match),
.of_match_table = v3_pci_of_match,
.suppress_bind_attrs = true,
},
.probe = v3_pci_probe,

@@ -8,9 +8,9 @@
*/
#include <linux/cpu.h>
#include <linux/interrupt.h>
#include <linux/irqdomain.h>
#include <linux/module.h>
#include <linux/msi.h>
#include <linux/of_irq.h>
#include <linux/irqchip/chained_irq.h>
#include <linux/pci.h>
#include <linux/platform_device.h>

@@ -14,7 +14,6 @@
#include <linux/init.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
#include <linux/of_pci.h>
#include <linux/pci.h>
#include <linux/pci-acpi.h>

@@ -9,11 +9,11 @@

#include <linux/interrupt.h>
#include <linux/irqchip/chained_irq.h>
#include <linux/irqdomain.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/msi.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
#include <linux/of_pci.h>
#include <linux/pci.h>
#include <linux/platform_device.h>

@ -9,6 +9,7 @@
|
|||
#include <linux/init.h>
|
||||
#include <linux/interrupt.h>
|
||||
#include <linux/io.h>
|
||||
#include <linux/iopoll.h>
|
||||
#include <linux/ioport.h>
|
||||
#include <linux/irqchip/chained_irq.h>
|
||||
#include <linux/irqdomain.h>
|
||||
|
@ -52,6 +53,8 @@
|
|||
#define PCIE_RC_DL_MDIO_RD_DATA 0x1108
|
||||
|
||||
#define PCIE_MISC_MISC_CTRL 0x4008
|
||||
#define PCIE_MISC_MISC_CTRL_PCIE_RCB_64B_MODE_MASK 0x80
|
||||
#define PCIE_MISC_MISC_CTRL_PCIE_RCB_MPS_MODE_MASK 0x400
|
||||
#define PCIE_MISC_MISC_CTRL_SCB_ACCESS_EN_MASK 0x1000
|
||||
#define PCIE_MISC_MISC_CTRL_CFG_READ_UR_MODE_MASK 0x2000
|
||||
#define PCIE_MISC_MISC_CTRL_MAX_BURST_SIZE_MASK 0x300000
|
||||
|
@ -302,42 +305,34 @@ static u32 brcm_pcie_mdio_form_pkt(int port, int regad, int cmd)
|
|||
/* negative return value indicates error */
|
||||
static int brcm_pcie_mdio_read(void __iomem *base, u8 port, u8 regad, u32 *val)
|
||||
{
|
||||
int tries;
|
||||
u32 data;
|
||||
int err;
|
||||
|
||||
writel(brcm_pcie_mdio_form_pkt(port, regad, MDIO_CMD_READ),
|
||||
base + PCIE_RC_DL_MDIO_ADDR);
|
||||
readl(base + PCIE_RC_DL_MDIO_ADDR);
|
||||
|
||||
data = readl(base + PCIE_RC_DL_MDIO_RD_DATA);
|
||||
for (tries = 0; !MDIO_RD_DONE(data) && tries < 10; tries++) {
|
||||
udelay(10);
|
||||
data = readl(base + PCIE_RC_DL_MDIO_RD_DATA);
|
||||
}
|
||||
|
||||
err = readl_poll_timeout_atomic(base + PCIE_RC_DL_MDIO_RD_DATA, data,
|
||||
MDIO_RD_DONE(data), 10, 100);
|
||||
*val = FIELD_GET(MDIO_DATA_MASK, data);
|
||||
return MDIO_RD_DONE(data) ? 0 : -EIO;
|
||||
|
||||
return err;
|
||||
}
|
||||
|
||||
/* negative return value indicates error */
|
||||
static int brcm_pcie_mdio_write(void __iomem *base, u8 port,
|
||||
u8 regad, u16 wrdata)
|
||||
{
|
||||
int tries;
|
||||
u32 data;
|
||||
int err;
|
||||
|
||||
writel(brcm_pcie_mdio_form_pkt(port, regad, MDIO_CMD_WRITE),
|
||||
base + PCIE_RC_DL_MDIO_ADDR);
|
||||
readl(base + PCIE_RC_DL_MDIO_ADDR);
|
||||
writel(MDIO_DATA_DONE_MASK | wrdata, base + PCIE_RC_DL_MDIO_WR_DATA);
|
||||
|
||||
data = readl(base + PCIE_RC_DL_MDIO_WR_DATA);
|
||||
for (tries = 0; !MDIO_WT_DONE(data) && tries < 10; tries++) {
|
||||
udelay(10);
|
||||
data = readl(base + PCIE_RC_DL_MDIO_WR_DATA);
|
||||
}
|
||||
|
||||
return MDIO_WT_DONE(data) ? 0 : -EIO;
|
||||
err = readw_poll_timeout_atomic(base + PCIE_RC_DL_MDIO_WR_DATA, data,
|
||||
MDIO_WT_DONE(data), 10, 100);
|
||||
return err;
|
||||
}
|
||||
|
||||
/*
|
||||
|
@ -445,7 +440,8 @@ static struct irq_chip brcm_msi_irq_chip = {
|
|||
|
||||
static struct msi_domain_info brcm_msi_domain_info = {
|
||||
/* Multi MSI is supported by the controller, but not by this driver */
|
||||
.flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS),
|
||||
.flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
|
||||
MSI_FLAG_MULTI_PCI_MSI),
|
||||
.chip = &brcm_msi_irq_chip,
|
||||
};
|
||||
|
||||
|
@ -505,21 +501,23 @@ static struct irq_chip brcm_msi_bottom_irq_chip = {
|
|||
.irq_ack = brcm_msi_ack_irq,
|
||||
};
|
||||
|
||||
static int brcm_msi_alloc(struct brcm_msi *msi)
|
||||
static int brcm_msi_alloc(struct brcm_msi *msi, unsigned int nr_irqs)
|
||||
{
|
||||
int hwirq;
|
||||
|
||||
mutex_lock(&msi->lock);
|
||||
hwirq = bitmap_find_free_region(msi->used, msi->nr, 0);
|
||||
hwirq = bitmap_find_free_region(msi->used, msi->nr,
|
||||
order_base_2(nr_irqs));
|
||||
mutex_unlock(&msi->lock);
|
||||
|
||||
return hwirq;
|
||||
}
|
||||
|
||||
static void brcm_msi_free(struct brcm_msi *msi, unsigned long hwirq)
|
||||
static void brcm_msi_free(struct brcm_msi *msi, unsigned long hwirq,
|
||||
unsigned int nr_irqs)
|
||||
{
|
||||
mutex_lock(&msi->lock);
|
||||
bitmap_release_region(msi->used, hwirq, 0);
|
||||
bitmap_release_region(msi->used, hwirq, order_base_2(nr_irqs));
|
||||
mutex_unlock(&msi->lock);
|
||||
}
|
||||
|
||||
|
@ -527,16 +525,17 @@ static int brcm_irq_domain_alloc(struct irq_domain *domain, unsigned int virq,
|
|||
unsigned int nr_irqs, void *args)
|
||||
{
|
||||
struct brcm_msi *msi = domain->host_data;
|
||||
int hwirq;
|
||||
int hwirq, i;
|
||||
|
||||
hwirq = brcm_msi_alloc(msi);
|
||||
hwirq = brcm_msi_alloc(msi, nr_irqs);
|
||||
|
||||
if (hwirq < 0)
|
||||
return hwirq;
|
||||
|
||||
irq_domain_set_info(domain, virq, (irq_hw_number_t)hwirq,
|
||||
&brcm_msi_bottom_irq_chip, domain->host_data,
|
||||
handle_edge_irq, NULL, NULL);
|
||||
for (i = 0; i < nr_irqs; i++)
|
||||
irq_domain_set_info(domain, virq + i, hwirq + i,
|
||||
&brcm_msi_bottom_irq_chip, domain->host_data,
|
||||
handle_edge_irq, NULL, NULL);
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@ -546,7 +545,7 @@ static void brcm_irq_domain_free(struct irq_domain *domain,
|
|||
struct irq_data *d = irq_domain_get_irq_data(domain, virq);
|
||||
struct brcm_msi *msi = irq_data_get_irq_chip_data(d);
|
||||
|
||||
brcm_msi_free(msi, d->hwirq);
|
||||
brcm_msi_free(msi, d->hwirq, nr_irqs);
|
||||
}
|
||||
|
||||
static const struct irq_domain_ops msi_domain_ops = {
|
||||
|
@ -726,7 +725,7 @@ static void __iomem *brcm7425_pcie_map_bus(struct pci_bus *bus,
|
|||
return base + DATA_ADDR(pcie);
|
||||
}
|
||||
|
||||
static inline void brcm_pcie_bridge_sw_init_set_generic(struct brcm_pcie *pcie, u32 val)
|
||||
static void brcm_pcie_bridge_sw_init_set_generic(struct brcm_pcie *pcie, u32 val)
|
||||
{
|
||||
u32 tmp, mask = RGR1_SW_INIT_1_INIT_GENERIC_MASK;
|
||||
u32 shift = RGR1_SW_INIT_1_INIT_GENERIC_SHIFT;
|
||||
|
@ -736,7 +735,7 @@ static inline void brcm_pcie_bridge_sw_init_set_generic(struct brcm_pcie *pcie,
|
|||
writel(tmp, pcie->base + PCIE_RGR1_SW_INIT_1(pcie));
|
||||
}
|
||||
|
||||
static inline void brcm_pcie_bridge_sw_init_set_7278(struct brcm_pcie *pcie, u32 val)
|
||||
static void brcm_pcie_bridge_sw_init_set_7278(struct brcm_pcie *pcie, u32 val)
|
||||
{
|
||||
u32 tmp, mask = RGR1_SW_INIT_1_INIT_7278_MASK;
|
||||
u32 shift = RGR1_SW_INIT_1_INIT_7278_SHIFT;
|
||||
|
@ -746,7 +745,7 @@ static inline void brcm_pcie_bridge_sw_init_set_7278(struct brcm_pcie *pcie, u32
|
|||
writel(tmp, pcie->base + PCIE_RGR1_SW_INIT_1(pcie));
|
||||
}
|
||||
|
||||
static inline void brcm_pcie_perst_set_4908(struct brcm_pcie *pcie, u32 val)
|
||||
static void brcm_pcie_perst_set_4908(struct brcm_pcie *pcie, u32 val)
|
||||
{
|
||||
if (WARN_ONCE(!pcie->perst_reset, "missing PERST# reset controller\n"))
|
||||
return;
|
||||
|
@ -757,7 +756,7 @@ static inline void brcm_pcie_perst_set_4908(struct brcm_pcie *pcie, u32 val)
|
|||
reset_control_deassert(pcie->perst_reset);
|
||||
}
|
||||
|
||||
static inline void brcm_pcie_perst_set_7278(struct brcm_pcie *pcie, u32 val)
|
||||
static void brcm_pcie_perst_set_7278(struct brcm_pcie *pcie, u32 val)
|
||||
{
|
||||
u32 tmp;
|
||||
|
||||
|
@ -767,7 +766,7 @@ static inline void brcm_pcie_perst_set_7278(struct brcm_pcie *pcie, u32 val)
|
|||
writel(tmp, pcie->base + PCIE_MISC_PCIE_CTRL);
|
||||
}
|
||||
|
||||
static inline void brcm_pcie_perst_set_generic(struct brcm_pcie *pcie, u32 val)
|
||||
static void brcm_pcie_perst_set_generic(struct brcm_pcie *pcie, u32 val)
|
||||
{
|
||||
u32 tmp;
|
||||
|
||||
|
@ -776,7 +775,7 @@ static inline void brcm_pcie_perst_set_generic(struct brcm_pcie *pcie, u32 val)
|
|||
writel(tmp, pcie->base + PCIE_RGR1_SW_INIT_1(pcie));
|
||||
}
|
||||
|
||||
static inline int brcm_pcie_get_rc_bar2_size_and_offset(struct brcm_pcie *pcie,
|
||||
static int brcm_pcie_get_rc_bar2_size_and_offset(struct brcm_pcie *pcie,
|
||||
u64 *rc_bar2_size,
|
||||
u64 *rc_bar2_offset)
|
||||
{
|
||||
|
@ -903,11 +902,16 @@ static int brcm_pcie_setup(struct brcm_pcie *pcie)
|
|||
else
|
||||
burst = 0x2; /* 512 bytes */
|
||||
|
||||
/* Set SCB_MAX_BURST_SIZE, CFG_READ_UR_MODE, SCB_ACCESS_EN */
|
||||
/*
|
||||
* Set SCB_MAX_BURST_SIZE, CFG_READ_UR_MODE, SCB_ACCESS_EN,
|
||||
* RCB_MPS_MODE, RCB_64B_MODE
|
||||
*/
|
||||
tmp = readl(base + PCIE_MISC_MISC_CTRL);
|
||||
u32p_replace_bits(&tmp, 1, PCIE_MISC_MISC_CTRL_SCB_ACCESS_EN_MASK);
|
||||
u32p_replace_bits(&tmp, 1, PCIE_MISC_MISC_CTRL_CFG_READ_UR_MODE_MASK);
|
||||
u32p_replace_bits(&tmp, burst, PCIE_MISC_MISC_CTRL_MAX_BURST_SIZE_MASK);
|
||||
u32p_replace_bits(&tmp, 1, PCIE_MISC_MISC_CTRL_PCIE_RCB_MPS_MODE_MASK);
|
||||
u32p_replace_bits(&tmp, 1, PCIE_MISC_MISC_CTRL_PCIE_RCB_64B_MODE_MASK);
|
||||
writel(tmp, base + PCIE_MISC_MISC_CTRL);
|
||||
|
||||
ret = brcm_pcie_get_rc_bar2_size_and_offset(pcie, &rc_bar2_size,
|
||||
|
@ -1033,8 +1037,15 @@ static int brcm_pcie_start_link(struct brcm_pcie *pcie)
|
|||
pcie->perst_set(pcie, 0);
|
||||
|
||||
/*
|
||||
* Give the RC/EP time to wake up, before trying to configure RC.
|
||||
* Intermittently check status for link-up, up to a total of 100ms.
|
||||
* Wait for 100ms after PERST# deassertion; see PCIe CEM specification
|
||||
* sections 2.2, PCIe r5.0, 6.6.1.
|
||||
*/
|
||||
msleep(100);
|
||||
|
||||
/*
|
||||
* Give the RC/EP even more time to wake up, before trying to
|
||||
* configure RC. Intermittently check status for link-up, up to a
|
||||
* total of 100ms.
|
||||
*/
|
||||
for (i = 0; i < 100 && !brcm_pcie_link_up(pcie); i += 5)
|
||||
msleep(5);
|
||||
|
|
|
@ -12,7 +12,6 @@
|
|||
#include <linux/platform_device.h>
|
||||
#include <linux/of_address.h>
|
||||
#include <linux/of_pci.h>
|
||||
#include <linux/of_irq.h>
|
||||
#include <linux/of_platform.h>
|
||||
#include <linux/phy/phy.h>
|
||||
|
||||
|
|
|
@ -18,7 +18,6 @@
|
|||
#include <linux/platform_device.h>
|
||||
#include <linux/of_address.h>
|
||||
#include <linux/of_pci.h>
|
||||
#include <linux/of_irq.h>
|
||||
#include <linux/of_platform.h>
|
||||
#include <linux/phy/phy.h>
|
||||
|
||||
|
|
|
@@ -9,10 +9,10 @@

#include <linux/clk.h>
#include <linux/irqchip/chained_irq.h>
#include <linux/irqdomain.h>
#include <linux/module.h>
#include <linux/msi.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
#include <linux/of_pci.h>
#include <linux/pci-ecam.h>
#include <linux/platform_device.h>

@@ -466,7 +466,8 @@ static int mt7621_pcie_register_host(struct pci_host_bridge *host)
}

static const struct soc_device_attribute mt7621_pcie_quirks_match[] = {
{ .soc_id = "mt7621", .revision = "E2" }
{ .soc_id = "mt7621", .revision = "E2" },
{ /* sentinel */ }
};

static int mt7621_pcie_probe(struct platform_device *pdev)

@ -28,7 +28,6 @@
|
|||
#include <linux/of_device.h>
|
||||
#include <linux/of_pci.h>
|
||||
#include <linux/of_platform.h>
|
||||
#include <linux/of_irq.h>
|
||||
#include <linux/pci.h>
|
||||
#include <linux/pci_ids.h>
|
||||
#include <linux/phy/phy.h>
|
||||
|
|
|
@ -16,7 +16,6 @@
|
|||
#include <linux/of_address.h>
|
||||
#include <linux/of_pci.h>
|
||||
#include <linux/of_platform.h>
|
||||
#include <linux/of_irq.h>
|
||||
#include <linux/pci.h>
|
||||
#include <linux/platform_device.h>
|
||||
#include <linux/pci-ecam.h>
|
||||
|
|
|
@ -17,7 +17,6 @@
|
|||
#include <linux/of_address.h>
|
||||
#include <linux/of_pci.h>
|
||||
#include <linux/of_platform.h>
|
||||
#include <linux/of_irq.h>
|
||||
#include <linux/pci.h>
|
||||
#include <linux/pci-ecam.h>
|
||||
#include <linux/platform_device.h>
|
||||
|
@ -474,15 +473,15 @@ static int nwl_irq_domain_alloc(struct irq_domain *domain, unsigned int virq,
|
|||
|
||||
for (i = 0; i < nr_irqs; i++) {
|
||||
irq_domain_set_info(domain, virq + i, bit + i, &nwl_irq_chip,
|
||||
domain->host_data, handle_simple_irq,
|
||||
NULL, NULL);
|
||||
domain->host_data, handle_simple_irq,
|
||||
NULL, NULL);
|
||||
}
|
||||
mutex_unlock(&msi->lock);
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void nwl_irq_domain_free(struct irq_domain *domain, unsigned int virq,
|
||||
unsigned int nr_irqs)
|
||||
unsigned int nr_irqs)
|
||||
{
|
||||
struct irq_data *data = irq_domain_get_irq_data(domain, virq);
|
||||
struct nwl_pcie *pcie = irq_data_get_irq_chip_data(data);
|
||||
|
@ -722,7 +721,6 @@ static int nwl_pcie_bridge_init(struct nwl_pcie *pcie)
|
|||
/* Enable all misc interrupts */
|
||||
nwl_bridge_writel(pcie, MSGF_MISC_SR_MASKALL, MSGF_MISC_MASK);
|
||||
|
||||
|
||||
/* Disable all legacy interrupts */
|
||||
nwl_bridge_writel(pcie, (u32)~MSGF_LEG_SR_MASKALL, MSGF_LEG_MASK);
|
||||
|
||||
|
|
|
@@ -719,6 +719,7 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
resource_size_t offset[2] = {0};
resource_size_t membar2_offset = 0x2000;
struct pci_bus *child;
struct pci_dev *dev;
int ret;

/*
@@ -859,8 +860,25 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)

pci_scan_child_bus(vmd->bus);
vmd_domain_reset(vmd);
list_for_each_entry(child, &vmd->bus->children, node)
pci_reset_bus(child->self);

/* When Intel VMD is enabled, the OS does not discover the Root Ports
* owned by Intel VMD within the MMCFG space. pci_reset_bus() applies
* a reset to the parent of the PCI device supplied as argument. This
* is why we pass a child device, so the reset can be triggered at
* the Intel bridge level and propagated to all the children in the
* hierarchy.
*/
list_for_each_entry(child, &vmd->bus->children, node) {
if (!list_empty(&child->devices)) {
dev = list_first_entry(&child->devices,
struct pci_dev, bus_list);
if (pci_reset_bus(dev))
pci_warn(dev, "can't reset device: %d\n", ret);

break;
}
}

pci_assign_unassigned_bus_resources(vmd->bus);

/*
@@ -980,6 +998,11 @@ static int vmd_resume(struct device *dev)
struct vmd_dev *vmd = pci_get_drvdata(pdev);
int err, i;

if (vmd->irq_domain)
vmd_set_msi_remapping(vmd, true);
else
vmd_set_msi_remapping(vmd, false);

for (i = 0; i < vmd->msix_count; i++) {
err = devm_request_irq(dev, vmd->irqs[i].virq,
vmd_irq, IRQF_NO_THREAD,

@@ -29,6 +29,9 @@
#define PCI_DOE_FLAG_CANCEL 0
#define PCI_DOE_FLAG_DEAD 1

/* Max data object length is 2^18 dwords */
#define PCI_DOE_MAX_LENGTH (1 << 18)

/**
* struct pci_doe_mb - State for a single DOE mailbox
*
@@ -107,6 +110,7 @@ static int pci_doe_send_req(struct pci_doe_mb *doe_mb,
{
struct pci_dev *pdev = doe_mb->pdev;
int offset = doe_mb->cap_offset;
size_t length;
u32 val;
int i;

@@ -123,15 +127,20 @@ static int pci_doe_send_req(struct pci_doe_mb *doe_mb,
if (FIELD_GET(PCI_DOE_STATUS_ERROR, val))
return -EIO;

/* Length is 2 DW of header + length of payload in DW */
length = 2 + task->request_pl_sz / sizeof(u32);
if (length > PCI_DOE_MAX_LENGTH)
return -EIO;
if (length == PCI_DOE_MAX_LENGTH)
length = 0;

/* Write DOE Header */
val = FIELD_PREP(PCI_DOE_DATA_OBJECT_HEADER_1_VID, task->prot.vid) |
FIELD_PREP(PCI_DOE_DATA_OBJECT_HEADER_1_TYPE, task->prot.type);
pci_write_config_dword(pdev, offset + PCI_DOE_WRITE, val);
/* Length is 2 DW of header + length of payload in DW */
pci_write_config_dword(pdev, offset + PCI_DOE_WRITE,
FIELD_PREP(PCI_DOE_DATA_OBJECT_HEADER_2_LENGTH,
2 + task->request_pl_sz /
sizeof(u32)));
length));
for (i = 0; i < task->request_pl_sz / sizeof(u32); i++)
pci_write_config_dword(pdev, offset + PCI_DOE_WRITE,
task->request_pl[i]);
@@ -178,7 +187,10 @@ static int pci_doe_recv_resp(struct pci_doe_mb *doe_mb, struct pci_doe_task *tas
pci_write_config_dword(pdev, offset + PCI_DOE_READ, 0);

length = FIELD_GET(PCI_DOE_DATA_OBJECT_HEADER_2_LENGTH, val);
if (length > SZ_1M || length < 2)
/* A value of 0x0 indicates max data object length */
if (!length)
length = PCI_DOE_MAX_LENGTH;
if (length < 2)
return -EIO;

/* First 2 dwords have already been read */

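Aside, not part of the patch: the DOE change above relies on the Data Object Header 2 Length field wrapping at the maximum object size of 2^18 dwords, so that maximum is encoded as 0. A small self-contained sketch of that round-trip, with hypothetical helper names, assuming only the encoding convention described in the hunk:

#include <stdio.h>

/* Sketch of the DOE Length convention: the 18-bit field wraps at
 * 2^18 dwords, so the maximum length is written and read back as 0. */
#define DOE_MAX_LEN_DW (1u << 18)

static unsigned int doe_encode_len(unsigned int dwords)
{
	return dwords == DOE_MAX_LEN_DW ? 0 : dwords;
}

static unsigned int doe_decode_len(unsigned int field)
{
	return field ? field : DOE_MAX_LEN_DW;
}

int main(void)
{
	/* prints "0 -> 262144": max length encodes to 0 and decodes back */
	printf("%u -> %u\n", doe_encode_len(DOE_MAX_LEN_DW), doe_decode_len(0));
	return 0;
}
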
@@ -27,13 +27,13 @@ config PCI_EPF_NTB
If in doubt, say "N" to disable Endpoint NTB driver.

config PCI_EPF_VNTB
tristate "PCI Endpoint NTB driver"
depends on PCI_ENDPOINT
depends on NTB
select CONFIGFS_FS
help
Select this configuration option to enable the Non-Transparent
Bridge (NTB) driver for PCIe Endpoint. NTB driver implements NTB
between PCI Root Port and PCIe Endpoint.
tristate "PCI Endpoint NTB driver"
depends on PCI_ENDPOINT
depends on NTB
select CONFIGFS_FS
help
Select this configuration option to enable the Non-Transparent
Bridge (NTB) driver for PCIe Endpoint. NTB driver implements NTB
between PCI Root Port and PCIe Endpoint.

If in doubt, say "N" to disable Endpoint NTB driver.
If in doubt, say "N" to disable Endpoint NTB driver.

@@ -979,7 +979,7 @@ static int pci_epf_test_bind(struct pci_epf *epf)
if (ret)
epf_test->dma_supported = false;

if (linkup_notifier) {
if (linkup_notifier || core_init_notifier) {
epf->nb.notifier_call = pci_epf_test_notifier;
pci_epc_register_notifier(epc, &epf->nb);
} else {

@ -11,7 +11,7 @@
|
|||
* Author: Kishon Vijay Abraham I <kishon@ti.com>
|
||||
*/
|
||||
|
||||
/**
|
||||
/*
|
||||
* +------------+ +---------------------------------------+
|
||||
* | | | |
|
||||
* +------------+ | +--------------+
|
||||
|
@ -99,20 +99,20 @@ enum epf_ntb_bar {
|
|||
* NTB Driver NTB Driver
|
||||
*/
|
||||
struct epf_ntb_ctrl {
|
||||
u32 command;
|
||||
u32 argument;
|
||||
u16 command_status;
|
||||
u16 link_status;
|
||||
u32 topology;
|
||||
u64 addr;
|
||||
u64 size;
|
||||
u32 num_mws;
|
||||
u32 reserved;
|
||||
u32 spad_offset;
|
||||
u32 spad_count;
|
||||
u32 db_entry_size;
|
||||
u32 db_data[MAX_DB_COUNT];
|
||||
u32 db_offset[MAX_DB_COUNT];
|
||||
u32 command;
|
||||
u32 argument;
|
||||
u16 command_status;
|
||||
u16 link_status;
|
||||
u32 topology;
|
||||
u64 addr;
|
||||
u64 size;
|
||||
u32 num_mws;
|
||||
u32 reserved;
|
||||
u32 spad_offset;
|
||||
u32 spad_count;
|
||||
u32 db_entry_size;
|
||||
u32 db_data[MAX_DB_COUNT];
|
||||
u32 db_offset[MAX_DB_COUNT];
|
||||
} __packed;
|
||||
|
||||
struct epf_ntb {
|
||||
|
@ -136,8 +136,7 @@ struct epf_ntb {
|
|||
|
||||
struct epf_ntb_ctrl *reg;
|
||||
|
||||
phys_addr_t epf_db_phy;
|
||||
void __iomem *epf_db;
|
||||
u32 *epf_db;
|
||||
|
||||
phys_addr_t vpci_mw_phy[MAX_MW];
|
||||
void __iomem *vpci_mw_addr[MAX_MW];
|
||||
|
@ -156,12 +155,14 @@ static struct pci_epf_header epf_ntb_header = {
|
|||
};
|
||||
|
||||
/**
|
||||
* epf_ntb_link_up() - Raise link_up interrupt to Virtual Host
|
||||
* epf_ntb_link_up() - Raise link_up interrupt to Virtual Host (VHOST)
|
||||
* @ntb: NTB device that facilitates communication between HOST and VHOST
|
||||
* @link_up: true or false indicating Link is UP or Down
|
||||
*
|
||||
* Once NTB function in HOST invoke ntb_link_enable(),
|
||||
* this NTB function driver will trigger a link event to vhost.
|
||||
* this NTB function driver will trigger a link event to VHOST.
|
||||
*
|
||||
* Returns: Zero for success, or an error code in case of failure
|
||||
*/
|
||||
static int epf_ntb_link_up(struct epf_ntb *ntb, bool link_up)
|
||||
{
|
||||
|
@ -175,9 +176,9 @@ static int epf_ntb_link_up(struct epf_ntb *ntb, bool link_up)
|
|||
}
|
||||
|
||||
/**
|
||||
* epf_ntb_configure_mw() - Configure the Outbound Address Space for vhost
|
||||
* to access the memory window of host
|
||||
* @ntb: NTB device that facilitates communication between host and vhost
|
||||
* epf_ntb_configure_mw() - Configure the Outbound Address Space for VHOST
|
||||
* to access the memory window of HOST
|
||||
* @ntb: NTB device that facilitates communication between HOST and VHOST
|
||||
* @mw: Index of the memory window (either 0, 1, 2 or 3)
|
||||
*
|
||||
* EP Outbound Window
|
||||
|
@ -194,7 +195,9 @@ static int epf_ntb_link_up(struct epf_ntb *ntb, bool link_up)
|
|||
* | | | |
|
||||
* | | | |
|
||||
* +--------+ +-----------+
|
||||
* VHost PCI EP
|
||||
* VHOST PCI EP
|
||||
*
|
||||
* Returns: Zero for success, or an error code in case of failure
|
||||
*/
|
||||
static int epf_ntb_configure_mw(struct epf_ntb *ntb, u32 mw)
|
||||
{
|
||||
|
@ -219,7 +222,7 @@ static int epf_ntb_configure_mw(struct epf_ntb *ntb, u32 mw)
|
|||
|
||||
/**
|
||||
* epf_ntb_teardown_mw() - Teardown the configured OB ATU
|
||||
* @ntb: NTB device that facilitates communication between HOST and vHOST
|
||||
* @ntb: NTB device that facilitates communication between HOST and VHOST
|
||||
* @mw: Index of the memory window (either 0, 1, 2 or 3)
|
||||
*
|
||||
* Teardown the configured OB ATU configured in epf_ntb_configure_mw() using
|
||||
|
@ -234,12 +237,12 @@ static void epf_ntb_teardown_mw(struct epf_ntb *ntb, u32 mw)
|
|||
}
|
||||
|
||||
/**
|
||||
* epf_ntb_cmd_handler() - Handle commands provided by the NTB Host
|
||||
* epf_ntb_cmd_handler() - Handle commands provided by the NTB HOST
|
||||
* @work: work_struct for the epf_ntb_epc
|
||||
*
|
||||
* Workqueue function that gets invoked for the two epf_ntb_epc
|
||||
* periodically (once every 5ms) to see if it has received any commands
|
||||
* from NTB host. The host can send commands to configure doorbell or
|
||||
* from NTB HOST. The HOST can send commands to configure doorbell or
|
||||
* configure memory window or to update link status.
|
||||
*/
|
||||
static void epf_ntb_cmd_handler(struct work_struct *work)
|
||||
|
@ -254,12 +257,10 @@ static void epf_ntb_cmd_handler(struct work_struct *work)
|
|||
ntb = container_of(work, struct epf_ntb, cmd_handler.work);
|
||||
|
||||
for (i = 1; i < ntb->db_count; i++) {
|
||||
if (readl(ntb->epf_db + i * 4)) {
|
||||
if (readl(ntb->epf_db + i * 4))
|
||||
ntb->db |= 1 << (i - 1);
|
||||
|
||||
if (ntb->epf_db[i]) {
|
||||
ntb->db |= 1 << (i - 1);
|
||||
ntb_db_event(&ntb->ntb, i);
|
||||
writel(0, ntb->epf_db + i * 4);
|
||||
ntb->epf_db[i] = 0;
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -321,8 +322,8 @@ reset_handler:
|
|||
|
||||
/**
|
||||
* epf_ntb_config_sspad_bar_clear() - Clear Config + Self scratchpad BAR
|
||||
* @ntb_epc: EPC associated with one of the HOST which holds peer's outbound
|
||||
* address.
|
||||
* @ntb: EPC associated with one of the HOST which holds peer's outbound
|
||||
* address.
|
||||
*
|
||||
* Clear BAR0 of EP CONTROLLER 1 which contains the HOST1's config and
|
||||
* self scratchpad region (removes inbound ATU configuration). While BAR0 is
|
||||
|
@ -331,8 +332,10 @@ reset_handler:
|
|||
* used for self scratchpad from epf_ntb_bar[BAR_CONFIG].
|
||||
*
|
||||
* Please note the self scratchpad region and config region is combined to
|
||||
* a single region and mapped using the same BAR. Also note HOST2's peer
|
||||
* scratchpad is HOST1's self scratchpad.
|
||||
* a single region and mapped using the same BAR. Also note VHOST's peer
|
||||
* scratchpad is HOST's self scratchpad.
|
||||
*
|
||||
* Returns: void
|
||||
*/
|
||||
static void epf_ntb_config_sspad_bar_clear(struct epf_ntb *ntb)
|
||||
{
|
||||
|
@ -347,13 +350,15 @@ static void epf_ntb_config_sspad_bar_clear(struct epf_ntb *ntb)
|
|||
|
||||
/**
|
||||
* epf_ntb_config_sspad_bar_set() - Set Config + Self scratchpad BAR
|
||||
* @ntb: NTB device that facilitates communication between HOST and vHOST
|
||||
* @ntb: NTB device that facilitates communication between HOST and VHOST
|
||||
*
|
||||
* Map BAR0 of EP CONTROLLER 1 which contains the HOST1's config and
|
||||
* Map BAR0 of EP CONTROLLER which contains the VHOST's config and
|
||||
* self scratchpad region.
|
||||
*
|
||||
* Please note the self scratchpad region and config region is combined to
|
||||
* a single region and mapped using the same BAR.
|
||||
*
|
||||
* Returns: Zero for success, or an error code in case of failure
|
||||
*/
|
||||
static int epf_ntb_config_sspad_bar_set(struct epf_ntb *ntb)
|
||||
{
|
||||
|
@ -380,7 +385,7 @@ static int epf_ntb_config_sspad_bar_set(struct epf_ntb *ntb)
|
|||
/**
|
||||
* epf_ntb_config_spad_bar_free() - Free the physical memory associated with
|
||||
* config + scratchpad region
|
||||
* @ntb: NTB device that facilitates communication between HOST and vHOST
|
||||
* @ntb: NTB device that facilitates communication between HOST and VHOST
|
||||
*/
|
||||
static void epf_ntb_config_spad_bar_free(struct epf_ntb *ntb)
|
||||
{
|
||||
|
@ -393,11 +398,13 @@ static void epf_ntb_config_spad_bar_free(struct epf_ntb *ntb)
|
|||
/**
|
||||
* epf_ntb_config_spad_bar_alloc() - Allocate memory for config + scratchpad
|
||||
* region
|
||||
* @ntb: NTB device that facilitates communication between HOST1 and HOST2
|
||||
* @ntb: NTB device that facilitates communication between HOST and VHOST
|
||||
*
|
||||
* Allocate the Local Memory mentioned in the above diagram. The size of
|
||||
* CONFIG REGION is sizeof(struct epf_ntb_ctrl) and size of SCRATCHPAD REGION
|
||||
* is obtained from "spad-count" configfs entry.
|
||||
*
|
||||
* Returns: Zero for success, or an error code in case of failure
|
||||
*/
|
||||
static int epf_ntb_config_spad_bar_alloc(struct epf_ntb *ntb)
|
||||
{
|
||||
|
@ -424,7 +431,7 @@ static int epf_ntb_config_spad_bar_alloc(struct epf_ntb *ntb)
|
|||
spad_count = ntb->spad_count;
|
||||
|
||||
ctrl_size = sizeof(struct epf_ntb_ctrl);
|
||||
spad_size = 2 * spad_count * 4;
|
||||
spad_size = 2 * spad_count * sizeof(u32);
|
||||
|
||||
if (!align) {
|
||||
ctrl_size = roundup_pow_of_two(ctrl_size);
|
||||
|
@ -454,7 +461,7 @@ static int epf_ntb_config_spad_bar_alloc(struct epf_ntb *ntb)
|
|||
ctrl->num_mws = ntb->num_mws;
|
||||
ntb->spad_size = spad_size;
|
||||
|
||||
ctrl->db_entry_size = 4;
|
||||
ctrl->db_entry_size = sizeof(u32);
|
||||
|
||||
for (i = 0; i < ntb->db_count; i++) {
|
||||
ntb->reg->db_data[i] = 1 + i;
|
||||
|
@ -465,11 +472,13 @@ static int epf_ntb_config_spad_bar_alloc(struct epf_ntb *ntb)
|
|||
}
|
||||
|
||||
/**
|
||||
* epf_ntb_configure_interrupt() - Configure MSI/MSI-X capaiblity
|
||||
* @ntb: NTB device that facilitates communication between HOST and vHOST
|
||||
* epf_ntb_configure_interrupt() - Configure MSI/MSI-X capability
|
||||
* @ntb: NTB device that facilitates communication between HOST and VHOST
|
||||
*
|
||||
* Configure MSI/MSI-X capability for each interface with number of
|
||||
* interrupts equal to "db_count" configfs entry.
|
||||
*
|
||||
* Returns: Zero for success, or an error code in case of failure
|
||||
*/
|
||||
static int epf_ntb_configure_interrupt(struct epf_ntb *ntb)
|
||||
{
|
||||
|
@ -511,7 +520,9 @@ static int epf_ntb_configure_interrupt(struct epf_ntb *ntb)
|
|||
|
||||
/**
|
||||
* epf_ntb_db_bar_init() - Configure Doorbell window BARs
|
||||
* @ntb: NTB device that facilitates communication between HOST and vHOST
|
||||
* @ntb: NTB device that facilitates communication between HOST and VHOST
|
||||
*
|
||||
* Returns: Zero for success, or an error code in case of failure
|
||||
*/
|
||||
static int epf_ntb_db_bar_init(struct epf_ntb *ntb)
|
||||
{
|
||||
|
@ -522,7 +533,7 @@ static int epf_ntb_db_bar_init(struct epf_ntb *ntb)
|
|||
struct pci_epf_bar *epf_bar;
|
||||
void __iomem *mw_addr;
|
||||
enum pci_barno barno;
|
||||
size_t size = 4 * ntb->db_count;
|
||||
size_t size = sizeof(u32) * ntb->db_count;
|
||||
|
||||
epc_features = pci_epc_get_features(ntb->epf->epc,
|
||||
ntb->epf->func_no,
|
||||
|
@ -557,7 +568,7 @@ static int epf_ntb_db_bar_init(struct epf_ntb *ntb)
|
|||
return ret;
|
||||
|
||||
err_alloc_peer_mem:
|
||||
pci_epc_mem_free_addr(ntb->epf->epc, epf_bar->phys_addr, mw_addr, epf_bar->size);
|
||||
pci_epf_free_space(ntb->epf, mw_addr, barno, 0);
|
||||
return -1;
|
||||
}
|
||||
|
||||
|
@ -566,7 +577,7 @@ static void epf_ntb_mw_bar_clear(struct epf_ntb *ntb, int num_mws);
|
|||
/**
|
||||
* epf_ntb_db_bar_clear() - Clear doorbell BAR and free memory
|
||||
* allocated in peer's outbound address space
|
||||
* @ntb: NTB device that facilitates communication between HOST and vHOST
|
||||
* @ntb: NTB device that facilitates communication between HOST and VHOST
|
||||
*/
|
||||
static void epf_ntb_db_bar_clear(struct epf_ntb *ntb)
|
||||
{
|
||||
|
@ -582,8 +593,9 @@ static void epf_ntb_db_bar_clear(struct epf_ntb *ntb)
|
|||
|
||||
/**
|
||||
* epf_ntb_mw_bar_init() - Configure Memory window BARs
|
||||
* @ntb: NTB device that facilitates communication between HOST and vHOST
|
||||
* @ntb: NTB device that facilitates communication between HOST and VHOST
|
||||
*
|
||||
* Returns: Zero for success, or an error code in case of failure
|
||||
*/
|
||||
static int epf_ntb_mw_bar_init(struct epf_ntb *ntb)
|
||||
{
|
||||
|
@ -639,7 +651,7 @@ err_alloc_mem:
|
|||
|
||||
/**
|
||||
* epf_ntb_mw_bar_clear() - Clear Memory window BARs
|
||||
* @ntb: NTB device that facilitates communication between HOST and vHOST
|
||||
* @ntb: NTB device that facilitates communication between HOST and VHOST
|
||||
*/
|
||||
static void epf_ntb_mw_bar_clear(struct epf_ntb *ntb, int num_mws)
|
||||
{
|
||||
|
@ -662,7 +674,7 @@ static void epf_ntb_mw_bar_clear(struct epf_ntb *ntb, int num_mws)
|
|||
|
||||
/**
|
||||
* epf_ntb_epc_destroy() - Cleanup NTB EPC interface
|
||||
* @ntb: NTB device that facilitates communication between HOST and vHOST
|
||||
* @ntb: NTB device that facilitates communication between HOST and VHOST
|
||||
*
|
||||
* Wrapper for epf_ntb_epc_destroy_interface() to cleanup all the NTB interfaces
|
||||
*/
|
||||
|
@ -675,7 +687,9 @@ static void epf_ntb_epc_destroy(struct epf_ntb *ntb)
|
|||
/**
|
||||
* epf_ntb_init_epc_bar() - Identify BARs to be used for each of the NTB
|
||||
* constructs (scratchpad region, doorbell, memorywindow)
|
||||
* @ntb: NTB device that facilitates communication between HOST and vHOST
|
||||
* @ntb: NTB device that facilitates communication between HOST and VHOST
|
||||
*
|
||||
* Returns: Zero for success, or an error code in case of failure
|
||||
*/
|
||||
static int epf_ntb_init_epc_bar(struct epf_ntb *ntb)
|
||||
{
|
||||
|
@ -716,11 +730,13 @@ static int epf_ntb_init_epc_bar(struct epf_ntb *ntb)
|
|||
|
||||
/**
|
||||
* epf_ntb_epc_init() - Initialize NTB interface
|
||||
* @ntb: NTB device that facilitates communication between HOST and vHOST2
|
||||
* @ntb: NTB device that facilitates communication between HOST and VHOST
|
||||
*
|
||||
* Wrapper to initialize a particular EPC interface and start the workqueue
|
||||
* to check for commands from host. This function will write to the
|
||||
* to check for commands from HOST. This function will write to the
|
||||
* EP controller HW for configuring it.
|
||||
*
|
||||
* Returns: Zero for success, or an error code in case of failure
|
||||
*/
|
||||
static int epf_ntb_epc_init(struct epf_ntb *ntb)
|
||||
{
|
||||
|
@ -787,7 +803,7 @@ err_config_interrupt:
|
|||
|
||||
/**
|
||||
* epf_ntb_epc_cleanup() - Cleanup all NTB interfaces
|
||||
* @ntb: NTB device that facilitates communication between HOST1 and HOST2
|
||||
* @ntb: NTB device that facilitates communication between HOST and VHOST
|
||||
*
|
||||
* Wrapper to cleanup all NTB interfaces.
|
||||
*/
|
||||
|
@ -951,6 +967,8 @@ static const struct config_item_type ntb_group_type = {
|
|||
*
|
||||
* Add configfs directory specific to NTB. This directory will hold
|
||||
* NTB specific properties like db_count, spad_count, num_mws etc.,
|
||||
*
|
||||
* Returns: Pointer to config_group
|
||||
*/
|
||||
static struct config_group *epf_ntb_add_cfs(struct pci_epf *epf,
|
||||
struct config_group *group)
|
||||
|
@ -1101,11 +1119,11 @@ static int vntb_epf_link_enable(struct ntb_dev *ntb,
|
|||
static u32 vntb_epf_spad_read(struct ntb_dev *ndev, int idx)
|
||||
{
|
||||
struct epf_ntb *ntb = ntb_ndev(ndev);
|
||||
int off = ntb->reg->spad_offset, ct = ntb->reg->spad_count * 4;
|
||||
int off = ntb->reg->spad_offset, ct = ntb->reg->spad_count * sizeof(u32);
|
||||
u32 val;
|
||||
void __iomem *base = ntb->reg;
|
||||
void __iomem *base = (void __iomem *)ntb->reg;
|
||||
|
||||
val = readl(base + off + ct + idx * 4);
|
||||
val = readl(base + off + ct + idx * sizeof(u32));
|
||||
return val;
|
||||
}
|
||||
|
||||
|
@ -1113,10 +1131,10 @@ static int vntb_epf_spad_write(struct ntb_dev *ndev, int idx, u32 val)
|
|||
{
|
||||
struct epf_ntb *ntb = ntb_ndev(ndev);
|
||||
struct epf_ntb_ctrl *ctrl = ntb->reg;
|
||||
int off = ctrl->spad_offset, ct = ctrl->spad_count * 4;
|
||||
void __iomem *base = ntb->reg;
|
||||
int off = ctrl->spad_offset, ct = ctrl->spad_count * sizeof(u32);
|
||||
void __iomem *base = (void __iomem *)ntb->reg;
|
||||
|
||||
writel(val, base + off + ct + idx * 4);
|
||||
writel(val, base + off + ct + idx * sizeof(u32));
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@ -1125,10 +1143,10 @@ static u32 vntb_epf_peer_spad_read(struct ntb_dev *ndev, int pidx, int idx)
|
|||
struct epf_ntb *ntb = ntb_ndev(ndev);
|
||||
struct epf_ntb_ctrl *ctrl = ntb->reg;
|
||||
int off = ctrl->spad_offset;
|
||||
void __iomem *base = ntb->reg;
|
||||
void __iomem *base = (void __iomem *)ntb->reg;
|
||||
u32 val;
|
||||
|
||||
val = readl(base + off + idx * 4);
|
||||
val = readl(base + off + idx * sizeof(u32));
|
||||
return val;
|
||||
}
|
||||
|
||||
|
@ -1137,9 +1155,9 @@ static int vntb_epf_peer_spad_write(struct ntb_dev *ndev, int pidx, int idx, u32
|
|||
struct epf_ntb *ntb = ntb_ndev(ndev);
|
||||
struct epf_ntb_ctrl *ctrl = ntb->reg;
|
||||
int off = ctrl->spad_offset;
|
||||
void __iomem *base = ntb->reg;
|
||||
void __iomem *base = (void __iomem *)ntb->reg;
|
||||
|
||||
writel(val, base + off + idx * 4);
|
||||
writel(val, base + off + idx * sizeof(u32));
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@ -1292,6 +1310,8 @@ static struct pci_driver vntb_pci_driver = {
|
|||
* Invoked when a primary interface or secondary interface is bound to EPC
|
||||
* device. This function will succeed only when EPC is bound to both the
|
||||
* interfaces.
|
||||
*
|
||||
* Returns: Zero for success, or an error code in case of failure
|
||||
*/
|
||||
static int epf_ntb_bind(struct pci_epf *epf)
|
||||
{
|
||||
|
@ -1377,6 +1397,8 @@ static struct pci_epf_ops epf_ntb_ops = {
|
|||
*
|
||||
* Probe NTB function driver when endpoint function bus detects a NTB
|
||||
* endpoint function.
|
||||
*
|
||||
* Returns: Zero for success, or an error code in case of failure
|
||||
*/
|
||||
static int epf_ntb_probe(struct pci_epf *epf)
|
||||
{
|
||||
|
|
|
@@ -724,7 +724,6 @@ void pci_epc_destroy(struct pci_epc *epc)
{
pci_ep_cfs_remove_epc_group(epc->group);
device_unregister(&epc->dev);
kfree(epc);
}
EXPORT_SYMBOL_GPL(pci_epc_destroy);

@@ -746,6 +745,11 @@ void devm_pci_epc_destroy(struct device *dev, struct pci_epc *epc)
}
EXPORT_SYMBOL_GPL(devm_pci_epc_destroy);

static void pci_epc_release(struct device *dev)
{
kfree(to_pci_epc(dev));
}

/**
* __pci_epc_create() - create a new endpoint controller (EPC) device
* @dev: device that is creating the new EPC
@@ -779,6 +783,7 @@ __pci_epc_create(struct device *dev, const struct pci_epc_ops *ops,
device_initialize(&epc->dev);
epc->dev.class = pci_epc_class;
epc->dev.parent = dev;
epc->dev.release = pci_epc_release;
epc->ops = ops;

ret = dev_set_name(&epc->dev, "%s", dev_name(dev));

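Aside, not part of the patch: the hunk above moves the kfree() of the EPC into a struct device release callback instead of freeing right after device_unregister(). A rough sketch of that general pattern, with generic names rather than the pci_epc code:

#include <linux/device.h>
#include <linux/slab.h>

/* Generic sketch: the object embedding a struct device is freed from
 * ->release when the last reference is dropped, not unconditionally
 * after device_unregister(). */
struct foo {
	struct device dev;
};

static void foo_release(struct device *dev)
{
	kfree(container_of(dev, struct foo, dev));
}

static struct foo *foo_create(struct device *parent)
{
	struct foo *foo = kzalloc(sizeof(*foo), GFP_KERNEL);

	if (!foo)
		return NULL;
	device_initialize(&foo->dev);
	foo->dev.parent = parent;
	foo->dev.release = foo_release;	/* the final put frees foo */
	return foo;
}
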
@@ -6,11 +6,14 @@
menuconfig HOTPLUG_PCI
bool "Support for PCI Hotplug"
depends on PCI && SYSFS
default y if USB4
help
Say Y here if you have a motherboard with a PCI Hotplug controller.
This allows you to add and remove PCI cards while the machine is
powered up and running.

Thunderbolt/USB4 PCIe tunneling depends on native PCIe hotplug.

When in doubt, say N.

if HOTPLUG_PCI

@@ -58,9 +58,6 @@ shpchp:
pciehp with commit 82a9e79ef132 ("PCI: pciehp: remove hpc_ops"). Clarify
if there was a specific reason not to apply the same change to shpchp.

* The ->get_mode1_ECC_cap callback in shpchp_hpc_ops is never invoked.
Why was it introduced? Can it be removed?

* The hardirq handler shpc_isr() queues events on a workqueue. It can be
simplified by converting it to threaded IRQ handling. Use pciehp as a
template.

@@ -411,6 +411,14 @@ static void check_hotplug_bridge(struct acpiphp_slot *slot, struct pci_dev *dev)
if (dev->is_hotplug_bridge)
return;

/*
* In the PCIe case, only Root Ports and Downstream Ports are capable of
* accommodating hotplug devices, so avoid marking Upstream Ports as
* "hotplug bridges".
*/
if (pci_is_pcie(dev) && pci_pcie_type(dev) == PCI_EXP_TYPE_UPSTREAM)
return;

list_for_each_entry(func, &slot->funcs, sibling) {
if (PCI_FUNC(dev->devfn) == func->function) {
dev->is_hotplug_bridge = 1;

@@ -811,7 +811,9 @@ static void pcie_enable_notification(struct controller *ctrl)
else
cmd |= PCI_EXP_SLTCTL_PDCE;
if (!pciehp_poll_mode)
cmd |= PCI_EXP_SLTCTL_HPIE | PCI_EXP_SLTCTL_CCIE;
cmd |= PCI_EXP_SLTCTL_HPIE;
if (!pciehp_poll_mode && !NO_CMD_CMPL(ctrl))
cmd |= PCI_EXP_SLTCTL_CCIE;

mask = (PCI_EXP_SLTCTL_PDCE | PCI_EXP_SLTCTL_ABPE |
PCI_EXP_SLTCTL_PFDE |

@@ -311,7 +311,6 @@ struct hpc_ops {
int (*get_latch_status)(struct slot *slot, u8 *status);
int (*get_adapter_status)(struct slot *slot, u8 *status);
int (*get_adapter_speed)(struct slot *slot, enum pci_bus_speed *speed);
int (*get_mode1_ECC_cap)(struct slot *slot, u8 *mode);
int (*get_prog_int)(struct slot *slot, u8 *prog_int);
int (*query_power_fault)(struct slot *slot);
void (*green_led_on)(struct slot *slot);

@ -489,23 +489,6 @@ static int hpc_get_adapter_speed(struct slot *slot, enum pci_bus_speed *value)
|
|||
return retval;
|
||||
}
|
||||
|
||||
static int hpc_get_mode1_ECC_cap(struct slot *slot, u8 *mode)
|
||||
{
|
||||
int retval = 0;
|
||||
struct controller *ctrl = slot->ctrl;
|
||||
u16 sec_bus_status = shpc_readw(ctrl, SEC_BUS_CONFIG);
|
||||
u8 pi = shpc_readb(ctrl, PROG_INTERFACE);
|
||||
|
||||
if (pi == 2) {
|
||||
*mode = (sec_bus_status & 0x0100) >> 8;
|
||||
} else {
|
||||
retval = -1;
|
||||
}
|
||||
|
||||
ctrl_dbg(ctrl, "Mode 1 ECC cap = %d\n", *mode);
|
||||
return retval;
|
||||
}
|
||||
|
||||
static int hpc_query_power_fault(struct slot *slot)
|
||||
{
|
||||
struct controller *ctrl = slot->ctrl;
|
||||
|
@ -900,7 +883,6 @@ static const struct hpc_ops shpchp_hpc_ops = {
|
|||
.get_adapter_status = hpc_get_adapter_status,
|
||||
|
||||
.get_adapter_speed = hpc_get_adapter_speed,
|
||||
.get_mode1_ECC_cap = hpc_get_mode1_ECC_cap,
|
||||
.get_prog_int = hpc_get_prog_int,
|
||||
|
||||
.query_power_fault = hpc_query_power_fault,
|
||||
|
|
|
@@ -44,6 +44,8 @@ int pci_request_irq(struct pci_dev *dev, unsigned int nr, irq_handler_t handler,
va_start(ap, fmt);
devname = kvasprintf(GFP_KERNEL, fmt, ap);
va_end(ap);
if (!devname)
return -ENOMEM;

ret = request_threaded_irq(pci_irq_vector(dev, nr), handler, thread_fn,
irqflags, devname, dev_id);

@@ -67,7 +67,7 @@ static acpi_status acpi_match_rc(acpi_handle handle, u32 lvl, void *context,
unsigned long long uid;
acpi_status status;

status = acpi_evaluate_integer(handle, "_UID", NULL, &uid);
status = acpi_evaluate_integer(handle, METHOD_NAME__UID, NULL, &uid);
if (ACPI_FAILURE(status) || uid != *segment)
return AE_CTRL_DEPTH;

@ -646,7 +646,7 @@ static int pci_legacy_suspend(struct device *dev, pm_message_t state)
|
|||
return 0;
|
||||
}
|
||||
|
||||
static int pci_legacy_suspend_late(struct device *dev, pm_message_t state)
|
||||
static int pci_legacy_suspend_late(struct device *dev)
|
||||
{
|
||||
struct pci_dev *pci_dev = to_pci_dev(dev);
|
||||
|
||||
|
@ -848,7 +848,7 @@ static int pci_pm_suspend_noirq(struct device *dev)
|
|||
return 0;
|
||||
|
||||
if (pci_has_legacy_pm_support(pci_dev))
|
||||
return pci_legacy_suspend_late(dev, PMSG_SUSPEND);
|
||||
return pci_legacy_suspend_late(dev);
|
||||
|
||||
if (!pm) {
|
||||
pci_save_state(pci_dev);
|
||||
|
@ -1060,7 +1060,7 @@ static int pci_pm_freeze_noirq(struct device *dev)
|
|||
const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
|
||||
|
||||
if (pci_has_legacy_pm_support(pci_dev))
|
||||
return pci_legacy_suspend_late(dev, PMSG_FREEZE);
|
||||
return pci_legacy_suspend_late(dev);
|
||||
|
||||
if (pm && pm->freeze_noirq) {
|
||||
int error;
|
||||
|
@ -1179,7 +1179,7 @@ static int pci_pm_poweroff_noirq(struct device *dev)
|
|||
return 0;
|
||||
|
||||
if (pci_has_legacy_pm_support(pci_dev))
|
||||
return pci_legacy_suspend_late(dev, PMSG_HIBERNATE);
|
||||
return pci_legacy_suspend_late(dev);
|
||||
|
||||
if (!pm) {
|
||||
pci_fixup_device(pci_fixup_suspend_late, pci_dev);
|
||||
|
|
|
@ -1182,11 +1182,9 @@ static int pci_create_attr(struct pci_dev *pdev, int num, int write_combine)
|
|||
|
||||
sysfs_bin_attr_init(res_attr);
|
||||
if (write_combine) {
|
||||
pdev->res_attr_wc[num] = res_attr;
|
||||
sprintf(res_attr_name, "resource%d_wc", num);
|
||||
res_attr->mmap = pci_mmap_resource_wc;
|
||||
} else {
|
||||
pdev->res_attr[num] = res_attr;
|
||||
sprintf(res_attr_name, "resource%d", num);
|
||||
if (pci_resource_flags(pdev, num) & IORESOURCE_IO) {
|
||||
res_attr->read = pci_read_resource_io;
|
||||
|
@ -1204,10 +1202,17 @@ static int pci_create_attr(struct pci_dev *pdev, int num, int write_combine)
|
|||
res_attr->size = pci_resource_len(pdev, num);
|
||||
res_attr->private = (void *)(unsigned long)num;
|
||||
retval = sysfs_create_bin_file(&pdev->dev.kobj, res_attr);
|
||||
if (retval)
|
||||
if (retval) {
|
||||
kfree(res_attr);
|
||||
return retval;
|
||||
}
|
||||
|
||||
return retval;
|
||||
if (write_combine)
|
||||
pdev->res_attr_wc[num] = res_attr;
|
||||
else
|
||||
pdev->res_attr[num] = res_attr;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
/**
|
||||
|
|
|
@ -6447,6 +6447,8 @@ bool pci_device_is_present(struct pci_dev *pdev)
|
|||
{
|
||||
u32 v;
|
||||
|
||||
/* Check PF if pdev is a VF, since VF Vendor/Device IDs are 0xffff */
|
||||
pdev = pci_physfn(pdev);
|
||||
if (pci_dev_is_disconnected(pdev))
|
||||
return false;
|
||||
return pci_bus_read_dev_vendor_id(pdev->bus, pdev->devfn, &v, 0);
|
||||
|
@@ -6743,60 +6745,70 @@ static void pci_no_domains(void)
}

#ifdef CONFIG_PCI_DOMAINS_GENERIC
static atomic_t __domain_nr = ATOMIC_INIT(-1);
static DEFINE_IDA(pci_domain_nr_static_ida);
static DEFINE_IDA(pci_domain_nr_dynamic_ida);

static int pci_get_new_domain_nr(void)
static void of_pci_reserve_static_domain_nr(void)
{
	return atomic_inc_return(&__domain_nr);
	struct device_node *np;
	int domain_nr;

	for_each_node_by_type(np, "pci") {
		domain_nr = of_get_pci_domain_nr(np);
		if (domain_nr < 0)
			continue;
		/*
		 * Permanently allocate domain_nr in dynamic_ida
		 * to prevent it from dynamic allocation.
		 */
		ida_alloc_range(&pci_domain_nr_dynamic_ida,
				domain_nr, domain_nr, GFP_KERNEL);
	}
}

static int of_pci_bus_find_domain_nr(struct device *parent)
{
	static int use_dt_domains = -1;
	int domain = -1;
	static bool static_domains_reserved = false;
	int domain_nr;

	if (parent)
		domain = of_get_pci_domain_nr(parent->of_node);

	/*
	 * Check DT domain and use_dt_domains values.
	 *
	 * If DT domain property is valid (domain >= 0) and
	 * use_dt_domains != 0, the DT assignment is valid since this means
	 * we have not previously allocated a domain number by using
	 * pci_get_new_domain_nr(); we should also update use_dt_domains to
	 * 1, to indicate that we have just assigned a domain number from
	 * DT.
	 *
	 * If DT domain property value is not valid (ie domain < 0), and we
	 * have not previously assigned a domain number from DT
	 * (use_dt_domains != 1) we should assign a domain number by
	 * using the:
	 *
	 *	pci_get_new_domain_nr()
	 *
	 * API and update the use_dt_domains value to keep track of method we
	 * are using to assign domain numbers (use_dt_domains = 0).
	 *
	 * All other combinations imply we have a platform that is trying
	 * to mix domain numbers obtained from DT and pci_get_new_domain_nr(),
	 * which is a recipe for domain mishandling and it is prevented by
	 * invalidating the domain value (domain = -1) and printing a
	 * corresponding error.
	 */
	if (domain >= 0 && use_dt_domains) {
		use_dt_domains = 1;
	} else if (domain < 0 && use_dt_domains != 1) {
		use_dt_domains = 0;
		domain = pci_get_new_domain_nr();
	} else {
		if (parent)
			pr_err("Node %pOF has ", parent->of_node);
		pr_err("Inconsistent \"linux,pci-domain\" property in DT\n");
		domain = -1;
	/* On the first call scan device tree for static allocations. */
	if (!static_domains_reserved) {
		of_pci_reserve_static_domain_nr();
		static_domains_reserved = true;
	}

	return domain;
	if (parent) {
		/*
		 * If domain is in DT, allocate it in static IDA. This
		 * prevents duplicate static allocations in case of errors
		 * in DT.
		 */
		domain_nr = of_get_pci_domain_nr(parent->of_node);
		if (domain_nr >= 0)
			return ida_alloc_range(&pci_domain_nr_static_ida,
					       domain_nr, domain_nr,
					       GFP_KERNEL);
	}

	/*
	 * If domain was not specified in DT, choose a free ID from dynamic
	 * allocations. All domain numbers from DT are permanently in
	 * dynamic allocations to prevent assigning them to other DT nodes
	 * without static domain.
	 */
	return ida_alloc(&pci_domain_nr_dynamic_ida, GFP_KERNEL);
}

static void of_pci_bus_release_domain_nr(struct pci_bus *bus, struct device *parent)
{
	if (bus->domain_nr < 0)
		return;

	/* Release domain from IDA where it was allocated. */
	if (of_get_pci_domain_nr(parent->of_node) == bus->domain_nr)
		ida_free(&pci_domain_nr_static_ida, bus->domain_nr);
	else
		ida_free(&pci_domain_nr_dynamic_ida, bus->domain_nr);
}

int pci_bus_find_domain_nr(struct pci_bus *bus, struct device *parent)

@@ -6804,6 +6816,13 @@ int pci_bus_find_domain_nr(struct pci_bus *bus, struct device *parent)
	return acpi_disabled ? of_pci_bus_find_domain_nr(parent) :
			       acpi_pci_bus_find_domain_nr(bus);
}

void pci_bus_release_domain_nr(struct pci_bus *bus, struct device *parent)
{
	if (!acpi_disabled)
		return;
	of_pci_bus_release_domain_nr(bus, parent);
}
#endif

/**
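The allocator above leans on the kernel IDA API: a domain number named in DT is pinned in the dynamic IDA so ida_alloc() can never hand it out to another bridge, while the number actually used comes from (and is returned to) the matching IDA. A stripped-down sketch of that pattern, using a hypothetical example_ida rather than the pci_domain_nr_* IDAs:

#include <linux/gfp.h>
#include <linux/idr.h>

static DEFINE_IDA(example_ida);

/* Pin a fixed ID (e.g. one that firmware/DT already claims). */
static int example_reserve(unsigned int id)
{
    /* ida_alloc_range() with min == max reserves exactly that ID. */
    return ida_alloc_range(&example_ida, id, id, GFP_KERNEL);
}

/* Hand out the lowest free ID; reserved IDs are skipped automatically. */
static int example_get(void)
{
    return ida_alloc(&example_ida, GFP_KERNEL);	/* >= 0 on success */
}

static void example_put(int id)
{
    if (id >= 0)
        ida_free(&example_ida, id);
}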
@@ -15,6 +15,7 @@ extern const unsigned char pcie_link_speed[];
extern bool pci_early_dump;

bool pcie_cap_has_lnkctl(const struct pci_dev *dev);
bool pcie_cap_has_lnkctl2(const struct pci_dev *dev);
bool pcie_cap_has_rtctl(const struct pci_dev *dev);

/* Functions internal to the PCI core code */
@@ -4,6 +4,7 @@
#
config PCIEPORTBUS
	bool "PCI Express Port Bus support"
	default y if USB4
	help
	  This enables PCI Express Port Bus support. Users can then enable
	  support for Native Hot-Plug, Advanced Error Reporting, Power

@@ -15,9 +16,12 @@ config PCIEPORTBUS
config HOTPLUG_PCI_PCIE
	bool "PCI Express Hotplug driver"
	depends on HOTPLUG_PCI && PCIEPORTBUS
	default y if USB4
	help
	  Say Y here if you have a motherboard that supports PCI Express Native
	  Hotplug
	  Say Y here if you have a motherboard that supports PCIe native
	  hotplug.

	  Thunderbolt/USB4 PCIe tunneling depends on native PCIe hotplug.

	  When in doubt, say N.
@@ -2,7 +2,7 @@
#
# Makefile for PCI Express features and port driver

pcieportdrv-y := portdrv_core.o portdrv_pci.o rcec.o
pcieportdrv-y := portdrv.o rcec.o

obj-$(CONFIG_PCIEPORTBUS) += pcieportdrv.o
@@ -1,11 +1,13 @@
// SPDX-License-Identifier: GPL-2.0
/*
 * Purpose: PCI Express Port Bus Driver's Core Functions
 * Purpose: PCI Express Port Bus Driver
 *
 * Copyright (C) 2004 Intel
 * Copyright (C) Tom Long Nguyen (tom.l.nguyen@intel.com)
 */

#include <linux/dmi.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/kernel.h>

@@ -19,6 +21,15 @@
#include "../pci.h"
#include "portdrv.h"

/*
 * The PCIe Capability Interrupt Message Number (PCIe r3.1, sec 7.8.2) must
 * be one of the first 32 MSI-X entries. Per PCI r3.0, sec 6.8.3.1, MSI
 * supports a maximum of 32 vectors per function.
 */
#define PCIE_PORT_MAX_MSI_ENTRIES 32

#define get_descriptor_id(type, service) (((type - 4) << 8) | service)

struct portdrv_service_data {
	struct pcie_port_service_driver *drv;
	struct device *dev;

@@ -209,6 +220,8 @@ static int get_port_device_capability(struct pci_dev *dev)
	int services = 0;

	if (dev->is_hotplug_bridge &&
	    (pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT ||
	     pci_pcie_type(dev) == PCI_EXP_TYPE_DOWNSTREAM) &&
	    (pcie_ports_native || host->native_pcie_hotplug)) {
		services |= PCIE_PORT_SERVICE_HP;

@@ -221,7 +234,9 @@ static int get_port_device_capability(struct pci_dev *dev)
	}

#ifdef CONFIG_PCIEAER
	if (dev->aer_cap && pci_aer_available() &&
	if ((pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT ||
	     pci_pcie_type(dev) == PCI_EXP_TYPE_RC_EC) &&
	    dev->aer_cap && pci_aer_available() &&
	    (pcie_ports_native || host->native_aer))
		services |= PCIE_PORT_SERVICE_AER;
#endif

@@ -308,7 +323,7 @@ static int pcie_device_init(struct pci_dev *pdev, int service, int irq)
 * Allocate the port extension structure and register services associated with
 * the port.
 */
int pcie_port_device_register(struct pci_dev *dev)
static int pcie_port_device_register(struct pci_dev *dev)
{
	int status, capabilities, i, nr_service;
	int irqs[PCIE_PORT_DEVICE_MAXSERVICES];

@@ -362,7 +377,7 @@ error_disable:

typedef int (*pcie_callback_t)(struct pcie_device *);

int pcie_port_device_iter(struct device *dev, void *data)
static int pcie_port_device_iter(struct device *dev, void *data)
{
	struct pcie_port_service_driver *service_driver;
	size_t offset = *(size_t *)data;

@@ -382,13 +397,13 @@ int pcie_port_device_iter(struct device *dev, void *data)
 * pcie_port_device_suspend - suspend port services associated with a PCIe port
 * @dev: PCI Express port to handle
 */
int pcie_port_device_suspend(struct device *dev)
static int pcie_port_device_suspend(struct device *dev)
{
	size_t off = offsetof(struct pcie_port_service_driver, suspend);
	return device_for_each_child(dev, &off, pcie_port_device_iter);
}

int pcie_port_device_resume_noirq(struct device *dev)
static int pcie_port_device_resume_noirq(struct device *dev)
{
	size_t off = offsetof(struct pcie_port_service_driver, resume_noirq);
	return device_for_each_child(dev, &off, pcie_port_device_iter);

@@ -398,7 +413,7 @@ int pcie_port_device_resume_noirq(struct device *dev)
 * pcie_port_device_resume - resume port services associated with a PCIe port
 * @dev: PCI Express port to handle
 */
int pcie_port_device_resume(struct device *dev)
static int pcie_port_device_resume(struct device *dev)
{
	size_t off = offsetof(struct pcie_port_service_driver, resume);
	return device_for_each_child(dev, &off, pcie_port_device_iter);

@@ -408,7 +423,7 @@ int pcie_port_device_resume(struct device *dev)
 * pcie_port_device_runtime_suspend - runtime suspend port services
 * @dev: PCI Express port to handle
 */
int pcie_port_device_runtime_suspend(struct device *dev)
static int pcie_port_device_runtime_suspend(struct device *dev)
{
	size_t off = offsetof(struct pcie_port_service_driver, runtime_suspend);
	return device_for_each_child(dev, &off, pcie_port_device_iter);

@@ -418,7 +433,7 @@ int pcie_port_device_runtime_suspend(struct device *dev)
 * pcie_port_device_runtime_resume - runtime resume port services
 * @dev: PCI Express port to handle
 */
int pcie_port_device_runtime_resume(struct device *dev)
static int pcie_port_device_runtime_resume(struct device *dev)
{
	size_t off = offsetof(struct pcie_port_service_driver, runtime_resume);
	return device_for_each_child(dev, &off, pcie_port_device_iter);

@@ -482,7 +497,7 @@ EXPORT_SYMBOL_GPL(pcie_port_find_device);
 * Remove PCI Express port service devices associated with given port and
 * disable MSI-X or MSI for the port.
 */
void pcie_port_device_remove(struct pci_dev *dev)
static void pcie_port_device_remove(struct pci_dev *dev)
{
	device_for_each_child(&dev->dev, NULL, remove_iter);
	pci_free_irq_vectors(dev);

@@ -573,7 +588,6 @@ int pcie_port_service_register(struct pcie_port_service_driver *new)

	return driver_register(&new->driver);
}
EXPORT_SYMBOL(pcie_port_service_register);

/**
 * pcie_port_service_unregister - unregister PCI Express port service driver

@@ -583,4 +597,235 @@ void pcie_port_service_unregister(struct pcie_port_service_driver *drv)
{
	driver_unregister(&drv->driver);
}
EXPORT_SYMBOL(pcie_port_service_unregister);

/* If this switch is set, PCIe port native services should not be enabled. */
bool pcie_ports_disabled;

/*
 * If the user specified "pcie_ports=native", use the PCIe services regardless
 * of whether the platform has given us permission. On ACPI systems, this
 * means we ignore _OSC.
 */
bool pcie_ports_native;

/*
 * If the user specified "pcie_ports=dpc-native", use the Linux DPC PCIe
 * service even if the platform hasn't given us permission.
 */
bool pcie_ports_dpc_native;

static int __init pcie_port_setup(char *str)
{
	if (!strncmp(str, "compat", 6))
		pcie_ports_disabled = true;
	else if (!strncmp(str, "native", 6))
		pcie_ports_native = true;
	else if (!strncmp(str, "dpc-native", 10))
		pcie_ports_dpc_native = true;

	return 1;
}
__setup("pcie_ports=", pcie_port_setup);

/* global data */

#ifdef CONFIG_PM
static int pcie_port_runtime_suspend(struct device *dev)
{
	if (!to_pci_dev(dev)->bridge_d3)
		return -EBUSY;

	return pcie_port_device_runtime_suspend(dev);
}

static int pcie_port_runtime_idle(struct device *dev)
{
	/*
	 * Assume the PCI core has set bridge_d3 whenever it thinks the port
	 * should be good to go to D3. Everything else, including moving
	 * the port to D3, is handled by the PCI core.
	 */
	return to_pci_dev(dev)->bridge_d3 ? 0 : -EBUSY;
}

static const struct dev_pm_ops pcie_portdrv_pm_ops = {
	.suspend = pcie_port_device_suspend,
	.resume_noirq = pcie_port_device_resume_noirq,
	.resume = pcie_port_device_resume,
	.freeze = pcie_port_device_suspend,
	.thaw = pcie_port_device_resume,
	.poweroff = pcie_port_device_suspend,
	.restore_noirq = pcie_port_device_resume_noirq,
	.restore = pcie_port_device_resume,
	.runtime_suspend = pcie_port_runtime_suspend,
	.runtime_resume = pcie_port_device_runtime_resume,
	.runtime_idle = pcie_port_runtime_idle,
};

#define PCIE_PORTDRV_PM_OPS (&pcie_portdrv_pm_ops)

#else /* !PM */

#define PCIE_PORTDRV_PM_OPS NULL
#endif /* !PM */

/*
 * pcie_portdrv_probe - Probe PCI-Express port devices
 * @dev: PCI-Express port device being probed
 *
 * If detected invokes the pcie_port_device_register() method for
 * this port device.
 *
 */
static int pcie_portdrv_probe(struct pci_dev *dev,
			      const struct pci_device_id *id)
{
	int type = pci_pcie_type(dev);
	int status;

	if (!pci_is_pcie(dev) ||
	    ((type != PCI_EXP_TYPE_ROOT_PORT) &&
	     (type != PCI_EXP_TYPE_UPSTREAM) &&
	     (type != PCI_EXP_TYPE_DOWNSTREAM) &&
	     (type != PCI_EXP_TYPE_RC_EC)))
		return -ENODEV;

	if (type == PCI_EXP_TYPE_RC_EC)
		pcie_link_rcec(dev);

	status = pcie_port_device_register(dev);
	if (status)
		return status;

	pci_save_state(dev);

	dev_pm_set_driver_flags(&dev->dev, DPM_FLAG_NO_DIRECT_COMPLETE |
					   DPM_FLAG_SMART_SUSPEND);

	if (pci_bridge_d3_possible(dev)) {
		/*
		 * Keep the port resumed 100ms to make sure things like
		 * config space accesses from userspace (lspci) will not
		 * cause the port to repeatedly suspend and resume.
		 */
		pm_runtime_set_autosuspend_delay(&dev->dev, 100);
		pm_runtime_use_autosuspend(&dev->dev);
		pm_runtime_mark_last_busy(&dev->dev);
		pm_runtime_put_autosuspend(&dev->dev);
		pm_runtime_allow(&dev->dev);
	}

	return 0;
}

static void pcie_portdrv_remove(struct pci_dev *dev)
{
	if (pci_bridge_d3_possible(dev)) {
		pm_runtime_forbid(&dev->dev);
		pm_runtime_get_noresume(&dev->dev);
		pm_runtime_dont_use_autosuspend(&dev->dev);
	}

	pcie_port_device_remove(dev);
}

static pci_ers_result_t pcie_portdrv_error_detected(struct pci_dev *dev,
						    pci_channel_state_t error)
{
	if (error == pci_channel_io_frozen)
		return PCI_ERS_RESULT_NEED_RESET;
	return PCI_ERS_RESULT_CAN_RECOVER;
}

static pci_ers_result_t pcie_portdrv_slot_reset(struct pci_dev *dev)
{
	size_t off = offsetof(struct pcie_port_service_driver, slot_reset);
	device_for_each_child(&dev->dev, &off, pcie_port_device_iter);

	pci_restore_state(dev);
	pci_save_state(dev);
	return PCI_ERS_RESULT_RECOVERED;
}

static pci_ers_result_t pcie_portdrv_mmio_enabled(struct pci_dev *dev)
{
	return PCI_ERS_RESULT_RECOVERED;
}

/*
 * LINUX Device Driver Model
 */
static const struct pci_device_id port_pci_ids[] = {
	/* handle any PCI-Express port */
	{ PCI_DEVICE_CLASS(PCI_CLASS_BRIDGE_PCI_NORMAL, ~0) },
	/* subtractive decode PCI-to-PCI bridge, class type is 060401h */
	{ PCI_DEVICE_CLASS(PCI_CLASS_BRIDGE_PCI_SUBTRACTIVE, ~0) },
	/* handle any Root Complex Event Collector */
	{ PCI_DEVICE_CLASS(((PCI_CLASS_SYSTEM_RCEC << 8) | 0x00), ~0) },
	{ },
};

static const struct pci_error_handlers pcie_portdrv_err_handler = {
	.error_detected = pcie_portdrv_error_detected,
	.slot_reset = pcie_portdrv_slot_reset,
	.mmio_enabled = pcie_portdrv_mmio_enabled,
};

static struct pci_driver pcie_portdriver = {
	.name = "pcieport",
	.id_table = &port_pci_ids[0],

	.probe = pcie_portdrv_probe,
	.remove = pcie_portdrv_remove,
	.shutdown = pcie_portdrv_remove,

	.err_handler = &pcie_portdrv_err_handler,

	.driver_managed_dma = true,

	.driver.pm = PCIE_PORTDRV_PM_OPS,
};

static int __init dmi_pcie_pme_disable_msi(const struct dmi_system_id *d)
{
	pr_notice("%s detected: will not use MSI for PCIe PME signaling\n",
		  d->ident);
	pcie_pme_disable_msi();
	return 0;
}

static const struct dmi_system_id pcie_portdrv_dmi_table[] __initconst = {
	/*
	 * Boxes that should not use MSI for PCIe PME signaling.
	 */
	{
		.callback = dmi_pcie_pme_disable_msi,
		.ident = "MSI Wind U-100",
		.matches = {
			DMI_MATCH(DMI_SYS_VENDOR,
				  "MICRO-STAR INTERNATIONAL CO., LTD"),
			DMI_MATCH(DMI_PRODUCT_NAME, "U-100"),
		},
	},
	{}
};

static void __init pcie_init_services(void)
{
	pcie_aer_init();
	pcie_pme_init();
	pcie_dpc_init();
	pcie_hp_init();
}

static int __init pcie_portdrv_init(void)
{
	if (pcie_ports_disabled)
		return -EACCES;

	pcie_init_services();
	dmi_check_system(pcie_portdrv_dmi_table);

	return pci_register_driver(&pcie_portdriver);
}
device_initcall(pcie_portdrv_init);
@@ -98,26 +98,7 @@ struct pcie_port_service_driver {
int pcie_port_service_register(struct pcie_port_service_driver *new);
void pcie_port_service_unregister(struct pcie_port_service_driver *new);

/*
 * The PCIe Capability Interrupt Message Number (PCIe r3.1, sec 7.8.2) must
 * be one of the first 32 MSI-X entries. Per PCI r3.0, sec 6.8.3.1, MSI
 * supports a maximum of 32 vectors per function.
 */
#define PCIE_PORT_MAX_MSI_ENTRIES 32

#define get_descriptor_id(type, service) (((type - 4) << 8) | service)

extern struct bus_type pcie_port_bus_type;
int pcie_port_device_register(struct pci_dev *dev);
int pcie_port_device_iter(struct device *dev, void *data);
#ifdef CONFIG_PM
int pcie_port_device_suspend(struct device *dev);
int pcie_port_device_resume_noirq(struct device *dev);
int pcie_port_device_resume(struct device *dev);
int pcie_port_device_runtime_suspend(struct device *dev);
int pcie_port_device_runtime_resume(struct device *dev);
#endif
void pcie_port_device_remove(struct pci_dev *dev);

struct pci_dev;
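For context, pcie_port_service_register() stays exported; only the port-device plumbing becomes static in the squashed portdrv.c. A hypothetical service driver sketched after the in-tree AER/PME/DPC ones (all "example" names are illustrative, and reusing the PME service slot is shown only for the sake of the sketch):

#include "portdrv.h"

static int example_service_probe(struct pcie_device *dev)
{
    /* dev->port is the Root Port / Switch Port pci_dev this service rides on. */
    dev_info(&dev->device, "example service bound to %s\n",
             pci_name(dev->port));
    return 0;
}

static void example_service_remove(struct pcie_device *dev)
{
}

static struct pcie_port_service_driver example_service_driver = {
    .name      = "example_service",
    .port_type = PCIE_ANY_PORT,
    .service   = PCIE_PORT_SERVICE_PME,   /* one of the defined service slots */
    .probe     = example_service_probe,
    .remove    = example_service_remove,
};

static int __init example_service_init(void)
{
    return pcie_port_service_register(&example_service_driver);
}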
@@ -1,252 +0,0 @@
// SPDX-License-Identifier: GPL-2.0
/*
 * Purpose: PCI Express Port Bus Driver
 * Author: Tom Nguyen <tom.l.nguyen@intel.com>
 *
 * Copyright (C) 2004 Intel
 * Copyright (C) Tom Long Nguyen (tom.l.nguyen@intel.com)
 */

#include <linux/pci.h>
#include <linux/kernel.h>
#include <linux/errno.h>
#include <linux/pm.h>
#include <linux/pm_runtime.h>
#include <linux/init.h>
#include <linux/aer.h>
#include <linux/dmi.h>

#include "../pci.h"
#include "portdrv.h"

/* If this switch is set, PCIe port native services should not be enabled. */
bool pcie_ports_disabled;

/*
 * If the user specified "pcie_ports=native", use the PCIe services regardless
 * of whether the platform has given us permission. On ACPI systems, this
 * means we ignore _OSC.
 */
bool pcie_ports_native;

/*
 * If the user specified "pcie_ports=dpc-native", use the Linux DPC PCIe
 * service even if the platform hasn't given us permission.
 */
bool pcie_ports_dpc_native;

static int __init pcie_port_setup(char *str)
{
	if (!strncmp(str, "compat", 6))
		pcie_ports_disabled = true;
	else if (!strncmp(str, "native", 6))
		pcie_ports_native = true;
	else if (!strncmp(str, "dpc-native", 10))
		pcie_ports_dpc_native = true;

	return 1;
}
__setup("pcie_ports=", pcie_port_setup);

/* global data */

#ifdef CONFIG_PM
static int pcie_port_runtime_suspend(struct device *dev)
{
	if (!to_pci_dev(dev)->bridge_d3)
		return -EBUSY;

	return pcie_port_device_runtime_suspend(dev);
}

static int pcie_port_runtime_idle(struct device *dev)
{
	/*
	 * Assume the PCI core has set bridge_d3 whenever it thinks the port
	 * should be good to go to D3. Everything else, including moving
	 * the port to D3, is handled by the PCI core.
	 */
	return to_pci_dev(dev)->bridge_d3 ? 0 : -EBUSY;
}

static const struct dev_pm_ops pcie_portdrv_pm_ops = {
	.suspend = pcie_port_device_suspend,
	.resume_noirq = pcie_port_device_resume_noirq,
	.resume = pcie_port_device_resume,
	.freeze = pcie_port_device_suspend,
	.thaw = pcie_port_device_resume,
	.poweroff = pcie_port_device_suspend,
	.restore_noirq = pcie_port_device_resume_noirq,
	.restore = pcie_port_device_resume,
	.runtime_suspend = pcie_port_runtime_suspend,
	.runtime_resume = pcie_port_device_runtime_resume,
	.runtime_idle = pcie_port_runtime_idle,
};

#define PCIE_PORTDRV_PM_OPS (&pcie_portdrv_pm_ops)

#else /* !PM */

#define PCIE_PORTDRV_PM_OPS NULL
#endif /* !PM */

/*
 * pcie_portdrv_probe - Probe PCI-Express port devices
 * @dev: PCI-Express port device being probed
 *
 * If detected invokes the pcie_port_device_register() method for
 * this port device.
 *
 */
static int pcie_portdrv_probe(struct pci_dev *dev,
			      const struct pci_device_id *id)
{
	int type = pci_pcie_type(dev);
	int status;

	if (!pci_is_pcie(dev) ||
	    ((type != PCI_EXP_TYPE_ROOT_PORT) &&
	     (type != PCI_EXP_TYPE_UPSTREAM) &&
	     (type != PCI_EXP_TYPE_DOWNSTREAM) &&
	     (type != PCI_EXP_TYPE_RC_EC)))
		return -ENODEV;

	if (type == PCI_EXP_TYPE_RC_EC)
		pcie_link_rcec(dev);

	status = pcie_port_device_register(dev);
	if (status)
		return status;

	pci_save_state(dev);

	dev_pm_set_driver_flags(&dev->dev, DPM_FLAG_NO_DIRECT_COMPLETE |
					   DPM_FLAG_SMART_SUSPEND);

	if (pci_bridge_d3_possible(dev)) {
		/*
		 * Keep the port resumed 100ms to make sure things like
		 * config space accesses from userspace (lspci) will not
		 * cause the port to repeatedly suspend and resume.
		 */
		pm_runtime_set_autosuspend_delay(&dev->dev, 100);
		pm_runtime_use_autosuspend(&dev->dev);
		pm_runtime_mark_last_busy(&dev->dev);
		pm_runtime_put_autosuspend(&dev->dev);
		pm_runtime_allow(&dev->dev);
	}

	return 0;
}

static void pcie_portdrv_remove(struct pci_dev *dev)
{
	if (pci_bridge_d3_possible(dev)) {
		pm_runtime_forbid(&dev->dev);
		pm_runtime_get_noresume(&dev->dev);
		pm_runtime_dont_use_autosuspend(&dev->dev);
	}

	pcie_port_device_remove(dev);
}

static pci_ers_result_t pcie_portdrv_error_detected(struct pci_dev *dev,
						    pci_channel_state_t error)
{
	if (error == pci_channel_io_frozen)
		return PCI_ERS_RESULT_NEED_RESET;
	return PCI_ERS_RESULT_CAN_RECOVER;
}

static pci_ers_result_t pcie_portdrv_slot_reset(struct pci_dev *dev)
{
	size_t off = offsetof(struct pcie_port_service_driver, slot_reset);
	device_for_each_child(&dev->dev, &off, pcie_port_device_iter);

	pci_restore_state(dev);
	pci_save_state(dev);
	return PCI_ERS_RESULT_RECOVERED;
}

static pci_ers_result_t pcie_portdrv_mmio_enabled(struct pci_dev *dev)
{
	return PCI_ERS_RESULT_RECOVERED;
}

/*
 * LINUX Device Driver Model
 */
static const struct pci_device_id port_pci_ids[] = {
	/* handle any PCI-Express port */
	{ PCI_DEVICE_CLASS(PCI_CLASS_BRIDGE_PCI_NORMAL, ~0) },
	/* subtractive decode PCI-to-PCI bridge, class type is 060401h */
	{ PCI_DEVICE_CLASS(PCI_CLASS_BRIDGE_PCI_SUBTRACTIVE, ~0) },
	/* handle any Root Complex Event Collector */
	{ PCI_DEVICE_CLASS(((PCI_CLASS_SYSTEM_RCEC << 8) | 0x00), ~0) },
	{ },
};

static const struct pci_error_handlers pcie_portdrv_err_handler = {
	.error_detected = pcie_portdrv_error_detected,
	.slot_reset = pcie_portdrv_slot_reset,
	.mmio_enabled = pcie_portdrv_mmio_enabled,
};

static struct pci_driver pcie_portdriver = {
	.name = "pcieport",
	.id_table = &port_pci_ids[0],

	.probe = pcie_portdrv_probe,
	.remove = pcie_portdrv_remove,
	.shutdown = pcie_portdrv_remove,

	.err_handler = &pcie_portdrv_err_handler,

	.driver_managed_dma = true,

	.driver.pm = PCIE_PORTDRV_PM_OPS,
};

static int __init dmi_pcie_pme_disable_msi(const struct dmi_system_id *d)
{
	pr_notice("%s detected: will not use MSI for PCIe PME signaling\n",
		  d->ident);
	pcie_pme_disable_msi();
	return 0;
}

static const struct dmi_system_id pcie_portdrv_dmi_table[] __initconst = {
	/*
	 * Boxes that should not use MSI for PCIe PME signaling.
	 */
	{
		.callback = dmi_pcie_pme_disable_msi,
		.ident = "MSI Wind U-100",
		.matches = {
			DMI_MATCH(DMI_SYS_VENDOR,
				  "MICRO-STAR INTERNATIONAL CO., LTD"),
			DMI_MATCH(DMI_PRODUCT_NAME, "U-100"),
		},
	},
	{}
};

static void __init pcie_init_services(void)
{
	pcie_aer_init();
	pcie_pme_init();
	pcie_dpc_init();
	pcie_hp_init();
}

static int __init pcie_portdrv_init(void)
{
	if (pcie_ports_disabled)
		return -EACCES;

	pcie_init_services();
	dmi_check_system(pcie_portdrv_dmi_table);

	return pci_register_driver(&pcie_portdriver);
}
device_initcall(pcie_portdrv_init);
@@ -904,6 +904,10 @@ static int pci_register_host_bridge(struct pci_host_bridge *bridge)
		bus->domain_nr = pci_bus_find_domain_nr(bus, parent);
	else
		bus->domain_nr = bridge->domain_nr;
	if (bus->domain_nr < 0) {
		err = bus->domain_nr;
		goto free;
	}
#endif

	b = pci_find_bus(pci_domain_nr(bus), bridge->busnr);

@@ -1028,6 +1032,9 @@ unregister:
	device_del(&bridge->dev);

free:
#ifdef CONFIG_PCI_DOMAINS_GENERIC
	pci_bus_release_domain_nr(bus, parent);
#endif
	kfree(bus);
	return err;
}

@@ -1889,9 +1896,6 @@ int pci_setup_device(struct pci_dev *dev)

	dev->broken_intx_masking = pci_intx_mask_broken(dev);

	/* Clear errors left from system firmware */
	pci_write_config_word(dev, PCI_STATUS, 0xffff);

	switch (dev->hdr_type) { /* header type */
	case PCI_HEADER_TYPE_NORMAL: /* standard header */
		if (class == PCI_CLASS_BRIDGE_PCI)
@@ -160,6 +160,12 @@ void pci_remove_root_bus(struct pci_bus *bus)
	pci_remove_bus(bus);
	host_bridge->bus = NULL;

#ifdef CONFIG_PCI_DOMAINS_GENERIC
	/* Release domain_nr if it was dynamically allocated */
	if (host_bridge->domain_nr == PCI_DOMAIN_NR_NOT_SET)
		pci_bus_release_domain_nr(bus, host_bridge->dev.parent);
#endif

	/* remove the host bridge */
	device_del(&host_bridge->dev);
}
@@ -1760,6 +1760,7 @@ static inline int acpi_pci_bus_find_domain_nr(struct pci_bus *bus)
{ return 0; }
#endif
int pci_bus_find_domain_nr(struct pci_bus *bus, struct device *parent);
void pci_bus_release_domain_nr(struct pci_bus *bus, struct device *parent);
#endif

/* Some architectures require additional setup to direct VGA traffic */
@@ -1058,6 +1058,7 @@
/* Precision Time Measurement */
#define PCI_PTM_CAP                  0x04        /* PTM Capability */
#define PCI_PTM_CAP_REQ              0x00000001  /* Requester capable */
#define PCI_PTM_CAP_RES              0x00000002  /* Responder capable */
#define PCI_PTM_CAP_ROOT             0x00000004  /* Root capable */
#define PCI_PTM_GRANULARITY_MASK     0x0000FF00  /* Clock granularity */
#define PCI_PTM_CTRL                 0x08        /* PTM Control */
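A hedged sketch of how the PTM Capability bits above can be read back from config space, for instance to confirm that an Endpoint no longer advertises the Responder role; example_ptm_query() is a hypothetical helper, not part of the series:

#include <linux/bitfield.h>
#include <linux/pci.h>

static void example_ptm_query(struct pci_dev *dev)
{
    u16 ptm = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_PTM);
    u32 cap, gran;

    if (!ptm)
        return;             /* no PTM extended capability */

    pci_read_config_dword(dev, ptm + PCI_PTM_CAP, &cap);
    gran = FIELD_GET(PCI_PTM_GRANULARITY_MASK, cap);
    pci_info(dev, "PTM: requester=%d responder=%d root=%d granularity=%u ns\n",
             !!(cap & PCI_PTM_CAP_REQ), !!(cap & PCI_PTM_CAP_RES),
             !!(cap & PCI_PTM_CAP_ROOT), gran);
}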