Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next
Pull networking updates from David Miller:
 "Another merge window, another set of networking changes.  I've heard
  rumblings that the lightweight tunnels infrastructure has been voted
  networking change of the year.  But what do I know?

   1) Add conntrack support to openvswitch, from Joe Stringer.

   2) Initial support for VRF (Virtual Routing and Forwarding), which
      allows the segmentation of routing paths without using multiple
      devices.  There are some semantic kinks to work out still, but
      this is a reasonably strong foundation.  From David Ahern.

   3) Remove spinlock from act_bpf fast path, from Alexei Starovoitov.

   4) Ignore route nexthops with a link down state in ipv6, just like
      ipv4.  From Andy Gospodarek.

   5) Remove spinlock from fast path of act_gact and act_mirred, from
      Eric Dumazet.

   6) Document the DSA layer, from Florian Fainelli.

   7) Add netconsole support to bcmgenet, systemport, and DSA.  Also
      from Florian Fainelli.

   8) Add Mellanox Switch Driver and core infrastructure, from Jiri
      Pirko.

   9) Add support for "light weight tunnels", which allow for
      encapsulation and decapsulation without bearing the overhead of a
      full blown netdevice.  From Thomas Graf, Jiri Benc, and a cast of
      others.

  10) Add Identifier Locator Addressing support for ipv6, from Tom
      Herbert.

  11) Support fragmented SKBs in iwlwifi, from Johannes Berg.

  12) Allow perf PMUs to be accessed from eBPF programs, from Kaixu
      Xia.

  13) Add BQL support to 3c59x driver, from Loganaden Velvindron.

  14) Stop using a zero TX queue length to mean that a device shouldn't
      have a qdisc attached, use an explicit flag instead.  From Phil
      Sutter.

  15) Use generic geneve netdevice infrastructure in openvswitch, from
      Pravin B Shelar.

  16) Add infrastructure to avoid re-forwarding a packet in software
      that was already forwarded by a hardware switch.  From Scott
      Feldman.

  17) Allow AF_PACKET fanout function to be implemented in a bpf
      program, from Willem de Bruijn"

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next: (1458 commits)
  netfilter: nf_conntrack: make nf_ct_zone_dflt built-in
  netfilter: nf_dup{4, 6}: fix build error when nf_conntrack disabled
  net: fec: clear receive interrupts before processing a packet
  ipv6: fix exthdrs offload registration in out_rt path
  xen-netback: add support for multicast control
  bgmac: Update fixed_phy_register()
  sock, diag: fix panic in sock_diag_put_filterinfo
  flow_dissector: Use 'const' where possible.
  flow_dissector: Fix function argument ordering dependency
  ixgbe: Resolve "initialized field overwritten" warnings
  ixgbe: Remove bimodal SR-IOV disabling
  ixgbe: Add support for reporting 2.5G link speed
  ixgbe: fix bounds checking in ixgbe_setup_tc for 82598
  ixgbe: support for ethtool set_rxfh
  ixgbe: Avoid needless PHY access on copper phys
  ixgbe: cleanup to use cached mask value
  ixgbe: Remove second instance of lan_id variable
  ixgbe: use kzalloc for allocating one thing
  flow: Move __get_hash_from_flowi{4,6} into flow_dissector.c
  ixgbe: Remove unused PCI bus types
  ...
commit dd5cdb48ed
@ -44,9 +44,10 @@ Note that a port labelled "dsa" will imply checking for the uplink phandle
described below.

Optional property:
- link : Should be a phandle to another switch's DSA port.
|
||||
- link : Should be a list of phandles to another switch's DSA port.
|
||||
This property is only used when switches are being
|
||||
chained/cascaded together.
|
||||
chained/cascaded together. This port is used as outgoing port
|
||||
towards the phandle port, which can be more than one hop away.
|
||||
|
||||
- phy-handle : Phandle to a PHY on an external MDIO bus, not the
|
||||
switch internal one. See
|
||||
|
@ -58,6 +59,10 @@ Optionnal property:
|
|||
Documentation/devicetree/bindings/net/ethernet.txt
|
||||
for details.
|
||||
|
||||
- mii-bus : Should be a phandle to a valid MDIO bus device node.
|
||||
This mii-bus will be used in preference to the
|
||||
global dsa,mii-bus defined above, for this switch.
|
||||
|
||||
Optional subnodes:
|
||||
- fixed-link : Fixed-link subnode describing a link to a non-MDIO
|
||||
managed entity. See
|
||||
|
@ -96,10 +101,11 @@ Example:
|
|||
label = "cpu";
|
||||
};
|
||||
|
||||
switch0uplink: port@6 {
|
||||
switch0port6: port@6 {
|
||||
reg = <6>;
|
||||
label = "dsa";
|
||||
link = <&switch1uplink>;
|
||||
link = <&switch1port0
|
||||
&switch2port0>;
|
||||
};
|
||||
};
|
||||
|
||||
|
@ -107,11 +113,31 @@ Example:
|
|||
#address-cells = <1>;
|
||||
#size-cells = <0>;
|
||||
reg = <17 1>; /* MDIO address 17, switch 1 in tree */
|
||||
mii-bus = <&mii_bus1>;
|
||||
|
||||
switch1uplink: port@0 {
|
||||
switch1port0: port@0 {
|
||||
reg = <0>;
|
||||
label = "dsa";
|
||||
link = <&switch0uplink>;
|
||||
link = <&switch0port6>;
|
||||
};
|
||||
switch1port1: port@1 {
|
||||
reg = <1>;
|
||||
label = "dsa";
|
||||
link = <&switch2port1>;
|
||||
};
|
||||
};
|
||||
|
||||
switch@2 {
|
||||
#address-cells = <1>;
|
||||
#size-cells = <0>;
|
||||
reg = <18 2>; /* MDIO address 18, switch 2 in tree */
|
||||
mii-bus = <&mii_bus1>;
|
||||
|
||||
switch2port0: port@0 {
|
||||
reg = <0>;
|
||||
label = "dsa";
|
||||
link = <&switch1port1
|
||||
&switch0port6>;
|
||||
};
|
||||
};
|
||||
};
@ -25,7 +25,11 @@ The following properties are common to the Ethernet controllers:
|
|||
flow control thresholds.
|
||||
- tx-fifo-depth: the size of the controller's transmit fifo in bytes. This
|
||||
is used for components that can have configurable fifo sizes.
|
||||
- managed: string, specifies the PHY management type. Supported values are:
"auto", "in-band-status". "auto" is the default, it uses MDIO for
management if fixed-link is not specified.
|
||||
|
||||
Child nodes of the Ethernet controller are typically the individual PHY devices
|
||||
connected via the MDIO bus (sometimes the MDIO bus controller is separate).
|
||||
They are described in the phy.txt file in this same directory.
|
||||
For non-MDIO PHY management see fixed-link.txt.
@ -17,6 +17,8 @@ properties:
|
|||
enabled.
|
||||
* 'asym-pause' (boolean, optional), to indicate that asym_pause should
|
||||
be enabled.
|
||||
* 'link-gpios' ('gpio-list', optional), to indicate if a gpio can be read
|
||||
to determine if the link is up.
|
||||
|
||||
Old, deprecated 'fixed-link' binding:
|
||||
|
||||
|
@ -30,7 +32,7 @@ Old, deprecated 'fixed-link' binding:
|
|||
- e: asymmetric pause configuration: 0 for no asymmetric pause, 1 for
|
||||
asymmetric pause
|
||||
|
||||
Example:
|
||||
Examples:
|
||||
|
||||
ethernet@0 {
|
||||
...
|
||||
|
@ -40,3 +42,13 @@ ethernet@0 {
|
|||
};
|
||||
...
|
||||
};
|
||||
|
||||
ethernet@1 {
|
||||
...
|
||||
fixed-link {
|
||||
speed = <1000>;
|
||||
pause;
|
||||
link-gpios = <&gpio0 12 GPIO_ACTIVE_HIGH>;
|
||||
};
|
||||
...
|
||||
};
@ -130,7 +130,11 @@ Required properties:
|
|||
|
||||
Optional properties:
|
||||
- efuse-mac: If this is 1, then the MAC address for the interface is
|
||||
obtained from the device efuse mac address register
|
||||
obtained from the device efuse mac address register.
|
||||
If this is 2, the two DWORDs occupied by the MAC address
|
||||
are swapped. The netcp driver will swap the two DWORDs
|
||||
back to the proper order when this property is set to 2
|
||||
when it obtains the mac address from efuse.
|
||||
- local-mac-address: the driver is designed to use the of_get_mac_address api
|
||||
only if efuse-mac is 0. When efuse-mac is 0, the MAC
|
||||
address is obtained from local-mac-address. If this
@ -0,0 +1,27 @@
|
|||
* Samsung S3FWRN5 NCI NFC Controller
|
||||
|
||||
Required properties:
|
||||
- compatible: Should be "samsung,s3fwrn5-i2c".
|
||||
- reg: address on the bus
|
||||
- interrupt-parent: phandle for the interrupt gpio controller
|
||||
- interrupts: GPIO interrupt to which the chip is connected
|
||||
- s3fwrn5,en-gpios: Output GPIO pin used for enabling/disabling the chip
|
||||
- s3fwrn5,fw-gpios: Output GPIO pin used to enter firmware mode and
|
||||
sleep/wakeup control
|
||||
|
||||
Example:
|
||||
|
||||
&hsi2c_4 {
|
||||
status = "okay";
|
||||
s3fwrn5@27 {
|
||||
compatible = "samsung,s3fwrn5-i2c";
|
||||
|
||||
reg = <0x27>;
|
||||
|
||||
interrupt-parent = <&gpa1>;
|
||||
interrupts = <3 0 0>;
|
||||
|
||||
s3fwrn5,en-gpios = <&gpf1 4 0>;
|
||||
s3fwrn5,fw-gpios = <&gpj0 2 0>;
|
||||
};
|
||||
};
@ -0,0 +1,31 @@
|
|||
* STMicroelectronics SAS. ST NCI NFC Controller
|
||||
|
||||
Required properties:
|
||||
- compatible: Should be "st,st21nfcb-spi"
|
||||
- spi-max-frequency: Maximum SPI frequency (<= 10000000).
|
||||
- interrupt-parent: phandle for the interrupt gpio controller
|
||||
- interrupts: GPIO interrupt to which the chip is connected
|
||||
- reset-gpios: Output GPIO pin used to reset the ST21NFCB
|
||||
|
||||
Optional SoC Specific Properties:
|
||||
- pinctrl-names: Contains only one value - "default".
|
||||
- pinctrl-0: Specifies the pin control groups used for this controller.
|
||||
Example (for ARM-based BeagleBoard xM with ST21NFCB on SPI4):
|
||||
|
||||
&mcspi4 {
|
||||
|
||||
status = "okay";
|
||||
|
||||
st21nfcb: st21nfcb@0 {
|
||||
|
||||
compatible = "st,st21nfcb-spi";
|
||||
|
||||
clock-frequency = <4000000>;
|
||||
|
||||
interrupt-parent = <&gpio5>;
|
||||
interrupts = <2 IRQ_TYPE_EDGE_RISING>;
|
||||
|
||||
reset-gpios = <&gpio5 29 GPIO_ACTIVE_HIGH>;
|
||||
};
|
||||
};
@ -0,0 +1,75 @@
|
|||
* Synopsys DWC Ethernet QoS IP version 4.10 driver (GMAC)
|
||||
|
||||
|
||||
Required properties:
|
||||
- compatible: Should be "snps,dwc-qos-ethernet-4.10"
|
||||
- reg: Address and length of the register set for the device
|
||||
- clocks: Phandles to the reference clock and the bus clock
|
||||
- clock-names: Should be "phy_ref_clk" for the reference clock and "apb_pclk"
|
||||
for the bus clock.
|
||||
- interrupt-parent: Should be the phandle for the interrupt controller
|
||||
that services interrupts for this device
|
||||
- interrupts: Should contain the core's combined interrupt signal
|
||||
- phy-mode: See ethernet.txt file in the same directory
|
||||
|
||||
Optional properties:
|
||||
- dma-coherent: Present if dma operations are coherent
|
||||
- mac-address: See ethernet.txt in the same directory
|
||||
- local-mac-address: See ethernet.txt in the same directory
|
||||
- snps,en-lpi: If present it enables use of the AXI low-power interface
|
||||
- snps,write-requests: Number of write requests that the AXI port can issue.
|
||||
It depends on the SoC configuration.
|
||||
- snps,read-requests: Number of read requests that the AXI port can issue.
|
||||
It depends on the SoC configuration.
|
||||
- snps,burst-map: Bitmap of allowed AXI burst lengths, with the LSB
representing 4, then 8 etc.
|
||||
- snps,txpbl: DMA Programmable burst length for the TX DMA
|
||||
- snps,rxpbl: DMA Programmable burst length for the RX DMA
|
||||
- snps,en-tx-lpi-clockgating: Enable gating of the MAC TX clock during
|
||||
TX low-power mode.
|
||||
- phy-handle: See ethernet.txt file in the same directory
|
||||
- mdio device tree subnode: When the GMAC has a phy connected to its local
|
||||
mdio, there must be device tree subnode with the following
|
||||
required properties:
|
||||
- compatible: Must be "snps,dwc-qos-ethernet-mdio".
|
||||
- #address-cells: Must be <1>.
|
||||
- #size-cells: Must be <0>.
|
||||
|
||||
For each phy on the mdio bus, there must be a node with the following
|
||||
fields:
|
||||
|
||||
- reg: phy id used to communicate to phy.
|
||||
- device_type: Must be "ethernet-phy".
|
||||
- fixed-mode device tree subnode: see fixed-link.txt in the same directory
|
||||
|
||||
Examples:
|
||||
ethernet2@40010000 {
|
||||
clock-names = "phy_ref_clk", "apb_pclk";
|
||||
clocks = <&clkc 17>, <&clkc 15>;
|
||||
compatible = "snps,dwc-qos-ethernet-4.10";
|
||||
interrupt-parent = <&intc>;
|
||||
interrupts = <0x0 0x1e 0x4>;
|
||||
reg = <0x40010000 0x4000>;
|
||||
phy-handle = <&phy2>;
|
||||
phy-mode = "gmii";
|
||||
|
||||
snps,en-tx-lpi-clockgating;
|
||||
snps,en-lpi;
|
||||
snps,write-requests = <2>;
|
||||
snps,read-requests = <16>;
|
||||
snps,burst-map = <0x7>;
|
||||
snps,txpbl = <8>;
|
||||
snps,rxpbl = <2>;
|
||||
|
||||
dma-coherent;
|
||||
|
||||
mdio {
|
||||
#address-cells = <0x1>;
|
||||
#size-cells = <0x0>;
|
||||
phy2: phy@1 {
|
||||
compatible = "ethernet-phy-ieee802.3-c22";
|
||||
device_type = "ethernet-phy";
|
||||
reg = <0x1>;
|
||||
};
|
||||
};
|
||||
};
@ -0,0 +1,50 @@
Netdev private dataroom for 6lowpan interfaces:

All 6lowpan-able net devices, meaning all interfaces with ARPHRD_6LOWPAN,
must have "struct lowpan_priv" placed at the beginning of netdev_priv.

The priv_size of each interface should be calculated by:

	dev->priv_size = LOWPAN_PRIV_SIZE(LL_6LOWPAN_PRIV_DATA);

Where LL_6LOWPAN_PRIV_DATA is the sizeof the link-layer 6lowpan private data
struct. To access the LL_6LOWPAN_PRIV_DATA structure you can cast:

	lowpan_priv(dev)->priv;

to your LL_6LOWPAN_PRIV_DATA structure.

Before registering the lowpan netdev interface you must run:

	lowpan_netdev_setup(dev, LOWPAN_LLTYPE_FOOBAR);

where LOWPAN_LLTYPE_FOOBAR is a define for your 6LoWPAN link-layer type from
enum lowpan_lltypes.

For example, to access the private data you can usually do:

static inline struct lowpan_priv_foobar *
lowpan_foobar_priv(struct net_device *dev)
{
	return (struct lowpan_priv_foobar *)lowpan_priv(dev)->priv;
}

switch (dev->type) {
case ARPHRD_6LOWPAN:
	lowpan_priv = lowpan_priv(dev);
	/* do great stuff which is ARPHRD_6LOWPAN related */
	switch (lowpan_priv->lltype) {
	case LOWPAN_LLTYPE_FOOBAR:
		/* do 802.15.4 6LoWPAN handling here */
		lowpan_foobar_priv(dev)->bar = foo;
		break;
	...
	}
	break;
...
}

In case of the generic 6lowpan branch ("net/6lowpan") you can remove the check
on ARPHRD_6LOWPAN, because you can be sure that these functions are called
by ARPHRD_6LOWPAN interfaces.
@ -0,0 +1,114 @@
|
|||
Broadcom Starfighter 2 Ethernet switch driver
|
||||
=============================================
|
||||
|
||||
Broadcom's Starfighter 2 Ethernet switch hardware block is commonly found and
|
||||
deployed in the following products:
|
||||
|
||||
- xDSL gateways such as BCM63138
|
||||
- streaming/multimedia Set Top Box such as BCM7445
|
||||
- Cable Modem/residential gateways such as BCM7145/BCM3390
|
||||
|
||||
The switch is typically deployed in a configuration involving between 5 and 13
ports, offering a range of built-in and customizable interfaces:
|
||||
|
||||
- single integrated Gigabit PHY
|
||||
- quad integrated Gigabit PHY
|
||||
- quad external Gigabit PHY w/ MDIO multiplexer
|
||||
- integrated MoCA PHY
|
||||
- several external MII/RevMII/GMII/RGMII interfaces
|
||||
|
||||
The switch also supports specific congestion control features which allow MoCA
|
||||
fail-over not to lose packets during a MoCA role re-election, as well as out of
|
||||
band back-pressure to the host CPU network interface when downstream interfaces
|
||||
are connected at a lower speed.
|
||||
|
||||
The switch hardware block is typically interfaced using MMIO accesses and
|
||||
contains a bunch of sub-blocks/registers:
|
||||
|
||||
* SWITCH_CORE: common switch registers
|
||||
* SWITCH_REG: external interfaces switch register
|
||||
* SWITCH_MDIO: external MDIO bus controller (there is another one in SWITCH_CORE,
|
||||
which is used for indirect PHY accesses)
|
||||
* SWITCH_INDIR_RW: 64-bits wide register helper block
|
||||
* SWITCH_INTRL2_0/1: Level-2 interrupt controllers
|
||||
* SWITCH_ACB: Admission control block
|
||||
* SWITCH_FCB: Fail-over control block
|
||||
|
||||
Implementation details
|
||||
======================
|
||||
|
||||
The driver is located in drivers/net/dsa/bcm_sf2.c and is implemented as a DSA
|
||||
driver; see Documentation/networking/dsa/dsa.txt for details on the subsystem
and what it provides.
|
||||
|
||||
The SF2 switch is configured to enable a Broadcom specific 4-byte switch tag
which gets inserted by the switch for every packet forwarded to the CPU
interface; conversely, the CPU network interface should insert a similar tag for
packets entering the CPU port. The tag format is described in
|
||||
net/dsa/tag_brcm.c.
|
||||
|
||||
Overall, the SF2 driver is a fairly regular DSA driver; there are a few
|
||||
specifics covered below.
|
||||
|
||||
Device Tree probing
|
||||
-------------------
|
||||
|
||||
The DSA platform device driver is probed using a specific compatible string
|
||||
provided in net/dsa/dsa.c. The reason is that the DSA subsystem gets
registered as a platform device driver currently. DSA will provide the needed
|
||||
device_node pointers which are then accessible by the switch driver setup
|
||||
function to setup resources such as register ranges and interrupts. This
|
||||
currently works very well because none of the of_* functions utilized by the
|
||||
driver require a struct device to be bound to a struct device_node, but things
|
||||
may change in the future.
|
||||
|
||||
MDIO indirect accesses
|
||||
----------------------
|
||||
|
||||
Due to a limitation in how Broadcom switches have been designed, external
|
||||
Broadcom switches connected to a SF2 require the use of the DSA slave MDIO bus
|
||||
in order to properly configure them. By default, the SF2 pseudo-PHY address, and
|
||||
an external switch pseudo-PHY address will both be snooping for incoming MDIO
|
||||
transactions, since they are at the same address (30), resulting in some kind of
|
||||
"double" programming. Using DSA, and setting ds->phys_mii_mask accordingly, we
|
||||
selectively divert reads and writes towards external Broadcom switches
|
||||
pseudo-PHY addresses. Newer revisions of the SF2 hardware have introduced a
|
||||
configurable pseudo-PHY address which circumvents the initial design limitation.
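As an aside, the diversion itself is just a matter of setting bits in
ds->phys_mii_mask. The sketch below is illustrative only (not the actual
bcm_sf2 code) and uses a placeholder FOO_PSEUDO_PHY_ADDR name; it assumes
<net/dsa.h> is available:

/* Illustrative sketch, not the actual bcm_sf2 code: reserve the pseudo-PHY
 * address (30) so that the DSA slave MDIO bus hands reads/writes for it to
 * this driver's phy_read/phy_write callbacks.
 */
#define FOO_PSEUDO_PHY_ADDR	30

static int foo_sf2_setup(struct dsa_switch *ds)
{
	ds->phys_mii_mask |= 1 << FOO_PSEUDO_PHY_ADDR;
	return 0;
}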
|
||||
|
||||
Multimedia over CoAxial (MoCA) interfaces
|
||||
-----------------------------------------
|
||||
|
||||
MoCA interfaces are fairly specific and require the use of a firmware blob which
|
||||
gets loaded onto the MoCA processor(s) for packet processing. The switch
|
||||
hardware contains logic which will assert/de-assert link states accordingly for
|
||||
the MoCA interface whenever the MoCA coaxial cable gets disconnected or the
|
||||
firmware gets reloaded. The SF2 driver relies on such events to properly set its
|
||||
MoCA interface carrier state and properly report this to the networking stack.
|
||||
|
||||
The MoCA interfaces are supported using the PHY library's fixed PHY/emulated PHY
|
||||
device and the switch driver registers a fixed_link_update callback for such
|
||||
PHYs which reflects the link state obtained from the interrupt handler.
|
||||
|
||||
|
||||
Power Management
|
||||
----------------
|
||||
|
||||
Whenever possible, the SF2 driver tries to minimize the overall switch power
|
||||
consumption by applying a combination of:
|
||||
|
||||
- turning off internal buffers/memories
|
||||
- disabling packet processing logic
|
||||
- putting integrated PHYs in IDDQ/low-power
|
||||
- reducing the switch core clock based on the active port count
|
||||
- enabling and advertising EEE
|
||||
- turning off RGMII data processing logic when the link goes down
|
||||
|
||||
Wake-on-LAN
|
||||
-----------
|
||||
|
||||
Wake-on-LAN is currently implemented by utilizing the host processor Ethernet
|
||||
MAC controller wake-on logic. Whenever Wake-on-LAN is requested, an intersection
|
||||
between the user request and the supported host Ethernet interface WoL
|
||||
capabilities is done and the intersection result gets configured. During
|
||||
system-wide suspend/resume, only ports not participating in Wake-on-LAN are
|
||||
disabled.
@ -0,0 +1,615 @@
|
|||
Distributed Switch Architecture
|
||||
===============================
|
||||
|
||||
Introduction
|
||||
============
|
||||
|
||||
This document describes the Distributed Switch Architecture (DSA) subsystem
|
||||
design principles, limitations, interactions with other subsystems, and how to
|
||||
develop drivers for this subsystem as well as a TODO for developers interested
|
||||
in joining the effort.
|
||||
|
||||
Design principles
|
||||
=================
|
||||
|
||||
The Distributed Switch Architecture is a subsystem which was primarily designed
|
||||
to support Marvell Ethernet switches (MV88E6xxx, a.k.a Linkstreet product line)
|
||||
using Linux, but has since evolved to support other vendors as well.
|
||||
|
||||
The original philosophy behind this design was to be able to use unmodified
|
||||
Linux tools such as bridge, iproute2, ifconfig to work transparently whether
|
||||
they configured/queried a switch port network device or a regular network
|
||||
device.
|
||||
|
||||
An Ethernet switch is typically comprised of multiple front-panel ports, and one
|
||||
or more CPU or management port. The DSA subsystem currently relies on the
|
||||
presence of a management port connected to an Ethernet controller capable of
|
||||
receiving Ethernet frames from the switch. This is a very common setup for all
|
||||
kinds of Ethernet switches found in Small Home and Office products: routers,
|
||||
gateways, or even top-of-rack switches. This host Ethernet controller will
be later referred to as "master" and "cpu" in DSA terminology and code.
|
||||
|
||||
The D in DSA stands for Distributed, because the subsystem has been designed
|
||||
with the ability to configure and manage cascaded switches on top of each other
|
||||
using upstream and downstream Ethernet links between switches. These specific
|
||||
ports are referred to as "dsa" ports in DSA terminology and code. A collection
|
||||
of multiple switches connected to each other is called a "switch tree".
|
||||
|
||||
For each front-panel port, DSA will create specialized network devices which are
|
||||
used as controlling and data-flowing endpoints for use by the Linux networking
|
||||
stack. These specialized network interfaces are referred to as "slave" network
|
||||
interfaces in DSA terminology and code.
|
||||
|
||||
The ideal case for using DSA is when an Ethernet switch supports a "switch tag"
|
||||
which is a hardware feature making the switch insert a specific tag for each
Ethernet frame it receives from/sends to specific ports to help the management
interface figure out:
|
||||
|
||||
- what port is this frame coming from
|
||||
- what was the reason why this frame got forwarded
|
||||
- how to send CPU originated traffic to specific ports
|
||||
|
||||
The subsystem does support switches not capable of inserting/stripping tags, but
|
||||
the features might be slightly limited in that case (traffic separation relies
|
||||
on Port-based VLAN IDs).
|
||||
|
||||
Note that DSA does not currently create network interfaces for the "cpu" and
|
||||
"dsa" ports because:
|
||||
|
||||
- the "cpu" port is the Ethernet switch facing side of the management
|
||||
controller, and as such, would create a duplication of feature, since you
|
||||
would get two interfaces for the same conduit: master netdev, and "cpu" netdev
|
||||
|
||||
- the "dsa" port(s) are just conduits between two or more switches, and as such
|
||||
cannot really be used as proper network interfaces either, only the
|
||||
downstream, or the top-most upstream interface makes sense with that model
|
||||
|
||||
Switch tagging protocols
|
||||
------------------------
|
||||
|
||||
DSA currently supports 4 different tagging protocols, and a tag-less mode as
|
||||
well. The different protocols are implemented in:
|
||||
|
||||
net/dsa/tag_trailer.c: Marvell's 4 trailer tag mode (legacy)
|
||||
net/dsa/tag_dsa.c: Marvell's original DSA tag
|
||||
net/dsa/tag_edsa.c: Marvell's enhanced DSA tag
|
||||
net/dsa/tag_brcm.c: Broadcom's 4 bytes tag
|
||||
|
||||
The exact format of the tag protocol is vendor specific, but in general, they
|
||||
all contain something which:
|
||||
|
||||
- identifies which port the Ethernet frame came from/should be sent to
|
||||
- provides a reason why this frame was forwarded to the management interface
|
||||
|
||||
Master network devices
|
||||
----------------------
|
||||
|
||||
Master network devices are regular, unmodified Linux network device drivers for
|
||||
the CPU/management Ethernet interface. Such a driver might occasionally need to
|
||||
know whether DSA is enabled (e.g.: to enable/disable specific offload features),
|
||||
but the DSA subsystem has been proven to work with industry standard drivers:
|
||||
e1000e, mv643xx_eth etc. without having to introduce modifications to these
|
||||
drivers. Such network devices are also often referred to as conduit network
|
||||
devices since they act as a pipe between the host processor and the hardware
|
||||
Ethernet switch.
|
||||
|
||||
Networking stack hooks
|
||||
----------------------
|
||||
|
||||
When a master netdev is used with DSA, a small hook is placed in the
networking stack in order to have the DSA subsystem process the Ethernet
switch specific tagging protocol. DSA accomplishes this by registering a
|
||||
specific (and fake) Ethernet type (later becoming skb->protocol) with the
|
||||
networking stack, this is also known as a ptype or packet_type. A typical
|
||||
Ethernet Frame receive sequence looks like this:
|
||||
|
||||
Master network device (e.g.: e1000e):
|
||||
|
||||
Receive interrupt fires:
|
||||
- receive function is invoked
|
||||
- basic packet processing is done: getting length, status etc.
|
||||
- packet is prepared to be processed by the Ethernet layer by calling
|
||||
eth_type_trans
|
||||
|
||||
net/ethernet/eth.c:
|
||||
|
||||
eth_type_trans(skb, dev)
|
||||
if (dev->dsa_ptr != NULL)
|
||||
-> skb->protocol = ETH_P_XDSA
|
||||
|
||||
drivers/net/ethernet/*:
|
||||
|
||||
netif_receive_skb(skb)
|
||||
-> iterate over registered packet_type
|
||||
-> invoke handler for ETH_P_XDSA, calls dsa_switch_rcv()
|
||||
|
||||
net/dsa/dsa.c:
|
||||
-> dsa_switch_rcv()
|
||||
-> invoke switch tag specific protocol handler in
|
||||
net/dsa/tag_*.c
|
||||
|
||||
net/dsa/tag_*.c:
|
||||
-> inspect and strip switch tag protocol to determine originating port
|
||||
-> locate per-port network device
|
||||
-> invoke eth_type_trans() with the DSA slave network device
|
||||
-> invoke netif_receive_skb()
|
||||
Past this point, the DSA slave network devices get delivered regular Ethernet
|
||||
frames that can be processed by the networking stack.
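For reference, the ETH_P_XDSA hook mentioned above boils down to an ordinary
packet_type registration. The following is a simplified sketch of what
net/dsa/dsa.c does, and details may differ between kernel versions:

/* Simplified sketch of the ETH_P_XDSA packet_type registration; assumes
 * <linux/netdevice.h> and <linux/if_ether.h>.
 */
static int dsa_switch_rcv(struct sk_buff *skb, struct net_device *dev,
			  struct packet_type *pt, struct net_device *orig_dev);

static struct packet_type dsa_pack_type __read_mostly = {
	.type	= cpu_to_be16(ETH_P_XDSA),
	.func	= dsa_switch_rcv,	/* demuxes to the tag-specific rcv in net/dsa/tag_*.c */
};

static int __init dsa_init(void)
{
	dev_add_pack(&dsa_pack_type);
	return 0;
}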
|
||||
|
||||
Slave network devices
|
||||
---------------------
|
||||
|
||||
Slave network devices created by DSA are stacked on top of their master network
|
||||
device, each of these network interfaces will be responsible for being a
|
||||
controlling and data-flowing end-point for each front-panel port of the switch.
|
||||
These interfaces are specialized in order to:
|
||||
|
||||
- insert/remove the switch tag protocol (if it exists) when sending traffic
|
||||
to/from specific switch ports
|
||||
- query the switch for ethtool operations: statistics, link state,
|
||||
Wake-on-LAN, register dumps...
|
||||
- external/internal PHY management: link, auto-negotiation etc.
|
||||
|
||||
These slave network devices have custom net_device_ops and ethtool_ops function
|
||||
pointers which allow DSA to introduce a level of layering between the networking
|
||||
stack/ethtool, and the switch driver implementation.
|
||||
|
||||
Upon frame transmission from these slave network devices, DSA will look up which
|
||||
switch tagging protocol is currently registered with these network devices, and
|
||||
invoke a specific transmit routine which takes care of adding the relevant
|
||||
switch tag in the Ethernet frames.
|
||||
|
||||
These frames are then queued for transmission using the master network device
|
||||
ndo_start_xmit() function. Since they contain the appropriate switch tag, the
Ethernet switch will be able to process these incoming frames from the
management interface and deliver these frames to the physical switch port.
|
||||
|
||||
Graphical representation
|
||||
------------------------
|
||||
|
||||
Summarized, this is basically what DSA looks like from a network device
perspective:
|
||||
|
||||
|
||||
            |---------------------------
            | CPU network device (eth0)|
            ----------------------------
            | <tag added by switch     |
            |                          |
            |                          |
            |    tag added by CPU>     |
|--------------------------------------------|
| Switch driver                              |
|--------------------------------------------|
      ||        ||         ||
  |-------|  |-------|  |-------|
  | sw0p0 |  | sw0p1 |  | sw0p2 |
  |-------|  |-------|  |-------|
|
||||
|
||||
Slave MDIO bus
|
||||
--------------
|
||||
|
||||
In order to be able to read from/write to the PHYs built into a switch, DSA creates a
slave MDIO bus which allows a specific switch driver to divert and intercept
|
||||
MDIO reads/writes towards specific PHY addresses. In most MDIO-connected
|
||||
switches, these functions would utilize direct or indirect PHY addressing mode
|
||||
to return standard MII registers from the switch builtin PHYs, allowing the PHY
library to retrieve link status, link partner pages, auto-negotiation
results, etc.
|
||||
|
||||
For Ethernet switches which have both external and internal MDIO busses, the
|
||||
slave MII bus can be utilized to mux/demux MDIO reads and writes towards either
|
||||
internal or external MDIO devices this switch might be connected to: internal
|
||||
PHYs, external PHYs, or even external switches.
|
||||
|
||||
Data structures
|
||||
---------------
|
||||
|
||||
DSA data structures are defined in include/net/dsa.h as well as
|
||||
net/dsa/dsa_priv.h.
|
||||
|
||||
dsa_chip_data: platform data configuration for a given switch device, this
|
||||
structure describes a switch device's parent device, its address, as well as
|
||||
various properties of its ports: names/labels, and finally a routing table
|
||||
indication (when cascading switches)
|
||||
|
||||
dsa_platform_data: platform device configuration data which can reference a
|
||||
collection of dsa_chip_data structures if multiple switches are cascaded; the
master network device this switch tree is attached to needs to be referenced
|
||||
|
||||
dsa_switch_tree: structure assigned to the master network device under
|
||||
"dsa_ptr", this structure references a dsa_platform_data structure as well as
|
||||
the tagging protocol supported by the switch tree, and which receive/transmit
|
||||
function hooks should be invoked, information about the directly attached switch
|
||||
is also provided: CPU port. Finally, a collection of dsa_switch are referenced
|
||||
to address individual switches in the tree.
|
||||
|
||||
dsa_switch: structure describing a switch device in the tree, referencing a
|
||||
dsa_switch_tree as a backpointer, slave network devices, master network device,
|
||||
and a reference to the backing dsa_switch_driver
|
||||
|
||||
dsa_switch_driver: structure referencing function pointers, see below for a full
|
||||
description.
|
||||
|
||||
Design limitations
|
||||
==================
|
||||
|
||||
DSA is a platform device driver
|
||||
-------------------------------
|
||||
|
||||
DSA is implemented as a DSA platform device driver which is convenient because
|
||||
it will register the entire DSA switch tree attached to a master network device
|
||||
in one-shot, facilitating the device creation and simplifying the device driver
|
||||
model a bit. This comes, however, with a number of limitations:
|
||||
- building DSA and its switch drivers as modules is currently not working
|
||||
- the device driver parenting does not necessarily reflect the original
|
||||
bus/device the switch can be created from
|
||||
- supporting non-MDIO and non-MMIO (platform) switches is not possible
|
||||
|
||||
Limits on the number of devices and ports
|
||||
-----------------------------------------
|
||||
|
||||
DSA currently limits the number of maximum switches within a tree to 4
|
||||
(DSA_MAX_SWITCHES), and the number of ports per switch to 12 (DSA_MAX_PORTS).
|
||||
These limits could be extended to support larger configurations should this need
arise.
|
||||
|
||||
Lack of CPU/DSA network devices
|
||||
-------------------------------
|
||||
|
||||
DSA does not currently create slave network devices for the CPU or DSA ports, as
|
||||
described before. This might be an issue in the following cases:
|
||||
|
||||
- inability to fetch switch CPU port statistics counters using ethtool, which
|
||||
can make it harder to debug MDIO switches connected using xMII interfaces
|
||||
- inability to configure the CPU port link parameters based on the Ethernet
|
||||
controller capabilities attached to it: http://patchwork.ozlabs.org/patch/509806/
|
||||
|
||||
- inability to configure specific VLAN IDs / trunking VLANs between switches
|
||||
when using a cascaded setup
|
||||
|
||||
Common pitfalls using DSA setups
|
||||
--------------------------------
|
||||
|
||||
Once a master network device is configured to use DSA (dev->dsa_ptr becomes
|
||||
non-NULL), and the switch behind it expects a tagging protocol, this network
|
||||
interface can only exclusively be used as a conduit interface. Sending packets
|
||||
directly through this interface (e.g.: opening a socket using this interface)
|
||||
will not make us go through the switch tagging protocol transmit function, so
|
||||
the Ethernet switch on the other end, expecting a tag will typically drop this
|
||||
frame.
|
||||
|
||||
Slave network devices check that the master network device is UP before allowing
|
||||
you to administratively bring UP these slave network devices. A common
|
||||
configuration mistake is forgetting to bring UP the master network device first.
|
||||
|
||||
Interactions with other subsystems
|
||||
==================================
|
||||
|
||||
DSA currently leverages the following subsystems:
|
||||
|
||||
- MDIO/PHY library: drivers/net/phy/phy.c, mdio_bus.c
|
||||
- Switchdev: net/switchdev/*
|
||||
- Device Tree for various of_* functions
|
||||
- HWMON: drivers/hwmon/*
|
||||
|
||||
MDIO/PHY library
|
||||
----------------
|
||||
|
||||
Slave network devices exposed by DSA may or may not be interfacing with PHY
|
||||
devices (struct phy_device as defined in include/linux/phy.h), but the DSA
|
||||
subsystem deals with all possible combinations:
|
||||
|
||||
- internal PHY devices, built into the Ethernet switch hardware
|
||||
- external PHY devices, connected via an internal or external MDIO bus
|
||||
- internal PHY devices, connected via an internal MDIO bus
|
||||
- special, non-autonegotiated or non MDIO-managed PHY devices: SFPs, MoCA; a.k.a
|
||||
fixed PHYs
|
||||
|
||||
The PHY configuration is done by the dsa_slave_phy_setup() function and the
|
||||
logic basically looks like this:
|
||||
|
||||
- if Device Tree is used, the PHY device is looked up using the standard
|
||||
"phy-handle" property, if found, this PHY device is created and registered
|
||||
using of_phy_connect()
|
||||
|
||||
- if Device Tree is used, and the PHY device is "fixed", that is, conforms to
|
||||
the definition of a non-MDIO managed PHY as defined in
|
||||
Documentation/devicetree/bindings/net/fixed-link.txt, the PHY is registered
|
||||
and connected transparently using the special fixed MDIO bus driver
|
||||
|
||||
- finally, if the PHY is built into the switch, as is very common with
|
||||
standalone switch packages, the PHY is probed using the slave MII bus created
|
||||
by DSA
|
||||
|
||||
|
||||
SWITCHDEV
|
||||
---------
|
||||
|
||||
DSA directly utilizes SWITCHDEV when interfacing with the bridge layer, and
|
||||
more specifically with its VLAN filtering portion when configuring VLANs on top
|
||||
of per-port slave network devices. Since DSA primarily deals with
|
||||
MDIO-connected switches, although not exclusively, SWITCHDEV's
|
||||
prepare/abort/commit phases are often simplified into a prepare phase which
|
||||
checks whether the operation is supported by the DSA switch driver, and a commit
phase which applies the changes.
|
||||
|
||||
As of today, the only SWITCHDEV objects supported by DSA are the FDB and VLAN
|
||||
objects.
|
||||
|
||||
Device Tree
|
||||
-----------
|
||||
|
||||
DSA features a standardized binding which is documented in
|
||||
Documentation/devicetree/bindings/net/dsa/dsa.txt. PHY/MDIO library helper
|
||||
functions such as of_get_phy_mode(), of_phy_connect() are also used to query
|
||||
per-port PHY specific details: interface connection, MDIO bus location etc..
|
||||
|
||||
HWMON
|
||||
-----
|
||||
|
||||
Some switch drivers feature internal temperature sensors which are exposed as
|
||||
regular HWMON devices in /sys/class/hwmon/.
|
||||
|
||||
Driver development
|
||||
==================
|
||||
|
||||
DSA switch drivers need to implement a dsa_switch_driver structure which will
|
||||
contain the various members described below.
|
||||
|
||||
register_switch_driver() registers this dsa_switch_driver in its internal list
|
||||
of drivers to probe for. unregister_switch_driver() does the exact opposite.
|
||||
|
||||
Unless requested differently by setting the priv_size member accordingly, DSA
|
||||
does not allocate any driver private context space.
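Putting the registration pieces together, a hypothetical driver skeleton could
look like the sketch below. The "foo" names are placeholders, and the exact
callback prototypes should be taken from include/net/dsa.h for the kernel
version at hand; the individual members are described in the sections that
follow:

/* Hypothetical skeleton, illustrative only. */
struct foo_switch_priv {
	u32 id;		/* whatever per-switch state the driver needs */
};

static char *foo_switch_probe(struct device *host_dev, int sw_addr)
{
	/* e.g. read an ID register through the pseudo-PHY; return the
	 * device name if recognized, NULL otherwise */
	return "foo-switch";
}

static int foo_switch_setup(struct dsa_switch *ds)
{
	/* software reset, per-port VLAN isolation, disable unused ports */
	return 0;
}

static struct dsa_switch_driver foo_switch_driver = {
	.tag_protocol	= DSA_TAG_PROTO_NONE,
	.priv_size	= sizeof(struct foo_switch_priv),
	.probe		= foo_switch_probe,
	.setup		= foo_switch_setup,
};

static int __init foo_switch_init(void)
{
	register_switch_driver(&foo_switch_driver);
	return 0;
}

DSA will then invoke probe() when the platform device registers and, if a
switch is found, setup() to bring it into a fully configured state.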
|
||||
|
||||
Switch configuration
|
||||
--------------------
|
||||
|
||||
- priv_size: additional size needed by the switch driver for its private context
|
||||
|
||||
- tag_protocol: this is to indicate what kind of tagging protocol is supported,
|
||||
should be a valid value from the dsa_tag_protocol enum
|
||||
|
||||
- probe: probe routine which will be invoked by the DSA platform device upon
|
||||
registration to test for the presence/absence of a switch device. For MDIO
|
||||
devices, it is recommended to issue a read towards internal registers using
|
||||
the switch pseudo-PHY and return whether this is a supported device. For other
|
||||
buses, return a non-NULL string
|
||||
|
||||
- setup: setup function for the switch, this function is responsible for setting
|
||||
up the dsa_switch_driver private structure with all it needs: register maps,
|
||||
interrupts, mutexes, locks etc.. This function is also expected to properly
|
||||
configure the switch to separate all network interfaces from each other, that
|
||||
is, they should be isolated by the switch hardware itself, typically by creating
|
||||
a Port-based VLAN ID for each port and allowing only the CPU port and the
|
||||
specific port to be in the forwarding vector. Ports that are unused by the
|
||||
platform should be disabled. Past this function, the switch is expected to be
|
||||
fully configured and ready to serve any kind of request. It is recommended
|
||||
to issue a software reset of the switch during this setup function in order to
|
||||
avoid relying on what a previous software agent such as a bootloader/firmware
|
||||
may have previously configured.
|
||||
|
||||
- set_addr: Some switches require the programming of the management interface's
|
||||
Ethernet MAC address, switch drivers can also disable ageing of MAC addresses
|
||||
on the management interface and "hardcode"/"force" this MAC address for the
|
||||
CPU/management interface as an optimization
|
||||
|
||||
PHY devices and link management
|
||||
-------------------------------
|
||||
|
||||
- get_phy_flags: Some switches are interfaced to various kinds of Ethernet PHYs,
|
||||
if the PHY library PHY driver needs to know about information it cannot obtain
|
||||
on its own (e.g.: coming from switch memory mapped registers), this function
|
||||
should return a 32-bits bitmask of "flags", that is private between the switch
|
||||
driver and the Ethernet PHY driver in drivers/net/phy/*.
|
||||
|
||||
- phy_read: Function invoked by the DSA slave MDIO bus when attempting to read
|
||||
the switch port MDIO registers. If unavailable, return 0xffff for each read.
|
||||
For builtin switch Ethernet PHYs, this function should allow reading the link
|
||||
status, auto-negotiation results, link partner pages etc..
|
||||
|
||||
- phy_write: Function invoked by the DSA slave MDIO bus when attempting to write
|
||||
to the switch port MDIO registers. If unavailable return a negative error
|
||||
code.
|
||||
|
||||
- poll_link: Function invoked by DSA to query the link state of the switch
|
||||
builtin Ethernet PHYs, per port. This function is responsible for calling
|
||||
netif_carrier_{on,off} when appropriate, and can be used to poll all ports in a
|
||||
single call. Executes from workqueue context.
|
||||
|
||||
- adjust_link: Function invoked by the PHY library when a slave network device
|
||||
is attached to a PHY device. This function is responsible for appropriately
|
||||
configuring the switch port link parameters: speed, duplex, pause based on
|
||||
what the phy_device is providing.
|
||||
|
||||
- fixed_link_update: Function invoked by the PHY library, and specifically by
|
||||
the fixed PHY driver asking the switch driver for link parameters that could
|
||||
not be auto-negotiated, or obtained by reading the PHY registers through MDIO.
|
||||
This is particularly useful for specific kinds of hardware such as QSGMII,
|
||||
MoCA or other kinds of non-MDIO managed PHYs where out of band link
|
||||
information is obtained
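To make the phy_read/phy_write callbacks described above concrete, here is an
illustrative sketch; foo_reg_read()/foo_reg_write() and the FOO_* constants
are placeholders for whatever register access mechanism the switch provides,
and the exact prototypes should be checked against include/net/dsa.h:

/* Illustrative only. */
static int foo_phy_read(struct dsa_switch *ds, int port, int regnum)
{
	if (port >= FOO_NUM_INTERNAL_PHYS)
		return 0xffff;	/* no PHY at this address */

	return foo_reg_read(ds, FOO_PHY_BASE + port, regnum);
}

static int foo_phy_write(struct dsa_switch *ds, int port, int regnum, u16 val)
{
	if (port >= FOO_NUM_INTERNAL_PHYS)
		return -EOPNOTSUPP;	/* unavailable: negative error code */

	return foo_reg_write(ds, FOO_PHY_BASE + port, regnum, val);
}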
|
||||
|
||||
Ethtool operations
|
||||
------------------
|
||||
|
||||
- get_strings: ethtool function used to query the driver's strings, will
|
||||
typically return statistics strings, private flags strings etc.
|
||||
|
||||
- get_ethtool_stats: ethtool function used to query per-port statistics and
|
||||
return their values. DSA overlays slave network devices general statistics:
|
||||
RX/TX counters from the network device, with switch driver specific statistics
|
||||
per port
|
||||
|
||||
- get_sset_count: ethtool function used to query the number of statistics items
|
||||
|
||||
- get_wol: ethtool function used to obtain Wake-on-LAN settings per-port, this
|
||||
function may, for certain implementations also query the master network device
|
||||
Wake-on-LAN settings if this interface needs to participate in Wake-on-LAN
|
||||
|
||||
- set_wol: ethtool function used to configure Wake-on-LAN settings per-port,
|
||||
direct counterpart to get_wol with similar restrictions
|
||||
- set_eee: ethtool function which is used to configure a switch port EEE (Green
|
||||
Ethernet) settings, can optionally invoke the PHY library to enable EEE at the
|
||||
PHY level if relevant. This function should enable EEE at the switch port MAC
|
||||
controller and data-processing logic
|
||||
|
||||
- get_eee: ethtool function which is used to query a switch port EEE settings,
|
||||
this function should return the EEE state of the switch port MAC controller
|
||||
and data-processing logic as well as query the PHY for its currently configured
|
||||
EEE settings
|
||||
|
||||
- get_eeprom_len: ethtool function returning for a given switch the EEPROM
|
||||
length/size in bytes
|
||||
|
||||
- get_eeprom: ethtool function returning for a given switch the EEPROM contents
|
||||
|
||||
- set_eeprom: ethtool function writing specified data to a given switch EEPROM
|
||||
|
||||
- get_regs_len: ethtool function returning the register length for a given
|
||||
switch
|
||||
|
||||
- get_regs: ethtool function returning the Ethernet switch internal register
|
||||
contents. This function might require user-land code in ethtool to
|
||||
pretty-print register values and registers
|
||||
|
||||
Power management
|
||||
----------------
|
||||
|
||||
- suspend: function invoked by the DSA platform device when the system goes to
|
||||
suspend, should quiesce all Ethernet switch activities, but keep ports
|
||||
participating in Wake-on-LAN active as well as additional wake-up logic if
|
||||
supported
|
||||
|
||||
- resume: function invoked by the DSA platform device when the system resumes,
|
||||
should resume all Ethernet switch activities and re-configure the switch to be
|
||||
in a fully active state
|
||||
|
||||
- port_enable: function invoked by the DSA slave network device ndo_open
|
||||
function when a port is administratively brought up, this function should be
|
||||
fully enabling a given switch port. DSA takes care of marking the port with
|
||||
BR_STATE_BLOCKING if the port is a bridge member, or BR_STATE_FORWARDING if it
|
||||
was not, and propagating these changes down to the hardware
|
||||
|
||||
- port_disable: function invoked by the DSA slave network device ndo_close
|
||||
function when a port is administratively brought down, this function should be
|
||||
fully disabling a given switch port. DSA takes care of marking the port with
|
||||
BR_STATE_DISABLED and propagating changes to the hardware if this port is
|
||||
disabled while being a bridge member
|
||||
|
||||
Hardware monitoring
|
||||
-------------------
|
||||
|
||||
These callbacks are only available if CONFIG_NET_DSA_HWMON is enabled:
|
||||
|
||||
- get_temp: this function queries the given switch for its temperature
|
||||
|
||||
- get_temp_limit: this function returns the switch current maximum temperature
|
||||
limit
|
||||
|
||||
- set_temp_limit: this function configures the maximum temperature limit allowed
|
||||
|
||||
- get_temp_alarm: this function returns the critical temperature threshold
|
||||
returning an alarm notification
|
||||
|
||||
See Documentation/hwmon/sysfs-interface for details.
|
||||
|
||||
Bridge layer
|
||||
------------
|
||||
|
||||
- port_join_bridge: bridge layer function invoked when a given switch port is
|
||||
added to a bridge, this function should be doing the necessary at the switch
|
||||
level to permit the joining port to be added to the relevant logical
domain for it to ingress/egress traffic with other members of the bridge. DSA
|
||||
does nothing but calculate a bitmask of switch ports currently members of the
|
||||
specified bridge being requested the join
|
||||
|
||||
- port_leave_bridge: bridge layer function invoked when a given switch port is
|
||||
removed from a bridge, this function should be doing the necessary at the
|
||||
switch level to deny the leaving port from ingress/egress traffic from the
|
||||
remaining bridge members. When the port leaves the bridge, it should be aged
|
||||
out at the switch hardware for the switch to (re) learn MAC addresses behind
|
||||
this port. DSA calculates the bitmask of ports still members of the bridge
|
||||
being left
|
||||
|
||||
- port_stp_update: bridge layer function invoked when a given switch port STP
|
||||
state is computed by the bridge layer and should be propagated to switch
|
||||
hardware to forward/block/learn traffic. The switch driver is responsible for
|
||||
computing a STP state change based on current and asked parameters and perform
|
||||
the relevant ageing based on the intersection results
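As an illustration, a port_stp_update implementation usually reduces to
translating the bridge STP state into whatever the hardware understands. The
FOO_* values and foo_set_stp_state() below are placeholders, and the prototype
may differ per kernel version:

/* Illustrative only. */
static int foo_port_stp_update(struct dsa_switch *ds, int port, u8 state)
{
	u8 hw_state;

	switch (state) {
	case BR_STATE_DISABLED:
		hw_state = FOO_STP_DISABLED;
		break;
	case BR_STATE_LISTENING:
	case BR_STATE_BLOCKING:
		hw_state = FOO_STP_BLOCKING;	/* no learning, no forwarding */
		break;
	case BR_STATE_LEARNING:
		hw_state = FOO_STP_LEARNING;
		break;
	case BR_STATE_FORWARDING:
	default:
		hw_state = FOO_STP_FORWARDING;
		break;
	}

	return foo_set_stp_state(ds, port, hw_state);
}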
|
||||
|
||||
Bridge VLAN filtering
|
||||
---------------------
|
||||
|
||||
- port_pvid_get: bridge layer function invoked when a Port-based VLAN ID is
|
||||
queried for the given switch port
|
||||
|
||||
- port_pvid_set: bridge layer function invoked when a Port-based VLAN ID needs
|
||||
to be configured on the given switch port
|
||||
|
||||
- port_vlan_add: bridge layer function invoked when a VLAN is configured
|
||||
(tagged or untagged) for the given switch port
|
||||
|
||||
- port_vlan_del: bridge layer function invoked when a VLAN is removed from the
|
||||
given switch port
|
||||
|
||||
- vlan_getnext: bridge layer function invoked to query the next configured VLAN
|
||||
in the switch, i.e. returns the bitmaps of members and untagged ports
|
||||
|
||||
- port_fdb_add: bridge layer function invoked when the bridge wants to install a
|
||||
Forwarding Database entry, the switch hardware should be programmed with the
|
||||
specified address in the specified VLAN Id in the forwarding database
|
||||
associated with this VLAN ID
|
||||
|
||||
Note: VLAN ID 0 corresponds to the port private database, which, in the context
|
||||
of DSA, would be its port-based VLAN, used by the associated bridge device.
|
||||
- port_fdb_del: bridge layer function invoked when the bridge wants to remove a
|
||||
Forwarding Database entry, the switch hardware should be programmed to delete
|
||||
the specified MAC address from the specified VLAN ID if it was mapped into
|
||||
this port forwarding database
|
||||
|
||||
TODO
|
||||
====
|
||||
|
||||
The platform device problem
|
||||
---------------------------
|
||||
DSA is currently implemented as a platform device driver which is far from ideal
|
||||
as was discussed in this thread:
|
||||
|
||||
http://permalink.gmane.org/gmane.linux.network/329848
|
||||
|
||||
This basically prevents the device driver model from being properly used and
applied, and prevents supporting non-MDIO, non-MMIO Ethernet connected switches.
|
||||
|
||||
Another problem with the platform device driver approach is that it prevents the
|
||||
use of modular switch driver builds due to a circular dependency, illustrated
here:
|
||||
|
||||
http://comments.gmane.org/gmane.linux.network/345803
|
||||
|
||||
Attempts at reworking this have been made here:
|
||||
https://lwn.net/Articles/643149/
|
||||
|
||||
Making SWITCHDEV and DSA converge towards a unified codebase
------------------------------------------------------------
|
||||
|
||||
SWITCHDEV properly takes care of abstracting the networking stack with offload
|
||||
capable hardware, but does not enforce a strict switch device driver model. On
|
||||
the other hand, DSA enforces a fairly strict device driver model, and deals
with most of the switch-specific details. At some point we should envision a
merger between these
|
||||
two subsystems and get the best of both worlds.
|
||||
|
||||
Other low-hanging fruit
-----------------------
|
||||
|
||||
- making the number of ports fully dynamic and not dependent on DSA_MAX_PORTS
|
||||
- allowing more than one CPU/management interface:
|
||||
http://comments.gmane.org/gmane.linux.network/365657
|
||||
- porting more drivers from other vendors:
|
||||
http://comments.gmane.org/gmane.linux.network/365510
@ -586,6 +586,21 @@ tcp_min_tso_segs - INTEGER
|
|||
if available window is too small.
|
||||
Default: 2
|
||||
|
||||
tcp_pacing_ss_ratio - INTEGER
|
||||
sk->sk_pacing_rate is set by TCP stack using a ratio applied
|
||||
to current rate. (current_rate = cwnd * mss / srtt)
|
||||
If TCP is in slow start, tcp_pacing_ss_ratio is applied
|
||||
to let TCP probe for bigger speeds, assuming cwnd can be
|
||||
doubled every other RTT.
|
||||
Default: 200
|
||||
|
||||
tcp_pacing_ca_ratio - INTEGER
|
||||
sk->sk_pacing_rate is set by TCP stack using a ratio applied
|
||||
to current rate. (current_rate = cwnd * mss / srtt)
|
||||
If TCP is in congestion avoidance phase, tcp_pacing_ca_ratio
|
||||
is applied to conservatively probe for bigger throughput.
|
||||
Default: 120
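A small worked example of the formula above (plain user-space arithmetic, not
kernel code): with cwnd = 10 segments, mss = 1448 bytes and srtt = 100 ms, the
pacing rate in slow start (ratio 200) is double the current rate, while in
congestion avoidance (ratio 120) it sits just above it:

#include <stdio.h>

int main(void)
{
	double cwnd = 10, mss = 1448, srtt = 0.1;	/* srtt in seconds */
	double current_rate = cwnd * mss / srtt;	/* bytes per second */

	printf("current rate: %.0f bytes/sec\n", current_rate);
	printf("slow start pacing (200%%): %.0f bytes/sec\n",
	       current_rate * 200 / 100);
	printf("congestion avoidance pacing (120%%): %.0f bytes/sec\n",
	       current_rate * 120 / 100);
	return 0;
}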
|
||||
|
||||
tcp_tso_win_divisor - INTEGER
|
||||
This allows control over what percentage of the congestion window
|
||||
can be consumed by a single TSO frame.
|
||||
|
@ -1181,6 +1196,16 @@ tag - INTEGER
|
|||
Allows you to write a number, which can be used as required.
|
||||
Default value is 0.
|
||||
|
||||
xfrm4_gc_thresh - INTEGER
|
||||
The threshold at which we will start garbage collecting for IPv4
|
||||
destination cache entries. At twice this value the system will
|
||||
refuse new allocations.
|
||||
|
||||
igmp_link_local_mcast_reports - BOOLEAN
|
||||
Enable IGMP reports for link local multicast groups in the
|
||||
224.0.0.X range.
|
||||
Default TRUE
|
||||
|
||||
Alexey Kuznetsov.
|
||||
kuznet@ms2.inr.ac.ru
|
||||
|
||||
|
@ -1215,14 +1240,20 @@ flowlabel_consistency - BOOLEAN
|
|||
FALSE: disabled
|
||||
Default: TRUE
|
||||
|
||||
auto_flowlabels - BOOLEAN
|
||||
Automatically generate flow labels based based on a flow hash
|
||||
of the packet. This allows intermediate devices, such as routers,
|
||||
to idenfify packet flows for mechanisms like Equal Cost Multipath
|
||||
auto_flowlabels - INTEGER
|
||||
Automatically generate flow labels based on a flow hash of the
|
||||
packet. This allows intermediate devices, such as routers, to
|
||||
identify packet flows for mechanisms like Equal Cost Multipath
|
||||
Routing (see RFC 6438).
|
||||
TRUE: enabled
|
||||
FALSE: disabled
|
||||
Default: false
|
||||
0: automatic flow labels are completely disabled
|
||||
1: automatic flow labels are enabled by default, they can be
|
||||
disabled on a per socket basis using the IPV6_AUTOFLOWLABEL
|
||||
socket option
|
||||
2: automatic flow labels are allowed, they may be enabled on a
|
||||
per socket basis using the IPV6_AUTOFLOWLABEL socket option
|
||||
3: automatic flow labels are enabled and enforced, they cannot
|
||||
be disabled by the socket option
|
||||
Default: 1
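For example, the per-socket override mentioned for modes 1 and 2 is the
IPV6_AUTOFLOWLABEL socket option. A minimal user-space sketch follows; the
fallback #define mirrors the value in include/uapi/linux/in6.h in case the
libc headers do not provide it:

#include <sys/socket.h>
#include <netinet/in.h>

#ifndef IPV6_AUTOFLOWLABEL
#define IPV6_AUTOFLOWLABEL 70	/* from include/uapi/linux/in6.h */
#endif

int open_socket_with_flowlabels(void)
{
	int on = 1;
	int fd = socket(AF_INET6, SOCK_STREAM, 0);

	if (fd < 0)
		return -1;
	/* request automatic flow labels on this socket; honored when
	 * auto_flowlabels is 1 or 2, ignored when it is 0, redundant when
	 * it is 3 */
	if (setsockopt(fd, IPPROTO_IPV6, IPV6_AUTOFLOWLABEL,
		       &on, sizeof(on)) < 0) {
		/* kernel too old to know the option; flow labels then
		 * follow the global sysctl only */
	}
	return fd;
}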
|
||||
|
||||
flowlabel_state_ranges - BOOLEAN
|
||||
Split the flow label number space into two ranges. 0-0x7FFFF is
|
||||
|
@ -1340,6 +1371,14 @@ accept_ra_from_local - BOOLEAN
|
|||
disabled if accept_ra_from_local is disabled
|
||||
on a specific interface.
|
||||
|
||||
accept_ra_min_hop_limit - INTEGER
|
||||
Minimum hop limit Information in Router Advertisement.
|
||||
|
||||
Hop limit Information in Router Advertisement less than this
|
||||
variable shall be ignored.
|
||||
|
||||
Default: 1
|
||||
|
||||
accept_ra_pinfo - BOOLEAN
|
||||
Learn Prefix Information in Router Advertisement.
|
||||
|
||||
|
@ -1435,6 +1474,11 @@ mtu - INTEGER
|
|||
Default Maximum Transfer Unit
|
||||
Default: 1280 (IPv6 required minimum)
|
||||
|
||||
ip_nonlocal_bind - BOOLEAN
|
||||
If set, allows processes to bind() to non-local IPv6 addresses,
|
||||
which can be quite useful - but may break some applications.
|
||||
Default: 0
|
||||
|
||||
router_probe_interval - INTEGER
|
||||
Minimum interval (in seconds) between Router Probing described
|
||||
in RFC4191.
|
||||
|
@ -1455,6 +1499,13 @@ router_solicitations - INTEGER
|
|||
routers are present.
|
||||
Default: 3
|
||||
|
||||
use_oif_addrs_only - BOOLEAN
|
||||
When enabled, the candidate source addresses for destinations
|
||||
routed via this interface are restricted to the set of addresses
|
||||
configured on this interface (vis. RFC 6724, section 4).
|
||||
|
||||
Default: false
|
||||
|
||||
use_tempaddr - INTEGER
|
||||
Preference for Privacy Extensions (RFC3041).
|
||||
<= 0 : disable Privacy Extensions
|
||||
|
@ -1591,6 +1642,11 @@ ratelimit - INTEGER
|
|||
otherwise the minimal space between responses in milliseconds.
|
||||
Default: 1000
|
||||
|
||||
xfrm6_gc_thresh - INTEGER
|
||||
The threshold at which we will start garbage collecting for IPv6
|
||||
destination cache entries. At twice this value the system will
|
||||
refuse new allocations.
|
||||
|
||||
|
||||
IPv6 Update by:
|
||||
Pekka Savola <pekkas@netcore.fi>
@ -135,12 +135,8 @@ struct plat_stmmacenet_data {
|
|||
int maxmtu;
|
||||
void (*fix_mac_speed)(void *priv, unsigned int speed);
|
||||
void (*bus_setup)(void __iomem *ioaddr);
|
||||
void *(*setup)(struct platform_device *pdev);
|
||||
void (*free)(struct platform_device *pdev, void *priv);
|
||||
int (*init)(struct platform_device *pdev, void *priv);
|
||||
void (*exit)(struct platform_device *pdev, void *priv);
|
||||
void *custom_cfg;
|
||||
void *custom_data;
|
||||
void *bsp_priv;
|
||||
};
|
||||
|
||||
|
@ -179,15 +175,11 @@ Where:
|
|||
o bus_setup: perform HW setup of the bus. For example, on some ST platforms
|
||||
this field is used to configure the AMBA bridge to generate more
|
||||
efficient STBus traffic.
|
||||
o setup/init/exit: callbacks used for calling a custom initialization;
|
||||
o init/exit: callbacks used for calling a custom initialization;
|
||||
this is sometime necessary on some platforms (e.g. ST boxes)
|
||||
where the HW needs to have set some PIO lines or system cfg
|
||||
registers. setup should return a pointer to private data,
|
||||
which will be stored in bsp_priv, and then passed to init and
|
||||
exit callbacks. init/exit callbacks should not use or modify
|
||||
registers. init/exit callbacks should not use or modify
|
||||
platform data.
|
||||
o custom_cfg/custom_data: this is a custom configuration that can be passed
|
||||
while initializing the resources.
|
||||
o bsp_priv: another private pointer.
|
||||
|
||||
For the MDIO bus we have:
|
||||
|
@ -262,7 +254,7 @@ static struct fixed_phy_status stmmac0_fixed_phy_status = {
|
|||
|
||||
During the board's device_init we can configure the first
|
||||
MAC for fixed_link by calling:
|
||||
fixed_phy_add(PHY_POLL, 1, &stmmac0_fixed_phy_status));)
|
||||
fixed_phy_add(PHY_POLL, 1, &stmmac0_fixed_phy_status, -1);
|
||||
and the second one, with a real PHY device attached to the bus,
|
||||
by using the stmmac_mdio_bus_data structure (to provide the id, the
|
||||
reset procedure etc).
|
||||
|
@ -278,8 +270,6 @@ capability register can replace what has been passed from the platform.
|
|||
Please see the following document:
|
||||
Documentation/devicetree/bindings/net/stmmac.txt
|
||||
|
||||
and the stmmac_of_data structure inside the include/linux/stmmac.h header file.
|
||||
|
||||
4.11) This is a summary of the content of some relevant files:
|
||||
o stmmac_main.c: to implement the main network device driver;
|
||||
o stmmac_mdio.c: to provide mdio functions;
|
||||
|
|
|
@ -279,8 +279,18 @@ and unknown unicast packets to all ports in domain, if allowed by port's
|
|||
current STP state. The switch driver, knowing which ports are within which
|
||||
vlan L2 domain, can program the switch device for flooding. The packet should
|
||||
also be sent to the port netdev for processing by the bridge driver. The
|
||||
bridge should not reflood the packet to the same ports the device flooded.
|
||||
XXX: the mechanism to avoid duplicate flood packets is being discuseed.
|
||||
bridge should not reflood the packet to the same ports the device flooded,
|
||||
otherwise there will be duplicate packets on the wire.
|
||||
|
||||
To avoid duplicate packets, the device/driver should mark a packet as already
|
||||
forwarded using skb->offload_fwd_mark. The same mark is set on the device
|
||||
ports in the domain using dev->offload_fwd_mark. If the skb->offload_fwd_mark
|
||||
is non-zero and matches the forwarding egress port's dev->offload_fwd_mark, the kernel
|
||||
will drop the skb right before transmit on the egress port, with the
|
||||
understanding that the device already forwarded the packet on same egress port.
|
||||
The driver can use switchdev_port_fwd_mark_set() to set a globally unique mark
|
||||
for port's dev->offload_fwd_mark, based on the port's parent ID (switch ID) and
|
||||
a group ifindex.
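
As a purely illustrative sketch of the check described above (field names follow this section; the exact place in the stack where the test happens is not shown, and the helper name is made up):

#include <linux/skbuff.h>
#include <linux/netdevice.h>

/* Illustrative only: true if the switch device already forwarded this
 * skb out of @dev, so transmitting it again in software would put a
 * duplicate on the wire.
 */
static bool skb_already_hw_forwarded(const struct sk_buff *skb,
				     const struct net_device *dev)
{
	return skb->offload_fwd_mark &&
	       skb->offload_fwd_mark == dev->offload_fwd_mark;
}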
|
||||
|
||||
It is possible for the switch device to not handle flooding and push the
|
||||
packets up to the bridge driver for flooding. This is not ideal as the number
|
||||
|
@ -357,4 +367,5 @@ driver's rocker_port_ipv4_resolve() for an example.
|
|||
|
||||
The driver can monitor for updates to arp_tbl using the netevent notifier
|
||||
NETEVENT_NEIGH_UPDATE. The device can be programmed with resolved nexthops
|
||||
for the routes as arp_tbl updates.
|
||||
for the routes as arp_tbl updates. The driver implements ndo_neigh_destroy
|
||||
to know when arp_tbl neighbor entries are purged from the port.
|
||||
|
|
|
@ -359,6 +359,13 @@ the requested fine-grained filtering for incoming packets is not
|
|||
supported, the driver may time stamp more than just the requested types
|
||||
of packets.
|
||||
|
||||
Drivers are free to use a more permissive configuration than the requested
|
||||
configuration. It is expected that drivers should only implement directly the
|
||||
most generic mode that can be supported. For example if the hardware can
|
||||
support HWTSTAMP_FILTER_PTP_V2_EVENT, then it should generally always upscale
|
||||
HWTSTAMP_FILTER_PTP_V2_L2_SYNC, and so forth, as HWTSTAMP_FILTER_PTP_V2_EVENT
|
||||
is more generic (and more useful to applications).
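
A sketch of this upscaling rule inside a hypothetical driver's SIOCSHWTSTAMP handler, assuming hardware that can only distinguish "all PTPv2 events" (the filter constants are the ones from <linux/net_tstamp.h>; the driver and function name are made up):

#include <linux/errno.h>
#include <linux/net_tstamp.h>

static int foo_set_rx_filter(struct hwtstamp_config *config)
{
	switch (config->rx_filter) {
	case HWTSTAMP_FILTER_NONE:
		break;
	case HWTSTAMP_FILTER_PTP_V2_L2_SYNC:
	case HWTSTAMP_FILTER_PTP_V2_L2_DELAY_REQ:
	case HWTSTAMP_FILTER_PTP_V2_EVENT:
		/* only "all PTPv2 events" is supported in hardware, so
		 * upscale and report back what was actually programmed
		 */
		config->rx_filter = HWTSTAMP_FILTER_PTP_V2_EVENT;
		break;
	default:
		return -ERANGE;
	}
	return 0;
}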
|
||||
|
||||
A driver which supports hardware time stamping shall update the struct
|
||||
with the actual, possibly more permissive configuration. If the
|
||||
requested packets cannot be time stamped, then nothing should be
|
||||
|
|
|
@ -1,32 +1,36 @@
|
|||
Virtual eXtensible Local Area Networking documentation
|
||||
======================================================
|
||||
|
||||
The VXLAN protocol is a tunnelling protocol that is designed to
|
||||
solve the problem of limited number of available VLAN's (4096).
|
||||
With VXLAN identifier is expanded to 24 bits.
|
||||
The VXLAN protocol is a tunnelling protocol designed to solve the
|
||||
problem of limited VLAN IDs (4096) in IEEE 802.1q. With VXLAN the
|
||||
size of the identifier is expanded to 24 bits (16777216).
|
||||
|
||||
It is a draft RFC standard, that is implemented by Cisco Nexus,
|
||||
Vmware and Brocade. The protocol runs over UDP using a single
|
||||
destination port (still not standardized by IANA).
|
||||
This document describes the Linux kernel tunnel device,
|
||||
there is also an implantation of VXLAN for Openvswitch.
|
||||
VXLAN is described by IETF RFC 7348, and has been implemented by a
|
||||
number of vendors. The protocol runs over UDP using a single
|
||||
destination port. This document describes the Linux kernel tunnel
|
||||
device, there is also a separate implementation of VXLAN for
|
||||
Openvswitch.
|
||||
|
||||
Unlike most tunnels, a VXLAN is a 1 to N network, not just point
|
||||
to point. A VXLAN device can either dynamically learn the IP address
|
||||
of the other end, in a manner similar to a learning bridge, or the
|
||||
forwarding entries can be configured statically.
|
||||
Unlike most tunnels, a VXLAN is a 1 to N network, not just point to
|
||||
point. A VXLAN device can learn the IP address of the other endpoint
|
||||
either dynamically in a manner similar to a learning bridge, or make
|
||||
use of statically-configured forwarding entries.
|
||||
|
||||
The management of vxlan is done in a similar fashion to it's
|
||||
too closest neighbors GRE and VLAN. Configuring VXLAN requires
|
||||
the version of iproute2 that matches the kernel release
|
||||
where VXLAN was first merged upstream.
|
||||
The management of vxlan is done in a manner similar to its two closest
|
||||
neighbors GRE and VLAN. Configuring VXLAN requires the version of
|
||||
iproute2 that matches the kernel release where VXLAN was first merged
|
||||
upstream.
|
||||
|
||||
1. Create vxlan device
|
||||
# ip li add vxlan0 type vxlan id 42 group 239.1.1.1 dev eth1
|
||||
# ip link add vxlan0 type vxlan id 42 group 239.1.1.1 dev eth1 dstport 4789
|
||||
|
||||
This creates a new device (vxlan0). The device uses the
|
||||
the multicast group 239.1.1.1 over eth1 to handle packets where
|
||||
no entry is in the forwarding table.
|
||||
This creates a new device named vxlan0. The device uses the multicast
|
||||
group 239.1.1.1 over eth1 to handle traffic for which there is no
|
||||
entry in the forwarding table. The destination port number is set to
|
||||
the IANA-assigned value of 4789. The Linux implementation of VXLAN
|
||||
pre-dates the IANA's selection of a standard destination port number
|
||||
and uses the Linux-selected value by default to maintain backwards
|
||||
compatibility.
|
||||
|
||||
2. Delete vxlan device
|
||||
# ip link delete vxlan0
|
||||
|
|
MAINTAINERS
|
@ -158,6 +158,7 @@ L: linux-wpan@vger.kernel.org
|
|||
S: Maintained
|
||||
F: net/6lowpan/
|
||||
F: include/net/6lowpan.h
|
||||
F: Documentation/networking/6lowpan.txt
|
||||
|
||||
6PACK NETWORK DRIVER FOR AX.25
|
||||
M: Andreas Koensgen <ajk@comnets.uni-bremen.de>
|
||||
|
@ -933,7 +934,7 @@ M: Sunil Goutham <sgoutham@cavium.com>
|
|||
M: Robert Richter <rric@kernel.org>
|
||||
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
|
||||
S: Supported
|
||||
F: drivers/net/ethernet/cavium/
|
||||
F: drivers/net/ethernet/cavium/thunder/
|
||||
|
||||
ARM/CIRRUS LOGIC CLPS711X ARM ARCHITECTURE
|
||||
M: Alexander Shiyan <shc_work@mail.ru>
|
||||
|
@ -2551,7 +2552,6 @@ M: Raghu Vatsavayi <raghu.vatsavayi@caviumnetworks.com>
|
|||
L: netdev@vger.kernel.org
|
||||
W: http://www.cavium.com
|
||||
S: Supported
|
||||
F: drivers/net/ethernet/cavium/
|
||||
F: drivers/net/ethernet/cavium/liquidio/
|
||||
|
||||
CC2520 IEEE-802.15.4 RADIO DRIVER
|
||||
|
@ -6541,7 +6541,7 @@ F: drivers/net/ethernet/marvell/mvneta.*
|
|||
|
||||
MARVELL MWIFIEX WIRELESS DRIVER
|
||||
M: Amitkumar Karwar <akarwar@marvell.com>
|
||||
M: Avinash Patil <patila@marvell.com>
|
||||
M: Nishant Sarmukadam <nishants@marvell.com>
|
||||
L: linux-wireless@vger.kernel.org
|
||||
S: Maintained
|
||||
F: drivers/net/wireless/mwifiex/
|
||||
|
@ -6686,6 +6686,15 @@ W: http://www.mellanox.com
|
|||
Q: http://patchwork.ozlabs.org/project/netdev/list/
|
||||
F: drivers/net/ethernet/mellanox/mlx4/en_*
|
||||
|
||||
MELLANOX ETHERNET SWITCH DRIVERS
|
||||
M: Jiri Pirko <jiri@mellanox.com>
|
||||
M: Ido Schimmel <idosch@mellanox.com>
|
||||
L: netdev@vger.kernel.org
|
||||
S: Supported
|
||||
W: http://www.mellanox.com
|
||||
Q: http://patchwork.ozlabs.org/project/netdev/list/
|
||||
F: drivers/net/ethernet/mellanox/mlxsw/
|
||||
|
||||
MEMORY MANAGEMENT
|
||||
L: linux-mm@kvack.org
|
||||
W: http://www.linux-mm.org
|
||||
|
@ -8908,6 +8917,12 @@ L: linux-media@vger.kernel.org
|
|||
S: Supported
|
||||
F: drivers/media/i2c/s5k5baf.c
|
||||
|
||||
SAMSUNG S3FWRN5 NFC DRIVER
|
||||
M: Robert Baldyga <r.baldyga@samsung.com>
|
||||
L: linux-nfc@lists.01.org (moderated for non-subscribers)
|
||||
S: Supported
|
||||
F: drivers/nfc/s3fwrn5
|
||||
|
||||
SAMSUNG SOC CLOCK DRIVERS
|
||||
M: Sylwester Nawrocki <s.nawrocki@samsung.com>
|
||||
M: Tomasz Figa <tomasz.figa@gmail.com>
|
||||
|
@ -8958,6 +8973,13 @@ F: include/linux/dma/dw.h
|
|||
F: include/linux/platform_data/dma-dw.h
|
||||
F: drivers/dma/dw/
|
||||
|
||||
SYNOPSYS DESIGNWARE ETHERNET QOS 4.10a driver
|
||||
M: Lars Persson <lars.persson@axis.com>
|
||||
L: netdev@vger.kernel.org
|
||||
S: Supported
|
||||
F: Documentation/devicetree/bindings/net/snps,dwc-qos-ethernet.txt
|
||||
F: drivers/net/ethernet/synopsys/dwc_eth_qos.c
|
||||
|
||||
SYNOPSYS DESIGNWARE MMC/SD/SDIO DRIVER
|
||||
M: Seungwon Jeon <tgih.jun@samsung.com>
|
||||
M: Jaehoon Chung <jh80.chung@samsung.com>
|
||||
|
@ -11064,7 +11086,7 @@ F: drivers/input/mouse/vmmouse.c
|
|||
F: drivers/input/mouse/vmmouse.h
|
||||
|
||||
VMWARE VMXNET3 ETHERNET DRIVER
|
||||
M: Shreyas Bhatewara <sbhatewara@vmware.com>
|
||||
M: Shrikrishna Khare <skhare@vmware.com>
|
||||
M: "VMware, Inc." <pv-drivers@vmware.com>
|
||||
L: netdev@vger.kernel.org
|
||||
S: Maintained
|
||||
|
@ -11089,6 +11111,14 @@ S: Supported
|
|||
F: drivers/regulator/
|
||||
F: include/linux/regulator/
|
||||
|
||||
VRF
|
||||
M: David Ahern <dsa@cumulusnetworks.com>
|
||||
M: Shrijeet Mukherjee <shm@cumulusnetworks.com>
|
||||
L: netdev@vger.kernel.org
|
||||
S: Maintained
|
||||
F: drivers/net/vrf.c
|
||||
F: include/net/vrf.h
|
||||
|
||||
VT1211 HARDWARE MONITOR DRIVER
|
||||
M: Juerg Haefliger <juergh@gmail.com>
|
||||
L: lm-sensors@lm-sensors.org
|
||||
|
|
|
@ -717,7 +717,7 @@
|
|||
};
|
||||
|
||||
mac: ethernet@4a100000 {
|
||||
compatible = "ti,cpsw";
|
||||
compatible = "ti,am335x-cpsw","ti,cpsw";
|
||||
ti,hwmods = "cpgmac0";
|
||||
clocks = <&cpsw_125mhz_gclk>, <&cpsw_cpts_rft_clk>;
|
||||
clock-names = "fck", "cpts";
|
||||
|
|
|
@ -1418,7 +1418,7 @@
|
|||
};
|
||||
|
||||
mac: ethernet@4a100000 {
|
||||
compatible = "ti,cpsw";
|
||||
compatible = "ti,dra7-cpsw","ti,cpsw";
|
||||
ti,hwmods = "gmac";
|
||||
clocks = <&dpll_gmac_ck>, <&gmac_gmii_ref_clk_div>;
|
||||
clock-names = "fck", "cpts";
|
||||
|
|
|
@ -857,7 +857,9 @@ b_epilogue:
|
|||
emit(ARM_LDR_I(r_A, r_scratch, off), ctx);
|
||||
break;
|
||||
case BPF_ANC | SKF_AD_IFINDEX:
|
||||
case BPF_ANC | SKF_AD_HATYPE:
|
||||
/* A = skb->dev->ifindex */
|
||||
/* A = skb->dev->type */
|
||||
ctx->seen |= SEEN_SKB;
|
||||
off = offsetof(struct sk_buff, dev);
|
||||
emit(ARM_LDR_I(r_scratch, r_skb, off), ctx);
|
||||
|
@ -867,8 +869,24 @@ b_epilogue:
|
|||
|
||||
BUILD_BUG_ON(FIELD_SIZEOF(struct net_device,
|
||||
ifindex) != 4);
|
||||
off = offsetof(struct net_device, ifindex);
|
||||
emit(ARM_LDR_I(r_A, r_scratch, off), ctx);
|
||||
BUILD_BUG_ON(FIELD_SIZEOF(struct net_device,
|
||||
type) != 2);
|
||||
|
||||
if (code == (BPF_ANC | SKF_AD_IFINDEX)) {
|
||||
off = offsetof(struct net_device, ifindex);
|
||||
emit(ARM_LDR_I(r_A, r_scratch, off), ctx);
|
||||
} else {
|
||||
/*
|
||||
* offset of field "type" in "struct
|
||||
* net_device" is above what can be
|
||||
* used in the ldrh rd, [rn, #imm]
|
||||
* instruction, so load the offset in
|
||||
* a register and use ldrh rd, [rn, rm]
|
||||
*/
|
||||
off = offsetof(struct net_device, type);
|
||||
emit_mov_i(ARM_R3, off, ctx);
|
||||
emit(ARM_LDRH_R(r_A, r_scratch, ARM_R3), ctx);
|
||||
}
|
||||
break;
|
||||
case BPF_ANC | SKF_AD_MARK:
|
||||
ctx->seen |= SEEN_SKB;
|
||||
|
@ -895,6 +913,17 @@ b_epilogue:
|
|||
OP_IMM3(ARM_AND, r_A, r_A, 0x1, ctx);
|
||||
}
|
||||
break;
|
||||
case BPF_ANC | SKF_AD_PKTTYPE:
|
||||
ctx->seen |= SEEN_SKB;
|
||||
BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff,
|
||||
__pkt_type_offset[0]) != 1);
|
||||
off = PKT_TYPE_OFFSET();
|
||||
emit(ARM_LDRB_I(r_A, r_skb, off), ctx);
|
||||
emit(ARM_AND_I(r_A, r_A, PKT_TYPE_MAX), ctx);
|
||||
#ifdef __BIG_ENDIAN_BITFIELD
|
||||
emit(ARM_LSR_I(r_A, r_A, 5), ctx);
|
||||
#endif
|
||||
break;
|
||||
case BPF_ANC | SKF_AD_QUEUE:
|
||||
ctx->seen |= SEEN_SKB;
|
||||
BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff,
|
||||
|
@ -904,6 +933,14 @@ b_epilogue:
|
|||
off = offsetof(struct sk_buff, queue_mapping);
|
||||
emit(ARM_LDRH_I(r_A, r_skb, off), ctx);
|
||||
break;
|
||||
case BPF_ANC | SKF_AD_PAY_OFFSET:
|
||||
ctx->seen |= SEEN_SKB | SEEN_CALL;
|
||||
|
||||
emit(ARM_MOV_R(ARM_R0, r_skb), ctx);
|
||||
emit_mov_i(ARM_R3, (unsigned int)skb_get_poff, ctx);
|
||||
emit_blx_r(ARM_R3, ctx);
|
||||
emit(ARM_MOV_R(r_A, ARM_R0), ctx);
|
||||
break;
|
||||
case BPF_LDX | BPF_W | BPF_ABS:
|
||||
/*
|
||||
* load a 32bit word from struct seccomp_data.
|
||||
|
|
|
@ -74,6 +74,7 @@
|
|||
#define ARM_INST_LDRB_I 0x05d00000
|
||||
#define ARM_INST_LDRB_R 0x07d00000
|
||||
#define ARM_INST_LDRH_I 0x01d000b0
|
||||
#define ARM_INST_LDRH_R 0x019000b0
|
||||
#define ARM_INST_LDR_I 0x05900000
|
||||
|
||||
#define ARM_INST_LDM 0x08900000
|
||||
|
@ -160,6 +161,8 @@
|
|||
| (rm))
|
||||
#define ARM_LDRH_I(rt, rn, off) (ARM_INST_LDRH_I | (rt) << 12 | (rn) << 16 \
|
||||
| (((off) & 0xf0) << 4) | ((off) & 0xf))
|
||||
#define ARM_LDRH_R(rt, rn, rm) (ARM_INST_LDRH_R | (rt) << 12 | (rn) << 16 \
|
||||
| (rm))
|
||||
|
||||
#define ARM_LDM(rn, regs) (ARM_INST_LDM | (rn) << 16 | (regs))
|
||||
|
||||
|
|
|
@ -126,7 +126,7 @@ static struct fixed_phy_status nettel_fixed_phy_status __initdata = {
|
|||
static int __init init_BSP(void)
|
||||
{
|
||||
m5272_uarts_init();
|
||||
fixed_phy_add(PHY_POLL, 0, &nettel_fixed_phy_status);
|
||||
fixed_phy_add(PHY_POLL, 0, &nettel_fixed_phy_status, -1);
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
|
|
@ -679,7 +679,8 @@ static int __init ar7_register_devices(void)
|
|||
}
|
||||
|
||||
if (ar7_has_high_cpmac()) {
|
||||
res = fixed_phy_add(PHY_POLL, cpmac_high.id, &fixed_phy_status);
|
||||
res = fixed_phy_add(PHY_POLL, cpmac_high.id,
|
||||
&fixed_phy_status, -1);
|
||||
if (!res) {
|
||||
cpmac_get_mac(1, cpmac_high_data.dev_addr);
|
||||
|
||||
|
@ -692,7 +693,7 @@ static int __init ar7_register_devices(void)
|
|||
} else
|
||||
cpmac_low_data.phy_mask = 0xffffffff;
|
||||
|
||||
res = fixed_phy_add(PHY_POLL, cpmac_low.id, &fixed_phy_status);
|
||||
res = fixed_phy_add(PHY_POLL, cpmac_low.id, &fixed_phy_status, -1);
|
||||
if (!res) {
|
||||
cpmac_get_mac(0, cpmac_low_data.dev_addr);
|
||||
res = platform_device_register(&cpmac_low);
|
||||
|
|
|
@ -263,7 +263,7 @@ static int __init bcm47xx_register_bus_complete(void)
|
|||
bcm47xx_leds_register();
|
||||
bcm47xx_workarounds();
|
||||
|
||||
fixed_phy_add(PHY_POLL, 0, &bcm47xx_fixed_phy_status);
|
||||
fixed_phy_add(PHY_POLL, 0, &bcm47xx_fixed_phy_status, -1);
|
||||
return 0;
|
||||
}
|
||||
device_initcall(bcm47xx_register_bus_complete);
|
||||
|
|
|
@ -36,6 +36,8 @@ extern u8 sk_load_word[], sk_load_half[], sk_load_byte[];
|
|||
* | BPF stack | |
|
||||
* | | |
|
||||
* +---------------+ |
|
||||
* | 8 byte skbp | |
|
||||
* R15+170 -> +---------------+ |
|
||||
* | 8 byte hlen | |
|
||||
* R15+168 -> +---------------+ |
|
||||
* | 4 byte align | |
|
||||
|
@ -51,11 +53,12 @@ extern u8 sk_load_word[], sk_load_half[], sk_load_byte[];
|
|||
* We get 160 bytes stack space from calling function, but only use
|
||||
* 12 * 8 byte for old backchain, r15..r6, and tail_call_cnt.
|
||||
*/
|
||||
#define STK_SPACE (MAX_BPF_STACK + 8 + 4 + 4 + 160)
|
||||
#define STK_SPACE (MAX_BPF_STACK + 8 + 8 + 4 + 4 + 160)
|
||||
#define STK_160_UNUSED (160 - 12 * 8)
|
||||
#define STK_OFF (STK_SPACE - STK_160_UNUSED)
|
||||
#define STK_OFF_TMP 160 /* Offset of tmp buffer on stack */
|
||||
#define STK_OFF_HLEN 168 /* Offset of SKB header length on stack */
|
||||
#define STK_OFF_SKBP 170 /* Offset of SKB pointer on stack */
|
||||
|
||||
#define STK_OFF_R6 (160 - 11 * 8) /* Offset of r6 on stack */
|
||||
#define STK_OFF_TCCNT (160 - 12 * 8) /* Offset of tail_call_cnt on stack */
|
||||
|
|
|
@ -45,7 +45,7 @@ struct bpf_jit {
|
|||
int labels[1]; /* Labels for local jumps */
|
||||
};
|
||||
|
||||
#define BPF_SIZE_MAX 4096 /* Max size for program */
|
||||
#define BPF_SIZE_MAX 0x7ffff /* Max size for program (20 bit signed displ) */
|
||||
|
||||
#define SEEN_SKB 1 /* skb access */
|
||||
#define SEEN_MEM 2 /* use mem[] for temporary storage */
|
||||
|
@ -53,6 +53,7 @@ struct bpf_jit {
|
|||
#define SEEN_LITERAL 8 /* code uses literals */
|
||||
#define SEEN_FUNC 16 /* calls C functions */
|
||||
#define SEEN_TAIL_CALL 32 /* code uses tail calls */
|
||||
#define SEEN_SKB_CHANGE 64 /* code changes skb data */
|
||||
#define SEEN_STACK (SEEN_FUNC | SEEN_MEM | SEEN_SKB)
|
||||
|
||||
/*
|
||||
|
@ -203,19 +204,11 @@ static inline void reg_set_seen(struct bpf_jit *jit, u32 b1)
|
|||
_EMIT6(op1 | __disp, op2); \
|
||||
})
|
||||
|
||||
#define EMIT6_DISP(op1, op2, b1, b2, b3, disp) \
|
||||
({ \
|
||||
_EMIT6_DISP(op1 | reg(b1, b2) << 16 | \
|
||||
reg_high(b3) << 8, op2, disp); \
|
||||
REG_SET_SEEN(b1); \
|
||||
REG_SET_SEEN(b2); \
|
||||
REG_SET_SEEN(b3); \
|
||||
})
|
||||
|
||||
#define _EMIT6_DISP_LH(op1, op2, disp) \
|
||||
({ \
|
||||
unsigned int __disp_h = ((u32)disp) & 0xff000; \
|
||||
unsigned int __disp_l = ((u32)disp) & 0x00fff; \
|
||||
u32 _disp = (u32) disp; \
|
||||
unsigned int __disp_h = _disp & 0xff000; \
|
||||
unsigned int __disp_l = _disp & 0x00fff; \
|
||||
_EMIT6(op1 | __disp_l, op2 | __disp_h >> 4); \
|
||||
})
|
||||
|
||||
|
@ -389,13 +382,33 @@ static void save_restore_regs(struct bpf_jit *jit, int op)
|
|||
} while (re <= 15);
|
||||
}
|
||||
|
||||
/*
|
||||
* For SKB access %b1 contains the SKB pointer. For "bpf_jit.S"
|
||||
* we store the SKB header length on the stack and the SKB data
|
||||
* pointer in REG_SKB_DATA.
|
||||
*/
|
||||
static void emit_load_skb_data_hlen(struct bpf_jit *jit)
|
||||
{
|
||||
/* Header length: llgf %w1,<len>(%b1) */
|
||||
EMIT6_DISP_LH(0xe3000000, 0x0016, REG_W1, REG_0, BPF_REG_1,
|
||||
offsetof(struct sk_buff, len));
|
||||
/* s %w1,<data_len>(%b1) */
|
||||
EMIT4_DISP(0x5b000000, REG_W1, BPF_REG_1,
|
||||
offsetof(struct sk_buff, data_len));
|
||||
/* stg %w1,ST_OFF_HLEN(%r0,%r15) */
|
||||
EMIT6_DISP_LH(0xe3000000, 0x0024, REG_W1, REG_0, REG_15, STK_OFF_HLEN);
|
||||
/* lg %skb_data,data_off(%b1) */
|
||||
EMIT6_DISP_LH(0xe3000000, 0x0004, REG_SKB_DATA, REG_0,
|
||||
BPF_REG_1, offsetof(struct sk_buff, data));
|
||||
}
|
||||
|
||||
/*
|
||||
* Emit function prologue
|
||||
*
|
||||
* Save registers and create stack frame if necessary.
|
||||
* See stack frame layout desription in "bpf_jit.h"!
|
||||
*/
|
||||
static void bpf_jit_prologue(struct bpf_jit *jit)
|
||||
static void bpf_jit_prologue(struct bpf_jit *jit, bool is_classic)
|
||||
{
|
||||
if (jit->seen & SEEN_TAIL_CALL) {
|
||||
/* xc STK_OFF_TCCNT(4,%r15),STK_OFF_TCCNT(%r15) */
|
||||
|
@ -429,32 +442,21 @@ static void bpf_jit_prologue(struct bpf_jit *jit)
|
|||
EMIT6_DISP_LH(0xe3000000, 0x0024, REG_W1, REG_0,
|
||||
REG_15, 152);
|
||||
}
|
||||
/*
|
||||
* For SKB access %b1 contains the SKB pointer. For "bpf_jit.S"
|
||||
* we store the SKB header length on the stack and the SKB data
|
||||
* pointer in REG_SKB_DATA.
|
||||
*/
|
||||
if (jit->seen & SEEN_SKB) {
|
||||
/* Header length: llgf %w1,<len>(%b1) */
|
||||
EMIT6_DISP_LH(0xe3000000, 0x0016, REG_W1, REG_0, BPF_REG_1,
|
||||
offsetof(struct sk_buff, len));
|
||||
/* s %w1,<data_len>(%b1) */
|
||||
EMIT4_DISP(0x5b000000, REG_W1, BPF_REG_1,
|
||||
offsetof(struct sk_buff, data_len));
|
||||
/* stg %w1,ST_OFF_HLEN(%r0,%r15) */
|
||||
if (jit->seen & SEEN_SKB)
|
||||
emit_load_skb_data_hlen(jit);
|
||||
if (jit->seen & SEEN_SKB_CHANGE)
|
||||
/* stg %b1,ST_OFF_SKBP(%r0,%r15) */
|
||||
EMIT6_DISP_LH(0xe3000000, 0x0024, REG_W1, REG_0, REG_15,
|
||||
STK_OFF_HLEN);
|
||||
/* lg %skb_data,data_off(%b1) */
|
||||
EMIT6_DISP_LH(0xe3000000, 0x0004, REG_SKB_DATA, REG_0,
|
||||
BPF_REG_1, offsetof(struct sk_buff, data));
|
||||
STK_OFF_SKBP);
|
||||
/* Clear A (%b0) and X (%b7) registers for converted BPF programs */
|
||||
if (is_classic) {
|
||||
if (REG_SEEN(BPF_REG_A))
|
||||
/* lghi %ba,0 */
|
||||
EMIT4_IMM(0xa7090000, BPF_REG_A, 0);
|
||||
if (REG_SEEN(BPF_REG_X))
|
||||
/* lghi %bx,0 */
|
||||
EMIT4_IMM(0xa7090000, BPF_REG_X, 0);
|
||||
}
|
||||
/* BPF compatibility: clear A (%b0) and X (%b7) registers */
|
||||
if (REG_SEEN(BPF_REG_A))
|
||||
/* lghi %ba,0 */
|
||||
EMIT4_IMM(0xa7090000, BPF_REG_A, 0);
|
||||
if (REG_SEEN(BPF_REG_X))
|
||||
/* lghi %bx,0 */
|
||||
EMIT4_IMM(0xa7090000, BPF_REG_X, 0);
|
||||
}
|
||||
|
||||
/*
|
||||
|
@ -976,12 +978,19 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp, int i
|
|||
REG_SET_SEEN(BPF_REG_5);
|
||||
jit->seen |= SEEN_FUNC;
|
||||
/* lg %w1,<d(imm)>(%l) */
|
||||
EMIT6_DISP(0xe3000000, 0x0004, REG_W1, REG_0, REG_L,
|
||||
EMIT_CONST_U64(func));
|
||||
EMIT6_DISP_LH(0xe3000000, 0x0004, REG_W1, REG_0, REG_L,
|
||||
EMIT_CONST_U64(func));
|
||||
/* basr %r14,%w1 */
|
||||
EMIT2(0x0d00, REG_14, REG_W1);
|
||||
/* lgr %b0,%r2: load return value into %b0 */
|
||||
EMIT4(0xb9040000, BPF_REG_0, REG_2);
|
||||
if (bpf_helper_changes_skb_data((void *)func)) {
|
||||
jit->seen |= SEEN_SKB_CHANGE;
|
||||
/* lg %b1,ST_OFF_SKBP(%r15) */
|
||||
EMIT6_DISP_LH(0xe3000000, 0x0004, BPF_REG_1, REG_0,
|
||||
REG_15, STK_OFF_SKBP);
|
||||
emit_load_skb_data_hlen(jit);
|
||||
}
|
||||
break;
|
||||
}
|
||||
case BPF_JMP | BPF_CALL | BPF_X:
|
||||
|
@ -1023,7 +1032,7 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp, int i
|
|||
MAX_TAIL_CALL_CNT, 0, 0x2);
|
||||
|
||||
/*
|
||||
* prog = array->prog[index];
|
||||
* prog = array->ptrs[index];
|
||||
* if (prog == NULL)
|
||||
* goto out;
|
||||
*/
|
||||
|
@ -1032,7 +1041,7 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp, int i
|
|||
EMIT6_DISP_LH(0xeb000000, 0x000d, REG_1, BPF_REG_3, REG_0, 3);
|
||||
/* lg %r1,prog(%b2,%r1) */
|
||||
EMIT6_DISP_LH(0xe3000000, 0x0004, REG_1, BPF_REG_2,
|
||||
REG_1, offsetof(struct bpf_array, prog));
|
||||
REG_1, offsetof(struct bpf_array, ptrs));
|
||||
/* clgij %r1,0,0x8,label0 */
|
||||
EMIT6_PCREL_IMM_LABEL(0xec000000, 0x007d, REG_1, 0, 0, 0x8);
|
||||
|
||||
|
@ -1236,7 +1245,7 @@ static int bpf_jit_prog(struct bpf_jit *jit, struct bpf_prog *fp)
|
|||
jit->lit = jit->lit_start;
|
||||
jit->prg = 0;
|
||||
|
||||
bpf_jit_prologue(jit);
|
||||
bpf_jit_prologue(jit, bpf_prog_was_classic(fp));
|
||||
for (i = 0; i < fp->len; i += insn_count) {
|
||||
insn_count = bpf_jit_insn(jit, fp, i);
|
||||
if (insn_count < 0)
|
||||
|
|
|
@ -807,7 +807,7 @@ cond_branch: f_offset = addrs[i + filter[i].jf];
|
|||
}
|
||||
|
||||
if (bpf_jit_enable > 1)
|
||||
bpf_jit_dump(flen, proglen, pass, image);
|
||||
bpf_jit_dump(flen, proglen, pass + 1, image);
|
||||
|
||||
if (image) {
|
||||
bpf_flush_icache(image, image + proglen);
|
||||
|
|
|
@ -246,7 +246,7 @@ static void emit_prologue(u8 **pprog)
|
|||
* goto out;
|
||||
* if (++tail_call_cnt > MAX_TAIL_CALL_CNT)
|
||||
* goto out;
|
||||
* prog = array->prog[index];
|
||||
* prog = array->ptrs[index];
|
||||
* if (prog == NULL)
|
||||
* goto out;
|
||||
* goto *(prog->bpf_func + prologue_size);
|
||||
|
@ -284,9 +284,9 @@ static void emit_bpf_tail_call(u8 **pprog)
|
|||
EMIT3(0x83, 0xC0, 0x01); /* add eax, 1 */
|
||||
EMIT2_off32(0x89, 0x85, -STACKSIZE + 36); /* mov dword ptr [rbp - 516], eax */
|
||||
|
||||
/* prog = array->prog[index]; */
|
||||
/* prog = array->ptrs[index]; */
|
||||
EMIT4_off32(0x48, 0x8D, 0x84, 0xD6, /* lea rax, [rsi + rdx * 8 + offsetof(...)] */
|
||||
offsetof(struct bpf_array, prog));
|
||||
offsetof(struct bpf_array, ptrs));
|
||||
EMIT3(0x48, 0x8B, 0x00); /* mov rax, qword ptr [rax] */
|
||||
|
||||
/* if (prog == NULL)
|
||||
|
@ -315,6 +315,26 @@ static void emit_bpf_tail_call(u8 **pprog)
|
|||
*pprog = prog;
|
||||
}
|
||||
|
||||
|
||||
static void emit_load_skb_data_hlen(u8 **pprog)
|
||||
{
|
||||
u8 *prog = *pprog;
|
||||
int cnt = 0;
|
||||
|
||||
/* r9d = skb->len - skb->data_len (headlen)
|
||||
* r10 = skb->data
|
||||
*/
|
||||
/* mov %r9d, off32(%rdi) */
|
||||
EMIT3_off32(0x44, 0x8b, 0x8f, offsetof(struct sk_buff, len));
|
||||
|
||||
/* sub %r9d, off32(%rdi) */
|
||||
EMIT3_off32(0x44, 0x2b, 0x8f, offsetof(struct sk_buff, data_len));
|
||||
|
||||
/* mov %r10, off32(%rdi) */
|
||||
EMIT3_off32(0x4c, 0x8b, 0x97, offsetof(struct sk_buff, data));
|
||||
*pprog = prog;
|
||||
}
|
||||
|
||||
static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
|
||||
int oldproglen, struct jit_context *ctx)
|
||||
{
|
||||
|
@ -329,36 +349,8 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
|
|||
|
||||
emit_prologue(&prog);
|
||||
|
||||
if (seen_ld_abs) {
|
||||
/* r9d : skb->len - skb->data_len (headlen)
|
||||
* r10 : skb->data
|
||||
*/
|
||||
if (is_imm8(offsetof(struct sk_buff, len)))
|
||||
/* mov %r9d, off8(%rdi) */
|
||||
EMIT4(0x44, 0x8b, 0x4f,
|
||||
offsetof(struct sk_buff, len));
|
||||
else
|
||||
/* mov %r9d, off32(%rdi) */
|
||||
EMIT3_off32(0x44, 0x8b, 0x8f,
|
||||
offsetof(struct sk_buff, len));
|
||||
|
||||
if (is_imm8(offsetof(struct sk_buff, data_len)))
|
||||
/* sub %r9d, off8(%rdi) */
|
||||
EMIT4(0x44, 0x2b, 0x4f,
|
||||
offsetof(struct sk_buff, data_len));
|
||||
else
|
||||
EMIT3_off32(0x44, 0x2b, 0x8f,
|
||||
offsetof(struct sk_buff, data_len));
|
||||
|
||||
if (is_imm8(offsetof(struct sk_buff, data)))
|
||||
/* mov %r10, off8(%rdi) */
|
||||
EMIT4(0x4c, 0x8b, 0x57,
|
||||
offsetof(struct sk_buff, data));
|
||||
else
|
||||
/* mov %r10, off32(%rdi) */
|
||||
EMIT3_off32(0x4c, 0x8b, 0x97,
|
||||
offsetof(struct sk_buff, data));
|
||||
}
|
||||
if (seen_ld_abs)
|
||||
emit_load_skb_data_hlen(&prog);
|
||||
|
||||
for (i = 0; i < insn_cnt; i++, insn++) {
|
||||
const s32 imm32 = insn->imm;
|
||||
|
@ -367,6 +359,7 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
|
|||
u8 b1 = 0, b2 = 0, b3 = 0;
|
||||
s64 jmp_offset;
|
||||
u8 jmp_cond;
|
||||
bool reload_skb_data;
|
||||
int ilen;
|
||||
u8 *func;
|
||||
|
||||
|
@ -818,12 +811,18 @@ xadd: if (is_imm8(insn->off))
|
|||
func = (u8 *) __bpf_call_base + imm32;
|
||||
jmp_offset = func - (image + addrs[i]);
|
||||
if (seen_ld_abs) {
|
||||
EMIT2(0x41, 0x52); /* push %r10 */
|
||||
EMIT2(0x41, 0x51); /* push %r9 */
|
||||
/* need to adjust jmp offset, since
|
||||
* pop %r9, pop %r10 take 4 bytes after call insn
|
||||
*/
|
||||
jmp_offset += 4;
|
||||
reload_skb_data = bpf_helper_changes_skb_data(func);
|
||||
if (reload_skb_data) {
|
||||
EMIT1(0x57); /* push %rdi */
|
||||
jmp_offset += 22; /* pop, mov, sub, mov */
|
||||
} else {
|
||||
EMIT2(0x41, 0x52); /* push %r10 */
|
||||
EMIT2(0x41, 0x51); /* push %r9 */
|
||||
/* need to adjust jmp offset, since
|
||||
* pop %r9, pop %r10 take 4 bytes after call insn
|
||||
*/
|
||||
jmp_offset += 4;
|
||||
}
|
||||
}
|
||||
if (!imm32 || !is_simm32(jmp_offset)) {
|
||||
pr_err("unsupported bpf func %d addr %p image %p\n",
|
||||
|
@ -832,8 +831,13 @@ xadd: if (is_imm8(insn->off))
|
|||
}
|
||||
EMIT1_off32(0xE8, jmp_offset);
|
||||
if (seen_ld_abs) {
|
||||
EMIT2(0x41, 0x59); /* pop %r9 */
|
||||
EMIT2(0x41, 0x5A); /* pop %r10 */
|
||||
if (reload_skb_data) {
|
||||
EMIT1(0x5F); /* pop %rdi */
|
||||
emit_load_skb_data_hlen(&prog);
|
||||
} else {
|
||||
EMIT2(0x41, 0x59); /* pop %r9 */
|
||||
EMIT2(0x41, 0x5A); /* pop %r10 */
|
||||
}
|
||||
}
|
||||
break;
|
||||
|
||||
|
@ -1099,7 +1103,7 @@ void bpf_int_jit_compile(struct bpf_prog *prog)
|
|||
}
|
||||
|
||||
if (bpf_jit_enable > 1)
|
||||
bpf_jit_dump(prog->len, proglen, 0, image);
|
||||
bpf_jit_dump(prog->len, proglen, pass + 1, image);
|
||||
|
||||
if (image) {
|
||||
bpf_flush_icache(header, image + proglen);
|
||||
|
|
|
@ -16,6 +16,8 @@
|
|||
#include <linux/of.h>
|
||||
#include <linux/of_address.h>
|
||||
#include <linux/property.h>
|
||||
#include <linux/etherdevice.h>
|
||||
#include <linux/phy.h>
|
||||
|
||||
/**
|
||||
* device_add_property_set - Add a collection of properties to a device object.
|
||||
|
@ -154,6 +156,7 @@ EXPORT_SYMBOL_GPL(fwnode_property_present);
|
|||
* %-ENODATA if the property does not have a value,
|
||||
* %-EPROTO if the property is not an array of numbers,
|
||||
* %-EOVERFLOW if the size of the property is not as expected.
|
||||
* %-ENXIO if no suitable firmware interface is present.
|
||||
*/
|
||||
int device_property_read_u8_array(struct device *dev, const char *propname,
|
||||
u8 *val, size_t nval)
|
||||
|
@ -178,6 +181,7 @@ EXPORT_SYMBOL_GPL(device_property_read_u8_array);
|
|||
* %-ENODATA if the property does not have a value,
|
||||
* %-EPROTO if the property is not an array of numbers,
|
||||
* %-EOVERFLOW if the size of the property is not as expected.
|
||||
* %-ENXIO if no suitable firmware interface is present.
|
||||
*/
|
||||
int device_property_read_u16_array(struct device *dev, const char *propname,
|
||||
u16 *val, size_t nval)
|
||||
|
@ -202,6 +206,7 @@ EXPORT_SYMBOL_GPL(device_property_read_u16_array);
|
|||
* %-ENODATA if the property does not have a value,
|
||||
* %-EPROTO if the property is not an array of numbers,
|
||||
* %-EOVERFLOW if the size of the property is not as expected.
|
||||
* %-ENXIO if no suitable firmware interface is present.
|
||||
*/
|
||||
int device_property_read_u32_array(struct device *dev, const char *propname,
|
||||
u32 *val, size_t nval)
|
||||
|
@ -226,6 +231,7 @@ EXPORT_SYMBOL_GPL(device_property_read_u32_array);
|
|||
* %-ENODATA if the property does not have a value,
|
||||
* %-EPROTO if the property is not an array of numbers,
|
||||
* %-EOVERFLOW if the size of the property is not as expected.
|
||||
* %-ENXIO if no suitable firmware interface is present.
|
||||
*/
|
||||
int device_property_read_u64_array(struct device *dev, const char *propname,
|
||||
u64 *val, size_t nval)
|
||||
|
@ -250,6 +256,7 @@ EXPORT_SYMBOL_GPL(device_property_read_u64_array);
|
|||
* %-ENODATA if the property does not have a value,
|
||||
* %-EPROTO or %-EILSEQ if the property is not an array of strings,
|
||||
* %-EOVERFLOW if the size of the property is not as expected.
|
||||
* %-ENXIO if no suitable firmware interface is present.
|
||||
*/
|
||||
int device_property_read_string_array(struct device *dev, const char *propname,
|
||||
const char **val, size_t nval)
|
||||
|
@ -271,6 +278,7 @@ EXPORT_SYMBOL_GPL(device_property_read_string_array);
|
|||
* %-EINVAL if given arguments are not valid,
|
||||
* %-ENODATA if the property does not have a value,
|
||||
* %-EPROTO or %-EILSEQ if the property type is not a string.
|
||||
* %-ENXIO if no suitable firmware interface is present.
|
||||
*/
|
||||
int device_property_read_string(struct device *dev, const char *propname,
|
||||
const char **val)
|
||||
|
@ -292,9 +300,11 @@ EXPORT_SYMBOL_GPL(device_property_read_string);
|
|||
else if (is_acpi_node(_fwnode_)) \
|
||||
_ret_ = acpi_dev_prop_read(to_acpi_node(_fwnode_), _propname_, \
|
||||
_proptype_, _val_, _nval_); \
|
||||
else \
|
||||
else if (is_pset(_fwnode_)) \
|
||||
_ret_ = pset_prop_read_array(to_pset(_fwnode_), _propname_, \
|
||||
_proptype_, _val_, _nval_); \
|
||||
else \
|
||||
_ret_ = -ENXIO; \
|
||||
_ret_; \
|
||||
})
|
||||
|
||||
|
@ -432,9 +442,10 @@ int fwnode_property_read_string_array(struct fwnode_handle *fwnode,
|
|||
else if (is_acpi_node(fwnode))
|
||||
return acpi_dev_prop_read(to_acpi_node(fwnode), propname,
|
||||
DEV_PROP_STRING, val, nval);
|
||||
|
||||
return pset_prop_read_array(to_pset(fwnode), propname,
|
||||
DEV_PROP_STRING, val, nval);
|
||||
else if (is_pset(fwnode))
|
||||
return pset_prop_read_array(to_pset(fwnode), propname,
|
||||
DEV_PROP_STRING, val, nval);
|
||||
return -ENXIO;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(fwnode_property_read_string_array);
|
||||
|
||||
|
@ -535,3 +546,79 @@ bool device_dma_is_coherent(struct device *dev)
|
|||
return coherent;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(device_dma_is_coherent);
|
||||
|
||||
/**
|
||||
* device_get_phy_mode - Get phy mode for given device
|
||||
* @dev: Pointer to the given device
|
||||
*
|
||||
* The function gets phy interface string from property 'phy-mode' or
|
||||
* 'phy-connection-type', and returns its index in the phy_modes table, or an
* errno on error.
|
||||
*/
|
||||
int device_get_phy_mode(struct device *dev)
|
||||
{
|
||||
const char *pm;
|
||||
int err, i;
|
||||
|
||||
err = device_property_read_string(dev, "phy-mode", &pm);
|
||||
if (err < 0)
|
||||
err = device_property_read_string(dev,
|
||||
"phy-connection-type", &pm);
|
||||
if (err < 0)
|
||||
return err;
|
||||
|
||||
for (i = 0; i < PHY_INTERFACE_MODE_MAX; i++)
|
||||
if (!strcasecmp(pm, phy_modes(i)))
|
||||
return i;
|
||||
|
||||
return -ENODEV;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(device_get_phy_mode);
|
||||
|
||||
static void *device_get_mac_addr(struct device *dev,
|
||||
const char *name, char *addr,
|
||||
int alen)
|
||||
{
|
||||
int ret = device_property_read_u8_array(dev, name, addr, alen);
|
||||
|
||||
if (ret == 0 && alen == ETH_ALEN && is_valid_ether_addr(addr))
|
||||
return addr;
|
||||
return NULL;
|
||||
}
|
||||
|
||||
/**
|
||||
* device_get_mac_address - Get the MAC for a given device
|
||||
* @dev: Pointer to the device
|
||||
* @addr: Address of buffer to store the MAC in
|
||||
* @alen: Length of the buffer pointed to by addr, should be ETH_ALEN
|
||||
*
|
||||
* Search the firmware node for the best MAC address to use. 'mac-address' is
|
||||
* checked first, because that is supposed to contain the "most recent" MAC
|
||||
* address. If that isn't set, then 'local-mac-address' is checked next,
|
||||
* because that is the default address. If that isn't set, then the obsolete
|
||||
* 'address' is checked, just in case we're using an old device tree.
|
||||
*
|
||||
* Note that the 'address' property is supposed to contain a virtual address of
|
||||
* the register set, but some DTS files have redefined that property to be the
|
||||
* MAC address.
|
||||
*
|
||||
* All-zero MAC addresses are rejected, because those could be properties that
|
||||
* exist in the firmware tables, but were not updated by the firmware. For
|
||||
* example, the DTS could define 'mac-address' and 'local-mac-address', with
|
||||
* zero MAC addresses. Some older U-Boots only initialized 'local-mac-address'.
|
||||
* In this case, the real MAC is in 'local-mac-address', and 'mac-address'
|
||||
* exists but is all zeros.
|
||||
*/
|
||||
void *device_get_mac_address(struct device *dev, char *addr, int alen)
|
||||
{
|
||||
addr = device_get_mac_addr(dev, "mac-address", addr, alen);
|
||||
if (addr)
|
||||
return addr;
|
||||
|
||||
addr = device_get_mac_addr(dev, "local-mac-address", addr, alen);
|
||||
if (addr)
|
||||
return addr;
|
||||
|
||||
return device_get_mac_addr(dev, "address", addr, alen);
|
||||
}
|
||||
EXPORT_SYMBOL(device_get_mac_address);
|
||||
|
|
|
@ -92,7 +92,7 @@ config BCMA_DRIVER_GMAC_CMN
|
|||
config BCMA_DRIVER_GPIO
|
||||
bool "BCMA GPIO driver"
|
||||
depends on BCMA && GPIOLIB
|
||||
select IRQ_DOMAIN if BCMA_HOST_SOC
|
||||
select GPIOLIB_IRQCHIP if BCMA_HOST_SOC
|
||||
help
|
||||
Driver to provide access to the GPIO pins of the bcma bus.
|
||||
|
||||
|
|
|
@ -34,6 +34,7 @@ int __init bcma_bus_early_register(struct bcma_bus *bus);
|
|||
int bcma_bus_suspend(struct bcma_bus *bus);
|
||||
int bcma_bus_resume(struct bcma_bus *bus);
|
||||
#endif
|
||||
struct device *bcma_bus_get_host_dev(struct bcma_bus *bus);
|
||||
|
||||
/* scan.c */
|
||||
void bcma_detect_chip(struct bcma_bus *bus);
|
||||
|
|
|
@ -8,10 +8,8 @@
|
|||
* Licensed under the GNU/GPL. See COPYING for details.
|
||||
*/
|
||||
|
||||
#include <linux/gpio.h>
|
||||
#include <linux/irq.h>
|
||||
#include <linux/gpio/driver.h>
|
||||
#include <linux/interrupt.h>
|
||||
#include <linux/irqdomain.h>
|
||||
#include <linux/export.h>
|
||||
#include <linux/bcma/bcma.h>
|
||||
|
||||
|
@ -79,19 +77,11 @@ static void bcma_gpio_free(struct gpio_chip *chip, unsigned gpio)
|
|||
}
|
||||
|
||||
#if IS_BUILTIN(CONFIG_BCM47XX) || IS_BUILTIN(CONFIG_ARCH_BCM_5301X)
|
||||
static int bcma_gpio_to_irq(struct gpio_chip *chip, unsigned gpio)
|
||||
{
|
||||
struct bcma_drv_cc *cc = bcma_gpio_get_cc(chip);
|
||||
|
||||
if (cc->core->bus->hosttype == BCMA_HOSTTYPE_SOC)
|
||||
return irq_find_mapping(cc->irq_domain, gpio);
|
||||
else
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
static void bcma_gpio_irq_unmask(struct irq_data *d)
|
||||
{
|
||||
struct bcma_drv_cc *cc = irq_data_get_irq_chip_data(d);
|
||||
struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
|
||||
struct bcma_drv_cc *cc = bcma_gpio_get_cc(gc);
|
||||
int gpio = irqd_to_hwirq(d);
|
||||
u32 val = bcma_chipco_gpio_in(cc, BIT(gpio));
|
||||
|
||||
|
@ -101,7 +91,8 @@ static void bcma_gpio_irq_unmask(struct irq_data *d)
|
|||
|
||||
static void bcma_gpio_irq_mask(struct irq_data *d)
|
||||
{
|
||||
struct bcma_drv_cc *cc = irq_data_get_irq_chip_data(d);
|
||||
struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
|
||||
struct bcma_drv_cc *cc = bcma_gpio_get_cc(gc);
|
||||
int gpio = irqd_to_hwirq(d);
|
||||
|
||||
bcma_chipco_gpio_intmask(cc, BIT(gpio), 0);
|
||||
|
@ -116,6 +107,7 @@ static struct irq_chip bcma_gpio_irq_chip = {
|
|||
static irqreturn_t bcma_gpio_irq_handler(int irq, void *dev_id)
|
||||
{
|
||||
struct bcma_drv_cc *cc = dev_id;
|
||||
struct gpio_chip *gc = &cc->gpio;
|
||||
u32 val = bcma_cc_read32(cc, BCMA_CC_GPIOIN);
|
||||
u32 mask = bcma_cc_read32(cc, BCMA_CC_GPIOIRQ);
|
||||
u32 pol = bcma_cc_read32(cc, BCMA_CC_GPIOPOL);
|
||||
|
@ -125,81 +117,58 @@ static irqreturn_t bcma_gpio_irq_handler(int irq, void *dev_id)
|
|||
if (!irqs)
|
||||
return IRQ_NONE;
|
||||
|
||||
for_each_set_bit(gpio, &irqs, cc->gpio.ngpio)
|
||||
generic_handle_irq(bcma_gpio_to_irq(&cc->gpio, gpio));
|
||||
for_each_set_bit(gpio, &irqs, gc->ngpio)
|
||||
generic_handle_irq(irq_find_mapping(gc->irqdomain, gpio));
|
||||
bcma_chipco_gpio_polarity(cc, irqs, val & irqs);
|
||||
|
||||
return IRQ_HANDLED;
|
||||
}
|
||||
|
||||
static int bcma_gpio_irq_domain_init(struct bcma_drv_cc *cc)
|
||||
static int bcma_gpio_irq_init(struct bcma_drv_cc *cc)
|
||||
{
|
||||
struct gpio_chip *chip = &cc->gpio;
|
||||
int gpio, hwirq, err;
|
||||
int hwirq, err;
|
||||
|
||||
if (cc->core->bus->hosttype != BCMA_HOSTTYPE_SOC)
|
||||
return 0;
|
||||
|
||||
cc->irq_domain = irq_domain_add_linear(NULL, chip->ngpio,
|
||||
&irq_domain_simple_ops, cc);
|
||||
if (!cc->irq_domain) {
|
||||
err = -ENODEV;
|
||||
goto err_irq_domain;
|
||||
}
|
||||
for (gpio = 0; gpio < chip->ngpio; gpio++) {
|
||||
int irq = irq_create_mapping(cc->irq_domain, gpio);
|
||||
|
||||
irq_set_chip_data(irq, cc);
|
||||
irq_set_chip_and_handler(irq, &bcma_gpio_irq_chip,
|
||||
handle_simple_irq);
|
||||
}
|
||||
|
||||
hwirq = bcma_core_irq(cc->core, 0);
|
||||
err = request_irq(hwirq, bcma_gpio_irq_handler, IRQF_SHARED, "gpio",
|
||||
cc);
|
||||
if (err)
|
||||
goto err_req_irq;
|
||||
return err;
|
||||
|
||||
bcma_chipco_gpio_intmask(cc, ~0, 0);
|
||||
bcma_cc_set32(cc, BCMA_CC_IRQMASK, BCMA_CC_IRQ_GPIO);
|
||||
|
||||
return 0;
|
||||
|
||||
err_req_irq:
|
||||
for (gpio = 0; gpio < chip->ngpio; gpio++) {
|
||||
int irq = irq_find_mapping(cc->irq_domain, gpio);
|
||||
|
||||
irq_dispose_mapping(irq);
|
||||
err = gpiochip_irqchip_add(chip,
|
||||
&bcma_gpio_irq_chip,
|
||||
0,
|
||||
handle_simple_irq,
|
||||
IRQ_TYPE_NONE);
|
||||
if (err) {
|
||||
free_irq(hwirq, cc);
|
||||
return err;
|
||||
}
|
||||
irq_domain_remove(cc->irq_domain);
|
||||
err_irq_domain:
|
||||
return err;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void bcma_gpio_irq_domain_exit(struct bcma_drv_cc *cc)
|
||||
static void bcma_gpio_irq_exit(struct bcma_drv_cc *cc)
|
||||
{
|
||||
struct gpio_chip *chip = &cc->gpio;
|
||||
int gpio;
|
||||
|
||||
if (cc->core->bus->hosttype != BCMA_HOSTTYPE_SOC)
|
||||
return;
|
||||
|
||||
bcma_cc_mask32(cc, BCMA_CC_IRQMASK, ~BCMA_CC_IRQ_GPIO);
|
||||
free_irq(bcma_core_irq(cc->core, 0), cc);
|
||||
for (gpio = 0; gpio < chip->ngpio; gpio++) {
|
||||
int irq = irq_find_mapping(cc->irq_domain, gpio);
|
||||
|
||||
irq_dispose_mapping(irq);
|
||||
}
|
||||
irq_domain_remove(cc->irq_domain);
|
||||
}
|
||||
#else
|
||||
static int bcma_gpio_irq_domain_init(struct bcma_drv_cc *cc)
|
||||
static int bcma_gpio_irq_init(struct bcma_drv_cc *cc)
|
||||
{
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void bcma_gpio_irq_domain_exit(struct bcma_drv_cc *cc)
|
||||
static void bcma_gpio_irq_exit(struct bcma_drv_cc *cc)
|
||||
{
|
||||
}
|
||||
#endif
|
||||
|
@ -218,9 +187,8 @@ int bcma_gpio_init(struct bcma_drv_cc *cc)
|
|||
chip->set = bcma_gpio_set_value;
|
||||
chip->direction_input = bcma_gpio_direction_input;
|
||||
chip->direction_output = bcma_gpio_direction_output;
|
||||
#if IS_BUILTIN(CONFIG_BCM47XX) || IS_BUILTIN(CONFIG_ARCH_BCM_5301X)
|
||||
chip->to_irq = bcma_gpio_to_irq;
|
||||
#endif
|
||||
chip->owner = THIS_MODULE;
|
||||
chip->dev = bcma_bus_get_host_dev(bus);
|
||||
#if IS_BUILTIN(CONFIG_OF)
|
||||
if (cc->core->bus->hosttype == BCMA_HOSTTYPE_SOC)
|
||||
chip->of_node = cc->core->dev.of_node;
|
||||
|
@ -248,13 +216,13 @@ int bcma_gpio_init(struct bcma_drv_cc *cc)
|
|||
else
|
||||
chip->base = -1;
|
||||
|
||||
err = bcma_gpio_irq_domain_init(cc);
|
||||
err = gpiochip_add(chip);
|
||||
if (err)
|
||||
return err;
|
||||
|
||||
err = gpiochip_add(chip);
|
||||
err = bcma_gpio_irq_init(cc);
|
||||
if (err) {
|
||||
bcma_gpio_irq_domain_exit(cc);
|
||||
gpiochip_remove(chip);
|
||||
return err;
|
||||
}
|
||||
|
||||
|
@ -263,7 +231,7 @@ int bcma_gpio_init(struct bcma_drv_cc *cc)
|
|||
|
||||
int bcma_gpio_unregister(struct bcma_drv_cc *cc)
|
||||
{
|
||||
bcma_gpio_irq_domain_exit(cc);
|
||||
bcma_gpio_irq_exit(cc);
|
||||
gpiochip_remove(&cc->gpio);
|
||||
return 0;
|
||||
}
|
||||
|
|
|
@ -7,11 +7,14 @@
|
|||
|
||||
#include "bcma_private.h"
|
||||
#include <linux/module.h>
|
||||
#include <linux/mmc/sdio_func.h>
|
||||
#include <linux/platform_device.h>
|
||||
#include <linux/pci.h>
|
||||
#include <linux/bcma/bcma.h>
|
||||
#include <linux/slab.h>
|
||||
#include <linux/of_address.h>
|
||||
#include <linux/of_irq.h>
|
||||
#include <linux/of_platform.h>
|
||||
|
||||
MODULE_DESCRIPTION("Broadcom's specific AMBA driver");
|
||||
MODULE_LICENSE("GPL");
|
||||
|
@ -268,6 +271,28 @@ void bcma_prepare_core(struct bcma_bus *bus, struct bcma_device *core)
|
|||
}
|
||||
}
|
||||
|
||||
struct device *bcma_bus_get_host_dev(struct bcma_bus *bus)
|
||||
{
|
||||
switch (bus->hosttype) {
|
||||
case BCMA_HOSTTYPE_PCI:
|
||||
if (bus->host_pci)
|
||||
return &bus->host_pci->dev;
|
||||
else
|
||||
return NULL;
|
||||
case BCMA_HOSTTYPE_SOC:
|
||||
if (bus->host_pdev)
|
||||
return &bus->host_pdev->dev;
|
||||
else
|
||||
return NULL;
|
||||
case BCMA_HOSTTYPE_SDIO:
|
||||
if (bus->host_sdio)
|
||||
return &bus->host_sdio->dev;
|
||||
else
|
||||
return NULL;
|
||||
}
|
||||
return NULL;
|
||||
}
|
||||
|
||||
void bcma_init_bus(struct bcma_bus *bus)
|
||||
{
|
||||
mutex_lock(&bcma_buses_mutex);
|
||||
|
@ -387,6 +412,7 @@ int bcma_bus_register(struct bcma_bus *bus)
|
|||
{
|
||||
int err;
|
||||
struct bcma_device *core;
|
||||
struct device *dev;
|
||||
|
||||
/* Scan for devices (cores) */
|
||||
err = bcma_bus_scan(bus);
|
||||
|
@ -409,6 +435,16 @@ int bcma_bus_register(struct bcma_bus *bus)
|
|||
bcma_core_pci_early_init(&bus->drv_pci[0]);
|
||||
}
|
||||
|
||||
dev = bcma_bus_get_host_dev(bus);
|
||||
/* TODO: remove check for IS_BUILTIN(CONFIG_BCMA) check when
|
||||
* of_default_bus_match_table is exported or in some other way
|
||||
* accessible. This is just a temporary workaround.
|
||||
*/
|
||||
if (IS_BUILTIN(CONFIG_BCMA) && dev) {
|
||||
of_platform_populate(dev->of_node, of_default_bus_match_table,
|
||||
NULL, dev);
|
||||
}
|
||||
|
||||
/* Cores providing flash access go before SPROM init */
|
||||
list_for_each_entry(core, &bus->cores, list) {
|
||||
if (bcma_is_core_needed_early(core->id.id))
|
||||
|
|
|
@ -13,6 +13,10 @@ config BT_RTL
|
|||
tristate
|
||||
select FW_LOADER
|
||||
|
||||
config BT_QCA
|
||||
tristate
|
||||
select FW_LOADER
|
||||
|
||||
config BT_HCIBTUSB
|
||||
tristate "HCI USB driver"
|
||||
depends on USB
|
||||
|
@ -132,6 +136,7 @@ config BT_HCIUART_3WIRE
|
|||
config BT_HCIUART_INTEL
|
||||
bool "Intel protocol support"
|
||||
depends on BT_HCIUART
|
||||
select BT_HCIUART_H4
|
||||
select BT_INTEL
|
||||
help
|
||||
The Intel protocol support enables Bluetooth HCI over serial
|
||||
|
@ -150,6 +155,19 @@ config BT_HCIUART_BCM
|
|||
|
||||
Say Y here to compile support for Broadcom protocol.
|
||||
|
||||
config BT_HCIUART_QCA
|
||||
bool "Qualcomm Atheros protocol support"
|
||||
depends on BT_HCIUART
|
||||
select BT_HCIUART_H4
|
||||
select BT_QCA
|
||||
help
|
||||
The Qualcomm Atheros protocol supports HCI In-Band Sleep feature
|
||||
over the serial port interface (H4) between controller and host.
|
||||
This protocol is required for UART clock control for QCA Bluetooth
|
||||
devices.
|
||||
|
||||
Say Y here to compile support for QCA protocol.
|
||||
|
||||
config BT_HCIBCM203X
|
||||
tristate "HCI BCM203x USB driver"
|
||||
depends on USB
|
||||
|
|
|
@ -22,6 +22,7 @@ obj-$(CONFIG_BT_MRVL_SDIO) += btmrvl_sdio.o
|
|||
obj-$(CONFIG_BT_WILINK) += btwilink.o
|
||||
obj-$(CONFIG_BT_BCM) += btbcm.o
|
||||
obj-$(CONFIG_BT_RTL) += btrtl.o
|
||||
obj-$(CONFIG_BT_QCA) += btqca.o
|
||||
|
||||
btmrvl-y := btmrvl_main.o
|
||||
btmrvl-$(CONFIG_DEBUG_FS) += btmrvl_debugfs.o
|
||||
|
@ -34,6 +35,7 @@ hci_uart-$(CONFIG_BT_HCIUART_ATH3K) += hci_ath.o
|
|||
hci_uart-$(CONFIG_BT_HCIUART_3WIRE) += hci_h5.o
|
||||
hci_uart-$(CONFIG_BT_HCIUART_INTEL) += hci_intel.o
|
||||
hci_uart-$(CONFIG_BT_HCIUART_BCM) += hci_bcm.o
|
||||
hci_uart-$(CONFIG_BT_HCIUART_QCA) += hci_qca.o
|
||||
hci_uart-objs := $(hci_uart-y)
|
||||
|
||||
ccflags-y += -D__CHECK_ENDIAN__
|
||||
|
|
|
@ -492,7 +492,7 @@ static int bfusb_send_frame(struct hci_dev *hdev, struct sk_buff *skb)
|
|||
case HCI_SCODATA_PKT:
|
||||
hdev->stat.sco_tx++;
|
||||
break;
|
||||
};
|
||||
}
|
||||
|
||||
/* Prepend skb with frame type */
|
||||
memcpy(skb_push(skb, 1), &bt_cb(skb)->pkt_type, 1);
|
||||
|
|
|
@ -427,7 +427,7 @@ static int bt3c_hci_send_frame(struct hci_dev *hdev, struct sk_buff *skb)
|
|||
case HCI_SCODATA_PKT:
|
||||
hdev->stat.sco_tx++;
|
||||
break;
|
||||
};
|
||||
}
|
||||
|
||||
/* Prepend skb with frame type */
|
||||
memcpy(skb_push(skb, 1), &bt_cb(skb)->pkt_type, 1);
|
||||
|
|
|
@ -34,6 +34,7 @@
|
|||
|
||||
#define BDADDR_BCM20702A0 (&(bdaddr_t) {{0x00, 0xa0, 0x02, 0x70, 0x20, 0x00}})
|
||||
#define BDADDR_BCM4324B3 (&(bdaddr_t) {{0x00, 0x00, 0x00, 0xb3, 0x24, 0x43}})
|
||||
#define BDADDR_BCM4330B1 (&(bdaddr_t) {{0x00, 0x00, 0x00, 0xb1, 0x30, 0x43}})
|
||||
|
||||
int btbcm_check_bdaddr(struct hci_dev *hdev)
|
||||
{
|
||||
|
@ -66,9 +67,13 @@ int btbcm_check_bdaddr(struct hci_dev *hdev)
|
|||
*
|
||||
* The address 43:24:B3:00:00:00 indicates a BCM4324B3 controller
|
||||
* with waiting for configuration state.
|
||||
*
|
||||
* The address 43:30:B1:00:00:00 indicates a BCM4330B1 controller
|
||||
* with waiting for configuration state.
|
||||
*/
|
||||
if (!bacmp(&bda->bdaddr, BDADDR_BCM20702A0) ||
|
||||
!bacmp(&bda->bdaddr, BDADDR_BCM4324B3)) {
|
||||
!bacmp(&bda->bdaddr, BDADDR_BCM4324B3) ||
|
||||
!bacmp(&bda->bdaddr, BDADDR_BCM4330B1)) {
|
||||
BT_INFO("%s: BCM: Using default device address (%pMR)",
|
||||
hdev->name, &bda->bdaddr);
|
||||
set_bit(HCI_QUIRK_INVALID_BDADDR, &hdev->quirks);
|
||||
|
@ -241,6 +246,7 @@ static const struct {
|
|||
u16 subver;
|
||||
const char *name;
|
||||
} bcm_uart_subver_table[] = {
|
||||
{ 0x4103, "BCM4330B1" }, /* 002.001.003 */
|
||||
{ 0x410e, "BCM43341B0" }, /* 002.001.014 */
|
||||
{ 0x4406, "BCM4324B3" }, /* 002.004.006 */
|
||||
{ 0x610c, "BCM4354" }, /* 003.001.012 */
|
||||
|
|
|
@ -89,7 +89,89 @@ int btintel_set_bdaddr(struct hci_dev *hdev, const bdaddr_t *bdaddr)
|
|||
}
|
||||
EXPORT_SYMBOL_GPL(btintel_set_bdaddr);
|
||||
|
||||
void btintel_hw_error(struct hci_dev *hdev, u8 code)
|
||||
{
|
||||
struct sk_buff *skb;
|
||||
u8 type = 0x00;
|
||||
|
||||
BT_ERR("%s: Hardware error 0x%2.2x", hdev->name, code);
|
||||
|
||||
skb = __hci_cmd_sync(hdev, HCI_OP_RESET, 0, NULL, HCI_INIT_TIMEOUT);
|
||||
if (IS_ERR(skb)) {
|
||||
BT_ERR("%s: Reset after hardware error failed (%ld)",
|
||||
hdev->name, PTR_ERR(skb));
|
||||
return;
|
||||
}
|
||||
kfree_skb(skb);
|
||||
|
||||
skb = __hci_cmd_sync(hdev, 0xfc22, 1, &type, HCI_INIT_TIMEOUT);
|
||||
if (IS_ERR(skb)) {
|
||||
BT_ERR("%s: Retrieving Intel exception info failed (%ld)",
|
||||
hdev->name, PTR_ERR(skb));
|
||||
return;
|
||||
}
|
||||
|
||||
if (skb->len != 13) {
|
||||
BT_ERR("%s: Exception info size mismatch", hdev->name);
|
||||
kfree_skb(skb);
|
||||
return;
|
||||
}
|
||||
|
||||
BT_ERR("%s: Exception info %s", hdev->name, (char *)(skb->data + 1));
|
||||
|
||||
kfree_skb(skb);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(btintel_hw_error);
|
||||
|
||||
void btintel_version_info(struct hci_dev *hdev, struct intel_version *ver)
|
||||
{
|
||||
const char *variant;
|
||||
|
||||
switch (ver->fw_variant) {
|
||||
case 0x06:
|
||||
variant = "Bootloader";
|
||||
break;
|
||||
case 0x23:
|
||||
variant = "Firmware";
|
||||
break;
|
||||
default:
|
||||
return;
|
||||
}
|
||||
|
||||
BT_INFO("%s: %s revision %u.%u build %u week %u %u", hdev->name,
|
||||
variant, ver->fw_revision >> 4, ver->fw_revision & 0x0f,
|
||||
ver->fw_build_num, ver->fw_build_ww, 2000 + ver->fw_build_yy);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(btintel_version_info);
|
||||
|
||||
int btintel_secure_send(struct hci_dev *hdev, u8 fragment_type, u32 plen,
|
||||
const void *param)
|
||||
{
|
||||
while (plen > 0) {
|
||||
struct sk_buff *skb;
|
||||
u8 cmd_param[253], fragment_len = (plen > 252) ? 252 : plen;
|
||||
|
||||
cmd_param[0] = fragment_type;
|
||||
memcpy(cmd_param + 1, param, fragment_len);
|
||||
|
||||
skb = __hci_cmd_sync(hdev, 0xfc09, fragment_len + 1,
|
||||
cmd_param, HCI_INIT_TIMEOUT);
|
||||
if (IS_ERR(skb))
|
||||
return PTR_ERR(skb);
|
||||
|
||||
kfree_skb(skb);
|
||||
|
||||
plen -= fragment_len;
|
||||
param += fragment_len;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(btintel_secure_send);
|
||||
|
||||
MODULE_AUTHOR("Marcel Holtmann <marcel@holtmann.org>");
|
||||
MODULE_DESCRIPTION("Bluetooth support for Intel devices ver " VERSION);
|
||||
MODULE_VERSION(VERSION);
|
||||
MODULE_LICENSE("GPL");
|
||||
MODULE_FIRMWARE("intel/ibt-11-5.sfi");
|
||||
MODULE_FIRMWARE("intel/ibt-11-5.ddc");
|
||||
|
|
|
@ -73,6 +73,11 @@ struct intel_secure_send_result {
|
|||
|
||||
int btintel_check_bdaddr(struct hci_dev *hdev);
|
||||
int btintel_set_bdaddr(struct hci_dev *hdev, const bdaddr_t *bdaddr);
|
||||
void btintel_hw_error(struct hci_dev *hdev, u8 code);
|
||||
|
||||
void btintel_version_info(struct hci_dev *hdev, struct intel_version *ver);
|
||||
int btintel_secure_send(struct hci_dev *hdev, u8 fragment_type, u32 plen,
|
||||
const void *param);
|
||||
|
||||
#else
|
||||
|
||||
|
@ -86,4 +91,18 @@ static inline int btintel_set_bdaddr(struct hci_dev *hdev, const bdaddr_t *bdadd
|
|||
return -EOPNOTSUPP;
|
||||
}
|
||||
|
||||
static inline void btintel_hw_error(struct hci_dev *hdev, u8 code)
|
||||
{
|
||||
}
|
||||
|
||||
static void btintel_version_info(struct hci_dev *hdev, struct intel_version *ver)
|
||||
{
|
||||
}
|
||||
|
||||
static inline int btintel_secure_send(struct hci_dev *hdev, u8 fragment_type,
|
||||
u32 plen, const void *param)
|
||||
{
|
||||
return -EOPNOTSUPP;
|
||||
}
|
||||
|
||||
#endif
|
||||
|
|
|
@ -95,10 +95,10 @@ struct btmrvl_private {
|
|||
struct btmrvl_device btmrvl_dev;
|
||||
struct btmrvl_adapter *adapter;
|
||||
struct btmrvl_thread main_thread;
|
||||
int (*hw_host_to_card) (struct btmrvl_private *priv,
|
||||
int (*hw_host_to_card)(struct btmrvl_private *priv,
|
||||
u8 *payload, u16 nb);
|
||||
int (*hw_wakeup_firmware) (struct btmrvl_private *priv);
|
||||
int (*hw_process_int_status) (struct btmrvl_private *priv);
|
||||
int (*hw_wakeup_firmware)(struct btmrvl_private *priv);
|
||||
int (*hw_process_int_status)(struct btmrvl_private *priv);
|
||||
void (*firmware_dump)(struct btmrvl_private *priv);
|
||||
spinlock_t driver_lock; /* spinlock used by driver */
|
||||
#ifdef CONFIG_DEBUG_FS
|
||||
|
|
|
@ -1071,8 +1071,6 @@ static int btmrvl_sdio_download_fw(struct btmrvl_sdio_card *card)
|
|||
}
|
||||
}
|
||||
|
||||
sdio_release_host(card->func);
|
||||
|
||||
/*
|
||||
* winner or not, with this test the FW synchronizes when the
|
||||
* module can continue its initialization
|
||||
|
@ -1082,6 +1080,8 @@ static int btmrvl_sdio_download_fw(struct btmrvl_sdio_card *card)
|
|||
return -ETIMEDOUT;
|
||||
}
|
||||
|
||||
sdio_release_host(card->func);
|
||||
|
||||
return 0;
|
||||
|
||||
done:
|
||||
|
@ -1376,8 +1376,7 @@ done:
|
|||
|
||||
/* fw_dump_data will be free in device coredump release function
|
||||
after 5 min*/
|
||||
dev_coredumpv(&priv->btmrvl_dev.hcidev->dev, fw_dump_data,
|
||||
fw_dump_len, GFP_KERNEL);
|
||||
dev_coredumpv(&card->func->dev, fw_dump_data, fw_dump_len, GFP_KERNEL);
|
||||
BT_INFO("== btmrvl firmware dump to /sys/class/devcoredump end");
|
||||
}
|
||||
|
||||
|
|
|
@ -0,0 +1,392 @@
|
|||
/*
|
||||
* Bluetooth support for Qualcomm Atheros chips
|
||||
*
|
||||
* Copyright (c) 2015 The Linux Foundation. All rights reserved.
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2
|
||||
* as published by the Free Software Foundation
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
* GNU General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License
|
||||
* along with this program; if not, write to the Free Software
|
||||
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
|
||||
*
|
||||
*/
|
||||
#include <linux/module.h>
|
||||
#include <linux/firmware.h>
|
||||
|
||||
#include <net/bluetooth/bluetooth.h>
|
||||
#include <net/bluetooth/hci_core.h>
|
||||
|
||||
#include "btqca.h"
|
||||
|
||||
#define VERSION "0.1"
|
||||
|
||||
static int rome_patch_ver_req(struct hci_dev *hdev, u32 *rome_version)
|
||||
{
|
||||
struct sk_buff *skb;
|
||||
struct edl_event_hdr *edl;
|
||||
struct rome_version *ver;
|
||||
char cmd;
|
||||
int err = 0;
|
||||
|
||||
BT_DBG("%s: ROME Patch Version Request", hdev->name);
|
||||
|
||||
cmd = EDL_PATCH_VER_REQ_CMD;
|
||||
skb = __hci_cmd_sync_ev(hdev, EDL_PATCH_CMD_OPCODE, EDL_PATCH_CMD_LEN,
|
||||
&cmd, HCI_VENDOR_PKT, HCI_INIT_TIMEOUT);
|
||||
if (IS_ERR(skb)) {
|
||||
err = PTR_ERR(skb);
|
||||
BT_ERR("%s: Failed to read version of ROME (%d)", hdev->name,
|
||||
err);
|
||||
return err;
|
||||
}
|
||||
|
||||
if (skb->len != sizeof(*edl) + sizeof(*ver)) {
|
||||
BT_ERR("%s: Version size mismatch len %d", hdev->name,
|
||||
skb->len);
|
||||
err = -EILSEQ;
|
||||
goto out;
|
||||
}
|
||||
|
||||
edl = (struct edl_event_hdr *)(skb->data);
|
||||
if (!edl || !edl->data) {
|
||||
BT_ERR("%s: TLV with no header or no data", hdev->name);
|
||||
err = -EILSEQ;
|
||||
goto out;
|
||||
}
|
||||
|
||||
if (edl->cresp != EDL_CMD_REQ_RES_EVT ||
|
||||
edl->rtype != EDL_APP_VER_RES_EVT) {
|
||||
BT_ERR("%s: Wrong packet received %d %d", hdev->name,
|
||||
edl->cresp, edl->rtype);
|
||||
err = -EIO;
|
||||
goto out;
|
||||
}
|
||||
|
||||
ver = (struct rome_version *)(edl->data);
|
||||
|
||||
BT_DBG("%s: Product:0x%08x", hdev->name, le32_to_cpu(ver->product_id));
|
||||
BT_DBG("%s: Patch :0x%08x", hdev->name, le16_to_cpu(ver->patch_ver));
|
||||
BT_DBG("%s: ROM :0x%08x", hdev->name, le16_to_cpu(ver->rome_ver));
|
||||
BT_DBG("%s: SOC :0x%08x", hdev->name, le32_to_cpu(ver->soc_id));
|
||||
|
||||
/* ROME chipset version can be decided by patch and SoC
|
||||
* version, combination with upper 2 bytes from SoC
|
||||
* and lower 2 bytes from patch will be used.
|
||||
*/
|
||||
*rome_version = (le32_to_cpu(ver->soc_id) << 16) |
|
||||
(le16_to_cpu(ver->rome_ver) & 0x0000ffff);
|
||||
|
||||
out:
|
||||
kfree_skb(skb);
|
||||
|
||||
return err;
|
||||
}
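To make the version arithmetic above concrete, here is a stand-alone sketch (an editor's illustration, not part of this patch; the soc_id and rome_ver values are hypothetical) that composes the 32-bit ROME version the same way and prints the rampatch name that qca_uart_setup_rome() would later request:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t soc_id = 0x00000044;	/* hypothetical SoC id */
	uint16_t rome_ver = 0x0302;	/* hypothetical ROM version */
	char fwname[64];
	/* upper 2 bytes from the SoC id, lower 2 bytes from the ROM version */
	uint32_t ver = (soc_id << 16) | (rome_ver & 0xffff);

	snprintf(fwname, sizeof(fwname), "qca/rampatch_%08x.bin", ver);
	printf("version 0x%08x -> %s\n", ver, fwname);
	return 0;
}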
|
||||
|
||||
static int rome_reset(struct hci_dev *hdev)
|
||||
{
|
||||
struct sk_buff *skb;
|
||||
int err;
|
||||
|
||||
BT_DBG("%s: ROME HCI_RESET", hdev->name);
|
||||
|
||||
skb = __hci_cmd_sync(hdev, HCI_OP_RESET, 0, NULL, HCI_INIT_TIMEOUT);
|
||||
if (IS_ERR(skb)) {
|
||||
err = PTR_ERR(skb);
|
||||
BT_ERR("%s: Reset failed (%d)", hdev->name, err);
|
||||
return err;
|
||||
}
|
||||
|
||||
kfree_skb(skb);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void rome_tlv_check_data(struct rome_config *config,
|
||||
const struct firmware *fw)
|
||||
{
|
||||
const u8 *data;
|
||||
u32 type_len;
|
||||
u16 tag_id, tag_len;
|
||||
int idx, length;
|
||||
struct tlv_type_hdr *tlv;
|
||||
struct tlv_type_patch *tlv_patch;
|
||||
struct tlv_type_nvm *tlv_nvm;
|
||||
|
||||
tlv = (struct tlv_type_hdr *)fw->data;
|
||||
|
||||
type_len = le32_to_cpu(tlv->type_len);
|
||||
length = (type_len >> 8) & 0x00ffffff;
|
||||
|
||||
BT_DBG("TLV Type\t\t : 0x%x", type_len & 0x000000ff);
|
||||
BT_DBG("Length\t\t : %d bytes", length);
|
||||
|
||||
switch (config->type) {
|
||||
case TLV_TYPE_PATCH:
|
||||
tlv_patch = (struct tlv_type_patch *)tlv->data;
|
||||
BT_DBG("Total Length\t\t : %d bytes",
|
||||
le32_to_cpu(tlv_patch->total_size));
|
||||
BT_DBG("Patch Data Length\t : %d bytes",
|
||||
le32_to_cpu(tlv_patch->data_length));
|
||||
BT_DBG("Signing Format Version : 0x%x",
|
||||
tlv_patch->format_version);
|
||||
BT_DBG("Signature Algorithm\t : 0x%x",
|
||||
tlv_patch->signature);
|
||||
BT_DBG("Reserved\t\t : 0x%x",
|
||||
le16_to_cpu(tlv_patch->reserved1));
|
||||
BT_DBG("Product ID\t\t : 0x%04x",
|
||||
le16_to_cpu(tlv_patch->product_id));
|
||||
BT_DBG("Rom Build Version\t : 0x%04x",
|
||||
le16_to_cpu(tlv_patch->rom_build));
|
||||
BT_DBG("Patch Version\t\t : 0x%04x",
|
||||
le16_to_cpu(tlv_patch->patch_version));
|
||||
BT_DBG("Reserved\t\t : 0x%x",
|
||||
le16_to_cpu(tlv_patch->reserved2));
|
||||
BT_DBG("Patch Entry Address\t : 0x%x",
|
||||
le32_to_cpu(tlv_patch->entry));
|
||||
break;
|
||||
|
||||
case TLV_TYPE_NVM:
|
||||
idx = 0;
|
||||
data = tlv->data;
|
||||
while (idx < length) {
|
||||
tlv_nvm = (struct tlv_type_nvm *)(data + idx);
|
||||
|
||||
tag_id = le16_to_cpu(tlv_nvm->tag_id);
|
||||
tag_len = le16_to_cpu(tlv_nvm->tag_len);
|
||||
|
||||
/* Update NVM tags as needed */
|
||||
switch (tag_id) {
|
||||
case EDL_TAG_ID_HCI:
|
||||
/* HCI transport layer parameters
|
||||
* enabling software inband sleep
|
||||
* onto controller side.
|
||||
*/
|
||||
tlv_nvm->data[0] |= 0x80;
|
||||
|
||||
/* UART Baud Rate */
|
||||
tlv_nvm->data[2] = config->user_baud_rate;
|
||||
|
||||
break;
|
||||
|
||||
case EDL_TAG_ID_DEEP_SLEEP:
|
||||
/* Sleep enable mask
|
||||
* enabling deep sleep feature on controller.
|
||||
*/
|
||||
tlv_nvm->data[0] |= 0x01;
|
||||
|
||||
break;
|
||||
}
|
||||
|
||||
idx += (sizeof(u16) + sizeof(u16) + 8 + tag_len);
|
||||
}
|
||||
break;
|
||||
|
||||
default:
|
||||
BT_ERR("Unknown TLV type %d", config->type);
|
||||
break;
|
||||
}
|
||||
}
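As a side note on the NVM case above: each TLV entry is a 12-byte header (tag id, tag length, two reserved words) followed by tag_len bytes of payload, so the cursor advances by 12 + tag_len per tag. A minimal user-space sketch of that walk, with two hypothetical little-endian tags (the driver itself uses le16_to_cpu() rather than assuming host endianness):

#include <stdio.h>
#include <stdint.h>

struct nvm_tag {
	uint16_t tag_id;
	uint16_t tag_len;
	uint32_t reserve1;
	uint32_t reserve2;
	uint8_t data[];
} __attribute__((packed));

int main(void)
{
	/* hypothetical blob: tag 17 with 2 bytes, then tag 27 with 1 byte */
	uint8_t nvm[] = {
		17, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0x00, 0x04,
		27, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0x00,
	};
	size_t idx = 0;

	while (idx < sizeof(nvm)) {
		const struct nvm_tag *tag = (const void *)(nvm + idx);

		printf("tag %u: %u byte(s) of payload\n",
		       tag->tag_id, tag->tag_len);
		idx += sizeof(*tag) + tag->tag_len;	/* 12 + tag_len */
	}
	return 0;
}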
|
||||
|
||||
static int rome_tlv_send_segment(struct hci_dev *hdev, int idx, int seg_size,
|
||||
const u8 *data)
|
||||
{
|
||||
struct sk_buff *skb;
|
||||
struct edl_event_hdr *edl;
|
||||
struct tlv_seg_resp *tlv_resp;
|
||||
u8 cmd[MAX_SIZE_PER_TLV_SEGMENT + 2];
|
||||
int err = 0;
|
||||
|
||||
BT_DBG("%s: Download segment #%d size %d", hdev->name, idx, seg_size);
|
||||
|
||||
cmd[0] = EDL_PATCH_TLV_REQ_CMD;
|
||||
cmd[1] = seg_size;
|
||||
memcpy(cmd + 2, data, seg_size);
|
||||
|
||||
skb = __hci_cmd_sync_ev(hdev, EDL_PATCH_CMD_OPCODE, seg_size + 2, cmd,
|
||||
HCI_VENDOR_PKT, HCI_INIT_TIMEOUT);
|
||||
if (IS_ERR(skb)) {
|
||||
err = PTR_ERR(skb);
|
||||
BT_ERR("%s: Failed to send TLV segment (%d)", hdev->name, err);
|
||||
return err;
|
||||
}
|
||||
|
||||
if (skb->len != sizeof(*edl) + sizeof(*tlv_resp)) {
|
||||
BT_ERR("%s: TLV response size mismatch", hdev->name);
|
||||
err = -EILSEQ;
|
||||
goto out;
|
||||
}
|
||||
|
||||
edl = (struct edl_event_hdr *)(skb->data);
|
||||
if (!edl || !edl->data) {
|
||||
BT_ERR("%s: TLV with no header or no data", hdev->name);
|
||||
err = -EILSEQ;
|
||||
goto out;
|
||||
}
|
||||
|
||||
tlv_resp = (struct tlv_seg_resp *)(edl->data);
|
||||
|
||||
if (edl->cresp != EDL_CMD_REQ_RES_EVT ||
|
||||
edl->rtype != EDL_TVL_DNLD_RES_EVT || tlv_resp->result != 0x00) {
|
||||
BT_ERR("%s: TLV with error stat 0x%x rtype 0x%x (0x%x)",
|
||||
hdev->name, edl->cresp, edl->rtype, tlv_resp->result);
|
||||
err = -EIO;
|
||||
}
|
||||
|
||||
out:
|
||||
kfree_skb(skb);
|
||||
|
||||
return err;
|
||||
}
|
||||
|
||||
static int rome_tlv_download_request(struct hci_dev *hdev,
|
||||
const struct firmware *fw)
|
||||
{
|
||||
const u8 *buffer, *data;
|
||||
int total_segment, remain_size;
|
||||
int ret, i;
|
||||
|
||||
if (!fw || !fw->data)
|
||||
return -EINVAL;
|
||||
|
||||
total_segment = fw->size / MAX_SIZE_PER_TLV_SEGMENT;
|
||||
remain_size = fw->size % MAX_SIZE_PER_TLV_SEGMENT;
|
||||
|
||||
BT_DBG("%s: Total segment num %d remain size %d total size %zu",
|
||||
hdev->name, total_segment, remain_size, fw->size);
|
||||
|
||||
data = fw->data;
|
||||
for (i = 0; i < total_segment; i++) {
|
||||
buffer = data + i * MAX_SIZE_PER_TLV_SEGMENT;
|
||||
ret = rome_tlv_send_segment(hdev, i, MAX_SIZE_PER_TLV_SEGMENT,
|
||||
buffer);
|
||||
if (ret < 0)
|
||||
return -EIO;
|
||||
}
|
||||
|
||||
if (remain_size) {
|
||||
buffer = data + total_segment * MAX_SIZE_PER_TLV_SEGMENT;
|
||||
ret = rome_tlv_send_segment(hdev, total_segment, remain_size,
|
||||
buffer);
|
||||
if (ret < 0)
|
||||
return -EIO;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
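The loop above simply slices the firmware into fixed 243-byte TLV segments plus one short tail; a quick sketch of that arithmetic (illustrative only, with a made-up firmware size):

#include <stdio.h>

#define MAX_SIZE_PER_TLV_SEGMENT 243

int main(void)
{
	size_t fw_size = 10000;	/* hypothetical firmware size in bytes */
	size_t full = fw_size / MAX_SIZE_PER_TLV_SEGMENT;
	size_t tail = fw_size % MAX_SIZE_PER_TLV_SEGMENT;

	/* 41 full segments of 243 bytes and a final 37 byte segment */
	printf("%zu full segments, %zu byte tail\n", full, tail);
	return 0;
}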
|
||||
|
||||
static int rome_download_firmware(struct hci_dev *hdev,
|
||||
struct rome_config *config)
|
||||
{
|
||||
const struct firmware *fw;
|
||||
int ret;
|
||||
|
||||
BT_INFO("%s: ROME Downloading %s", hdev->name, config->fwname);
|
||||
|
||||
ret = request_firmware(&fw, config->fwname, &hdev->dev);
|
||||
if (ret) {
|
||||
BT_ERR("%s: Failed to request file: %s (%d)", hdev->name,
|
||||
config->fwname, ret);
|
||||
return ret;
|
||||
}
|
||||
|
||||
rome_tlv_check_data(config, fw);
|
||||
|
||||
ret = rome_tlv_download_request(hdev, fw);
|
||||
if (ret) {
|
||||
BT_ERR("%s: Failed to download file: %s (%d)", hdev->name,
|
||||
config->fwname, ret);
|
||||
}
|
||||
|
||||
release_firmware(fw);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
int qca_set_bdaddr_rome(struct hci_dev *hdev, const bdaddr_t *bdaddr)
|
||||
{
|
||||
struct sk_buff *skb;
|
||||
u8 cmd[9];
|
||||
int err;
|
||||
|
||||
cmd[0] = EDL_NVM_ACCESS_SET_REQ_CMD;
|
||||
cmd[1] = 0x02; /* TAG ID */
|
||||
cmd[2] = sizeof(bdaddr_t); /* size */
|
||||
memcpy(cmd + 3, bdaddr, sizeof(bdaddr_t));
|
||||
skb = __hci_cmd_sync_ev(hdev, EDL_NVM_ACCESS_OPCODE, sizeof(cmd), cmd,
|
||||
HCI_VENDOR_PKT, HCI_INIT_TIMEOUT);
|
||||
if (IS_ERR(skb)) {
|
||||
err = PTR_ERR(skb);
|
||||
BT_ERR("%s: Change address command failed (%d)",
|
||||
hdev->name, err);
|
||||
return err;
|
||||
}
|
||||
|
||||
kfree_skb(skb);
|
||||
|
||||
return 0;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(qca_set_bdaddr_rome);
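For reference, the 9-byte NVM write built above has a simple layout on the wire. A small host-side sketch that assembles the same payload (the 00:11:22:33:44:55 address is made up; the real command is sent with vendor opcode 0xfc0b):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
	/* hypothetical address 00:11:22:33:44:55, least significant byte first */
	uint8_t bdaddr[6] = { 0x55, 0x44, 0x33, 0x22, 0x11, 0x00 };
	uint8_t cmd[9];
	size_t i;

	cmd[0] = 0x01;			/* EDL_NVM_ACCESS_SET_REQ_CMD */
	cmd[1] = 0x02;			/* NVM tag that stores the BD address */
	cmd[2] = sizeof(bdaddr);	/* 6 byte payload */
	memcpy(cmd + 3, bdaddr, sizeof(bdaddr));

	for (i = 0; i < sizeof(cmd); i++)
		printf("%02x ", cmd[i]);
	printf("\n");	/* 01 02 06 55 44 33 22 11 00 */
	return 0;
}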
|
||||
|
||||
int qca_uart_setup_rome(struct hci_dev *hdev, uint8_t baudrate)
|
||||
{
|
||||
u32 rome_ver = 0;
|
||||
struct rome_config config;
|
||||
int err;
|
||||
|
||||
BT_DBG("%s: ROME setup on UART", hdev->name);
|
||||
|
||||
config.user_baud_rate = baudrate;
|
||||
|
||||
/* Get ROME version information */
|
||||
err = rome_patch_ver_req(hdev, &rome_ver);
|
||||
if (err < 0 || rome_ver == 0) {
|
||||
BT_ERR("%s: Failed to get version 0x%x", hdev->name, err);
|
||||
return err;
|
||||
}
|
||||
|
||||
BT_INFO("%s: ROME controller version 0x%08x", hdev->name, rome_ver);
|
||||
|
||||
/* Download rampatch file */
|
||||
config.type = TLV_TYPE_PATCH;
|
||||
snprintf(config.fwname, sizeof(config.fwname), "qca/rampatch_%08x.bin",
|
||||
rome_ver);
|
||||
err = rome_download_firmware(hdev, &config);
|
||||
if (err < 0) {
|
||||
BT_ERR("%s: Failed to download patch (%d)", hdev->name, err);
|
||||
return err;
|
||||
}
|
||||
|
||||
/* Download NVM configuration */
|
||||
config.type = TLV_TYPE_NVM;
|
||||
snprintf(config.fwname, sizeof(config.fwname), "qca/nvm_%08x.bin",
|
||||
rome_ver);
|
||||
err = rome_download_firmware(hdev, &config);
|
||||
if (err < 0) {
|
||||
BT_ERR("%s: Failed to download NVM (%d)", hdev->name, err);
|
||||
return err;
|
||||
}
|
||||
|
||||
/* Perform HCI reset */
|
||||
err = rome_reset(hdev);
|
||||
if (err < 0) {
|
||||
BT_ERR("%s: Failed to run HCI_RESET (%d)", hdev->name, err);
|
||||
return err;
|
||||
}
|
||||
|
||||
BT_INFO("%s: ROME setup on UART is completed", hdev->name);
|
||||
|
||||
return 0;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(qca_uart_setup_rome);
|
||||
|
||||
MODULE_AUTHOR("Ben Young Tae Kim <ytkim@qca.qualcomm.com>");
|
||||
MODULE_DESCRIPTION("Bluetooth support for Qualcomm Atheros family ver " VERSION);
|
||||
MODULE_VERSION(VERSION);
|
||||
MODULE_LICENSE("GPL");
|
|
@@ -0,0 +1,135 @@
|
|||
/*
|
||||
* Bluetooth supports for Qualcomm Atheros ROME chips
|
||||
*
|
||||
* Copyright (c) 2015 The Linux Foundation. All rights reserved.
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2
|
||||
* as published by the Free Software Foundation
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
* GNU General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License
|
||||
* along with this program; if not, write to the Free Software
|
||||
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
|
||||
*
|
||||
*/
|
||||
|
||||
#define EDL_PATCH_CMD_OPCODE (0xFC00)
|
||||
#define EDL_NVM_ACCESS_OPCODE (0xFC0B)
|
||||
#define EDL_PATCH_CMD_LEN (1)
|
||||
#define EDL_PATCH_VER_REQ_CMD (0x19)
|
||||
#define EDL_PATCH_TLV_REQ_CMD (0x1E)
|
||||
#define EDL_NVM_ACCESS_SET_REQ_CMD (0x01)
|
||||
#define MAX_SIZE_PER_TLV_SEGMENT (243)
|
||||
|
||||
#define EDL_CMD_REQ_RES_EVT (0x00)
|
||||
#define EDL_PATCH_VER_RES_EVT (0x19)
|
||||
#define EDL_APP_VER_RES_EVT (0x02)
|
||||
#define EDL_TVL_DNLD_RES_EVT (0x04)
|
||||
#define EDL_CMD_EXE_STATUS_EVT (0x00)
|
||||
#define EDL_SET_BAUDRATE_RSP_EVT (0x92)
|
||||
#define EDL_NVM_ACCESS_CODE_EVT (0x0B)
|
||||
|
||||
#define EDL_TAG_ID_HCI (17)
|
||||
#define EDL_TAG_ID_DEEP_SLEEP (27)
|
||||
|
||||
enum qca_bardrate {
|
||||
QCA_BAUDRATE_115200 = 0,
|
||||
QCA_BAUDRATE_57600,
|
||||
QCA_BAUDRATE_38400,
|
||||
QCA_BAUDRATE_19200,
|
||||
QCA_BAUDRATE_9600,
|
||||
QCA_BAUDRATE_230400,
|
||||
QCA_BAUDRATE_250000,
|
||||
QCA_BAUDRATE_460800,
|
||||
QCA_BAUDRATE_500000,
|
||||
QCA_BAUDRATE_720000,
|
||||
QCA_BAUDRATE_921600,
|
||||
QCA_BAUDRATE_1000000,
|
||||
QCA_BAUDRATE_1250000,
|
||||
QCA_BAUDRATE_2000000,
|
||||
QCA_BAUDRATE_3000000,
|
||||
QCA_BAUDRATE_4000000,
|
||||
QCA_BAUDRATE_1600000,
|
||||
QCA_BAUDRATE_3200000,
|
||||
QCA_BAUDRATE_3500000,
|
||||
QCA_BAUDRATE_AUTO = 0xFE,
|
||||
QCA_BAUDRATE_RESERVED
|
||||
};
|
||||
|
||||
enum rome_tlv_type {
|
||||
TLV_TYPE_PATCH = 1,
|
||||
TLV_TYPE_NVM
|
||||
};
|
||||
|
||||
struct rome_config {
|
||||
u8 type;
|
||||
char fwname[64];
|
||||
uint8_t user_baud_rate;
|
||||
};
|
||||
|
||||
struct edl_event_hdr {
|
||||
__u8 cresp;
|
||||
__u8 rtype;
|
||||
__u8 data[0];
|
||||
} __packed;
|
||||
|
||||
struct rome_version {
|
||||
__le32 product_id;
|
||||
__le16 patch_ver;
|
||||
__le16 rome_ver;
|
||||
__le32 soc_id;
|
||||
} __packed;
|
||||
|
||||
struct tlv_seg_resp {
|
||||
__u8 result;
|
||||
} __packed;
|
||||
|
||||
struct tlv_type_patch {
|
||||
__le32 total_size;
|
||||
__le32 data_length;
|
||||
__u8 format_version;
|
||||
__u8 signature;
|
||||
__le16 reserved1;
|
||||
__le16 product_id;
|
||||
__le16 rom_build;
|
||||
__le16 patch_version;
|
||||
__le16 reserved2;
|
||||
__le32 entry;
|
||||
} __packed;
|
||||
|
||||
struct tlv_type_nvm {
|
||||
__le16 tag_id;
|
||||
__le16 tag_len;
|
||||
__le32 reserve1;
|
||||
__le32 reserve2;
|
||||
__u8 data[0];
|
||||
} __packed;
|
||||
|
||||
struct tlv_type_hdr {
|
||||
__le32 type_len;
|
||||
__u8 data[0];
|
||||
} __packed;
|
||||
|
||||
#if IS_ENABLED(CONFIG_BT_QCA)
|
||||
|
||||
int qca_set_bdaddr_rome(struct hci_dev *hdev, const bdaddr_t *bdaddr);
|
||||
int qca_uart_setup_rome(struct hci_dev *hdev, uint8_t baudrate);
|
||||
|
||||
#else
|
||||
|
||||
static inline int qca_set_bdaddr_rome(struct hci_dev *hdev, const bdaddr_t *bdaddr)
|
||||
{
|
||||
return -EOPNOTSUPP;
|
||||
}
|
||||
|
||||
static inline int qca_uart_setup_rome(struct hci_dev *hdev, int speed)
|
||||
{
|
||||
return -EOPNOTSUPP;
|
||||
}
|
||||
|
||||
#endif
|
|
@@ -68,6 +68,9 @@ static const struct usb_device_id btusb_table[] = {
|
|||
/* Generic Bluetooth AMP device */
|
||||
{ USB_DEVICE_INFO(0xe0, 0x01, 0x04), .driver_info = BTUSB_AMP },
|
||||
|
||||
/* Generic Bluetooth USB interface */
|
||||
{ USB_INTERFACE_INFO(0xe0, 0x01, 0x01) },
|
||||
|
||||
/* Apple-specific (Broadcom) devices */
|
||||
{ USB_VENDOR_AND_INTERFACE_INFO(0x05ac, 0xff, 0x01, 0x01),
|
||||
.driver_info = BTUSB_BCM_APPLE },
|
||||
|
@@ -319,6 +322,9 @@ static const struct usb_device_id blacklist_table[] = {
|
|||
{ USB_DEVICE(0x13d3, 0x3461), .driver_info = BTUSB_REALTEK },
|
||||
{ USB_DEVICE(0x13d3, 0x3462), .driver_info = BTUSB_REALTEK },
|
||||
|
||||
/* Silicon Wave based devices */
|
||||
{ USB_DEVICE(0x0c10, 0x0000), .driver_info = BTUSB_SWAVE },
|
||||
|
||||
{ } /* Terminating entry */
|
||||
};
|
||||
|
||||
|
@@ -1575,7 +1581,7 @@ static int btusb_setup_intel(struct hci_dev *hdev)
|
|||
|
||||
/* fw_patch_num indicates the version of patch the device currently
|
||||
* have. If there is no patch data in the device, it is always 0x00.
|
||||
* So, if it is other than 0x00, no need to patch the deivce again.
|
||||
* So, if it is other than 0x00, no need to patch the device again.
|
||||
*/
|
||||
if (ver->fw_patch_num) {
|
||||
BT_INFO("%s: Intel device is already patched. patch num: %02x",
|
||||
|
@@ -1878,51 +1884,6 @@ static int btusb_send_frame_intel(struct hci_dev *hdev, struct sk_buff *skb)
|
|||
return -EILSEQ;
|
||||
}
|
||||
|
||||
static int btusb_intel_secure_send(struct hci_dev *hdev, u8 fragment_type,
|
||||
u32 plen, const void *param)
|
||||
{
|
||||
while (plen > 0) {
|
||||
struct sk_buff *skb;
|
||||
u8 cmd_param[253], fragment_len = (plen > 252) ? 252 : plen;
|
||||
|
||||
cmd_param[0] = fragment_type;
|
||||
memcpy(cmd_param + 1, param, fragment_len);
|
||||
|
||||
skb = __hci_cmd_sync(hdev, 0xfc09, fragment_len + 1,
|
||||
cmd_param, HCI_INIT_TIMEOUT);
|
||||
if (IS_ERR(skb))
|
||||
return PTR_ERR(skb);
|
||||
|
||||
kfree_skb(skb);
|
||||
|
||||
plen -= fragment_len;
|
||||
param += fragment_len;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void btusb_intel_version_info(struct hci_dev *hdev,
|
||||
struct intel_version *ver)
|
||||
{
|
||||
const char *variant;
|
||||
|
||||
switch (ver->fw_variant) {
|
||||
case 0x06:
|
||||
variant = "Bootloader";
|
||||
break;
|
||||
case 0x23:
|
||||
variant = "Firmware";
|
||||
break;
|
||||
default:
|
||||
return;
|
||||
}
|
||||
|
||||
BT_INFO("%s: %s revision %u.%u build %u week %u %u", hdev->name,
|
||||
variant, ver->fw_revision >> 4, ver->fw_revision & 0x0f,
|
||||
ver->fw_build_num, ver->fw_build_ww, 2000 + ver->fw_build_yy);
|
||||
}
|
||||
|
||||
static int btusb_setup_intel_new(struct hci_dev *hdev)
|
||||
{
|
||||
static const u8 reset_param[] = { 0x00, 0x01, 0x00, 0x01,
|
||||
|
@@ -1984,7 +1945,7 @@ static int btusb_setup_intel_new(struct hci_dev *hdev)
|
|||
return -EINVAL;
|
||||
}
|
||||
|
||||
btusb_intel_version_info(hdev, ver);
|
||||
btintel_version_info(hdev, ver);
|
||||
|
||||
/* The firmware variant determines if the device is in bootloader
|
||||
* mode or is running operational firmware. The value 0x06 identifies
|
||||
|
@@ -2104,7 +2065,7 @@ static int btusb_setup_intel_new(struct hci_dev *hdev)
|
|||
/* Start the firmware download transaction with the Init fragment
|
||||
* represented by the 128 bytes of CSS header.
|
||||
*/
|
||||
err = btusb_intel_secure_send(hdev, 0x00, 128, fw->data);
|
||||
err = btintel_secure_send(hdev, 0x00, 128, fw->data);
|
||||
if (err < 0) {
|
||||
BT_ERR("%s: Failed to send firmware header (%d)",
|
||||
hdev->name, err);
|
||||
|
@@ -2114,7 +2075,7 @@ static int btusb_setup_intel_new(struct hci_dev *hdev)
|
|||
/* Send the 256 bytes of public key information from the firmware
|
||||
* as the PKey fragment.
|
||||
*/
|
||||
err = btusb_intel_secure_send(hdev, 0x03, 256, fw->data + 128);
|
||||
err = btintel_secure_send(hdev, 0x03, 256, fw->data + 128);
|
||||
if (err < 0) {
|
||||
BT_ERR("%s: Failed to send firmware public key (%d)",
|
||||
hdev->name, err);
|
||||
|
@@ -2124,7 +2085,7 @@ static int btusb_setup_intel_new(struct hci_dev *hdev)
|
|||
/* Send the 256 bytes of signature information from the firmware
|
||||
* as the Sign fragment.
|
||||
*/
|
||||
err = btusb_intel_secure_send(hdev, 0x02, 256, fw->data + 388);
|
||||
err = btintel_secure_send(hdev, 0x02, 256, fw->data + 388);
|
||||
if (err < 0) {
|
||||
BT_ERR("%s: Failed to send firmware signature (%d)",
|
||||
hdev->name, err);
|
||||
|
@@ -2139,7 +2100,7 @@ static int btusb_setup_intel_new(struct hci_dev *hdev)
|
|||
|
||||
frag_len += sizeof(*cmd) + cmd->plen;
|
||||
|
||||
/* The paramter length of the secure send command requires
|
||||
/* The parameter length of the secure send command requires
|
||||
* a 4 byte alignment. It happens so that the firmware file
|
||||
* contains proper Intel_NOP commands to align the fragments
|
||||
* as needed.
|
||||
|
@ -2148,8 +2109,7 @@ static int btusb_setup_intel_new(struct hci_dev *hdev)
|
|||
* firmware data buffer as a single Data fragement.
|
||||
*/
|
||||
if (!(frag_len % 4)) {
|
||||
err = btusb_intel_secure_send(hdev, 0x01, frag_len,
|
||||
fw_ptr);
|
||||
err = btintel_secure_send(hdev, 0x01, frag_len, fw_ptr);
|
||||
if (err < 0) {
|
||||
BT_ERR("%s: Failed to send firmware data (%d)",
|
||||
hdev->name, err);
|
||||
|
@@ -2291,39 +2251,6 @@ done:
|
|||
return 0;
|
||||
}
|
||||
|
||||
static void btusb_hw_error_intel(struct hci_dev *hdev, u8 code)
|
||||
{
|
||||
struct sk_buff *skb;
|
||||
u8 type = 0x00;
|
||||
|
||||
BT_ERR("%s: Hardware error 0x%2.2x", hdev->name, code);
|
||||
|
||||
skb = __hci_cmd_sync(hdev, HCI_OP_RESET, 0, NULL, HCI_INIT_TIMEOUT);
|
||||
if (IS_ERR(skb)) {
|
||||
BT_ERR("%s: Reset after hardware error failed (%ld)",
|
||||
hdev->name, PTR_ERR(skb));
|
||||
return;
|
||||
}
|
||||
kfree_skb(skb);
|
||||
|
||||
skb = __hci_cmd_sync(hdev, 0xfc22, 1, &type, HCI_INIT_TIMEOUT);
|
||||
if (IS_ERR(skb)) {
|
||||
BT_ERR("%s: Retrieving Intel exception info failed (%ld)",
|
||||
hdev->name, PTR_ERR(skb));
|
||||
return;
|
||||
}
|
||||
|
||||
if (skb->len != 13) {
|
||||
BT_ERR("%s: Exception info size mismatch", hdev->name);
|
||||
kfree_skb(skb);
|
||||
return;
|
||||
}
|
||||
|
||||
BT_ERR("%s: Exception info %s", hdev->name, (char *)(skb->data + 1));
|
||||
|
||||
kfree_skb(skb);
|
||||
}
|
||||
|
||||
static int btusb_shutdown_intel(struct hci_dev *hdev)
|
||||
{
|
||||
struct sk_buff *skb;
|
||||
|
@@ -2783,7 +2710,7 @@ static int btusb_probe(struct usb_interface *intf,
|
|||
if (id->driver_info & BTUSB_INTEL_NEW) {
|
||||
hdev->send = btusb_send_frame_intel;
|
||||
hdev->setup = btusb_setup_intel_new;
|
||||
hdev->hw_error = btusb_hw_error_intel;
|
||||
hdev->hw_error = btintel_hw_error;
|
||||
hdev->set_bdaddr = btintel_set_bdaddr;
|
||||
set_bit(HCI_QUIRK_STRICT_DUPLICATE_FILTER, &hdev->quirks);
|
||||
}
|
||||
|
|
|
@@ -182,9 +182,9 @@ static void dtl1_control(struct dtl1_info *info, struct sk_buff *skb)
|
|||
int i;
|
||||
|
||||
printk(KERN_INFO "Bluetooth: Nokia control data =");
|
||||
for (i = 0; i < skb->len; i++) {
|
||||
for (i = 0; i < skb->len; i++)
|
||||
printk(" %02x", skb->data[i]);
|
||||
}
|
||||
|
||||
printk("\n");
|
||||
|
||||
/* transition to active state */
|
||||
|
@@ -406,7 +406,7 @@ static int dtl1_hci_send_frame(struct hci_dev *hdev, struct sk_buff *skb)
|
|||
break;
|
||||
default:
|
||||
return -EILSEQ;
|
||||
};
|
||||
}
|
||||
|
||||
nsh.zero = 0;
|
||||
nsh.len = skb->len;
|
||||
|
|
|
@@ -25,6 +25,12 @@
|
|||
#include <linux/errno.h>
|
||||
#include <linux/skbuff.h>
|
||||
#include <linux/firmware.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/acpi.h>
|
||||
#include <linux/platform_device.h>
|
||||
#include <linux/clk.h>
|
||||
#include <linux/gpio/consumer.h>
|
||||
#include <linux/tty.h>
|
||||
|
||||
#include <net/bluetooth/bluetooth.h>
|
||||
#include <net/bluetooth/hci_core.h>
|
||||
|
@@ -32,11 +38,37 @@
|
|||
#include "btbcm.h"
|
||||
#include "hci_uart.h"
|
||||
|
||||
struct bcm_data {
|
||||
struct sk_buff *rx_skb;
|
||||
struct sk_buff_head txq;
|
||||
struct bcm_device {
|
||||
struct list_head list;
|
||||
|
||||
struct platform_device *pdev;
|
||||
|
||||
const char *name;
|
||||
struct gpio_desc *device_wakeup;
|
||||
struct gpio_desc *shutdown;
|
||||
|
||||
struct clk *clk;
|
||||
bool clk_enabled;
|
||||
|
||||
u32 init_speed;
|
||||
|
||||
#ifdef CONFIG_PM_SLEEP
|
||||
struct hci_uart *hu;
|
||||
bool is_suspended; /* suspend/resume flag */
|
||||
#endif
|
||||
};
|
||||
|
||||
struct bcm_data {
|
||||
struct sk_buff *rx_skb;
|
||||
struct sk_buff_head txq;
|
||||
|
||||
struct bcm_device *dev;
|
||||
};
|
||||
|
||||
/* List of BCM BT UART devices */
|
||||
static DEFINE_SPINLOCK(bcm_device_lock);
|
||||
static LIST_HEAD(bcm_device_list);
|
||||
|
||||
static int bcm_set_baudrate(struct hci_uart *hu, unsigned int speed)
|
||||
{
|
||||
struct hci_dev *hdev = hu->hdev;
|
||||
|
@@ -86,9 +118,41 @@ static int bcm_set_baudrate(struct hci_uart *hu, unsigned int speed)
|
|||
return 0;
|
||||
}
|
||||
|
||||
/* bcm_device_exists should be protected by bcm_device_lock */
|
||||
static bool bcm_device_exists(struct bcm_device *device)
|
||||
{
|
||||
struct list_head *p;
|
||||
|
||||
list_for_each(p, &bcm_device_list) {
|
||||
struct bcm_device *dev = list_entry(p, struct bcm_device, list);
|
||||
|
||||
if (device == dev)
|
||||
return true;
|
||||
}
|
||||
|
||||
return false;
|
||||
}
|
||||
|
||||
static int bcm_gpio_set_power(struct bcm_device *dev, bool powered)
|
||||
{
|
||||
if (powered && !IS_ERR(dev->clk) && !dev->clk_enabled)
|
||||
clk_enable(dev->clk);
|
||||
|
||||
gpiod_set_value(dev->shutdown, powered);
|
||||
gpiod_set_value(dev->device_wakeup, powered);
|
||||
|
||||
if (!powered && !IS_ERR(dev->clk) && dev->clk_enabled)
|
||||
clk_disable(dev->clk);
|
||||
|
||||
dev->clk_enabled = powered;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int bcm_open(struct hci_uart *hu)
|
||||
{
|
||||
struct bcm_data *bcm;
|
||||
struct list_head *p;
|
||||
|
||||
BT_DBG("hu %p", hu);
|
||||
|
||||
|
@@ -99,6 +163,30 @@ static int bcm_open(struct hci_uart *hu)
|
|||
skb_queue_head_init(&bcm->txq);
|
||||
|
||||
hu->priv = bcm;
|
||||
|
||||
spin_lock(&bcm_device_lock);
|
||||
list_for_each(p, &bcm_device_list) {
|
||||
struct bcm_device *dev = list_entry(p, struct bcm_device, list);
|
||||
|
||||
/* Retrieve saved bcm_device based on parent of the
|
||||
* platform device (saved during device probe) and
|
||||
* parent of tty device used by hci_uart
|
||||
*/
|
||||
if (hu->tty->dev->parent == dev->pdev->dev.parent) {
|
||||
bcm->dev = dev;
|
||||
hu->init_speed = dev->init_speed;
|
||||
#ifdef CONFIG_PM_SLEEP
|
||||
dev->hu = hu;
|
||||
#endif
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
if (bcm->dev)
|
||||
bcm_gpio_set_power(bcm->dev, true);
|
||||
|
||||
spin_unlock(&bcm_device_lock);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@@ -108,6 +196,16 @@ static int bcm_close(struct hci_uart *hu)
|
|||
|
||||
BT_DBG("hu %p", hu);
|
||||
|
||||
/* Protect bcm->dev against removal of the device or driver */
|
||||
spin_lock(&bcm_device_lock);
|
||||
if (bcm_device_exists(bcm->dev)) {
|
||||
bcm_gpio_set_power(bcm->dev, false);
|
||||
#ifdef CONFIG_PM_SLEEP
|
||||
bcm->dev->hu = NULL;
|
||||
#endif
|
||||
}
|
||||
spin_unlock(&bcm_device_lock);
|
||||
|
||||
skb_queue_purge(&bcm->txq);
|
||||
kfree_skb(bcm->rx_skb);
|
||||
kfree(bcm);
|
||||
|
@@ -232,6 +330,204 @@ static struct sk_buff *bcm_dequeue(struct hci_uart *hu)
|
|||
return skb_dequeue(&bcm->txq);
|
||||
}
|
||||
|
||||
#ifdef CONFIG_PM_SLEEP
|
||||
/* Platform suspend callback */
|
||||
static int bcm_suspend(struct device *dev)
|
||||
{
|
||||
struct bcm_device *bdev = platform_get_drvdata(to_platform_device(dev));
|
||||
|
||||
BT_DBG("suspend (%p): is_suspended %d", bdev, bdev->is_suspended);
|
||||
|
||||
spin_lock(&bcm_device_lock);
|
||||
|
||||
if (!bdev->hu)
|
||||
goto unlock;
|
||||
|
||||
if (!bdev->is_suspended) {
|
||||
hci_uart_set_flow_control(bdev->hu, true);
|
||||
|
||||
/* Once this callback returns, driver suspends BT via GPIO */
|
||||
bdev->is_suspended = true;
|
||||
}
|
||||
|
||||
/* Suspend the device */
|
||||
if (bdev->device_wakeup) {
|
||||
gpiod_set_value(bdev->device_wakeup, false);
|
||||
BT_DBG("suspend, delaying 15 ms");
|
||||
mdelay(15);
|
||||
}
|
||||
|
||||
unlock:
|
||||
spin_unlock(&bcm_device_lock);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
/* Platform resume callback */
|
||||
static int bcm_resume(struct device *dev)
|
||||
{
|
||||
struct bcm_device *bdev = platform_get_drvdata(to_platform_device(dev));
|
||||
|
||||
BT_DBG("resume (%p): is_suspended %d", bdev, bdev->is_suspended);
|
||||
|
||||
spin_lock(&bcm_device_lock);
|
||||
|
||||
if (!bdev->hu)
|
||||
goto unlock;
|
||||
|
||||
if (bdev->device_wakeup) {
|
||||
gpiod_set_value(bdev->device_wakeup, true);
|
||||
BT_DBG("resume, delaying 15 ms");
|
||||
mdelay(15);
|
||||
}
|
||||
|
||||
/* When this callback executes, the device has woken up already */
|
||||
if (bdev->is_suspended) {
|
||||
bdev->is_suspended = false;
|
||||
|
||||
hci_uart_set_flow_control(bdev->hu, false);
|
||||
}
|
||||
|
||||
unlock:
|
||||
spin_unlock(&bcm_device_lock);
|
||||
|
||||
return 0;
|
||||
}
|
||||
#endif
|
||||
|
||||
static const struct acpi_gpio_params device_wakeup_gpios = { 0, 0, false };
|
||||
static const struct acpi_gpio_params shutdown_gpios = { 1, 0, false };
|
||||
|
||||
static const struct acpi_gpio_mapping acpi_bcm_default_gpios[] = {
|
||||
{ "device-wakeup-gpios", &device_wakeup_gpios, 1 },
|
||||
{ "shutdown-gpios", &shutdown_gpios, 1 },
|
||||
{ },
|
||||
};
|
||||
|
||||
#ifdef CONFIG_ACPI
|
||||
static int bcm_resource(struct acpi_resource *ares, void *data)
|
||||
{
|
||||
struct bcm_device *dev = data;
|
||||
|
||||
if (ares->type == ACPI_RESOURCE_TYPE_SERIAL_BUS) {
|
||||
struct acpi_resource_uart_serialbus *sb;
|
||||
|
||||
sb = &ares->data.uart_serial_bus;
|
||||
if (sb->type == ACPI_RESOURCE_SERIAL_TYPE_UART)
|
||||
dev->init_speed = sb->default_baud_rate;
|
||||
}
|
||||
|
||||
/* Always tell the ACPI core to skip this resource */
|
||||
return 1;
|
||||
}
|
||||
|
||||
static int bcm_acpi_probe(struct bcm_device *dev)
|
||||
{
|
||||
struct platform_device *pdev = dev->pdev;
|
||||
const struct acpi_device_id *id;
|
||||
struct acpi_device *adev;
|
||||
LIST_HEAD(resources);
|
||||
int ret;
|
||||
|
||||
id = acpi_match_device(pdev->dev.driver->acpi_match_table, &pdev->dev);
|
||||
if (!id)
|
||||
return -ENODEV;
|
||||
|
||||
/* Retrieve GPIO data */
|
||||
dev->name = dev_name(&pdev->dev);
|
||||
ret = acpi_dev_add_driver_gpios(ACPI_COMPANION(&pdev->dev),
|
||||
acpi_bcm_default_gpios);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
dev->clk = devm_clk_get(&pdev->dev, NULL);
|
||||
|
||||
dev->device_wakeup = devm_gpiod_get_optional(&pdev->dev,
|
||||
"device-wakeup",
|
||||
GPIOD_OUT_LOW);
|
||||
if (IS_ERR(dev->device_wakeup))
|
||||
return PTR_ERR(dev->device_wakeup);
|
||||
|
||||
dev->shutdown = devm_gpiod_get_optional(&pdev->dev, "shutdown",
|
||||
GPIOD_OUT_LOW);
|
||||
if (IS_ERR(dev->shutdown))
|
||||
return PTR_ERR(dev->shutdown);
|
||||
|
||||
/* Make sure at-least one of the GPIO is defined and that
|
||||
* a name is specified for this instance
|
||||
*/
|
||||
if ((!dev->device_wakeup && !dev->shutdown) || !dev->name) {
|
||||
dev_err(&pdev->dev, "invalid platform data\n");
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
/* Retrieve UART ACPI info */
|
||||
adev = ACPI_COMPANION(&dev->pdev->dev);
|
||||
if (!adev)
|
||||
return 0;
|
||||
|
||||
acpi_dev_get_resources(adev, &resources, bcm_resource, dev);
|
||||
|
||||
return 0;
|
||||
}
|
||||
#else
|
||||
static int bcm_acpi_probe(struct bcm_device *dev)
|
||||
{
|
||||
return -EINVAL;
|
||||
}
|
||||
#endif /* CONFIG_ACPI */
|
||||
|
||||
static int bcm_probe(struct platform_device *pdev)
|
||||
{
|
||||
struct bcm_device *dev;
|
||||
struct acpi_device_id *pdata = pdev->dev.platform_data;
|
||||
int ret;
|
||||
|
||||
dev = devm_kzalloc(&pdev->dev, sizeof(*dev), GFP_KERNEL);
|
||||
if (!dev)
|
||||
return -ENOMEM;
|
||||
|
||||
dev->pdev = pdev;
|
||||
|
||||
if (ACPI_HANDLE(&pdev->dev)) {
|
||||
ret = bcm_acpi_probe(dev);
|
||||
if (ret)
|
||||
return ret;
|
||||
} else if (pdata) {
|
||||
dev->name = pdata->id;
|
||||
} else {
|
||||
return -ENODEV;
|
||||
}
|
||||
|
||||
platform_set_drvdata(pdev, dev);
|
||||
|
||||
dev_info(&pdev->dev, "%s device registered.\n", dev->name);
|
||||
|
||||
/* Place this instance on the device list */
|
||||
spin_lock(&bcm_device_lock);
|
||||
list_add_tail(&dev->list, &bcm_device_list);
|
||||
spin_unlock(&bcm_device_lock);
|
||||
|
||||
bcm_gpio_set_power(dev, false);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int bcm_remove(struct platform_device *pdev)
|
||||
{
|
||||
struct bcm_device *dev = platform_get_drvdata(pdev);
|
||||
|
||||
spin_lock(&bcm_device_lock);
|
||||
list_del(&dev->list);
|
||||
spin_unlock(&bcm_device_lock);
|
||||
|
||||
acpi_dev_remove_driver_gpios(ACPI_COMPANION(&pdev->dev));
|
||||
|
||||
dev_info(&pdev->dev, "%s device unregistered.\n", dev->name);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static const struct hci_uart_proto bcm_proto = {
|
||||
.id = HCI_UART_BCM,
|
||||
.name = "BCM",
|
||||
|
@@ -247,12 +543,38 @@ static const struct hci_uart_proto bcm_proto = {
|
|||
.dequeue = bcm_dequeue,
|
||||
};
|
||||
|
||||
#ifdef CONFIG_ACPI
|
||||
static const struct acpi_device_id bcm_acpi_match[] = {
|
||||
{ "BCM2E39", 0 },
|
||||
{ "BCM2E67", 0 },
|
||||
{ },
|
||||
};
|
||||
MODULE_DEVICE_TABLE(acpi, bcm_acpi_match);
|
||||
#endif
|
||||
|
||||
/* Platform suspend and resume callbacks */
|
||||
static SIMPLE_DEV_PM_OPS(bcm_pm_ops, bcm_suspend, bcm_resume);
|
||||
|
||||
static struct platform_driver bcm_driver = {
|
||||
.probe = bcm_probe,
|
||||
.remove = bcm_remove,
|
||||
.driver = {
|
||||
.name = "hci_bcm",
|
||||
.acpi_match_table = ACPI_PTR(bcm_acpi_match),
|
||||
.pm = &bcm_pm_ops,
|
||||
},
|
||||
};
|
||||
|
||||
int __init bcm_init(void)
|
||||
{
|
||||
platform_driver_register(&bcm_driver);
|
||||
|
||||
return hci_uart_register_proto(&bcm_proto);
|
||||
}
|
||||
|
||||
int __exit bcm_deinit(void)
|
||||
{
|
||||
platform_driver_unregister(&bcm_driver);
|
||||
|
||||
return hci_uart_unregister_proto(&bcm_proto);
|
||||
}
|
||||
|
|
|
@@ -223,8 +223,7 @@ struct sk_buff *h4_recv_buf(struct hci_dev *hdev, struct sk_buff *skb,
|
|||
switch ((&pkts[i])->lsize) {
|
||||
case 0:
|
||||
/* No variable data length */
|
||||
(&pkts[i])->recv(hdev, skb);
|
||||
skb = NULL;
|
||||
dlen = 0;
|
||||
break;
|
||||
case 1:
|
||||
/* Single octet variable length */
|
||||
|
@@ -252,6 +251,12 @@ struct sk_buff *h4_recv_buf(struct hci_dev *hdev, struct sk_buff *skb,
|
|||
kfree_skb(skb);
|
||||
return ERR_PTR(-EILSEQ);
|
||||
}
|
||||
|
||||
if (!dlen) {
|
||||
/* No more data, complete frame */
|
||||
(&pkts[i])->recv(hdev, skb);
|
||||
skb = NULL;
|
||||
}
|
||||
} else {
|
||||
/* Complete frame */
|
||||
(&pkts[i])->recv(hdev, skb);
|
||||
|
|
|
@@ -75,7 +75,7 @@ struct h5 {
|
|||
size_t rx_pending; /* Expecting more bytes */
|
||||
u8 rx_ack; /* Last ack number received */
|
||||
|
||||
int (*rx_func) (struct hci_uart *hu, u8 c);
|
||||
int (*rx_func)(struct hci_uart *hu, u8 c);
|
||||
|
||||
struct timer_list timer; /* Retransmission timer */
|
||||
|
||||
|
|
|
@@ -24,8 +24,864 @@
|
|||
#include <linux/kernel.h>
|
||||
#include <linux/errno.h>
|
||||
#include <linux/skbuff.h>
|
||||
#include <linux/firmware.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/wait.h>
|
||||
#include <linux/tty.h>
|
||||
#include <linux/platform_device.h>
|
||||
#include <linux/gpio/consumer.h>
|
||||
#include <linux/acpi.h>
|
||||
|
||||
#include <net/bluetooth/bluetooth.h>
|
||||
#include <net/bluetooth/hci_core.h>
|
||||
|
||||
#include "hci_uart.h"
|
||||
#include "btintel.h"
|
||||
|
||||
#define STATE_BOOTLOADER 0
|
||||
#define STATE_DOWNLOADING 1
|
||||
#define STATE_FIRMWARE_LOADED 2
|
||||
#define STATE_FIRMWARE_FAILED 3
|
||||
#define STATE_BOOTING 4
|
||||
|
||||
struct intel_device {
|
||||
struct list_head list;
|
||||
struct platform_device *pdev;
|
||||
struct gpio_desc *reset;
|
||||
};
|
||||
|
||||
static LIST_HEAD(intel_device_list);
|
||||
static DEFINE_SPINLOCK(intel_device_list_lock);
|
||||
|
||||
struct intel_data {
|
||||
struct sk_buff *rx_skb;
|
||||
struct sk_buff_head txq;
|
||||
unsigned long flags;
|
||||
};
|
||||
|
||||
static u8 intel_convert_speed(unsigned int speed)
|
||||
{
|
||||
switch (speed) {
|
||||
case 9600:
|
||||
return 0x00;
|
||||
case 19200:
|
||||
return 0x01;
|
||||
case 38400:
|
||||
return 0x02;
|
||||
case 57600:
|
||||
return 0x03;
|
||||
case 115200:
|
||||
return 0x04;
|
||||
case 230400:
|
||||
return 0x05;
|
||||
case 460800:
|
||||
return 0x06;
|
||||
case 921600:
|
||||
return 0x07;
|
||||
case 1843200:
|
||||
return 0x08;
|
||||
case 3250000:
|
||||
return 0x09;
|
||||
case 2000000:
|
||||
return 0x0a;
|
||||
case 3000000:
|
||||
return 0x0b;
|
||||
default:
|
||||
return 0xff;
|
||||
}
|
||||
}
|
||||
|
||||
static int intel_wait_booting(struct hci_uart *hu)
|
||||
{
|
||||
struct intel_data *intel = hu->priv;
|
||||
int err;
|
||||
|
||||
err = wait_on_bit_timeout(&intel->flags, STATE_BOOTING,
|
||||
TASK_INTERRUPTIBLE,
|
||||
msecs_to_jiffies(1000));
|
||||
|
||||
if (err == 1) {
|
||||
BT_ERR("%s: Device boot interrupted", hu->hdev->name);
|
||||
return -EINTR;
|
||||
}
|
||||
|
||||
if (err) {
|
||||
BT_ERR("%s: Device boot timeout", hu->hdev->name);
|
||||
return -ETIMEDOUT;
|
||||
}
|
||||
|
||||
return err;
|
||||
}
|
||||
|
||||
static int intel_set_power(struct hci_uart *hu, bool powered)
|
||||
{
|
||||
struct list_head *p;
|
||||
int err = -ENODEV;
|
||||
|
||||
spin_lock(&intel_device_list_lock);
|
||||
|
||||
list_for_each(p, &intel_device_list) {
|
||||
struct intel_device *idev = list_entry(p, struct intel_device,
|
||||
list);
|
||||
|
||||
/* tty device and pdev device should share the same parent
|
||||
* which is the UART port.
|
||||
*/
|
||||
if (hu->tty->dev->parent != idev->pdev->dev.parent)
|
||||
continue;
|
||||
|
||||
if (!idev->reset) {
|
||||
err = -ENOTSUPP;
|
||||
break;
|
||||
}
|
||||
|
||||
BT_INFO("hu %p, Switching compatible pm device (%s) to %u",
|
||||
hu, dev_name(&idev->pdev->dev), powered);
|
||||
|
||||
gpiod_set_value(idev->reset, powered);
|
||||
}
|
||||
|
||||
spin_unlock(&intel_device_list_lock);
|
||||
|
||||
return err;
|
||||
}
|
||||
|
||||
static int intel_open(struct hci_uart *hu)
|
||||
{
|
||||
struct intel_data *intel;
|
||||
|
||||
BT_DBG("hu %p", hu);
|
||||
|
||||
intel = kzalloc(sizeof(*intel), GFP_KERNEL);
|
||||
if (!intel)
|
||||
return -ENOMEM;
|
||||
|
||||
skb_queue_head_init(&intel->txq);
|
||||
|
||||
hu->priv = intel;
|
||||
|
||||
if (!intel_set_power(hu, true))
|
||||
set_bit(STATE_BOOTING, &intel->flags);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int intel_close(struct hci_uart *hu)
|
||||
{
|
||||
struct intel_data *intel = hu->priv;
|
||||
|
||||
BT_DBG("hu %p", hu);
|
||||
|
||||
intel_set_power(hu, false);
|
||||
|
||||
skb_queue_purge(&intel->txq);
|
||||
kfree_skb(intel->rx_skb);
|
||||
kfree(intel);
|
||||
|
||||
hu->priv = NULL;
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int intel_flush(struct hci_uart *hu)
|
||||
{
|
||||
struct intel_data *intel = hu->priv;
|
||||
|
||||
BT_DBG("hu %p", hu);
|
||||
|
||||
skb_queue_purge(&intel->txq);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int inject_cmd_complete(struct hci_dev *hdev, __u16 opcode)
|
||||
{
|
||||
struct sk_buff *skb;
|
||||
struct hci_event_hdr *hdr;
|
||||
struct hci_ev_cmd_complete *evt;
|
||||
|
||||
skb = bt_skb_alloc(sizeof(*hdr) + sizeof(*evt) + 1, GFP_ATOMIC);
|
||||
if (!skb)
|
||||
return -ENOMEM;
|
||||
|
||||
hdr = (struct hci_event_hdr *)skb_put(skb, sizeof(*hdr));
|
||||
hdr->evt = HCI_EV_CMD_COMPLETE;
|
||||
hdr->plen = sizeof(*evt) + 1;
|
||||
|
||||
evt = (struct hci_ev_cmd_complete *)skb_put(skb, sizeof(*evt));
|
||||
evt->ncmd = 0x01;
|
||||
evt->opcode = cpu_to_le16(opcode);
|
||||
|
||||
*skb_put(skb, 1) = 0x00;
|
||||
|
||||
bt_cb(skb)->pkt_type = HCI_EVENT_PKT;
|
||||
|
||||
return hci_recv_frame(hdev, skb);
|
||||
}
|
||||
|
||||
static int intel_set_baudrate(struct hci_uart *hu, unsigned int speed)
|
||||
{
|
||||
struct intel_data *intel = hu->priv;
|
||||
struct hci_dev *hdev = hu->hdev;
|
||||
u8 speed_cmd[] = { 0x06, 0xfc, 0x01, 0x00 };
|
||||
struct sk_buff *skb;
|
||||
int err;
|
||||
|
||||
/* This can be the first command sent to the chip, check
|
||||
* that the controller is ready.
|
||||
*/
|
||||
err = intel_wait_booting(hu);
|
||||
|
||||
clear_bit(STATE_BOOTING, &intel->flags);
|
||||
|
||||
/* In case of timeout, try to continue anyway */
|
||||
if (err && err != ETIMEDOUT)
|
||||
return err;
|
||||
|
||||
BT_INFO("%s: Change controller speed to %d", hdev->name, speed);
|
||||
|
||||
speed_cmd[3] = intel_convert_speed(speed);
|
||||
if (speed_cmd[3] == 0xff) {
|
||||
BT_ERR("%s: Unsupported speed", hdev->name);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
/* Device will not accept speed change if Intel version has not been
|
||||
* previously requested.
|
||||
*/
|
||||
skb = __hci_cmd_sync(hdev, 0xfc05, 0, NULL, HCI_INIT_TIMEOUT);
|
||||
if (IS_ERR(skb)) {
|
||||
BT_ERR("%s: Reading Intel version information failed (%ld)",
|
||||
hdev->name, PTR_ERR(skb));
|
||||
return PTR_ERR(skb);
|
||||
}
|
||||
kfree_skb(skb);
|
||||
|
||||
skb = bt_skb_alloc(sizeof(speed_cmd), GFP_KERNEL);
|
||||
if (!skb) {
|
||||
BT_ERR("%s: Failed to allocate memory for baudrate packet",
|
||||
hdev->name);
|
||||
return -ENOMEM;
|
||||
}
|
||||
|
||||
memcpy(skb_put(skb, sizeof(speed_cmd)), speed_cmd, sizeof(speed_cmd));
|
||||
bt_cb(skb)->pkt_type = HCI_COMMAND_PKT;
|
||||
|
||||
hci_uart_set_flow_control(hu, true);
|
||||
|
||||
skb_queue_tail(&intel->txq, skb);
|
||||
hci_uart_tx_wakeup(hu);
|
||||
|
||||
/* wait 100ms to change baudrate on controller side */
|
||||
msleep(100);
|
||||
|
||||
hci_uart_set_baudrate(hu, speed);
|
||||
hci_uart_set_flow_control(hu, false);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int intel_setup(struct hci_uart *hu)
|
||||
{
|
||||
static const u8 reset_param[] = { 0x00, 0x01, 0x00, 0x01,
|
||||
0x00, 0x08, 0x04, 0x00 };
|
||||
struct intel_data *intel = hu->priv;
|
||||
struct hci_dev *hdev = hu->hdev;
|
||||
struct sk_buff *skb;
|
||||
struct intel_version *ver;
|
||||
struct intel_boot_params *params;
|
||||
const struct firmware *fw;
|
||||
const u8 *fw_ptr;
|
||||
char fwname[64];
|
||||
u32 frag_len;
|
||||
ktime_t calltime, delta, rettime;
|
||||
unsigned long long duration;
|
||||
unsigned int init_speed, oper_speed;
|
||||
int speed_change = 0;
|
||||
int err;
|
||||
|
||||
BT_DBG("%s", hdev->name);
|
||||
|
||||
hu->hdev->set_bdaddr = btintel_set_bdaddr;
|
||||
|
||||
calltime = ktime_get();
|
||||
|
||||
if (hu->init_speed)
|
||||
init_speed = hu->init_speed;
|
||||
else
|
||||
init_speed = hu->proto->init_speed;
|
||||
|
||||
if (hu->oper_speed)
|
||||
oper_speed = hu->oper_speed;
|
||||
else
|
||||
oper_speed = hu->proto->oper_speed;
|
||||
|
||||
if (oper_speed && init_speed && oper_speed != init_speed)
|
||||
speed_change = 1;
|
||||
|
||||
/* Check that the controller is ready */
|
||||
err = intel_wait_booting(hu);
|
||||
|
||||
clear_bit(STATE_BOOTING, &intel->flags);
|
||||
|
||||
/* In case of timeout, try to continue anyway */
|
||||
if (err && err != ETIMEDOUT)
|
||||
return err;
|
||||
|
||||
set_bit(STATE_BOOTLOADER, &intel->flags);
|
||||
|
||||
/* Read the Intel version information to determine if the device
|
||||
* is in bootloader mode or if it already has operational firmware
|
||||
* loaded.
|
||||
*/
|
||||
skb = __hci_cmd_sync(hdev, 0xfc05, 0, NULL, HCI_INIT_TIMEOUT);
|
||||
if (IS_ERR(skb)) {
|
||||
BT_ERR("%s: Reading Intel version information failed (%ld)",
|
||||
hdev->name, PTR_ERR(skb));
|
||||
return PTR_ERR(skb);
|
||||
}
|
||||
|
||||
if (skb->len != sizeof(*ver)) {
|
||||
BT_ERR("%s: Intel version event size mismatch", hdev->name);
|
||||
kfree_skb(skb);
|
||||
return -EILSEQ;
|
||||
}
|
||||
|
||||
ver = (struct intel_version *)skb->data;
|
||||
if (ver->status) {
|
||||
BT_ERR("%s: Intel version command failure (%02x)",
|
||||
hdev->name, ver->status);
|
||||
err = -bt_to_errno(ver->status);
|
||||
kfree_skb(skb);
|
||||
return err;
|
||||
}
|
||||
|
||||
/* The hardware platform number has a fixed value of 0x37 and
|
||||
* for now only accept this single value.
|
||||
*/
|
||||
if (ver->hw_platform != 0x37) {
|
||||
BT_ERR("%s: Unsupported Intel hardware platform (%u)",
|
||||
hdev->name, ver->hw_platform);
|
||||
kfree_skb(skb);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
/* At the moment only the hardware variant iBT 3.0 (LnP/SfP) is
|
||||
* supported by this firmware loading method. This check has been
|
||||
* put in place to ensure correct forward compatibility options
|
||||
* when newer hardware variants come along.
|
||||
*/
|
||||
if (ver->hw_variant != 0x0b) {
|
||||
BT_ERR("%s: Unsupported Intel hardware variant (%u)",
|
||||
hdev->name, ver->hw_variant);
|
||||
kfree_skb(skb);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
btintel_version_info(hdev, ver);
|
||||
|
||||
/* The firmware variant determines if the device is in bootloader
|
||||
* mode or is running operational firmware. The value 0x06 identifies
|
||||
* the bootloader and the value 0x23 identifies the operational
|
||||
* firmware.
|
||||
*
|
||||
* When the operational firmware is already present, then only
|
||||
* the check for valid Bluetooth device address is needed. This
|
||||
* determines if the device will be added as configured or
|
||||
* unconfigured controller.
|
||||
*
|
||||
* It is not possible to use the Secure Boot Parameters in this
|
||||
* case since that command is only available in bootloader mode.
|
||||
*/
|
||||
if (ver->fw_variant == 0x23) {
|
||||
kfree_skb(skb);
|
||||
clear_bit(STATE_BOOTLOADER, &intel->flags);
|
||||
btintel_check_bdaddr(hdev);
|
||||
return 0;
|
||||
}
|
||||
|
||||
/* If the device is not in bootloader mode, then the only possible
|
||||
* choice is to return an error and abort the device initialization.
|
||||
*/
|
||||
if (ver->fw_variant != 0x06) {
|
||||
BT_ERR("%s: Unsupported Intel firmware variant (%u)",
|
||||
hdev->name, ver->fw_variant);
|
||||
kfree_skb(skb);
|
||||
return -ENODEV;
|
||||
}
|
||||
|
||||
kfree_skb(skb);
|
||||
|
||||
/* Read the secure boot parameters to identify the operating
|
||||
* details of the bootloader.
|
||||
*/
|
||||
skb = __hci_cmd_sync(hdev, 0xfc0d, 0, NULL, HCI_INIT_TIMEOUT);
|
||||
if (IS_ERR(skb)) {
|
||||
BT_ERR("%s: Reading Intel boot parameters failed (%ld)",
|
||||
hdev->name, PTR_ERR(skb));
|
||||
return PTR_ERR(skb);
|
||||
}
|
||||
|
||||
if (skb->len != sizeof(*params)) {
|
||||
BT_ERR("%s: Intel boot parameters size mismatch", hdev->name);
|
||||
kfree_skb(skb);
|
||||
return -EILSEQ;
|
||||
}
|
||||
|
||||
params = (struct intel_boot_params *)skb->data;
|
||||
if (params->status) {
|
||||
BT_ERR("%s: Intel boot parameters command failure (%02x)",
|
||||
hdev->name, params->status);
|
||||
err = -bt_to_errno(params->status);
|
||||
kfree_skb(skb);
|
||||
return err;
|
||||
}
|
||||
|
||||
BT_INFO("%s: Device revision is %u", hdev->name,
|
||||
le16_to_cpu(params->dev_revid));
|
||||
|
||||
BT_INFO("%s: Secure boot is %s", hdev->name,
|
||||
params->secure_boot ? "enabled" : "disabled");
|
||||
|
||||
BT_INFO("%s: Minimum firmware build %u week %u %u", hdev->name,
|
||||
params->min_fw_build_nn, params->min_fw_build_cw,
|
||||
2000 + params->min_fw_build_yy);
|
||||
|
||||
/* It is required that every single firmware fragment is acknowledged
|
||||
* with a command complete event. If the boot parameters indicate
|
||||
* that this bootloader does not send them, then abort the setup.
|
||||
*/
|
||||
if (params->limited_cce != 0x00) {
|
||||
BT_ERR("%s: Unsupported Intel firmware loading method (%u)",
|
||||
hdev->name, params->limited_cce);
|
||||
kfree_skb(skb);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
/* If the OTP has no valid Bluetooth device address, then there will
|
||||
* also be no valid address for the operational firmware.
|
||||
*/
|
||||
if (!bacmp(&params->otp_bdaddr, BDADDR_ANY)) {
|
||||
BT_INFO("%s: No device address configured", hdev->name);
|
||||
set_bit(HCI_QUIRK_INVALID_BDADDR, &hdev->quirks);
|
||||
}
|
||||
|
||||
/* With this Intel bootloader only the hardware variant and device
|
||||
* revision information are used to select the right firmware.
|
||||
*
|
||||
* Currently this bootloader support is limited to hardware variant
|
||||
* iBT 3.0 (LnP/SfP) which is identified by the value 11 (0x0b).
|
||||
*/
|
||||
snprintf(fwname, sizeof(fwname), "intel/ibt-11-%u.sfi",
|
||||
le16_to_cpu(params->dev_revid));
|
||||
|
||||
err = request_firmware(&fw, fwname, &hdev->dev);
|
||||
if (err < 0) {
|
||||
BT_ERR("%s: Failed to load Intel firmware file (%d)",
|
||||
hdev->name, err);
|
||||
kfree_skb(skb);
|
||||
return err;
|
||||
}
|
||||
|
||||
BT_INFO("%s: Found device firmware: %s", hdev->name, fwname);
|
||||
|
||||
kfree_skb(skb);
|
||||
|
||||
if (fw->size < 644) {
|
||||
BT_ERR("%s: Invalid size of firmware file (%zu)",
|
||||
hdev->name, fw->size);
|
||||
err = -EBADF;
|
||||
goto done;
|
||||
}
|
||||
|
||||
set_bit(STATE_DOWNLOADING, &intel->flags);
|
||||
|
||||
/* Start the firmware download transaction with the Init fragment
|
||||
* represented by the 128 bytes of CSS header.
|
||||
*/
|
||||
err = btintel_secure_send(hdev, 0x00, 128, fw->data);
|
||||
if (err < 0) {
|
||||
BT_ERR("%s: Failed to send firmware header (%d)",
|
||||
hdev->name, err);
|
||||
goto done;
|
||||
}
|
||||
|
||||
/* Send the 256 bytes of public key information from the firmware
|
||||
* as the PKey fragment.
|
||||
*/
|
||||
err = btintel_secure_send(hdev, 0x03, 256, fw->data + 128);
|
||||
if (err < 0) {
|
||||
BT_ERR("%s: Failed to send firmware public key (%d)",
|
||||
hdev->name, err);
|
||||
goto done;
|
||||
}
|
||||
|
||||
/* Send the 256 bytes of signature information from the firmware
|
||||
* as the Sign fragment.
|
||||
*/
|
||||
err = btintel_secure_send(hdev, 0x02, 256, fw->data + 388);
|
||||
if (err < 0) {
|
||||
BT_ERR("%s: Failed to send firmware signature (%d)",
|
||||
hdev->name, err);
|
||||
goto done;
|
||||
}
|
||||
|
||||
fw_ptr = fw->data + 644;
|
||||
frag_len = 0;
|
||||
|
||||
while (fw_ptr - fw->data < fw->size) {
|
||||
struct hci_command_hdr *cmd = (void *)(fw_ptr + frag_len);
|
||||
|
||||
frag_len += sizeof(*cmd) + cmd->plen;
|
||||
|
||||
BT_DBG("%s: patching %td/%zu", hdev->name,
|
||||
(fw_ptr - fw->data), fw->size);
|
||||
|
||||
/* The parameter length of the secure send command requires
|
||||
* a 4 byte alignment. It happens so that the firmware file
|
||||
* contains proper Intel_NOP commands to align the fragments
|
||||
* as needed.
|
||||
*
|
||||
* Send set of commands with 4 byte alignment from the
|
||||
* firmware data buffer as a single Data fragement.
|
||||
*/
|
||||
if (frag_len % 4)
|
||||
continue;
|
||||
|
||||
/* Send each command from the firmware data buffer as
|
||||
* a single Data fragment.
|
||||
*/
|
||||
err = btintel_secure_send(hdev, 0x01, frag_len, fw_ptr);
|
||||
if (err < 0) {
|
||||
BT_ERR("%s: Failed to send firmware data (%d)",
|
||||
hdev->name, err);
|
||||
goto done;
|
||||
}
|
||||
|
||||
fw_ptr += frag_len;
|
||||
frag_len = 0;
|
||||
}
|
||||
|
||||
set_bit(STATE_FIRMWARE_LOADED, &intel->flags);
|
||||
|
||||
BT_INFO("%s: Waiting for firmware download to complete", hdev->name);
|
||||
|
||||
/* Before switching the device into operational mode and with that
|
||||
* booting the loaded firmware, wait for the bootloader notification
|
||||
* that all fragments have been successfully received.
|
||||
*
|
||||
* When the event processing receives the notification, then the
|
||||
* STATE_DOWNLOADING flag will be cleared.
|
||||
*
|
||||
* The firmware loading should not take longer than 5 seconds
|
||||
* and thus just timeout if that happens and fail the setup
|
||||
* of this device.
|
||||
*/
|
||||
err = wait_on_bit_timeout(&intel->flags, STATE_DOWNLOADING,
|
||||
TASK_INTERRUPTIBLE,
|
||||
msecs_to_jiffies(5000));
|
||||
if (err == 1) {
|
||||
BT_ERR("%s: Firmware loading interrupted", hdev->name);
|
||||
err = -EINTR;
|
||||
goto done;
|
||||
}
|
||||
|
||||
if (err) {
|
||||
BT_ERR("%s: Firmware loading timeout", hdev->name);
|
||||
err = -ETIMEDOUT;
|
||||
goto done;
|
||||
}
|
||||
|
||||
if (test_bit(STATE_FIRMWARE_FAILED, &intel->flags)) {
|
||||
BT_ERR("%s: Firmware loading failed", hdev->name);
|
||||
err = -ENOEXEC;
|
||||
goto done;
|
||||
}
|
||||
|
||||
rettime = ktime_get();
|
||||
delta = ktime_sub(rettime, calltime);
|
||||
duration = (unsigned long long) ktime_to_ns(delta) >> 10;
|
||||
|
||||
BT_INFO("%s: Firmware loaded in %llu usecs", hdev->name, duration);
|
||||
|
||||
done:
|
||||
release_firmware(fw);
|
||||
|
||||
if (err < 0)
|
||||
return err;
|
||||
|
||||
/* We need to restore the default speed before Intel reset */
|
||||
if (speed_change) {
|
||||
err = intel_set_baudrate(hu, init_speed);
|
||||
if (err)
|
||||
return err;
|
||||
}
|
||||
|
||||
calltime = ktime_get();
|
||||
|
||||
set_bit(STATE_BOOTING, &intel->flags);
|
||||
|
||||
skb = __hci_cmd_sync(hdev, 0xfc01, sizeof(reset_param), reset_param,
|
||||
HCI_INIT_TIMEOUT);
|
||||
if (IS_ERR(skb))
|
||||
return PTR_ERR(skb);
|
||||
|
||||
kfree_skb(skb);
|
||||
|
||||
/* The bootloader will not indicate when the device is ready. This
|
||||
* is done by the operational firmware sending bootup notification.
|
||||
*
|
||||
* Booting into operational firmware should not take longer than
|
||||
* 1 second. However if that happens, then just fail the setup
|
||||
* since something went wrong.
|
||||
*/
|
||||
BT_INFO("%s: Waiting for device to boot", hdev->name);
|
||||
|
||||
err = intel_wait_booting(hu);
|
||||
if (err)
|
||||
return err;
|
||||
|
||||
clear_bit(STATE_BOOTING, &intel->flags);
|
||||
|
||||
rettime = ktime_get();
|
||||
delta = ktime_sub(rettime, calltime);
|
||||
duration = (unsigned long long) ktime_to_ns(delta) >> 10;
|
||||
|
||||
BT_INFO("%s: Device booted in %llu usecs", hdev->name, duration);
|
||||
|
||||
skb = __hci_cmd_sync(hdev, HCI_OP_RESET, 0, NULL, HCI_CMD_TIMEOUT);
|
||||
if (IS_ERR(skb))
|
||||
return PTR_ERR(skb);
|
||||
kfree_skb(skb);
|
||||
|
||||
if (speed_change) {
|
||||
err = intel_set_baudrate(hu, oper_speed);
|
||||
if (err)
|
||||
return err;
|
||||
}
|
||||
|
||||
BT_INFO("%s: Setup complete", hdev->name);
|
||||
|
||||
clear_bit(STATE_BOOTLOADER, &intel->flags);
|
||||
|
||||
return 0;
|
||||
}
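The download loop in intel_setup() above coalesces consecutive HCI commands from the firmware image until the running length is 4-byte aligned, and only then pushes them as one Data fragment; like the driver, it relies on the image containing padding commands so alignment is always eventually reached. A stand-alone sketch of that coalescing (the four vendor commands and their payload lengths are made up):

#include <stdio.h>
#include <stdint.h>

struct cmd_hdr {
	uint16_t opcode;
	uint8_t plen;
} __attribute__((packed));

int main(void)
{
	/* four hypothetical commands with payload lengths 3, 2, 1 and 2 */
	uint8_t fw[] = {
		0x09, 0xfc, 3, 0x11, 0x22, 0x33,
		0x09, 0xfc, 2, 0x44, 0x55,
		0x09, 0xfc, 1, 0x66,
		0x09, 0xfc, 2, 0x77, 0x88,
	};
	const uint8_t *ptr = fw;
	size_t frag_len = 0;

	while ((size_t)(ptr - fw) < sizeof(fw)) {
		const struct cmd_hdr *cmd = (const void *)(ptr + frag_len);

		frag_len += sizeof(*cmd) + cmd->plen;

		/* keep accumulating until the fragment is 4-byte aligned */
		if (frag_len % 4)
			continue;

		printf("send one Data fragment of %zu bytes\n", frag_len);
		ptr += frag_len;
		frag_len = 0;
	}
	return 0;
}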
|
||||
|
||||
static int intel_recv_event(struct hci_dev *hdev, struct sk_buff *skb)
|
||||
{
|
||||
struct hci_uart *hu = hci_get_drvdata(hdev);
|
||||
struct intel_data *intel = hu->priv;
|
||||
struct hci_event_hdr *hdr;
|
||||
|
||||
if (!test_bit(STATE_BOOTLOADER, &intel->flags) &&
|
||||
!test_bit(STATE_BOOTING, &intel->flags))
|
||||
goto recv;
|
||||
|
||||
hdr = (void *)skb->data;
|
||||
|
||||
/* When the firmware loading completes the device sends
|
||||
* out a vendor specific event indicating the result of
|
||||
* the firmware loading.
|
||||
*/
|
||||
if (skb->len == 7 && hdr->evt == 0xff && hdr->plen == 0x05 &&
|
||||
skb->data[2] == 0x06) {
|
||||
if (skb->data[3] != 0x00)
|
||||
set_bit(STATE_FIRMWARE_FAILED, &intel->flags);
|
||||
|
||||
if (test_and_clear_bit(STATE_DOWNLOADING, &intel->flags) &&
|
||||
test_bit(STATE_FIRMWARE_LOADED, &intel->flags)) {
|
||||
smp_mb__after_atomic();
|
||||
wake_up_bit(&intel->flags, STATE_DOWNLOADING);
|
||||
}
|
||||
|
||||
/* When switching to the operational firmware the device
|
||||
* sends a vendor specific event indicating that the bootup
|
||||
* completed.
|
||||
*/
|
||||
} else if (skb->len == 9 && hdr->evt == 0xff && hdr->plen == 0x07 &&
|
||||
skb->data[2] == 0x02) {
|
||||
if (test_and_clear_bit(STATE_BOOTING, &intel->flags)) {
|
||||
smp_mb__after_atomic();
|
||||
wake_up_bit(&intel->flags, STATE_BOOTING);
|
||||
}
|
||||
}
|
||||
recv:
|
||||
return hci_recv_frame(hdev, skb);
|
||||
}
|
||||
|
||||
static const struct h4_recv_pkt intel_recv_pkts[] = {
|
||||
{ H4_RECV_ACL, .recv = hci_recv_frame },
|
||||
{ H4_RECV_SCO, .recv = hci_recv_frame },
|
||||
{ H4_RECV_EVENT, .recv = intel_recv_event },
|
||||
};
|
||||
|
||||
static int intel_recv(struct hci_uart *hu, const void *data, int count)
|
||||
{
|
||||
struct intel_data *intel = hu->priv;
|
||||
|
||||
if (!test_bit(HCI_UART_REGISTERED, &hu->flags))
|
||||
return -EUNATCH;
|
||||
|
||||
intel->rx_skb = h4_recv_buf(hu->hdev, intel->rx_skb, data, count,
|
||||
intel_recv_pkts,
|
||||
ARRAY_SIZE(intel_recv_pkts));
|
||||
if (IS_ERR(intel->rx_skb)) {
|
||||
int err = PTR_ERR(intel->rx_skb);
|
||||
BT_ERR("%s: Frame reassembly failed (%d)", hu->hdev->name, err);
|
||||
intel->rx_skb = NULL;
|
||||
return err;
|
||||
}
|
||||
|
||||
return count;
|
||||
}
|
||||
|
||||
static int intel_enqueue(struct hci_uart *hu, struct sk_buff *skb)
|
||||
{
|
||||
struct intel_data *intel = hu->priv;
|
||||
|
||||
BT_DBG("hu %p skb %p", hu, skb);
|
||||
|
||||
skb_queue_tail(&intel->txq, skb);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static struct sk_buff *intel_dequeue(struct hci_uart *hu)
|
||||
{
|
||||
struct intel_data *intel = hu->priv;
|
||||
struct sk_buff *skb;
|
||||
|
||||
skb = skb_dequeue(&intel->txq);
|
||||
if (!skb)
|
||||
return skb;
|
||||
|
||||
if (test_bit(STATE_BOOTLOADER, &intel->flags) &&
|
||||
(bt_cb(skb)->pkt_type == HCI_COMMAND_PKT)) {
|
||||
struct hci_command_hdr *cmd = (void *)skb->data;
|
||||
__u16 opcode = le16_to_cpu(cmd->opcode);
|
||||
|
||||
/* When the 0xfc01 command is issued to boot into
|
||||
* the operational firmware, it will actually not
|
||||
* send a command complete event. To keep the flow
|
||||
* control working inject that event here.
|
||||
*/
|
||||
if (opcode == 0xfc01)
|
||||
inject_cmd_complete(hu->hdev, opcode);
|
||||
}
|
||||
|
||||
/* Prepend skb with frame type */
|
||||
memcpy(skb_push(skb, 1), &bt_cb(skb)->pkt_type, 1);
|
||||
|
||||
return skb;
|
||||
}
|
||||
|
||||
static const struct hci_uart_proto intel_proto = {
|
||||
.id = HCI_UART_INTEL,
|
||||
.name = "Intel",
|
||||
.init_speed = 115200,
|
||||
.oper_speed = 3000000,
|
||||
.open = intel_open,
|
||||
.close = intel_close,
|
||||
.flush = intel_flush,
|
||||
.setup = intel_setup,
|
||||
.set_baudrate = intel_set_baudrate,
|
||||
.recv = intel_recv,
|
||||
.enqueue = intel_enqueue,
|
||||
.dequeue = intel_dequeue,
|
||||
};
|
||||
|
||||
#ifdef CONFIG_ACPI
|
||||
static const struct acpi_device_id intel_acpi_match[] = {
|
||||
{ "INT33E1", 0 },
|
||||
{ },
|
||||
};
|
||||
MODULE_DEVICE_TABLE(acpi, intel_acpi_match);
|
||||
|
||||
static int intel_acpi_probe(struct intel_device *idev)
|
||||
{
|
||||
const struct acpi_device_id *id;
|
||||
|
||||
id = acpi_match_device(intel_acpi_match, &idev->pdev->dev);
|
||||
if (!id)
|
||||
return -ENODEV;
|
||||
|
||||
return 0;
|
||||
}
|
||||
#else
|
||||
static int intel_acpi_probe(struct intel_device *idev)
|
||||
{
|
||||
return -ENODEV;
|
||||
}
|
||||
#endif
|
||||
|
||||
static int intel_probe(struct platform_device *pdev)
|
||||
{
|
||||
struct intel_device *idev;
|
||||
|
||||
idev = devm_kzalloc(&pdev->dev, sizeof(*idev), GFP_KERNEL);
|
||||
if (!idev)
|
||||
return -ENOMEM;
|
||||
|
||||
idev->pdev = pdev;
|
||||
|
||||
if (ACPI_HANDLE(&pdev->dev)) {
|
||||
int err = intel_acpi_probe(idev);
|
||||
if (err)
|
||||
return err;
|
||||
} else {
|
||||
return -ENODEV;
|
||||
}
|
||||
|
||||
idev->reset = devm_gpiod_get_optional(&pdev->dev, "reset",
|
||||
GPIOD_OUT_LOW);
|
||||
if (IS_ERR(idev->reset)) {
|
||||
dev_err(&pdev->dev, "Unable to retrieve gpio\n");
|
||||
return PTR_ERR(idev->reset);
|
||||
}
|
||||
|
||||
platform_set_drvdata(pdev, idev);
|
||||
|
||||
/* Place this instance on the device list */
|
||||
spin_lock(&intel_device_list_lock);
|
||||
list_add_tail(&idev->list, &intel_device_list);
|
||||
spin_unlock(&intel_device_list_lock);
|
||||
|
||||
dev_info(&pdev->dev, "registered.\n");
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int intel_remove(struct platform_device *pdev)
|
||||
{
|
||||
struct intel_device *idev = platform_get_drvdata(pdev);
|
||||
|
||||
spin_lock(&intel_device_list_lock);
|
||||
list_del(&idev->list);
|
||||
spin_unlock(&intel_device_list_lock);
|
||||
|
||||
dev_info(&pdev->dev, "unregistered.\n");
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static struct platform_driver intel_driver = {
|
||||
.probe = intel_probe,
|
||||
.remove = intel_remove,
|
||||
.driver = {
|
||||
.name = "hci_intel",
|
||||
.acpi_match_table = ACPI_PTR(intel_acpi_match),
|
||||
},
|
||||
};
|
||||
|
||||
int __init intel_init(void)
|
||||
{
|
||||
platform_driver_register(&intel_driver);
|
||||
|
||||
return hci_uart_register_proto(&intel_proto);
|
||||
}
|
||||
|
||||
int __exit intel_deinit(void)
|
||||
{
|
||||
platform_driver_unregister(&intel_driver);
|
||||
|
||||
return hci_uart_unregister_proto(&intel_proto);
|
||||
}
|
||||
|
|
|
@ -770,7 +770,7 @@ static int __init hci_uart_init(void)
|
|||
|
||||
/* Register the tty discipline */
|
||||
|
||||
memset(&hci_uart_ldisc, 0, sizeof (hci_uart_ldisc));
|
||||
memset(&hci_uart_ldisc, 0, sizeof(hci_uart_ldisc));
|
||||
hci_uart_ldisc.magic = TTY_LDISC_MAGIC;
|
||||
hci_uart_ldisc.name = "n_hci";
|
||||
hci_uart_ldisc.open = hci_uart_tty_open;
|
||||
|
@ -804,9 +804,15 @@ static int __init hci_uart_init(void)
|
|||
#ifdef CONFIG_BT_HCIUART_3WIRE
|
||||
h5_init();
|
||||
#endif
|
||||
#ifdef CONFIG_BT_HCIUART_INTEL
|
||||
intel_init();
|
||||
#endif
|
||||
#ifdef CONFIG_BT_HCIUART_BCM
|
||||
bcm_init();
|
||||
#endif
|
||||
#ifdef CONFIG_BT_HCIUART_QCA
|
||||
qca_init();
|
||||
#endif
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
@ -830,9 +836,15 @@ static void __exit hci_uart_exit(void)
|
|||
#ifdef CONFIG_BT_HCIUART_3WIRE
|
||||
h5_deinit();
|
||||
#endif
|
||||
#ifdef CONFIG_BT_HCIUART_INTEL
|
||||
intel_deinit();
|
||||
#endif
|
||||
#ifdef CONFIG_BT_HCIUART_BCM
|
||||
bcm_deinit();
|
||||
#endif
|
||||
#ifdef CONFIG_BT_HCIUART_QCA
|
||||
qca_deinit();
|
||||
#endif
|
||||
|
||||
/* Release tty registration of line discipline */
|
||||
err = tty_unregister_ldisc(N_HCI);
|
||||
|
|
|
@ -0,0 +1,969 @@
|
|||
/*
|
||||
* Bluetooth Software UART Qualcomm protocol
|
||||
*
|
||||
* HCI_IBS (HCI In-Band Sleep) is Qualcomm's power management
|
||||
* protocol extension to H4.
|
||||
*
|
||||
* Copyright (C) 2007 Texas Instruments, Inc.
|
||||
* Copyright (c) 2010, 2012 The Linux Foundation. All rights reserved.
|
||||
*
|
||||
* Acknowledgements:
|
||||
* This file is based on hci_ll.c, which was...
|
||||
* Written by Ohad Ben-Cohen <ohad@bencohen.org>
|
||||
* which was in turn based on hci_h4.c, which was written
|
||||
* by Maxim Krasnyansky and Marcel Holtmann.
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2
|
||||
* as published by the Free Software Foundation
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
* GNU General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License
|
||||
* along with this program; if not, write to the Free Software
|
||||
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
|
||||
*
|
||||
*/
|
||||
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/debugfs.h>
|
||||
|
||||
#include <net/bluetooth/bluetooth.h>
|
||||
#include <net/bluetooth/hci_core.h>
|
||||
|
||||
#include "hci_uart.h"
|
||||
#include "btqca.h"
|
||||
|
||||
/* HCI_IBS protocol messages */
|
||||
#define HCI_IBS_SLEEP_IND 0xFE
|
||||
#define HCI_IBS_WAKE_IND 0xFD
|
||||
#define HCI_IBS_WAKE_ACK 0xFC
|
||||
#define HCI_MAX_IBS_SIZE 10
|
||||
|
||||
/* Controller states */
|
||||
#define STATE_IN_BAND_SLEEP_ENABLED 1
|
||||
|
||||
#define IBS_WAKE_RETRANS_TIMEOUT_MS 100
|
||||
#define IBS_TX_IDLE_TIMEOUT_MS 2000
|
||||
#define BAUDRATE_SETTLE_TIMEOUT_MS 300
|
||||
|
||||
/* HCI_IBS transmit side sleep protocol states */
|
||||
enum tx_ibs_states {
|
||||
HCI_IBS_TX_ASLEEP,
|
||||
HCI_IBS_TX_WAKING,
|
||||
HCI_IBS_TX_AWAKE,
|
||||
};
|
||||
|
||||
/* HCI_IBS receive side sleep protocol states */
|
||||
enum rx_states {
|
||||
HCI_IBS_RX_ASLEEP,
|
||||
HCI_IBS_RX_AWAKE,
|
||||
};
|
||||
|
||||
/* HCI_IBS transmit and receive side clock state vote */
|
||||
enum hci_ibs_clock_state_vote {
|
||||
HCI_IBS_VOTE_STATS_UPDATE,
|
||||
HCI_IBS_TX_VOTE_CLOCK_ON,
|
||||
HCI_IBS_TX_VOTE_CLOCK_OFF,
|
||||
HCI_IBS_RX_VOTE_CLOCK_ON,
|
||||
HCI_IBS_RX_VOTE_CLOCK_OFF,
|
||||
};
|
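/* Illustrative aside (not part of this patch): the enums above drive two
 * small in-band-sleep state machines, one per direction. A minimal,
 * hypothetical sketch of the TX-side transitions implemented below by
 * qca_enqueue(), device_woke_up() and hci_ibs_tx_idle_timeout(); the
 * simplified enum and helper exist only for illustration.
 */
#include <string.h>

enum tx_ibs_sketch { SK_TX_ASLEEP, SK_TX_WAKING, SK_TX_AWAKE };

static enum tx_ibs_sketch tx_ibs_next(enum tx_ibs_sketch cur, const char *ev)
{
	/* Data queued while asleep: park the frame, send HCI_IBS_WAKE_IND
	 * and wait for the controller's WAKE_ACK. */
	if (cur == SK_TX_ASLEEP && !strcmp(ev, "enqueue"))
		return SK_TX_WAKING;
	/* WAKE_ACK received: flush the wait queue and arm the idle timer. */
	if (cur == SK_TX_WAKING && !strcmp(ev, "wake_ack"))
		return SK_TX_AWAKE;
	/* No TX for tx_idle_delay ms: send HCI_IBS_SLEEP_IND and drop the
	 * TX clock vote. */
	if (cur == SK_TX_AWAKE && !strcmp(ev, "idle_timeout"))
		return SK_TX_ASLEEP;
	return cur;	/* any other event leaves the state unchanged */
}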
||||
|
||||
struct qca_data {
|
||||
struct hci_uart *hu;
|
||||
struct sk_buff *rx_skb;
|
||||
struct sk_buff_head txq;
|
||||
struct sk_buff_head tx_wait_q; /* HCI_IBS wait queue */
|
||||
spinlock_t hci_ibs_lock; /* HCI_IBS state lock */
|
||||
u8 tx_ibs_state; /* HCI_IBS transmit side power state*/
|
||||
u8 rx_ibs_state; /* HCI_IBS receive side power state */
|
||||
u32 tx_vote; /* Clock must be on for TX */
|
||||
u32 rx_vote; /* Clock must be on for RX */
|
||||
struct timer_list tx_idle_timer;
|
||||
u32 tx_idle_delay;
|
||||
struct timer_list wake_retrans_timer;
|
||||
u32 wake_retrans;
|
||||
struct workqueue_struct *workqueue;
|
||||
struct work_struct ws_awake_rx;
|
||||
struct work_struct ws_awake_device;
|
||||
struct work_struct ws_rx_vote_off;
|
||||
struct work_struct ws_tx_vote_off;
|
||||
unsigned long flags;
|
||||
|
||||
/* For debugging purpose */
|
||||
u64 ibs_sent_wacks;
|
||||
u64 ibs_sent_slps;
|
||||
u64 ibs_sent_wakes;
|
||||
u64 ibs_recv_wacks;
|
||||
u64 ibs_recv_slps;
|
||||
u64 ibs_recv_wakes;
|
||||
u64 vote_last_jif;
|
||||
u32 vote_on_ms;
|
||||
u32 vote_off_ms;
|
||||
u64 tx_votes_on;
|
||||
u64 rx_votes_on;
|
||||
u64 tx_votes_off;
|
||||
u64 rx_votes_off;
|
||||
u64 votes_on;
|
||||
u64 votes_off;
|
||||
};
|
||||
|
||||
static void __serial_clock_on(struct tty_struct *tty)
|
||||
{
|
||||
/* TODO: Some chipsets require the UART clock to be enabled on the client
|
||||
* side to save power, or manual intervention is needed.
|
||||
* Put code here to turn the UART clock on if needed
|
||||
*/
|
||||
}
|
||||
|
||||
static void __serial_clock_off(struct tty_struct *tty)
|
||||
{
|
||||
/* TODO: Some chipsets require the UART clock to be disabled on the client
|
||||
* side to save power, or manual intervention is needed.
|
||||
* Put code here to turn the UART clock off if needed
|
||||
*/
|
||||
}
|
||||
|
||||
/* serial_clock_vote needs to be called with the ibs lock held */
|
||||
static void serial_clock_vote(unsigned long vote, struct hci_uart *hu)
|
||||
{
|
||||
struct qca_data *qca = hu->priv;
|
||||
unsigned int diff;
|
||||
|
||||
bool old_vote = (qca->tx_vote | qca->rx_vote);
|
||||
bool new_vote;
|
||||
|
||||
switch (vote) {
|
||||
case HCI_IBS_VOTE_STATS_UPDATE:
|
||||
diff = jiffies_to_msecs(jiffies - qca->vote_last_jif);
|
||||
|
||||
if (old_vote)
|
||||
qca->vote_off_ms += diff;
|
||||
else
|
||||
qca->vote_on_ms += diff;
|
||||
return;
|
||||
|
||||
case HCI_IBS_TX_VOTE_CLOCK_ON:
|
||||
qca->tx_vote = true;
|
||||
qca->tx_votes_on++;
|
||||
new_vote = true;
|
||||
break;
|
||||
|
||||
case HCI_IBS_RX_VOTE_CLOCK_ON:
|
||||
qca->rx_vote = true;
|
||||
qca->rx_votes_on++;
|
||||
new_vote = true;
|
||||
break;
|
||||
|
||||
case HCI_IBS_TX_VOTE_CLOCK_OFF:
|
||||
qca->tx_vote = false;
|
||||
qca->tx_votes_off++;
|
||||
new_vote = qca->rx_vote | qca->tx_vote;
|
||||
break;
|
||||
|
||||
case HCI_IBS_RX_VOTE_CLOCK_OFF:
|
||||
qca->rx_vote = false;
|
||||
qca->rx_votes_off++;
|
||||
new_vote = qca->rx_vote | qca->tx_vote;
|
||||
break;
|
||||
|
||||
default:
|
||||
BT_ERR("Voting irregularity");
|
||||
return;
|
||||
}
|
||||
|
||||
if (new_vote != old_vote) {
|
||||
if (new_vote)
|
||||
__serial_clock_on(hu->tty);
|
||||
else
|
||||
__serial_clock_off(hu->tty);
|
||||
|
||||
BT_DBG("Vote serial clock %s(%s)", new_vote? "true" : "false",
|
||||
vote? "true" : "false");
|
||||
|
||||
diff = jiffies_to_msecs(jiffies - qca->vote_last_jif);
|
||||
|
||||
if (new_vote) {
|
||||
qca->votes_on++;
|
||||
qca->vote_off_ms += diff;
|
||||
} else {
|
||||
qca->votes_off++;
|
||||
qca->vote_on_ms += diff;
|
||||
}
|
||||
qca->vote_last_jif = jiffies;
|
||||
}
|
||||
}
|
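/* Illustrative aside (not part of this patch): serial_clock_vote() above
 * keeps the UART clock requested whenever either direction needs it; the
 * effective request is the OR of the TX and RX votes, and the clock is only
 * toggled when that combined value changes. A tiny standalone sketch of the
 * edge-triggered OR vote (names are illustrative only):
 */
#include <stdbool.h>

struct clk_vote_sketch {
	bool tx_vote;
	bool rx_vote;
	bool clock_on;	/* last value pushed towards the UART clock */
};

static void clk_vote_update(struct clk_vote_sketch *v)
{
	bool want = v->tx_vote || v->rx_vote;

	/* Only act on a change of the combined vote; this is the point where
	 * __serial_clock_on()/__serial_clock_off() are called in the driver. */
	if (want != v->clock_on)
		v->clock_on = want;
}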
||||
|
||||
/* Builds and sends an HCI_IBS command packet.
|
||||
* These are very simple packets with only 1 cmd byte.
|
||||
*/
|
||||
static int send_hci_ibs_cmd(u8 cmd, struct hci_uart *hu)
|
||||
{
|
||||
int err = 0;
|
||||
struct sk_buff *skb = NULL;
|
||||
struct qca_data *qca = hu->priv;
|
||||
|
||||
BT_DBG("hu %p send hci ibs cmd 0x%x", hu, cmd);
|
||||
|
||||
skb = bt_skb_alloc(1, GFP_ATOMIC);
|
||||
if (!skb) {
|
||||
BT_ERR("Failed to allocate memory for HCI_IBS packet");
|
||||
return -ENOMEM;
|
||||
}
|
||||
|
||||
/* Assign HCI_IBS type */
|
||||
*skb_put(skb, 1) = cmd;
|
||||
|
||||
skb_queue_tail(&qca->txq, skb);
|
||||
|
||||
return err;
|
||||
}
|
||||
|
||||
static void qca_wq_awake_device(struct work_struct *work)
|
||||
{
|
||||
struct qca_data *qca = container_of(work, struct qca_data,
|
||||
ws_awake_device);
|
||||
struct hci_uart *hu = qca->hu;
|
||||
unsigned long retrans_delay;
|
||||
|
||||
BT_DBG("hu %p wq awake device", hu);
|
||||
|
||||
/* Vote for serial clock */
|
||||
serial_clock_vote(HCI_IBS_TX_VOTE_CLOCK_ON, hu);
|
||||
|
||||
spin_lock(&qca->hci_ibs_lock);
|
||||
|
||||
/* Send wake indication to device */
|
||||
if (send_hci_ibs_cmd(HCI_IBS_WAKE_IND, hu) < 0)
|
||||
BT_ERR("Failed to send WAKE to device");
|
||||
|
||||
qca->ibs_sent_wakes++;
|
||||
|
||||
/* Start retransmit timer */
|
||||
retrans_delay = msecs_to_jiffies(qca->wake_retrans);
|
||||
mod_timer(&qca->wake_retrans_timer, jiffies + retrans_delay);
|
||||
|
||||
spin_unlock(&qca->hci_ibs_lock);
|
||||
|
||||
/* Actually send the packets */
|
||||
hci_uart_tx_wakeup(hu);
|
||||
}
|
||||
|
||||
static void qca_wq_awake_rx(struct work_struct *work)
|
||||
{
|
||||
struct qca_data *qca = container_of(work, struct qca_data,
|
||||
ws_awake_rx);
|
||||
struct hci_uart *hu = qca->hu;
|
||||
|
||||
BT_DBG("hu %p wq awake rx", hu);
|
||||
|
||||
serial_clock_vote(HCI_IBS_RX_VOTE_CLOCK_ON, hu);
|
||||
|
||||
spin_lock(&qca->hci_ibs_lock);
|
||||
qca->rx_ibs_state = HCI_IBS_RX_AWAKE;
|
||||
|
||||
/* Always acknowledge device wake up,
|
||||
* sending an IBS message doesn't count as TX ON.
|
||||
*/
|
||||
if (send_hci_ibs_cmd(HCI_IBS_WAKE_ACK, hu) < 0)
|
||||
BT_ERR("Failed to acknowledge device wake up");
|
||||
|
||||
qca->ibs_sent_wacks++;
|
||||
|
||||
spin_unlock(&qca->hci_ibs_lock);
|
||||
|
||||
/* Actually send the packets */
|
||||
hci_uart_tx_wakeup(hu);
|
||||
}
|
||||
|
||||
static void qca_wq_serial_rx_clock_vote_off(struct work_struct *work)
|
||||
{
|
||||
struct qca_data *qca = container_of(work, struct qca_data,
|
||||
ws_rx_vote_off);
|
||||
struct hci_uart *hu = qca->hu;
|
||||
|
||||
BT_DBG("hu %p rx clock vote off", hu);
|
||||
|
||||
serial_clock_vote(HCI_IBS_RX_VOTE_CLOCK_OFF, hu);
|
||||
}
|
||||
|
||||
static void qca_wq_serial_tx_clock_vote_off(struct work_struct *work)
|
||||
{
|
||||
struct qca_data *qca = container_of(work, struct qca_data,
|
||||
ws_tx_vote_off);
|
||||
struct hci_uart *hu = qca->hu;
|
||||
|
||||
BT_DBG("hu %p tx clock vote off", hu);
|
||||
|
||||
/* Run HCI tx handling unlocked */
|
||||
hci_uart_tx_wakeup(hu);
|
||||
|
||||
/* Now that message queued to tty driver, vote for tty clocks off.
|
||||
* It is up to the tty driver to pend the clocks off until tx done.
|
||||
*/
|
||||
serial_clock_vote(HCI_IBS_TX_VOTE_CLOCK_OFF, hu);
|
||||
}
|
||||
|
||||
static void hci_ibs_tx_idle_timeout(unsigned long arg)
|
||||
{
|
||||
struct hci_uart *hu = (struct hci_uart *)arg;
|
||||
struct qca_data *qca = hu->priv;
|
||||
unsigned long flags;
|
||||
|
||||
BT_DBG("hu %p idle timeout in %d state", hu, qca->tx_ibs_state);
|
||||
|
||||
spin_lock_irqsave_nested(&qca->hci_ibs_lock,
|
||||
flags, SINGLE_DEPTH_NESTING);
|
||||
|
||||
switch (qca->tx_ibs_state) {
|
||||
case HCI_IBS_TX_AWAKE:
|
||||
/* TX_IDLE, go to SLEEP */
|
||||
if (send_hci_ibs_cmd(HCI_IBS_SLEEP_IND, hu) < 0) {
|
||||
BT_ERR("Failed to send SLEEP to device");
|
||||
break;
|
||||
}
|
||||
qca->tx_ibs_state = HCI_IBS_TX_ASLEEP;
|
||||
qca->ibs_sent_slps++;
|
||||
queue_work(qca->workqueue, &qca->ws_tx_vote_off);
|
||||
break;
|
||||
|
||||
case HCI_IBS_TX_ASLEEP:
|
||||
case HCI_IBS_TX_WAKING:
|
||||
/* Fall through */
|
||||
|
||||
default:
|
||||
BT_ERR("Spurrious timeout tx state %d", qca->tx_ibs_state);
|
||||
break;
|
||||
}
|
||||
|
||||
spin_unlock_irqrestore(&qca->hci_ibs_lock, flags);
|
||||
}
|
||||
|
||||
static void hci_ibs_wake_retrans_timeout(unsigned long arg)
|
||||
{
|
||||
struct hci_uart *hu = (struct hci_uart *)arg;
|
||||
struct qca_data *qca = hu->priv;
|
||||
unsigned long flags, retrans_delay;
|
||||
unsigned long retransmit = 0;
|
||||
|
||||
BT_DBG("hu %p wake retransmit timeout in %d state",
|
||||
hu, qca->tx_ibs_state);
|
||||
|
||||
spin_lock_irqsave_nested(&qca->hci_ibs_lock,
|
||||
flags, SINGLE_DEPTH_NESTING);
|
||||
|
||||
switch (qca->tx_ibs_state) {
|
||||
case HCI_IBS_TX_WAKING:
|
||||
/* No WAKE_ACK, retransmit WAKE */
|
||||
retransmit = 1;
|
||||
if (send_hci_ibs_cmd(HCI_IBS_WAKE_IND, hu) < 0) {
|
||||
BT_ERR("Failed to acknowledge device wake up");
|
||||
break;
|
||||
}
|
||||
qca->ibs_sent_wakes++;
|
||||
retrans_delay = msecs_to_jiffies(qca->wake_retrans);
|
||||
mod_timer(&qca->wake_retrans_timer, jiffies + retrans_delay);
|
||||
break;
|
||||
|
||||
case HCI_IBS_TX_ASLEEP:
|
||||
case HCI_IBS_TX_AWAKE:
|
||||
/* Fall through */
|
||||
|
||||
default:
|
||||
BT_ERR("Spurrious timeout tx state %d", qca->tx_ibs_state);
|
||||
break;
|
||||
}
|
||||
|
||||
spin_unlock_irqrestore(&qca->hci_ibs_lock, flags);
|
||||
|
||||
if (retransmit)
|
||||
hci_uart_tx_wakeup(hu);
|
||||
}
|
||||
|
||||
/* Initialize protocol */
|
||||
static int qca_open(struct hci_uart *hu)
|
||||
{
|
||||
struct qca_data *qca;
|
||||
|
||||
BT_DBG("hu %p qca_open", hu);
|
||||
|
||||
qca = kzalloc(sizeof(struct qca_data), GFP_ATOMIC);
|
||||
if (!qca)
|
||||
return -ENOMEM;
|
||||
|
||||
skb_queue_head_init(&qca->txq);
|
||||
skb_queue_head_init(&qca->tx_wait_q);
|
||||
spin_lock_init(&qca->hci_ibs_lock);
|
||||
qca->workqueue = create_singlethread_workqueue("qca_wq");
|
||||
if (!qca->workqueue) {
|
||||
BT_ERR("QCA Workqueue not initialized properly");
|
||||
kfree(qca);
|
||||
return -ENOMEM;
|
||||
}
|
||||
|
||||
INIT_WORK(&qca->ws_awake_rx, qca_wq_awake_rx);
|
||||
INIT_WORK(&qca->ws_awake_device, qca_wq_awake_device);
|
||||
INIT_WORK(&qca->ws_rx_vote_off, qca_wq_serial_rx_clock_vote_off);
|
||||
INIT_WORK(&qca->ws_tx_vote_off, qca_wq_serial_tx_clock_vote_off);
|
||||
|
||||
qca->hu = hu;
|
||||
|
||||
/* Assume we start with both sides asleep -- extra wakes OK */
|
||||
qca->tx_ibs_state = HCI_IBS_TX_ASLEEP;
|
||||
qca->rx_ibs_state = HCI_IBS_RX_ASLEEP;
|
||||
|
||||
/* clocks actually on, but we start votes off */
|
||||
qca->tx_vote = false;
|
||||
qca->rx_vote = false;
|
||||
qca->flags = 0;
|
||||
|
||||
qca->ibs_sent_wacks = 0;
|
||||
qca->ibs_sent_slps = 0;
|
||||
qca->ibs_sent_wakes = 0;
|
||||
qca->ibs_recv_wacks = 0;
|
||||
qca->ibs_recv_slps = 0;
|
||||
qca->ibs_recv_wakes = 0;
|
||||
qca->vote_last_jif = jiffies;
|
||||
qca->vote_on_ms = 0;
|
||||
qca->vote_off_ms = 0;
|
||||
qca->votes_on = 0;
|
||||
qca->votes_off = 0;
|
||||
qca->tx_votes_on = 0;
|
||||
qca->tx_votes_off = 0;
|
||||
qca->rx_votes_on = 0;
|
||||
qca->rx_votes_off = 0;
|
||||
|
||||
hu->priv = qca;
|
||||
|
||||
init_timer(&qca->wake_retrans_timer);
|
||||
qca->wake_retrans_timer.function = hci_ibs_wake_retrans_timeout;
|
||||
qca->wake_retrans_timer.data = (u_long)hu;
|
||||
qca->wake_retrans = IBS_WAKE_RETRANS_TIMEOUT_MS;
|
||||
|
||||
init_timer(&qca->tx_idle_timer);
|
||||
qca->tx_idle_timer.function = hci_ibs_tx_idle_timeout;
|
||||
qca->tx_idle_timer.data = (u_long)hu;
|
||||
qca->tx_idle_delay = IBS_TX_IDLE_TIMEOUT_MS;
|
||||
|
||||
BT_DBG("HCI_UART_QCA open, tx_idle_delay=%u, wake_retrans=%u",
|
||||
qca->tx_idle_delay, qca->wake_retrans);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void qca_debugfs_init(struct hci_dev *hdev)
|
||||
{
|
||||
struct hci_uart *hu = hci_get_drvdata(hdev);
|
||||
struct qca_data *qca = hu->priv;
|
||||
struct dentry *ibs_dir;
|
||||
umode_t mode;
|
||||
|
||||
if (!hdev->debugfs)
|
||||
return;
|
||||
|
||||
ibs_dir = debugfs_create_dir("ibs", hdev->debugfs);
|
||||
|
||||
/* read only */
|
||||
mode = S_IRUGO;
|
||||
debugfs_create_u8("tx_ibs_state", mode, ibs_dir, &qca->tx_ibs_state);
|
||||
debugfs_create_u8("rx_ibs_state", mode, ibs_dir, &qca->rx_ibs_state);
|
||||
debugfs_create_u64("ibs_sent_sleeps", mode, ibs_dir,
|
||||
&qca->ibs_sent_slps);
|
||||
debugfs_create_u64("ibs_sent_wakes", mode, ibs_dir,
|
||||
&qca->ibs_sent_wakes);
|
||||
debugfs_create_u64("ibs_sent_wake_acks", mode, ibs_dir,
|
||||
&qca->ibs_sent_wacks);
|
||||
debugfs_create_u64("ibs_recv_sleeps", mode, ibs_dir,
|
||||
&qca->ibs_recv_slps);
|
||||
debugfs_create_u64("ibs_recv_wakes", mode, ibs_dir,
|
||||
&qca->ibs_recv_wakes);
|
||||
debugfs_create_u64("ibs_recv_wake_acks", mode, ibs_dir,
|
||||
&qca->ibs_recv_wacks);
|
||||
debugfs_create_bool("tx_vote", mode, ibs_dir, &qca->tx_vote);
|
||||
debugfs_create_u64("tx_votes_on", mode, ibs_dir, &qca->tx_votes_on);
|
||||
debugfs_create_u64("tx_votes_off", mode, ibs_dir, &qca->tx_votes_off);
|
||||
debugfs_create_bool("rx_vote", mode, ibs_dir, &qca->rx_vote);
|
||||
debugfs_create_u64("rx_votes_on", mode, ibs_dir, &qca->rx_votes_on);
|
||||
debugfs_create_u64("rx_votes_off", mode, ibs_dir, &qca->rx_votes_off);
|
||||
debugfs_create_u64("votes_on", mode, ibs_dir, &qca->votes_on);
|
||||
debugfs_create_u64("votes_off", mode, ibs_dir, &qca->votes_off);
|
||||
debugfs_create_u32("vote_on_ms", mode, ibs_dir, &qca->vote_on_ms);
|
||||
debugfs_create_u32("vote_off_ms", mode, ibs_dir, &qca->vote_off_ms);
|
||||
|
||||
/* read/write */
|
||||
mode = S_IRUGO | S_IWUSR;
|
||||
debugfs_create_u32("wake_retrans", mode, ibs_dir, &qca->wake_retrans);
|
||||
debugfs_create_u32("tx_idle_delay", mode, ibs_dir,
|
||||
&qca->tx_idle_delay);
|
||||
}
|
||||
|
||||
/* Flush protocol data */
|
||||
static int qca_flush(struct hci_uart *hu)
|
||||
{
|
||||
struct qca_data *qca = hu->priv;
|
||||
|
||||
BT_DBG("hu %p qca flush", hu);
|
||||
|
||||
skb_queue_purge(&qca->tx_wait_q);
|
||||
skb_queue_purge(&qca->txq);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
/* Close protocol */
|
||||
static int qca_close(struct hci_uart *hu)
|
||||
{
|
||||
struct qca_data *qca = hu->priv;
|
||||
|
||||
BT_DBG("hu %p qca close", hu);
|
||||
|
||||
serial_clock_vote(HCI_IBS_VOTE_STATS_UPDATE, hu);
|
||||
|
||||
skb_queue_purge(&qca->tx_wait_q);
|
||||
skb_queue_purge(&qca->txq);
|
||||
del_timer(&qca->tx_idle_timer);
|
||||
del_timer(&qca->wake_retrans_timer);
|
||||
destroy_workqueue(qca->workqueue);
|
||||
qca->hu = NULL;
|
||||
|
||||
kfree_skb(qca->rx_skb);
|
||||
|
||||
hu->priv = NULL;
|
||||
|
||||
kfree(qca);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
/* Called upon a wake-up-indication from the device.
|
||||
*/
|
||||
static void device_want_to_wakeup(struct hci_uart *hu)
|
||||
{
|
||||
unsigned long flags;
|
||||
struct qca_data *qca = hu->priv;
|
||||
|
||||
BT_DBG("hu %p want to wake up", hu);
|
||||
|
||||
spin_lock_irqsave(&qca->hci_ibs_lock, flags);
|
||||
|
||||
qca->ibs_recv_wakes++;
|
||||
|
||||
switch (qca->rx_ibs_state) {
|
||||
case HCI_IBS_RX_ASLEEP:
|
||||
/* Make sure the clock is on - we may have turned the clock off since
|
||||
* receiving the wake-up indicator, so wake the RX clock here.
|
||||
*/
|
||||
queue_work(qca->workqueue, &qca->ws_awake_rx);
|
||||
spin_unlock_irqrestore(&qca->hci_ibs_lock, flags);
|
||||
return;
|
||||
|
||||
case HCI_IBS_RX_AWAKE:
|
||||
/* Always acknowledge device wake up,
|
||||
* sending an IBS message doesn't count as TX ON.
|
||||
*/
|
||||
if (send_hci_ibs_cmd(HCI_IBS_WAKE_ACK, hu) < 0) {
|
||||
BT_ERR("Failed to acknowledge device wake up");
|
||||
break;
|
||||
}
|
||||
qca->ibs_sent_wacks++;
|
||||
break;
|
||||
|
||||
default:
|
||||
/* Any other state is illegal */
|
||||
BT_ERR("Received HCI_IBS_WAKE_IND in rx state %d",
|
||||
qca->rx_ibs_state);
|
||||
break;
|
||||
}
|
||||
|
||||
spin_unlock_irqrestore(&qca->hci_ibs_lock, flags);
|
||||
|
||||
/* Actually send the packets */
|
||||
hci_uart_tx_wakeup(hu);
|
||||
}
|
||||
|
||||
/* Called upon a sleep-indication from the device.
|
||||
*/
|
||||
static void device_want_to_sleep(struct hci_uart *hu)
|
||||
{
|
||||
unsigned long flags;
|
||||
struct qca_data *qca = hu->priv;
|
||||
|
||||
BT_DBG("hu %p want to sleep", hu);
|
||||
|
||||
spin_lock_irqsave(&qca->hci_ibs_lock, flags);
|
||||
|
||||
qca->ibs_recv_slps++;
|
||||
|
||||
switch (qca->rx_ibs_state) {
|
||||
case HCI_IBS_RX_AWAKE:
|
||||
/* Update state */
|
||||
qca->rx_ibs_state = HCI_IBS_RX_ASLEEP;
|
||||
/* Vote off rx clock under workqueue */
|
||||
queue_work(qca->workqueue, &qca->ws_rx_vote_off);
|
||||
break;
|
||||
|
||||
case HCI_IBS_RX_ASLEEP:
|
||||
/* Fall through */
|
||||
|
||||
default:
|
||||
/* Any other state is illegal */
|
||||
BT_ERR("Received HCI_IBS_SLEEP_IND in rx state %d",
|
||||
qca->rx_ibs_state);
|
||||
break;
|
||||
}
|
||||
|
||||
spin_unlock_irqrestore(&qca->hci_ibs_lock, flags);
|
||||
}
|
||||
|
||||
/* Called upon wake-up-acknowledgement from the device
|
||||
*/
|
||||
static void device_woke_up(struct hci_uart *hu)
|
||||
{
|
||||
unsigned long flags, idle_delay;
|
||||
struct qca_data *qca = hu->priv;
|
||||
struct sk_buff *skb = NULL;
|
||||
|
||||
BT_DBG("hu %p woke up", hu);
|
||||
|
||||
spin_lock_irqsave(&qca->hci_ibs_lock, flags);
|
||||
|
||||
qca->ibs_recv_wacks++;
|
||||
|
||||
switch (qca->tx_ibs_state) {
|
||||
case HCI_IBS_TX_AWAKE:
|
||||
/* Expect one if we send 2 WAKEs */
|
||||
BT_DBG("Received HCI_IBS_WAKE_ACK in tx state %d",
|
||||
qca->tx_ibs_state);
|
||||
break;
|
||||
|
||||
case HCI_IBS_TX_WAKING:
|
||||
/* Send pending packets */
|
||||
while ((skb = skb_dequeue(&qca->tx_wait_q)))
|
||||
skb_queue_tail(&qca->txq, skb);
|
||||
|
||||
/* Switch timers and change state to HCI_IBS_TX_AWAKE */
|
||||
del_timer(&qca->wake_retrans_timer);
|
||||
idle_delay = msecs_to_jiffies(qca->tx_idle_delay);
|
||||
mod_timer(&qca->tx_idle_timer, jiffies + idle_delay);
|
||||
qca->tx_ibs_state = HCI_IBS_TX_AWAKE;
|
||||
break;
|
||||
|
||||
case HCI_IBS_TX_ASLEEP:
|
||||
/* Fall through */
|
||||
|
||||
default:
|
||||
BT_ERR("Received HCI_IBS_WAKE_ACK in tx state %d",
|
||||
qca->tx_ibs_state);
|
||||
break;
|
||||
}
|
||||
|
||||
spin_unlock_irqrestore(&qca->hci_ibs_lock, flags);
|
||||
|
||||
/* Actually send the packets */
|
||||
hci_uart_tx_wakeup(hu);
|
||||
}
|
||||
|
||||
/* Enqueue frame for transmission (padding, crc, etc); may be called from
|
||||
* two simultaneous tasklets.
|
||||
*/
|
||||
static int qca_enqueue(struct hci_uart *hu, struct sk_buff *skb)
|
||||
{
|
||||
unsigned long flags = 0, idle_delay;
|
||||
struct qca_data *qca = hu->priv;
|
||||
|
||||
BT_DBG("hu %p qca enq skb %p tx_ibs_state %d", hu, skb,
|
||||
qca->tx_ibs_state);
|
||||
|
||||
/* Prepend skb with frame type */
|
||||
memcpy(skb_push(skb, 1), &bt_cb(skb)->pkt_type, 1);
|
||||
|
||||
/* Don't go to sleep in the middle of a patch download, or when
|
||||
* Out-Of-Band (GPIO controlled) sleep is selected.
|
||||
*/
|
||||
if (!test_bit(STATE_IN_BAND_SLEEP_ENABLED, &qca->flags)) {
|
||||
skb_queue_tail(&qca->txq, skb);
|
||||
return 0;
|
||||
}
|
||||
|
||||
spin_lock_irqsave(&qca->hci_ibs_lock, flags);
|
||||
|
||||
/* Act according to current state */
|
||||
switch (qca->tx_ibs_state) {
|
||||
case HCI_IBS_TX_AWAKE:
|
||||
BT_DBG("Device awake, sending normally");
|
||||
skb_queue_tail(&qca->txq, skb);
|
||||
idle_delay = msecs_to_jiffies(qca->tx_idle_delay);
|
||||
mod_timer(&qca->tx_idle_timer, jiffies + idle_delay);
|
||||
break;
|
||||
|
||||
case HCI_IBS_TX_ASLEEP:
|
||||
BT_DBG("Device asleep, waking up and queueing packet");
|
||||
/* Save packet for later */
|
||||
skb_queue_tail(&qca->tx_wait_q, skb);
|
||||
|
||||
qca->tx_ibs_state = HCI_IBS_TX_WAKING;
|
||||
/* Schedule a work queue to wake up device */
|
||||
queue_work(qca->workqueue, &qca->ws_awake_device);
|
||||
break;
|
||||
|
||||
case HCI_IBS_TX_WAKING:
|
||||
BT_DBG("Device waking up, queueing packet");
|
||||
/* Transient state; just keep packet for later */
|
||||
skb_queue_tail(&qca->tx_wait_q, skb);
|
||||
break;
|
||||
|
||||
default:
|
||||
BT_ERR("Illegal tx state: %d (losing packet)",
|
||||
qca->tx_ibs_state);
|
||||
kfree_skb(skb);
|
||||
break;
|
||||
}
|
||||
|
||||
spin_unlock_irqrestore(&qca->hci_ibs_lock, flags);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int qca_ibs_sleep_ind(struct hci_dev *hdev, struct sk_buff *skb)
|
||||
{
|
||||
struct hci_uart *hu = hci_get_drvdata(hdev);
|
||||
|
||||
BT_DBG("hu %p recv hci ibs cmd 0x%x", hu, HCI_IBS_SLEEP_IND);
|
||||
|
||||
device_want_to_sleep(hu);
|
||||
|
||||
kfree_skb(skb);
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int qca_ibs_wake_ind(struct hci_dev *hdev, struct sk_buff *skb)
|
||||
{
|
||||
struct hci_uart *hu = hci_get_drvdata(hdev);
|
||||
|
||||
BT_DBG("hu %p recv hci ibs cmd 0x%x", hu, HCI_IBS_WAKE_IND);
|
||||
|
||||
device_want_to_wakeup(hu);
|
||||
|
||||
kfree_skb(skb);
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int qca_ibs_wake_ack(struct hci_dev *hdev, struct sk_buff *skb)
|
||||
{
|
||||
struct hci_uart *hu = hci_get_drvdata(hdev);
|
||||
|
||||
BT_DBG("hu %p recv hci ibs cmd 0x%x", hu, HCI_IBS_WAKE_ACK);
|
||||
|
||||
device_woke_up(hu);
|
||||
|
||||
kfree_skb(skb);
|
||||
return 0;
|
||||
}
|
||||
|
||||
#define QCA_IBS_SLEEP_IND_EVENT \
|
||||
.type = HCI_IBS_SLEEP_IND, \
|
||||
.hlen = 0, \
|
||||
.loff = 0, \
|
||||
.lsize = 0, \
|
||||
.maxlen = HCI_MAX_IBS_SIZE
|
||||
|
||||
#define QCA_IBS_WAKE_IND_EVENT \
|
||||
.type = HCI_IBS_WAKE_IND, \
|
||||
.hlen = 0, \
|
||||
.loff = 0, \
|
||||
.lsize = 0, \
|
||||
.maxlen = HCI_MAX_IBS_SIZE
|
||||
|
||||
#define QCA_IBS_WAKE_ACK_EVENT \
|
||||
.type = HCI_IBS_WAKE_ACK, \
|
||||
.hlen = 0, \
|
||||
.loff = 0, \
|
||||
.lsize = 0, \
|
||||
.maxlen = HCI_MAX_IBS_SIZE
|
||||
|
||||
static const struct h4_recv_pkt qca_recv_pkts[] = {
|
||||
{ H4_RECV_ACL, .recv = hci_recv_frame },
|
||||
{ H4_RECV_SCO, .recv = hci_recv_frame },
|
||||
{ H4_RECV_EVENT, .recv = hci_recv_frame },
|
||||
{ QCA_IBS_WAKE_IND_EVENT, .recv = qca_ibs_wake_ind },
|
||||
{ QCA_IBS_WAKE_ACK_EVENT, .recv = qca_ibs_wake_ack },
|
||||
{ QCA_IBS_SLEEP_IND_EVENT, .recv = qca_ibs_sleep_ind },
|
||||
};
|
||||
|
||||
static int qca_recv(struct hci_uart *hu, const void *data, int count)
|
||||
{
|
||||
struct qca_data *qca = hu->priv;
|
||||
|
||||
if (!test_bit(HCI_UART_REGISTERED, &hu->flags))
|
||||
return -EUNATCH;
|
||||
|
||||
qca->rx_skb = h4_recv_buf(hu->hdev, qca->rx_skb, data, count,
|
||||
qca_recv_pkts, ARRAY_SIZE(qca_recv_pkts));
|
||||
if (IS_ERR(qca->rx_skb)) {
|
||||
int err = PTR_ERR(qca->rx_skb);
|
||||
BT_ERR("%s: Frame reassembly failed (%d)", hu->hdev->name, err);
|
||||
qca->rx_skb = NULL;
|
||||
return err;
|
||||
}
|
||||
|
||||
return count;
|
||||
}
|
||||
|
||||
static struct sk_buff *qca_dequeue(struct hci_uart *hu)
|
||||
{
|
||||
struct qca_data *qca = hu->priv;
|
||||
|
||||
return skb_dequeue(&qca->txq);
|
||||
}
|
||||
|
||||
static uint8_t qca_get_baudrate_value(int speed)
|
||||
{
|
||||
switch (speed) {
|
||||
case 9600:
|
||||
return QCA_BAUDRATE_9600;
|
||||
case 19200:
|
||||
return QCA_BAUDRATE_19200;
|
||||
case 38400:
|
||||
return QCA_BAUDRATE_38400;
|
||||
case 57600:
|
||||
return QCA_BAUDRATE_57600;
|
||||
case 115200:
|
||||
return QCA_BAUDRATE_115200;
|
||||
case 230400:
|
||||
return QCA_BAUDRATE_230400;
|
||||
case 460800:
|
||||
return QCA_BAUDRATE_460800;
|
||||
case 500000:
|
||||
return QCA_BAUDRATE_500000;
|
||||
case 921600:
|
||||
return QCA_BAUDRATE_921600;
|
||||
case 1000000:
|
||||
return QCA_BAUDRATE_1000000;
|
||||
case 2000000:
|
||||
return QCA_BAUDRATE_2000000;
|
||||
case 3000000:
|
||||
return QCA_BAUDRATE_3000000;
|
||||
case 3500000:
|
||||
return QCA_BAUDRATE_3500000;
|
||||
default:
|
||||
return QCA_BAUDRATE_115200;
|
||||
}
|
||||
}
|
||||
|
||||
static int qca_set_baudrate(struct hci_dev *hdev, uint8_t baudrate)
|
||||
{
|
||||
struct hci_uart *hu = hci_get_drvdata(hdev);
|
||||
struct qca_data *qca = hu->priv;
|
||||
struct sk_buff *skb;
|
||||
u8 cmd[] = { 0x01, 0x48, 0xFC, 0x01, 0x00 };
|
||||
|
||||
if (baudrate > QCA_BAUDRATE_3000000)
|
||||
return -EINVAL;
|
||||
|
||||
cmd[4] = baudrate;
|
||||
|
||||
skb = bt_skb_alloc(sizeof(cmd), GFP_ATOMIC);
|
||||
if (!skb) {
|
||||
BT_ERR("Failed to allocate memory for baudrate packet");
|
||||
return -ENOMEM;
|
||||
}
|
||||
|
||||
/* Assign commands to change baudrate and packet type. */
|
||||
memcpy(skb_put(skb, sizeof(cmd)), cmd, sizeof(cmd));
|
||||
bt_cb(skb)->pkt_type = HCI_COMMAND_PKT;
|
||||
|
||||
skb_queue_tail(&qca->txq, skb);
|
||||
hci_uart_tx_wakeup(hu);
|
||||
|
||||
/* Wait 300 ms for the controller to switch to the new baudrate.
|
||||
* The controller applies the change after it receives this HCI command,
|
||||
* and the host can then talk to it at the new baudrate.
|
||||
*/
|
||||
set_current_state(TASK_UNINTERRUPTIBLE);
|
||||
schedule_timeout(msecs_to_jiffies(BAUDRATE_SETTLE_TIMEOUT_MS));
|
||||
set_current_state(TASK_INTERRUPTIBLE);
|
||||
|
||||
return 0;
|
||||
}
|
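/* Illustrative aside (not part of this patch): the cmd[] array queued by
 * qca_set_baudrate() above appears to be a raw H4 frame - byte 0 is the H4
 * packet indicator for an HCI command (0x01), bytes 1-2 the little-endian
 * vendor opcode 0xfc48, byte 3 the parameter length, and byte 4 the
 * requested baudrate code. A hypothetical sketch of that framing:
 */
#include <stdint.h>
#include <stddef.h>

static size_t build_qca_baudrate_frame(uint8_t *buf, uint8_t baudrate_code)
{
	buf[0] = 0x01;			/* H4 indicator: HCI command packet */
	buf[1] = 0x48;			/* opcode 0xfc48, low byte first */
	buf[2] = 0xfc;			/* opcode high byte (vendor OGF) */
	buf[3] = 0x01;			/* one parameter byte follows */
	buf[4] = baudrate_code;		/* e.g. the code picked for 3000000 */
	return 5;			/* total frame length */
}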
||||
|
||||
static int qca_setup(struct hci_uart *hu)
|
||||
{
|
||||
struct hci_dev *hdev = hu->hdev;
|
||||
struct qca_data *qca = hu->priv;
|
||||
unsigned int speed, qca_baudrate = QCA_BAUDRATE_115200;
|
||||
int ret;
|
||||
|
||||
BT_INFO("%s: ROME setup", hdev->name);
|
||||
|
||||
/* Patch downloading has to be done without IBS mode */
|
||||
clear_bit(STATE_IN_BAND_SLEEP_ENABLED, &qca->flags);
|
||||
|
||||
/* Setup initial baudrate */
|
||||
speed = 0;
|
||||
if (hu->init_speed)
|
||||
speed = hu->init_speed;
|
||||
else if (hu->proto->init_speed)
|
||||
speed = hu->proto->init_speed;
|
||||
|
||||
if (speed)
|
||||
hci_uart_set_baudrate(hu, speed);
|
||||
|
||||
/* Setup user speed if needed */
|
||||
speed = 0;
|
||||
if (hu->oper_speed)
|
||||
speed = hu->oper_speed;
|
||||
else if (hu->proto->oper_speed)
|
||||
speed = hu->proto->oper_speed;
|
||||
|
||||
if (speed) {
|
||||
qca_baudrate = qca_get_baudrate_value(speed);
|
||||
|
||||
BT_INFO("%s: Set UART speed to %d", hdev->name, speed);
|
||||
ret = qca_set_baudrate(hdev, qca_baudrate);
|
||||
if (ret) {
|
||||
BT_ERR("%s: Failed to change the baud rate (%d)",
|
||||
hdev->name, ret);
|
||||
return ret;
|
||||
}
|
||||
hci_uart_set_baudrate(hu, speed);
|
||||
}
|
||||
|
||||
/* Setup patch / NVM configurations */
|
||||
ret = qca_uart_setup_rome(hdev, qca_baudrate);
|
||||
if (!ret) {
|
||||
set_bit(STATE_IN_BAND_SLEEP_ENABLED, &qca->flags);
|
||||
qca_debugfs_init(hdev);
|
||||
}
|
||||
|
||||
/* Setup bdaddr */
|
||||
hu->hdev->set_bdaddr = qca_set_bdaddr_rome;
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static struct hci_uart_proto qca_proto = {
|
||||
.id = HCI_UART_QCA,
|
||||
.name = "QCA",
|
||||
.init_speed = 115200,
|
||||
.oper_speed = 3000000,
|
||||
.open = qca_open,
|
||||
.close = qca_close,
|
||||
.flush = qca_flush,
|
||||
.setup = qca_setup,
|
||||
.recv = qca_recv,
|
||||
.enqueue = qca_enqueue,
|
||||
.dequeue = qca_dequeue,
|
||||
};
|
||||
|
||||
int __init qca_init(void)
|
||||
{
|
||||
return hci_uart_register_proto(&qca_proto);
|
||||
}
|
||||
|
||||
int __exit qca_deinit(void)
|
||||
{
|
||||
return hci_uart_unregister_proto(&qca_proto);
|
||||
}
|
|
@ -35,7 +35,7 @@
|
|||
#define HCIUARTGETFLAGS _IOR('U', 204, int)
|
||||
|
||||
/* UART protocols */
|
||||
#define HCI_UART_MAX_PROTO 8
|
||||
#define HCI_UART_MAX_PROTO 9
|
||||
|
||||
#define HCI_UART_H4 0
|
||||
#define HCI_UART_BCSP 1
|
||||
|
@ -45,6 +45,7 @@
|
|||
#define HCI_UART_ATH3K 5
|
||||
#define HCI_UART_INTEL 6
|
||||
#define HCI_UART_BCM 7
|
||||
#define HCI_UART_QCA 8
|
||||
|
||||
#define HCI_UART_RAW_DEVICE 0
|
||||
#define HCI_UART_RESET_ON_INIT 1
|
||||
|
@ -167,7 +168,17 @@ int h5_init(void);
|
|||
int h5_deinit(void);
|
||||
#endif
|
||||
|
||||
#ifdef CONFIG_BT_HCIUART_INTEL
|
||||
int intel_init(void);
|
||||
int intel_deinit(void);
|
||||
#endif
|
||||
|
||||
#ifdef CONFIG_BT_HCIUART_BCM
|
||||
int bcm_init(void);
|
||||
int bcm_deinit(void);
|
||||
#endif
|
||||
|
||||
#ifdef CONFIG_BT_HCIUART_QCA
|
||||
int qca_init(void);
|
||||
int qca_deinit(void);
|
||||
#endif
|
||||
|
|
|
@ -871,7 +871,7 @@ repoll:
|
|||
if (is_eth) {
|
||||
wc->sl = be16_to_cpu(cqe->sl_vid) >> 13;
|
||||
if (be32_to_cpu(cqe->vlan_my_qpn) &
|
||||
MLX4_CQE_VLAN_PRESENT_MASK) {
|
||||
MLX4_CQE_CVLAN_PRESENT_MASK) {
|
||||
wc->vlan_id = be16_to_cpu(cqe->sl_vid) &
|
||||
MLX4_CQE_VID_MASK;
|
||||
} else {
|
||||
|
|
|
@ -13,6 +13,7 @@
|
|||
#include <linux/mISDNif.h>
|
||||
#include <linux/mISDNdsp.h>
|
||||
#include <linux/export.h>
|
||||
#include <linux/bitrev.h>
|
||||
#include "core.h"
|
||||
#include "dsp.h"
|
||||
|
||||
|
@ -137,27 +138,14 @@ static unsigned char linear2ulaw(short sample)
|
|||
return ulawbyte;
|
||||
}
|
||||
|
||||
static int reverse_bits(int i)
|
||||
{
|
||||
int z, j;
|
||||
z = 0;
|
||||
|
||||
for (j = 0; j < 8; j++) {
|
||||
if ((i & (1 << j)) != 0)
|
||||
z |= 1 << (7 - j);
|
||||
}
|
||||
return z;
|
||||
}
|
||||
|
||||
|
||||
void dsp_audio_generate_law_tables(void)
|
||||
{
|
||||
int i;
|
||||
for (i = 0; i < 256; i++)
|
||||
dsp_audio_alaw_to_s32[i] = alaw2linear(reverse_bits(i));
|
||||
dsp_audio_alaw_to_s32[i] = alaw2linear(bitrev8((u8)i));
|
||||
|
||||
for (i = 0; i < 256; i++)
|
||||
dsp_audio_ulaw_to_s32[i] = ulaw2linear(reverse_bits(i));
|
||||
dsp_audio_ulaw_to_s32[i] = ulaw2linear(bitrev8((u8)i));
|
||||
|
||||
for (i = 0; i < 256; i++) {
|
||||
dsp_audio_alaw_to_ulaw[i] =
|
||||
|
@ -176,13 +164,13 @@ dsp_audio_generate_s2law_table(void)
|
|||
/* generating ulaw-table */
|
||||
for (i = -32768; i < 32768; i++) {
|
||||
dsp_audio_s16_to_law[i & 0xffff] =
|
||||
reverse_bits(linear2ulaw(i));
|
||||
bitrev8(linear2ulaw(i));
|
||||
}
|
||||
} else {
|
||||
/* generating alaw-table */
|
||||
for (i = -32768; i < 32768; i++) {
|
||||
dsp_audio_s16_to_law[i & 0xffff] =
|
||||
reverse_bits(linear2alaw(i));
|
||||
bitrev8(linear2alaw(i));
|
||||
}
|
||||
}
|
||||
}
|
||||
|
|
|
@ -180,8 +180,8 @@ config VXLAN
|
|||
will be called vxlan.
|
||||
|
||||
config GENEVE
|
||||
tristate "Generic Network Virtualization Encapsulation netdev"
|
||||
depends on INET && GENEVE_CORE
|
||||
tristate "Generic Network Virtualization Encapsulation"
|
||||
depends on INET && NET_UDP_TUNNEL
|
||||
select NET_IP_TUNNEL
|
||||
---help---
|
||||
This allows one to create geneve virtual interfaces that provide
|
||||
|
@ -282,7 +282,6 @@ config VETH
|
|||
config VIRTIO_NET
|
||||
tristate "Virtio network driver"
|
||||
depends on VIRTIO
|
||||
select AVERAGE
|
||||
---help---
|
||||
This is the virtual network driver for virtio. It can be used with
|
||||
lguest or QEMU based VMMs (like KVM or Xen). Say Y or M.
|
||||
|
@ -297,6 +296,13 @@ config NLMON
|
|||
diagnostics, etc. This is mostly intended for developers or support
|
||||
to debug netlink issues. If unsure, say N.
|
||||
|
||||
config NET_VRF
|
||||
tristate "Virtual Routing and Forwarding (Lite)"
|
||||
depends on IP_MULTIPLE_TABLES && IPV6_MULTIPLE_TABLES
|
||||
---help---
|
||||
This option enables support for mapping interfaces into VRFs. The
|
||||
support creates VRF devices.
|
||||
|
||||
endif # NET_CORE
|
||||
|
||||
config SUNGEM_PHY
|
||||
|
@ -407,6 +413,13 @@ config VMXNET3
|
|||
To compile this driver as a module, choose M here: the
|
||||
module will be called vmxnet3.
|
||||
|
||||
config FUJITSU_ES
|
||||
tristate "FUJITSU Extended Socket Network Device driver"
|
||||
depends on ACPI
|
||||
help
|
||||
This driver provides support for Extended Socket network device
|
||||
on Extended Partitioning of FUJITSU PRIMEQUEST 2000 E2 series.
|
||||
|
||||
source "drivers/net/hyperv/Kconfig"
|
||||
|
||||
endif # NETDEVICES
|
||||
|
|
|
@ -25,6 +25,7 @@ obj-$(CONFIG_VIRTIO_NET) += virtio_net.o
|
|||
obj-$(CONFIG_VXLAN) += vxlan.o
|
||||
obj-$(CONFIG_GENEVE) += geneve.o
|
||||
obj-$(CONFIG_NLMON) += nlmon.o
|
||||
obj-$(CONFIG_NET_VRF) += vrf.o
|
||||
|
||||
#
|
||||
# Networking Drivers
|
||||
|
@ -67,3 +68,5 @@ obj-$(CONFIG_USB_NET_DRIVERS) += usb/
|
|||
|
||||
obj-$(CONFIG_HYPERV_NET) += hyperv/
|
||||
obj-$(CONFIG_NTB_NETDEV) += ntb_netdev.o
|
||||
|
||||
obj-$(CONFIG_FUJITSU_ES) += fjes/
|
||||
|
|
|
@ -1870,8 +1870,6 @@ static void ad_marker_info_received(struct bond_marker *marker_info,
|
|||
static void ad_marker_response_received(struct bond_marker *marker,
|
||||
struct port *port)
|
||||
{
|
||||
marker = NULL;
|
||||
port = NULL;
|
||||
/* DO NOTHING, SINCE WE DECIDED NOT TO IMPLEMENT THIS FEATURE FOR NOW */
|
||||
}
|
||||
|
||||
|
|
|
@ -979,7 +979,6 @@ static void bond_poll_controller(struct net_device *bond_dev)
|
|||
if (bond_3ad_get_active_agg_info(bond, &ad_info))
|
||||
return;
|
||||
|
||||
rcu_read_lock_bh();
|
||||
bond_for_each_slave_rcu(bond, slave, iter) {
|
||||
ops = slave->dev->netdev_ops;
|
||||
if (!bond_slave_is_up(slave) || !ops->ndo_poll_controller)
|
||||
|
@ -1000,7 +999,6 @@ static void bond_poll_controller(struct net_device *bond_dev)
|
|||
ops->ndo_poll_controller(slave->dev);
|
||||
up(&ni->dev_lock);
|
||||
}
|
||||
rcu_read_unlock_bh();
|
||||
}
|
||||
|
||||
static void bond_netpoll_cleanup(struct net_device *bond_dev)
|
||||
|
@ -3097,7 +3095,7 @@ static bool bond_flow_dissect(struct bonding *bond, struct sk_buff *skb,
|
|||
int noff, proto = -1;
|
||||
|
||||
if (bond->params.xmit_policy > BOND_XMIT_POLICY_LAYER23)
|
||||
return skb_flow_dissect_flow_keys(skb, fk);
|
||||
return skb_flow_dissect_flow_keys(skb, fk, 0);
|
||||
|
||||
fk->ports.ports = 0;
|
||||
noff = skb_network_offset(skb);
|
||||
|
@ -3780,7 +3778,6 @@ int bond_update_slave_arr(struct bonding *bond, struct slave *skipslave)
|
|||
struct slave *slave;
|
||||
struct list_head *iter;
|
||||
struct bond_up_slave *new_arr, *old_arr;
|
||||
int slaves_in_agg;
|
||||
int agg_id = 0;
|
||||
int ret = 0;
|
||||
|
||||
|
@ -3811,7 +3808,6 @@ int bond_update_slave_arr(struct bonding *bond, struct slave *skipslave)
|
|||
}
|
||||
goto out;
|
||||
}
|
||||
slaves_in_agg = ad_info.ports;
|
||||
agg_id = ad_info.aggregator_id;
|
||||
}
|
||||
bond_for_each_slave(bond, slave, iter) {
|
||||
|
@ -4122,9 +4118,8 @@ void bond_setup(struct net_device *bond_dev)
|
|||
SET_NETDEV_DEVTYPE(bond_dev, &bond_type);
|
||||
|
||||
/* Initialize the device options */
|
||||
bond_dev->tx_queue_len = 0;
|
||||
bond_dev->flags |= IFF_MASTER|IFF_MULTICAST;
|
||||
bond_dev->priv_flags |= IFF_BONDING | IFF_UNICAST_FLT;
|
||||
bond_dev->priv_flags |= IFF_BONDING | IFF_UNICAST_FLT | IFF_NO_QUEUE;
|
||||
bond_dev->priv_flags &= ~(IFF_XMIT_DST_RELEASE | IFF_TX_SKB_SHARING);
|
||||
|
||||
/* don't acquire bond device's netif_tx_lock when transmitting */
|
||||
|
|
|
@ -111,6 +111,7 @@ static const struct nla_policy bond_policy[IFLA_BOND_MAX + 1] = {
|
|||
[IFLA_BOND_AD_USER_PORT_KEY] = { .type = NLA_U16 },
|
||||
[IFLA_BOND_AD_ACTOR_SYSTEM] = { .type = NLA_BINARY,
|
||||
.len = ETH_ALEN },
|
||||
[IFLA_BOND_TLB_DYNAMIC_LB] = { .type = NLA_U8 },
|
||||
};
|
||||
|
||||
static const struct nla_policy bond_slave_policy[IFLA_BOND_SLAVE_MAX + 1] = {
|
||||
|
@ -405,7 +406,6 @@ static int bond_changelink(struct net_device *bond_dev,
|
|||
if (err)
|
||||
return err;
|
||||
}
|
||||
|
||||
if (data[IFLA_BOND_AD_USER_PORT_KEY]) {
|
||||
int port_key =
|
||||
nla_get_u16(data[IFLA_BOND_AD_USER_PORT_KEY]);
|
||||
|
@ -415,7 +415,6 @@ static int bond_changelink(struct net_device *bond_dev,
|
|||
if (err)
|
||||
return err;
|
||||
}
|
||||
|
||||
if (data[IFLA_BOND_AD_ACTOR_SYSTEM]) {
|
||||
if (nla_len(data[IFLA_BOND_AD_ACTOR_SYSTEM]) != ETH_ALEN)
|
||||
return -EINVAL;
|
||||
|
@ -426,6 +425,15 @@ static int bond_changelink(struct net_device *bond_dev,
|
|||
if (err)
|
||||
return err;
|
||||
}
|
||||
if (data[IFLA_BOND_TLB_DYNAMIC_LB]) {
|
||||
int dynamic_lb = nla_get_u8(data[IFLA_BOND_TLB_DYNAMIC_LB]);
|
||||
|
||||
bond_opt_initval(&newval, dynamic_lb);
|
||||
err = __bond_opt_set(bond, BOND_OPT_TLB_DYNAMIC_LB, &newval);
|
||||
if (err)
|
||||
return err;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@ -476,6 +484,7 @@ static size_t bond_get_size(const struct net_device *bond_dev)
|
|||
nla_total_size(sizeof(u16)) + /* IFLA_BOND_AD_ACTOR_SYS_PRIO */
|
||||
nla_total_size(sizeof(u16)) + /* IFLA_BOND_AD_USER_PORT_KEY */
|
||||
nla_total_size(ETH_ALEN) + /* IFLA_BOND_AD_ACTOR_SYSTEM */
|
||||
nla_total_size(sizeof(u8)) + /* IFLA_BOND_TLB_DYNAMIC_LB */
|
||||
0;
|
||||
}
|
||||
|
||||
|
@ -598,6 +607,10 @@ static int bond_fill_info(struct sk_buff *skb,
|
|||
bond->params.ad_select))
|
||||
goto nla_put_failure;
|
||||
|
||||
if (nla_put_u8(skb, IFLA_BOND_TLB_DYNAMIC_LB,
|
||||
bond->params.tlb_dynamic_lb))
|
||||
goto nla_put_failure;
|
||||
|
||||
if (BOND_MODE(bond) == BOND_MODE_8023AD) {
|
||||
struct ad_info info;
|
||||
|
||||
|
|
|
@ -420,6 +420,13 @@ static const struct bond_option bond_opts[BOND_OPT_LAST] = {
|
|||
.flags = BOND_OPTFLAG_IFDOWN,
|
||||
.values = bond_ad_user_port_key_tbl,
|
||||
.set = bond_option_ad_user_port_key_set,
|
||||
},
|
||||
[BOND_OPT_NUM_PEER_NOTIF_ALIAS] = {
|
||||
.id = BOND_OPT_NUM_PEER_NOTIF_ALIAS,
|
||||
.name = "num_grat_arp",
|
||||
.desc = "Number of peer notifications to send on failover event",
|
||||
.values = bond_num_peer_notif_tbl,
|
||||
.set = bond_option_num_peer_notif_set
|
||||
}
|
||||
};
|
||||
|
||||
|
|
|
@ -380,7 +380,7 @@ static ssize_t bonding_show_ad_select(struct device *d,
|
|||
static DEVICE_ATTR(ad_select, S_IRUGO | S_IWUSR,
|
||||
bonding_show_ad_select, bonding_sysfs_store_option);
|
||||
|
||||
/* Show and set the number of peer notifications to send after a failover event. */
|
||||
/* Show the number of peer notifications to send after a failover event. */
|
||||
static ssize_t bonding_show_num_peer_notif(struct device *d,
|
||||
struct device_attribute *attr,
|
||||
char *buf)
|
||||
|
@ -388,24 +388,10 @@ static ssize_t bonding_show_num_peer_notif(struct device *d,
|
|||
struct bonding *bond = to_bond(d);
|
||||
return sprintf(buf, "%d\n", bond->params.num_peer_notif);
|
||||
}
|
||||
|
||||
static ssize_t bonding_store_num_peer_notif(struct device *d,
|
||||
struct device_attribute *attr,
|
||||
const char *buf, size_t count)
|
||||
{
|
||||
struct bonding *bond = to_bond(d);
|
||||
int ret;
|
||||
|
||||
ret = bond_opt_tryset_rtnl(bond, BOND_OPT_NUM_PEER_NOTIF, (char *)buf);
|
||||
if (!ret)
|
||||
ret = count;
|
||||
|
||||
return ret;
|
||||
}
|
||||
static DEVICE_ATTR(num_grat_arp, S_IRUGO | S_IWUSR,
|
||||
bonding_show_num_peer_notif, bonding_store_num_peer_notif);
|
||||
bonding_show_num_peer_notif, bonding_sysfs_store_option);
|
||||
static DEVICE_ATTR(num_unsol_na, S_IRUGO | S_IWUSR,
|
||||
bonding_show_num_peer_notif, bonding_store_num_peer_notif);
|
||||
bonding_show_num_peer_notif, bonding_sysfs_store_option);
|
||||
|
||||
/* Show the MII monitor interval. */
|
||||
static ssize_t bonding_show_miimon(struct device *d,
|
||||
|
|
|
@ -1120,7 +1120,7 @@ static void cfhsi_setup(struct net_device *dev)
|
|||
dev->type = ARPHRD_CAIF;
|
||||
dev->flags = IFF_POINTOPOINT | IFF_NOARP;
|
||||
dev->mtu = CFHSI_MAX_CAIF_FRAME_SZ;
|
||||
dev->tx_queue_len = 0;
|
||||
dev->priv_flags |= IFF_NO_QUEUE;
|
||||
dev->destructor = free_netdev;
|
||||
dev->netdev_ops = &cfhsi_netdevops;
|
||||
for (i = 0; i < CFHSI_PRIO_LAST; ++i)
|
||||
|
|
|
@ -427,7 +427,7 @@ static void caifdev_setup(struct net_device *dev)
|
|||
dev->type = ARPHRD_CAIF;
|
||||
dev->flags = IFF_POINTOPOINT | IFF_NOARP;
|
||||
dev->mtu = CAIF_MAX_MTU;
|
||||
dev->tx_queue_len = 0;
|
||||
dev->priv_flags |= IFF_NO_QUEUE;
|
||||
dev->destructor = free_netdev;
|
||||
skb_queue_head_init(&serdev->head);
|
||||
serdev->common.link_select = CAIF_LINK_LOW_LATENCY;
|
||||
|
|
|
@ -710,7 +710,7 @@ static void cfspi_setup(struct net_device *dev)
|
|||
dev->netdev_ops = &cfspi_ops;
|
||||
dev->type = ARPHRD_CAIF;
|
||||
dev->flags = IFF_NOARP | IFF_POINTOPOINT;
|
||||
dev->tx_queue_len = 0;
|
||||
dev->priv_flags |= IFF_NO_QUEUE;
|
||||
dev->mtu = SPI_MAX_PAYLOAD_SIZE;
|
||||
dev->destructor = free_netdev;
|
||||
skb_queue_head_init(&cfspi->qhead);
|
||||
|
|
|
@ -805,7 +805,7 @@ static void flexcan_set_bittiming(struct net_device *dev)
|
|||
if (priv->can.ctrlmode & CAN_CTRLMODE_3_SAMPLES)
|
||||
reg |= FLEXCAN_CTRL_SMP;
|
||||
|
||||
netdev_info(dev, "writing ctrl=0x%08x\n", reg);
|
||||
netdev_dbg(dev, "writing ctrl=0x%08x\n", reg);
|
||||
flexcan_write(reg, &regs->ctrl);
|
||||
|
||||
/* print chip status */
|
||||
|
|
|
@ -162,7 +162,7 @@ struct gs_can {
|
|||
struct can_bittiming_const bt_const;
|
||||
unsigned int channel; /* channel number */
|
||||
|
||||
/* This lock prevents a race condition between xmit and recieve. */
|
||||
/* This lock prevents a race condition between xmit and receive. */
|
||||
spinlock_t tx_ctx_lock;
|
||||
struct gs_tx_context tx_context[GS_MAX_TX_URBS];
|
||||
|
||||
|
@ -274,7 +274,7 @@ static void gs_update_state(struct gs_can *dev, struct can_frame *cf)
|
|||
}
|
||||
}
|
||||
|
||||
static void gs_usb_recieve_bulk_callback(struct urb *urb)
|
||||
static void gs_usb_receive_bulk_callback(struct urb *urb)
|
||||
{
|
||||
struct gs_usb *usbcan = urb->context;
|
||||
struct gs_can *dev;
|
||||
|
@ -376,7 +376,7 @@ static void gs_usb_recieve_bulk_callback(struct urb *urb)
|
|||
usb_rcvbulkpipe(usbcan->udev, GSUSB_ENDPOINT_IN),
|
||||
hf,
|
||||
sizeof(struct gs_host_frame),
|
||||
gs_usb_recieve_bulk_callback,
|
||||
gs_usb_receive_bulk_callback,
|
||||
usbcan
|
||||
);
|
||||
|
||||
|
@ -605,7 +605,7 @@ static int gs_can_open(struct net_device *netdev)
|
|||
GSUSB_ENDPOINT_IN),
|
||||
buf,
|
||||
sizeof(struct gs_host_frame),
|
||||
gs_usb_recieve_bulk_callback,
|
||||
gs_usb_receive_bulk_callback,
|
||||
parent);
|
||||
urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
|
||||
|
||||
|
|
|
@ -46,13 +46,13 @@ config NET_DSA_MV88E6171
|
|||
ethernet switches chips.
|
||||
|
||||
config NET_DSA_MV88E6352
|
||||
tristate "Marvell 88E6172/88E6176/88E6352 ethernet switch chip support"
|
||||
tristate "Marvell 88E6172/6176/6320/6321/6352 ethernet switch chip support"
|
||||
depends on NET_DSA
|
||||
select NET_DSA_MV88E6XXX
|
||||
select NET_DSA_TAG_EDSA
|
||||
---help---
|
||||
This enables support for the Marvell 88E6172, 88E6176 and 88E6352
|
||||
ethernet switch chips.
|
||||
This enables support for the Marvell 88E6172, 88E6176, 88E6320,
|
||||
88E6321 and 88E6352 ethernet switch chips.
|
||||
|
||||
config NET_DSA_BCM_SF2
|
||||
tristate "Broadcom Starfighter 2 Ethernet switch support"
|
||||
|
|
|
@ -901,15 +901,11 @@ static void bcm_sf2_sw_fixed_link_update(struct dsa_switch *ds, int port,
|
|||
struct fixed_phy_status *status)
|
||||
{
|
||||
struct bcm_sf2_priv *priv = ds_to_priv(ds);
|
||||
u32 duplex, pause, speed;
|
||||
u32 duplex, pause;
|
||||
u32 reg;
|
||||
|
||||
duplex = core_readl(priv, CORE_DUPSTS);
|
||||
pause = core_readl(priv, CORE_PAUSESTS);
|
||||
speed = core_readl(priv, CORE_SPDSTS);
|
||||
|
||||
speed >>= (port * SPDSTS_SHIFT);
|
||||
speed &= SPDSTS_MASK;
|
||||
|
||||
status->link = 0;
|
||||
|
||||
|
@ -944,18 +940,6 @@ static void bcm_sf2_sw_fixed_link_update(struct dsa_switch *ds, int port,
|
|||
reg &= ~LINK_STS;
|
||||
core_writel(priv, reg, CORE_STS_OVERRIDE_GMIIP_PORT(port));
|
||||
|
||||
switch (speed) {
|
||||
case SPDSTS_10:
|
||||
status->speed = SPEED_10;
|
||||
break;
|
||||
case SPDSTS_100:
|
||||
status->speed = SPEED_100;
|
||||
break;
|
||||
case SPDSTS_1000:
|
||||
status->speed = SPEED_1000;
|
||||
break;
|
||||
}
|
||||
|
||||
if ((pause & (1 << port)) &&
|
||||
(pause & (1 << (port + PAUSESTS_TX_PAUSE_SHIFT)))) {
|
||||
status->asym_pause = 1;
|
||||
|
|
|
@ -129,6 +129,7 @@ struct dsa_switch_driver mv88e6123_61_65_switch_driver = {
|
|||
.get_strings = mv88e6xxx_get_strings,
|
||||
.get_ethtool_stats = mv88e6xxx_get_ethtool_stats,
|
||||
.get_sset_count = mv88e6xxx_get_sset_count,
|
||||
.adjust_link = mv88e6xxx_adjust_link,
|
||||
#ifdef CONFIG_NET_DSA_HWMON
|
||||
.get_temp = mv88e6xxx_get_temp,
|
||||
#endif
|
||||
|
|
|
@ -182,6 +182,7 @@ struct dsa_switch_driver mv88e6131_switch_driver = {
|
|||
.get_strings = mv88e6xxx_get_strings,
|
||||
.get_ethtool_stats = mv88e6xxx_get_ethtool_stats,
|
||||
.get_sset_count = mv88e6xxx_get_sset_count,
|
||||
.adjust_link = mv88e6xxx_adjust_link,
|
||||
};
|
||||
|
||||
MODULE_ALIAS("platform:mv88e6085");
|
||||
|
|
|
@ -108,6 +108,7 @@ struct dsa_switch_driver mv88e6171_switch_driver = {
|
|||
.get_strings = mv88e6xxx_get_strings,
|
||||
.get_ethtool_stats = mv88e6xxx_get_ethtool_stats,
|
||||
.get_sset_count = mv88e6xxx_get_sset_count,
|
||||
.adjust_link = mv88e6xxx_adjust_link,
|
||||
#ifdef CONFIG_NET_DSA_HWMON
|
||||
.get_temp = mv88e6xxx_get_temp,
|
||||
#endif
|
||||
|
@ -116,9 +117,9 @@ struct dsa_switch_driver mv88e6171_switch_driver = {
|
|||
.port_join_bridge = mv88e6xxx_join_bridge,
|
||||
.port_leave_bridge = mv88e6xxx_leave_bridge,
|
||||
.port_stp_update = mv88e6xxx_port_stp_update,
|
||||
.fdb_add = mv88e6xxx_port_fdb_add,
|
||||
.fdb_del = mv88e6xxx_port_fdb_del,
|
||||
.fdb_getnext = mv88e6xxx_port_fdb_getnext,
|
||||
.port_fdb_add = mv88e6xxx_port_fdb_add,
|
||||
.port_fdb_del = mv88e6xxx_port_fdb_del,
|
||||
.port_fdb_getnext = mv88e6xxx_port_fdb_getnext,
|
||||
};
|
||||
|
||||
MODULE_ALIAS("platform:mv88e6171");
|
||||
|
|
|
@ -36,6 +36,18 @@ static char *mv88e6352_probe(struct device *host_dev, int sw_addr)
|
|||
return "Marvell 88E6172";
|
||||
if ((ret & 0xfff0) == PORT_SWITCH_ID_6176)
|
||||
return "Marvell 88E6176";
|
||||
if (ret == PORT_SWITCH_ID_6320_A1)
|
||||
return "Marvell 88E6320 (A1)";
|
||||
if (ret == PORT_SWITCH_ID_6320_A2)
|
||||
return "Marvell 88e6320 (A2)";
|
||||
if ((ret & 0xfff0) == PORT_SWITCH_ID_6320)
|
||||
return "Marvell 88E6320";
|
||||
if (ret == PORT_SWITCH_ID_6321_A1)
|
||||
return "Marvell 88E6321 (A1)";
|
||||
if (ret == PORT_SWITCH_ID_6321_A2)
|
||||
return "Marvell 88e6321 (A2)";
|
||||
if ((ret & 0xfff0) == PORT_SWITCH_ID_6321)
|
||||
return "Marvell 88E6321";
|
||||
if (ret == PORT_SWITCH_ID_6352_A0)
|
||||
return "Marvell 88E6352 (A0)";
|
||||
if (ret == PORT_SWITCH_ID_6352_A1)
|
||||
|
@ -80,66 +92,6 @@ static int mv88e6352_setup_global(struct dsa_switch *ds)
|
|||
return 0;
|
||||
}
|
||||
|
||||
#ifdef CONFIG_NET_DSA_HWMON
|
||||
|
||||
static int mv88e6352_get_temp(struct dsa_switch *ds, int *temp)
|
||||
{
|
||||
int ret;
|
||||
|
||||
*temp = 0;
|
||||
|
||||
ret = mv88e6xxx_phy_page_read(ds, 0, 6, 27);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
|
||||
*temp = (ret & 0xff) - 25;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int mv88e6352_get_temp_limit(struct dsa_switch *ds, int *temp)
|
||||
{
|
||||
int ret;
|
||||
|
||||
*temp = 0;
|
||||
|
||||
ret = mv88e6xxx_phy_page_read(ds, 0, 6, 26);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
|
||||
*temp = (((ret >> 8) & 0x1f) * 5) - 25;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int mv88e6352_set_temp_limit(struct dsa_switch *ds, int temp)
|
||||
{
|
||||
int ret;
|
||||
|
||||
ret = mv88e6xxx_phy_page_read(ds, 0, 6, 26);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
temp = clamp_val(DIV_ROUND_CLOSEST(temp, 5) + 5, 0, 0x1f);
return mv88e6xxx_phy_page_write(ds, 0, 6, 26,
(ret & 0xe0ff) | (temp << 8));
}

static int mv88e6352_get_temp_alarm(struct dsa_switch *ds, bool *alarm)
{
int ret;

*alarm = false;

ret = mv88e6xxx_phy_page_read(ds, 0, 6, 26);
if (ret < 0)
return ret;

*alarm = !!(ret & 0x40);

return 0;
}
#endif /* CONFIG_NET_DSA_HWMON */

static int mv88e6352_setup(struct dsa_switch *ds)
{
struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
@@ -171,8 +123,9 @@ static int mv88e6352_read_eeprom_word(struct dsa_switch *ds, int addr)

mutex_lock(&ps->eeprom_mutex);

ret = mv88e6xxx_reg_write(ds, REG_GLOBAL2, 0x14,
0xc000 | (addr & 0xff));
ret = mv88e6xxx_reg_write(ds, REG_GLOBAL2, GLOBAL2_EEPROM_OP,
GLOBAL2_EEPROM_OP_READ |
(addr & GLOBAL2_EEPROM_OP_ADDR_MASK));
if (ret < 0)
goto error;

@@ -180,7 +133,7 @@ static int mv88e6352_read_eeprom_word(struct dsa_switch *ds, int addr)
if (ret < 0)
goto error;

ret = mv88e6xxx_reg_read(ds, REG_GLOBAL2, 0x15);
ret = mv88e6xxx_reg_read(ds, REG_GLOBAL2, GLOBAL2_EEPROM_DATA);
error:
mutex_unlock(&ps->eeprom_mutex);
return ret;
@@ -253,11 +206,11 @@ static int mv88e6352_eeprom_is_readonly(struct dsa_switch *ds)
{
int ret;

ret = mv88e6xxx_reg_read(ds, REG_GLOBAL2, 0x14);
ret = mv88e6xxx_reg_read(ds, REG_GLOBAL2, GLOBAL2_EEPROM_OP);
if (ret < 0)
return ret;

if (!(ret & 0x0400))
if (!(ret & GLOBAL2_EEPROM_OP_WRITE_EN))
return -EROFS;

return 0;
@@ -271,12 +224,13 @@ static int mv88e6352_write_eeprom_word(struct dsa_switch *ds, int addr,

mutex_lock(&ps->eeprom_mutex);

ret = mv88e6xxx_reg_write(ds, REG_GLOBAL2, 0x15, data);
ret = mv88e6xxx_reg_write(ds, REG_GLOBAL2, GLOBAL2_EEPROM_DATA, data);
if (ret < 0)
goto error;

ret = mv88e6xxx_reg_write(ds, REG_GLOBAL2, 0x14,
0xb000 | (addr & 0xff));
ret = mv88e6xxx_reg_write(ds, REG_GLOBAL2, GLOBAL2_EEPROM_OP,
GLOBAL2_EEPROM_OP_WRITE |
(addr & GLOBAL2_EEPROM_OP_ADDR_MASK));
if (ret < 0)
goto error;

@@ -374,13 +328,14 @@ struct dsa_switch_driver mv88e6352_switch_driver = {
.get_strings = mv88e6xxx_get_strings,
.get_ethtool_stats = mv88e6xxx_get_ethtool_stats,
.get_sset_count = mv88e6xxx_get_sset_count,
.adjust_link = mv88e6xxx_adjust_link,
.set_eee = mv88e6xxx_set_eee,
.get_eee = mv88e6xxx_get_eee,
#ifdef CONFIG_NET_DSA_HWMON
.get_temp = mv88e6352_get_temp,
.get_temp_limit = mv88e6352_get_temp_limit,
.set_temp_limit = mv88e6352_set_temp_limit,
.get_temp_alarm = mv88e6352_get_temp_alarm,
.get_temp = mv88e6xxx_get_temp,
.get_temp_limit = mv88e6xxx_get_temp_limit,
.set_temp_limit = mv88e6xxx_set_temp_limit,
.get_temp_alarm = mv88e6xxx_get_temp_alarm,
#endif
.get_eeprom = mv88e6352_get_eeprom,
.set_eeprom = mv88e6352_set_eeprom,
@@ -389,10 +344,18 @@ struct dsa_switch_driver mv88e6352_switch_driver = {
.port_join_bridge = mv88e6xxx_join_bridge,
.port_leave_bridge = mv88e6xxx_leave_bridge,
.port_stp_update = mv88e6xxx_port_stp_update,
.fdb_add = mv88e6xxx_port_fdb_add,
.fdb_del = mv88e6xxx_port_fdb_del,
.fdb_getnext = mv88e6xxx_port_fdb_getnext,
.port_pvid_get = mv88e6xxx_port_pvid_get,
.port_pvid_set = mv88e6xxx_port_pvid_set,
.port_vlan_add = mv88e6xxx_port_vlan_add,
.port_vlan_del = mv88e6xxx_port_vlan_del,
.vlan_getnext = mv88e6xxx_vlan_getnext,
.port_fdb_add = mv88e6xxx_port_fdb_add,
.port_fdb_del = mv88e6xxx_port_fdb_del,
.port_fdb_getnext = mv88e6xxx_port_fdb_getnext,
};

MODULE_ALIAS("platform:mv88e6352");
MODULE_ALIAS("platform:mv88e6172");
MODULE_ALIAS("platform:mv88e6176");
MODULE_ALIAS("platform:mv88e6320");
MODULE_ALIAS("platform:mv88e6321");
MODULE_ALIAS("platform:mv88e6352");
File diff suppressed because it is too large
@@ -11,6 +11,8 @@
#ifndef __MV88E6XXX_H
#define __MV88E6XXX_H

#include <linux/if_vlan.h>

#ifndef UINT64_MAX
#define UINT64_MAX (u64)(~((u64)0))
#endif
@@ -44,6 +46,8 @@
#define PORT_STATUS_TX_PAUSED BIT(5)
#define PORT_STATUS_FLOW_CTRL BIT(4)
#define PORT_PCS_CTRL 0x01
#define PORT_PCS_CTRL_RGMII_DELAY_RXCLK BIT(15)
#define PORT_PCS_CTRL_RGMII_DELAY_TXCLK BIT(14)
#define PORT_PCS_CTRL_FC BIT(7)
#define PORT_PCS_CTRL_FORCE_FC BIT(6)
#define PORT_PCS_CTRL_LINK_UP BIT(5)
@@ -89,7 +93,12 @@
#define PORT_SWITCH_ID_6182 0x1a60
#define PORT_SWITCH_ID_6185 0x1a70
#define PORT_SWITCH_ID_6240 0x2400
#define PORT_SWITCH_ID_6320 0x1250
#define PORT_SWITCH_ID_6320 0x1150
#define PORT_SWITCH_ID_6320_A1 0x1151
#define PORT_SWITCH_ID_6320_A2 0x1152
#define PORT_SWITCH_ID_6321 0x3100
#define PORT_SWITCH_ID_6321_A1 0x3101
#define PORT_SWITCH_ID_6321_A2 0x3102
#define PORT_SWITCH_ID_6350 0x3710
#define PORT_SWITCH_ID_6351 0x3750
#define PORT_SWITCH_ID_6352 0x3520
@@ -124,6 +133,7 @@
#define PORT_CONTROL_1 0x05
#define PORT_BASE_VLAN 0x06
#define PORT_DEFAULT_VLAN 0x07
#define PORT_DEFAULT_VLAN_MASK 0xfff
#define PORT_CONTROL_2 0x08
#define PORT_CONTROL_2_IGNORE_FCS BIT(15)
#define PORT_CONTROL_2_VTU_PRI_OVERRIDE BIT(14)
@@ -132,6 +142,11 @@
#define PORT_CONTROL_2_JUMBO_1522 (0x00 << 12)
#define PORT_CONTROL_2_JUMBO_2048 (0x01 << 12)
#define PORT_CONTROL_2_JUMBO_10240 (0x02 << 12)
#define PORT_CONTROL_2_8021Q_MASK (0x03 << 10)
#define PORT_CONTROL_2_8021Q_DISABLED (0x00 << 10)
#define PORT_CONTROL_2_8021Q_FALLBACK (0x01 << 10)
#define PORT_CONTROL_2_8021Q_CHECK (0x02 << 10)
#define PORT_CONTROL_2_8021Q_SECURE (0x03 << 10)
#define PORT_CONTROL_2_DISCARD_TAGGED BIT(9)
#define PORT_CONTROL_2_DISCARD_UNTAGGED BIT(8)
#define PORT_CONTROL_2_MAP_DA BIT(7)
@@ -164,6 +179,11 @@
#define GLOBAL_MAC_01 0x01
#define GLOBAL_MAC_23 0x02
#define GLOBAL_MAC_45 0x03
#define GLOBAL_ATU_FID 0x01 /* 6097 6165 6351 6352 */
#define GLOBAL_VTU_FID 0x02 /* 6097 6165 6351 6352 */
#define GLOBAL_VTU_FID_MASK 0xfff
#define GLOBAL_VTU_SID 0x03 /* 6097 6165 6351 6352 */
#define GLOBAL_VTU_SID_MASK 0x3f
#define GLOBAL_CONTROL 0x04
#define GLOBAL_CONTROL_SW_RESET BIT(15)
#define GLOBAL_CONTROL_PPU_ENABLE BIT(14)
@@ -180,10 +200,27 @@
#define GLOBAL_CONTROL_TCAM_EN BIT(1)
#define GLOBAL_CONTROL_EEPROM_DONE_EN BIT(0)
#define GLOBAL_VTU_OP 0x05
#define GLOBAL_VTU_OP_BUSY BIT(15)
#define GLOBAL_VTU_OP_FLUSH_ALL ((0x01 << 12) | GLOBAL_VTU_OP_BUSY)
#define GLOBAL_VTU_OP_VTU_LOAD_PURGE ((0x03 << 12) | GLOBAL_VTU_OP_BUSY)
#define GLOBAL_VTU_OP_VTU_GET_NEXT ((0x04 << 12) | GLOBAL_VTU_OP_BUSY)
#define GLOBAL_VTU_OP_STU_LOAD_PURGE ((0x05 << 12) | GLOBAL_VTU_OP_BUSY)
#define GLOBAL_VTU_OP_STU_GET_NEXT ((0x06 << 12) | GLOBAL_VTU_OP_BUSY)
#define GLOBAL_VTU_VID 0x06
#define GLOBAL_VTU_VID_MASK 0xfff
#define GLOBAL_VTU_VID_VALID BIT(12)
#define GLOBAL_VTU_DATA_0_3 0x07
#define GLOBAL_VTU_DATA_4_7 0x08
#define GLOBAL_VTU_DATA_8_11 0x09
#define GLOBAL_VTU_STU_DATA_MASK 0x03
#define GLOBAL_VTU_DATA_MEMBER_TAG_UNMODIFIED 0x00
#define GLOBAL_VTU_DATA_MEMBER_TAG_UNTAGGED 0x01
#define GLOBAL_VTU_DATA_MEMBER_TAG_TAGGED 0x02
#define GLOBAL_VTU_DATA_MEMBER_TAG_NON_MEMBER 0x03
#define GLOBAL_STU_DATA_PORT_STATE_DISABLED 0x00
#define GLOBAL_STU_DATA_PORT_STATE_BLOCKING 0x01
#define GLOBAL_STU_DATA_PORT_STATE_LEARNING 0x02
#define GLOBAL_STU_DATA_PORT_STATE_FORWARDING 0x03
#define GLOBAL_ATU_CONTROL 0x0a
#define GLOBAL_ATU_CONTROL_LEARN2ALL BIT(3)
#define GLOBAL_ATU_OP 0x0b
@@ -198,6 +235,8 @@
#define GLOBAL_ATU_OP_GET_CLR_VIOLATION ((7 << 12) | GLOBAL_ATU_OP_BUSY)
#define GLOBAL_ATU_DATA 0x0c
#define GLOBAL_ATU_DATA_TRUNK BIT(15)
#define GLOBAL_ATU_DATA_TRUNK_ID_MASK 0x00f0
#define GLOBAL_ATU_DATA_TRUNK_ID_SHIFT 4
#define GLOBAL_ATU_DATA_PORT_VECTOR_MASK 0x3ff0
#define GLOBAL_ATU_DATA_PORT_VECTOR_SHIFT 4
#define GLOBAL_ATU_DATA_STATE_MASK 0x0f
@@ -280,8 +319,12 @@
#define GLOBAL2_PRIO_OVERRIDE_FORCE_ARP BIT(3)
#define GLOBAL2_PRIO_OVERRIDE_ARP_SHIFT 0
#define GLOBAL2_EEPROM_OP 0x14
#define GLOBAL2_EEPROM_OP_BUSY BIT(15)
#define GLOBAL2_EEPROM_OP_LOAD BIT(11)
#define GLOBAL2_EEPROM_OP_BUSY BIT(15)
#define GLOBAL2_EEPROM_OP_WRITE ((3 << 12) | GLOBAL2_EEPROM_OP_BUSY)
#define GLOBAL2_EEPROM_OP_READ ((4 << 12) | GLOBAL2_EEPROM_OP_BUSY)
#define GLOBAL2_EEPROM_OP_LOAD BIT(11)
#define GLOBAL2_EEPROM_OP_WRITE_EN BIT(10)
#define GLOBAL2_EEPROM_OP_ADDR_MASK 0xff
#define GLOBAL2_EEPROM_DATA 0x15
#define GLOBAL2_PTP_AVB_OP 0x16
#define GLOBAL2_PTP_AVB_DATA 0x17
@@ -304,6 +347,25 @@
#define GLOBAL2_QOS_WEIGHT 0x1c
#define GLOBAL2_MISC 0x1d

struct mv88e6xxx_atu_entry {
u16 fid;
u8 state;
bool trunk;
u16 portv_trunkid;
u8 mac[ETH_ALEN];
};

struct mv88e6xxx_vtu_stu_entry {
/* VTU only */
u16 vid;
u16 fid;

/* VTU and STU */
u8 sid;
bool valid;
u8 data[DSA_MAX_PORTS];
};

struct mv88e6xxx_priv_state {
/* When using multi-chip addressing, this mutex protects
* access to the indirect access registers. (In single-chip
@@ -342,9 +404,9 @@ struct mv88e6xxx_priv_state {

/* hw bridging */

u32 fid_mask;
u8 fid[DSA_MAX_PORTS];
u16 bridge_mask[DSA_MAX_PORTS];
DECLARE_BITMAP(fid_bitmap, VLAN_N_VID); /* FIDs 1 to 4095 available */
u16 fid[DSA_MAX_PORTS]; /* per (non-bridged) port FID */
u16 bridge_mask[DSA_MAX_PORTS]; /* br groups (indexed by FID) */

unsigned long port_state_update_mask;
u8 port_state[DSA_MAX_PORTS];
@@ -386,10 +448,15 @@ void mv88e6xxx_get_ethtool_stats(struct dsa_switch *ds, int port,
uint64_t *data);
int mv88e6xxx_get_sset_count(struct dsa_switch *ds);
int mv88e6xxx_get_sset_count_basic(struct dsa_switch *ds);
void mv88e6xxx_adjust_link(struct dsa_switch *ds, int port,
struct phy_device *phydev);
int mv88e6xxx_get_regs_len(struct dsa_switch *ds, int port);
void mv88e6xxx_get_regs(struct dsa_switch *ds, int port,
struct ethtool_regs *regs, void *_p);
int mv88e6xxx_get_temp(struct dsa_switch *ds, int *temp);
int mv88e6xxx_get_temp(struct dsa_switch *ds, int *temp);
int mv88e6xxx_get_temp_limit(struct dsa_switch *ds, int *temp);
int mv88e6xxx_set_temp_limit(struct dsa_switch *ds, int temp);
int mv88e6xxx_get_temp_alarm(struct dsa_switch *ds, bool *alarm);
int mv88e6xxx_eeprom_load_wait(struct dsa_switch *ds);
int mv88e6xxx_eeprom_busy_wait(struct dsa_switch *ds);
int mv88e6xxx_phy_read_indirect(struct dsa_switch *ds, int addr, int regnum);
@@ -401,15 +468,23 @@ int mv88e6xxx_set_eee(struct dsa_switch *ds, int port,
int mv88e6xxx_join_bridge(struct dsa_switch *ds, int port, u32 br_port_mask);
int mv88e6xxx_leave_bridge(struct dsa_switch *ds, int port, u32 br_port_mask);
int mv88e6xxx_port_stp_update(struct dsa_switch *ds, int port, u8 state);
int mv88e6xxx_port_pvid_get(struct dsa_switch *ds, int port, u16 *vid);
int mv88e6xxx_port_pvid_set(struct dsa_switch *ds, int port, u16 vid);
int mv88e6xxx_port_vlan_add(struct dsa_switch *ds, int port, u16 vid,
bool untagged);
int mv88e6xxx_port_vlan_del(struct dsa_switch *ds, int port, u16 vid);
int mv88e6xxx_vlan_getnext(struct dsa_switch *ds, u16 *vid,
unsigned long *ports, unsigned long *untagged);
int mv88e6xxx_port_fdb_add(struct dsa_switch *ds, int port,
const unsigned char *addr, u16 vid);
int mv88e6xxx_port_fdb_del(struct dsa_switch *ds, int port,
const unsigned char *addr, u16 vid);
int mv88e6xxx_port_fdb_getnext(struct dsa_switch *ds, int port,
unsigned char *addr, bool *is_static);
unsigned char *addr, u16 *vid, bool *is_static);
int mv88e6xxx_phy_page_read(struct dsa_switch *ds, int port, int page, int reg);
int mv88e6xxx_phy_page_write(struct dsa_switch *ds, int port, int page,
int reg, int val);

extern struct dsa_switch_driver mv88e6131_switch_driver;
extern struct dsa_switch_driver mv88e6123_61_65_switch_driver;
extern struct dsa_switch_driver mv88e6352_switch_driver;
@@ -144,10 +144,9 @@ static void dummy_setup(struct net_device *dev)
dev->destructor = free_netdev;

/* Fill in device structure with ethernet-generic values. */
dev->tx_queue_len = 0;
dev->flags |= IFF_NOARP;
dev->flags &= ~IFF_MULTICAST;
dev->priv_flags |= IFF_LIVE_ADDR_CHANGE;
dev->priv_flags |= IFF_LIVE_ADDR_CHANGE | IFF_NO_QUEUE;
dev->features |= NETIF_F_SG | NETIF_F_FRAGLIST | NETIF_F_TSO;
dev->features |= NETIF_F_HW_CSUM | NETIF_F_HIGHDMA | NETIF_F_LLTX;
eth_hw_addr_random(dev);
@@ -1726,6 +1726,7 @@ vortex_up(struct net_device *dev)
if (vp->cb_fn_base) /* The PCMCIA people are idiots. */
iowrite32(0x8000, vp->cb_fn_base + 4);
netif_start_queue (dev);
netdev_reset_queue(dev);
err_out:
return err;
}
@@ -1935,16 +1936,18 @@ static void vortex_tx_timeout(struct net_device *dev)
if (vp->cur_tx - vp->dirty_tx > 0 && ioread32(ioaddr + DownListPtr) == 0)
iowrite32(vp->tx_ring_dma + (vp->dirty_tx % TX_RING_SIZE) * sizeof(struct boom_tx_desc),
ioaddr + DownListPtr);
if (vp->cur_tx - vp->dirty_tx < TX_RING_SIZE)
if (vp->cur_tx - vp->dirty_tx < TX_RING_SIZE) {
netif_wake_queue (dev);
netdev_reset_queue (dev);
}
if (vp->drv_flags & IS_BOOMERANG)
iowrite8(PKT_BUF_SZ>>8, ioaddr + TxFreeThreshold);
iowrite16(DownUnstall, ioaddr + EL3_CMD);
} else {
dev->stats.tx_dropped++;
netif_wake_queue(dev);
netdev_reset_queue(dev);
}

/* Issue Tx Enable */
iowrite16(TxEnable, ioaddr + EL3_CMD);
dev->trans_start = jiffies; /* prevent tx timeout */
@@ -2063,6 +2066,7 @@ vortex_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
struct vortex_private *vp = netdev_priv(dev);
void __iomem *ioaddr = vp->ioaddr;
int skblen = skb->len;

/* Put out the doubleword header... */
iowrite32(skb->len, ioaddr + TX_FIFO);
@@ -2094,6 +2098,7 @@ vortex_start_xmit(struct sk_buff *skb, struct net_device *dev)
}
}

netdev_sent_queue(dev, skblen);

/* Clear the Tx status stack. */
{
@@ -2125,6 +2130,7 @@ boomerang_start_xmit(struct sk_buff *skb, struct net_device *dev)
void __iomem *ioaddr = vp->ioaddr;
/* Calculate the next Tx descriptor entry. */
int entry = vp->cur_tx % TX_RING_SIZE;
int skblen = skb->len;
struct boom_tx_desc *prev_entry = &vp->tx_ring[(vp->cur_tx-1) % TX_RING_SIZE];
unsigned long flags;
dma_addr_t dma_addr;
@@ -2230,6 +2236,8 @@ boomerang_start_xmit(struct sk_buff *skb, struct net_device *dev)
}

vp->cur_tx++;
netdev_sent_queue(dev, skblen);

if (vp->cur_tx - vp->dirty_tx > TX_RING_SIZE - 1) {
netif_stop_queue (dev);
} else { /* Clear previous interrupt enable. */
@@ -2267,6 +2275,7 @@ vortex_interrupt(int irq, void *dev_id)
int status;
int work_done = max_interrupt_work;
int handled = 0;
unsigned int bytes_compl = 0, pkts_compl = 0;

ioaddr = vp->ioaddr;
spin_lock(&vp->lock);
@@ -2314,6 +2323,8 @@ vortex_interrupt(int irq, void *dev_id)
if (ioread16(ioaddr + Wn7_MasterStatus) & 0x1000) {
iowrite16(0x1000, ioaddr + Wn7_MasterStatus); /* Ack the event. */
pci_unmap_single(VORTEX_PCI(vp), vp->tx_skb_dma, (vp->tx_skb->len + 3) & ~3, PCI_DMA_TODEVICE);
pkts_compl++;
bytes_compl += vp->tx_skb->len;
dev_kfree_skb_irq(vp->tx_skb); /* Release the transferred buffer */
if (ioread16(ioaddr + TxFree) > 1536) {
/*
@@ -2358,6 +2369,7 @@ vortex_interrupt(int irq, void *dev_id)
iowrite16(AckIntr | IntReq | IntLatch, ioaddr + EL3_CMD);
} while ((status = ioread16(ioaddr + EL3_STATUS)) & (IntLatch | RxComplete));

netdev_completed_queue(dev, pkts_compl, bytes_compl);
spin_unlock(&vp->window_lock);

if (vortex_debug > 4)
@@ -2382,6 +2394,7 @@ boomerang_interrupt(int irq, void *dev_id)
int status;
int work_done = max_interrupt_work;
int handled = 0;
unsigned int bytes_compl = 0, pkts_compl = 0;

ioaddr = vp->ioaddr;

@@ -2455,6 +2468,8 @@ boomerang_interrupt(int irq, void *dev_id)
pci_unmap_single(VORTEX_PCI(vp),
le32_to_cpu(vp->tx_ring[entry].addr), skb->len, PCI_DMA_TODEVICE);
#endif
pkts_compl++;
bytes_compl += skb->len;
dev_kfree_skb_irq(skb);
vp->tx_skbuff[entry] = NULL;
} else {
@@ -2495,6 +2510,7 @@ boomerang_interrupt(int irq, void *dev_id)
iowrite32(0x8000, vp->cb_fn_base + 4);

} while ((status = ioread16(ioaddr + EL3_STATUS)) & IntLatch);
netdev_completed_queue(dev, pkts_compl, bytes_compl);

if (vortex_debug > 4)
pr_debug("%s: exiting interrupt, status %4.4x.\n",
@@ -2696,7 +2712,8 @@ vortex_down(struct net_device *dev, int final_down)
struct vortex_private *vp = netdev_priv(dev);
void __iomem *ioaddr = vp->ioaddr;

netif_stop_queue (dev);
netdev_reset_queue(dev);
netif_stop_queue(dev);

del_timer_sync(&vp->rx_oom_timer);
del_timer_sync(&vp->timer);
@@ -167,6 +167,7 @@ source "drivers/net/ethernet/sgi/Kconfig"
source "drivers/net/ethernet/smsc/Kconfig"
source "drivers/net/ethernet/stmicro/Kconfig"
source "drivers/net/ethernet/sun/Kconfig"
source "drivers/net/ethernet/synopsys/Kconfig"
source "drivers/net/ethernet/tehuti/Kconfig"
source "drivers/net/ethernet/ti/Kconfig"
source "drivers/net/ethernet/tile/Kconfig"
@@ -77,6 +77,7 @@ obj-$(CONFIG_NET_VENDOR_SGI) += sgi/
obj-$(CONFIG_NET_VENDOR_SMSC) += smsc/
obj-$(CONFIG_NET_VENDOR_STMICRO) += stmicro/
obj-$(CONFIG_NET_VENDOR_SUN) += sun/
obj-$(CONFIG_NET_VENDOR_SYNOPSYS) += synopsys/
obj-$(CONFIG_NET_VENDOR_TEHUTI) += tehuti/
obj-$(CONFIG_NET_VENDOR_TI) += ti/
obj-$(CONFIG_TILE_NET) += tile/
@@ -28,6 +28,7 @@
#include <linux/of_platform.h>
#include <linux/platform_device.h>
#include <linux/phy.h>
#include <linux/soc/sunxi/sunxi_sram.h>

#include "sun4i-emac.h"

@@ -857,11 +858,17 @@ static int emac_probe(struct platform_device *pdev)

clk_prepare_enable(db->clk);

ret = sunxi_sram_claim(&pdev->dev);
if (ret) {
dev_err(&pdev->dev, "Error couldn't map SRAM to device\n");
goto out;
}

db->phy_node = of_parse_phandle(np, "phy", 0);
if (!db->phy_node) {
dev_err(&pdev->dev, "no associated PHY\n");
ret = -ENODEV;
goto out;
goto out_release_sram;
}

/* Read MAC-address from DT */
@@ -893,7 +900,7 @@ static int emac_probe(struct platform_device *pdev)
if (ret) {
dev_err(&pdev->dev, "Registering netdev failed!\n");
ret = -ENODEV;
goto out;
goto out_release_sram;
}

dev_info(&pdev->dev, "%s: at %p, IRQ %d MAC: %pM\n",
@@ -901,6 +908,8 @@ static int emac_probe(struct platform_device *pdev)

return 0;

out_release_sram:
sunxi_sram_release(&pdev->dev);
out:
dev_err(db->dev, "not found (%d).\n", ret);
@@ -71,8 +71,6 @@ int sgdma_initialize(struct altera_tse_private *priv)
SGDMA_CTRLREG_INTEN |
SGDMA_CTRLREG_ILASTD;

priv->sgdmadesclen = sizeof(struct sgdma_descrip);

INIT_LIST_HEAD(&priv->txlisthd);
INIT_LIST_HEAD(&priv->rxlisthd);

@@ -254,7 +252,7 @@ u32 sgdma_rx_status(struct altera_tse_private *priv)
unsigned int pktstatus = 0;
dma_sync_single_for_cpu(priv->device,
priv->rxdescphys,
priv->sgdmadesclen,
SGDMA_DESC_LEN,
DMA_FROM_DEVICE);

pktlength = csrrd16(desc, sgdma_descroffs(bytes_xferred));
@@ -374,7 +372,7 @@ static int sgdma_async_read(struct altera_tse_private *priv)

dma_sync_single_for_device(priv->device,
priv->rxdescphys,
priv->sgdmadesclen,
SGDMA_DESC_LEN,
DMA_TO_DEVICE);

csrwr32(lower_32_bits(sgdma_rxphysaddr(priv, cdesc)),
@@ -402,7 +400,7 @@ static int sgdma_async_write(struct altera_tse_private *priv,
csrwr32(0x1f, priv->tx_dma_csr, sgdma_csroffs(status));

dma_sync_single_for_device(priv->device, priv->txdescphys,
priv->sgdmadesclen, DMA_TO_DEVICE);
SGDMA_DESC_LEN, DMA_TO_DEVICE);

csrwr32(lower_32_bits(sgdma_txphysaddr(priv, desc)),
priv->tx_dma_csr,
@@ -50,6 +50,7 @@ struct sgdma_descrip {
u8 control;
} __packed;

#define SGDMA_DESC_LEN sizeof(struct sgdma_descrip)

#define SGDMA_STATUS_ERR BIT(0)
#define SGDMA_STATUS_LENGTH_ERR BIT(1)
@@ -458,7 +458,6 @@ struct altera_tse_private {
u32 rxctrlreg;
dma_addr_t rxdescphys;
dma_addr_t txdescphys;
size_t sgdmadesclen;

struct list_head txlisthd;
struct list_head rxlisthd;
@@ -193,12 +193,16 @@ enum xgene_enet_rm {
#define USERINFO_LEN 32
#define FPQNUM_POS 32
#define FPQNUM_LEN 12
#define NV_POS 50
#define NV_LEN 1
#define LL_POS 51
#define LL_LEN 1
#define LERR_POS 60
#define LERR_LEN 3
#define STASH_POS 52
#define STASH_LEN 2
#define BUFDATALEN_POS 48
#define BUFDATALEN_LEN 12
#define BUFDATALEN_LEN 15
#define DATAADDR_POS 0
#define DATAADDR_LEN 42
#define COHERENT_POS 63
@@ -215,9 +219,19 @@ enum xgene_enet_rm {
#define IPHDR_LEN 6
#define EC_POS 22 /* Enable checksum */
#define EC_LEN 1
#define ET_POS 23 /* Enable TSO */
#define IS_POS 24 /* IP protocol select */
#define IS_LEN 1
#define TYPE_ETH_WORK_MESSAGE_POS 44
#define LL_BYTES_MSB_POS 56
#define LL_BYTES_MSB_LEN 8
#define LL_BYTES_LSB_POS 48
#define LL_BYTES_LSB_LEN 12
#define LL_LEN_POS 48
#define LL_LEN_LEN 8
#define DATALEN_MASK GENMASK(11, 0)

#define LAST_BUFFER (0x7800ULL << BUFDATALEN_POS)

struct xgene_enet_raw_desc {
__le64 m0;
@ -147,18 +147,27 @@ static int xgene_enet_tx_completion(struct xgene_enet_desc_ring *cp_ring,
|
|||
{
|
||||
struct sk_buff *skb;
|
||||
struct device *dev;
|
||||
skb_frag_t *frag;
|
||||
dma_addr_t *frag_dma_addr;
|
||||
u16 skb_index;
|
||||
u8 status;
|
||||
int ret = 0;
|
||||
int i, ret = 0;
|
||||
|
||||
skb_index = GET_VAL(USERINFO, le64_to_cpu(raw_desc->m0));
|
||||
skb = cp_ring->cp_skb[skb_index];
|
||||
frag_dma_addr = &cp_ring->frag_dma_addr[skb_index * MAX_SKB_FRAGS];
|
||||
|
||||
dev = ndev_to_dev(cp_ring->ndev);
|
||||
dma_unmap_single(dev, GET_VAL(DATAADDR, le64_to_cpu(raw_desc->m1)),
|
||||
GET_VAL(BUFDATALEN, le64_to_cpu(raw_desc->m1)),
|
||||
skb_headlen(skb),
|
||||
DMA_TO_DEVICE);
|
||||
|
||||
for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
|
||||
frag = &skb_shinfo(skb)->frags[i];
|
||||
dma_unmap_page(dev, frag_dma_addr[i], skb_frag_size(frag),
|
||||
DMA_TO_DEVICE);
|
||||
}
|
||||
|
||||
/* Checking for error */
|
||||
status = GET_VAL(LERR, le64_to_cpu(raw_desc->m0));
|
||||
if (unlikely(status > 2)) {
|
||||
|
@ -179,12 +188,16 @@ static int xgene_enet_tx_completion(struct xgene_enet_desc_ring *cp_ring,
|
|||
|
||||
static u64 xgene_enet_work_msg(struct sk_buff *skb)
|
||||
{
|
||||
struct net_device *ndev = skb->dev;
|
||||
struct xgene_enet_pdata *pdata = netdev_priv(ndev);
|
||||
struct iphdr *iph;
|
||||
u8 l3hlen, l4hlen = 0;
|
||||
u8 csum_enable = 0;
|
||||
u8 proto = 0;
|
||||
u8 ethhdr;
|
||||
u64 hopinfo;
|
||||
u8 l3hlen = 0, l4hlen = 0;
|
||||
u8 ethhdr, proto = 0, csum_enable = 0;
|
||||
u64 hopinfo = 0;
|
||||
u32 hdr_len, mss = 0;
|
||||
u32 i, len, nr_frags;
|
||||
|
||||
ethhdr = xgene_enet_hdr_len(skb->data);
|
||||
|
||||
if (unlikely(skb->protocol != htons(ETH_P_IP)) &&
|
||||
unlikely(skb->protocol != htons(ETH_P_8021Q)))
|
||||
|
@ -201,14 +214,40 @@ static u64 xgene_enet_work_msg(struct sk_buff *skb)
|
|||
l4hlen = tcp_hdrlen(skb) >> 2;
|
||||
csum_enable = 1;
|
||||
proto = TSO_IPPROTO_TCP;
|
||||
if (ndev->features & NETIF_F_TSO) {
|
||||
hdr_len = ethhdr + ip_hdrlen(skb) + tcp_hdrlen(skb);
|
||||
mss = skb_shinfo(skb)->gso_size;
|
||||
|
||||
if (skb_is_nonlinear(skb)) {
|
||||
len = skb_headlen(skb);
|
||||
nr_frags = skb_shinfo(skb)->nr_frags;
|
||||
|
||||
for (i = 0; i < 2 && i < nr_frags; i++)
|
||||
len += skb_shinfo(skb)->frags[i].size;
|
||||
|
||||
/* HW requires header must reside in 3 buffer */
|
||||
if (unlikely(hdr_len > len)) {
|
||||
if (skb_linearize(skb))
|
||||
return 0;
|
||||
}
|
||||
}
|
||||
|
||||
if (!mss || ((skb->len - hdr_len) <= mss))
|
||||
goto out;
|
||||
|
||||
if (mss != pdata->mss) {
|
||||
pdata->mss = mss;
|
||||
pdata->mac_ops->set_mss(pdata);
|
||||
}
|
||||
hopinfo |= SET_BIT(ET);
|
||||
}
|
||||
} else if (iph->protocol == IPPROTO_UDP) {
|
||||
l4hlen = UDP_HDR_SIZE;
|
||||
csum_enable = 1;
|
||||
}
|
||||
out:
|
||||
l3hlen = ip_hdrlen(skb) >> 2;
|
||||
ethhdr = xgene_enet_hdr_len(skb->data);
|
||||
hopinfo = SET_VAL(TCPHDR, l4hlen) |
|
||||
hopinfo |= SET_VAL(TCPHDR, l4hlen) |
|
||||
SET_VAL(IPHDR, l3hlen) |
|
||||
SET_VAL(ETHHDR, ethhdr) |
|
||||
SET_VAL(EC, csum_enable) |
|
||||
|
@ -219,35 +258,170 @@ out:
|
|||
return hopinfo;
|
||||
}
|
||||
|
||||
static u16 xgene_enet_encode_len(u16 len)
|
||||
{
|
||||
return (len == BUFLEN_16K) ? 0 : len;
|
||||
}
|
||||
|
||||
static void xgene_set_addr_len(__le64 *desc, u32 idx, dma_addr_t addr, u32 len)
|
||||
{
|
||||
desc[idx ^ 1] = cpu_to_le64(SET_VAL(DATAADDR, addr) |
|
||||
SET_VAL(BUFDATALEN, len));
|
||||
}
|
||||
|
||||
static __le64 *xgene_enet_get_exp_bufs(struct xgene_enet_desc_ring *ring)
|
||||
{
|
||||
__le64 *exp_bufs;
|
||||
|
||||
exp_bufs = &ring->exp_bufs[ring->exp_buf_tail * MAX_EXP_BUFFS];
|
||||
memset(exp_bufs, 0, sizeof(__le64) * MAX_EXP_BUFFS);
|
||||
ring->exp_buf_tail = (ring->exp_buf_tail + 1) & ((ring->slots / 2) - 1);
|
||||
|
||||
return exp_bufs;
|
||||
}
|
||||
|
||||
static dma_addr_t *xgene_get_frag_dma_array(struct xgene_enet_desc_ring *ring)
|
||||
{
|
||||
return &ring->cp_ring->frag_dma_addr[ring->tail * MAX_SKB_FRAGS];
|
||||
}
|
||||
|
||||
static int xgene_enet_setup_tx_desc(struct xgene_enet_desc_ring *tx_ring,
|
||||
struct sk_buff *skb)
|
||||
{
|
||||
struct device *dev = ndev_to_dev(tx_ring->ndev);
|
||||
struct xgene_enet_raw_desc *raw_desc;
|
||||
dma_addr_t dma_addr;
|
||||
__le64 *exp_desc = NULL, *exp_bufs = NULL;
|
||||
dma_addr_t dma_addr, pbuf_addr, *frag_dma_addr;
|
||||
skb_frag_t *frag;
|
||||
u16 tail = tx_ring->tail;
|
||||
u64 hopinfo;
|
||||
u32 len, hw_len;
|
||||
u8 ll = 0, nv = 0, idx = 0;
|
||||
bool split = false;
|
||||
u32 size, offset, ell_bytes = 0;
|
||||
u32 i, fidx, nr_frags, count = 1;
|
||||
|
||||
raw_desc = &tx_ring->raw_desc[tail];
|
||||
tail = (tail + 1) & (tx_ring->slots - 1);
|
||||
memset(raw_desc, 0, sizeof(struct xgene_enet_raw_desc));
|
||||
|
||||
dma_addr = dma_map_single(dev, skb->data, skb->len, DMA_TO_DEVICE);
|
||||
hopinfo = xgene_enet_work_msg(skb);
|
||||
if (!hopinfo)
|
||||
return -EINVAL;
|
||||
raw_desc->m3 = cpu_to_le64(SET_VAL(HENQNUM, tx_ring->dst_ring_num) |
|
||||
hopinfo);
|
||||
|
||||
len = skb_headlen(skb);
|
||||
hw_len = xgene_enet_encode_len(len);
|
||||
|
||||
dma_addr = dma_map_single(dev, skb->data, len, DMA_TO_DEVICE);
|
||||
if (dma_mapping_error(dev, dma_addr)) {
|
||||
netdev_err(tx_ring->ndev, "DMA mapping error\n");
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
/* Hardware expects descriptor in little endian format */
|
||||
raw_desc->m0 = cpu_to_le64(tail);
|
||||
raw_desc->m1 = cpu_to_le64(SET_VAL(DATAADDR, dma_addr) |
|
||||
SET_VAL(BUFDATALEN, skb->len) |
|
||||
SET_VAL(BUFDATALEN, hw_len) |
|
||||
SET_BIT(COHERENT));
|
||||
hopinfo = xgene_enet_work_msg(skb);
|
||||
raw_desc->m3 = cpu_to_le64(SET_VAL(HENQNUM, tx_ring->dst_ring_num) |
|
||||
hopinfo);
|
||||
tx_ring->cp_ring->cp_skb[tail] = skb;
|
||||
|
||||
return 0;
|
||||
if (!skb_is_nonlinear(skb))
|
||||
goto out;
|
||||
|
||||
/* scatter gather */
|
||||
nv = 1;
|
||||
exp_desc = (void *)&tx_ring->raw_desc[tail];
|
||||
tail = (tail + 1) & (tx_ring->slots - 1);
|
||||
memset(exp_desc, 0, sizeof(struct xgene_enet_raw_desc));
|
||||
|
||||
nr_frags = skb_shinfo(skb)->nr_frags;
|
||||
for (i = nr_frags; i < 4 ; i++)
|
||||
exp_desc[i ^ 1] = cpu_to_le64(LAST_BUFFER);
|
||||
|
||||
frag_dma_addr = xgene_get_frag_dma_array(tx_ring);
|
||||
|
||||
for (i = 0, fidx = 0; split || (fidx < nr_frags); i++) {
|
||||
if (!split) {
|
||||
frag = &skb_shinfo(skb)->frags[fidx];
|
||||
size = skb_frag_size(frag);
|
||||
offset = 0;
|
||||
|
||||
pbuf_addr = skb_frag_dma_map(dev, frag, 0, size,
|
||||
DMA_TO_DEVICE);
|
||||
if (dma_mapping_error(dev, pbuf_addr))
|
||||
return -EINVAL;
|
||||
|
||||
frag_dma_addr[fidx] = pbuf_addr;
|
||||
fidx++;
|
||||
|
||||
if (size > BUFLEN_16K)
|
||||
split = true;
|
||||
}
|
||||
|
||||
if (size > BUFLEN_16K) {
|
||||
len = BUFLEN_16K;
|
||||
size -= BUFLEN_16K;
|
||||
} else {
|
||||
len = size;
|
||||
split = false;
|
||||
}
|
||||
|
||||
dma_addr = pbuf_addr + offset;
|
||||
hw_len = xgene_enet_encode_len(len);
|
||||
|
||||
switch (i) {
|
||||
case 0:
|
||||
case 1:
|
||||
case 2:
|
||||
xgene_set_addr_len(exp_desc, i, dma_addr, hw_len);
|
||||
break;
|
||||
case 3:
|
||||
if (split || (fidx != nr_frags)) {
|
||||
exp_bufs = xgene_enet_get_exp_bufs(tx_ring);
|
||||
xgene_set_addr_len(exp_bufs, idx, dma_addr,
|
||||
hw_len);
|
||||
idx++;
|
||||
ell_bytes += len;
|
||||
} else {
|
||||
xgene_set_addr_len(exp_desc, i, dma_addr,
|
||||
hw_len);
|
||||
}
|
||||
break;
|
||||
default:
|
||||
xgene_set_addr_len(exp_bufs, idx, dma_addr, hw_len);
|
||||
idx++;
|
||||
ell_bytes += len;
|
||||
break;
|
||||
}
|
||||
|
||||
if (split)
|
||||
offset += BUFLEN_16K;
|
||||
}
|
||||
count++;
|
||||
|
||||
if (idx) {
|
||||
ll = 1;
|
||||
dma_addr = dma_map_single(dev, exp_bufs,
|
||||
sizeof(u64) * MAX_EXP_BUFFS,
|
||||
DMA_TO_DEVICE);
|
||||
if (dma_mapping_error(dev, dma_addr)) {
|
||||
dev_kfree_skb_any(skb);
|
||||
return -EINVAL;
|
||||
}
|
||||
i = ell_bytes >> LL_BYTES_LSB_LEN;
|
||||
exp_desc[2] = cpu_to_le64(SET_VAL(DATAADDR, dma_addr) |
|
||||
SET_VAL(LL_BYTES_MSB, i) |
|
||||
SET_VAL(LL_LEN, idx));
|
||||
raw_desc->m2 = cpu_to_le64(SET_VAL(LL_BYTES_LSB, ell_bytes));
|
||||
}
|
||||
|
||||
out:
|
||||
raw_desc->m0 = cpu_to_le64(SET_VAL(LL, ll) | SET_VAL(NV, nv) |
|
||||
SET_VAL(USERINFO, tx_ring->tail));
|
||||
tx_ring->cp_ring->cp_skb[tx_ring->tail] = skb;
|
||||
tx_ring->tail = tail;
|
||||
|
||||
return count;
|
||||
}
|
||||
|
||||
static netdev_tx_t xgene_enet_start_xmit(struct sk_buff *skb,
|
||||
|
@ -257,6 +431,7 @@ static netdev_tx_t xgene_enet_start_xmit(struct sk_buff *skb,
|
|||
struct xgene_enet_desc_ring *tx_ring = pdata->tx_ring;
|
||||
struct xgene_enet_desc_ring *cp_ring = tx_ring->cp_ring;
|
||||
u32 tx_level, cq_level;
|
||||
int count;
|
||||
|
||||
tx_level = pdata->ring_ops->len(tx_ring);
|
||||
cq_level = pdata->ring_ops->len(cp_ring);
|
||||
|
@ -266,14 +441,17 @@ static netdev_tx_t xgene_enet_start_xmit(struct sk_buff *skb,
|
|||
return NETDEV_TX_BUSY;
|
||||
}
|
||||
|
||||
if (xgene_enet_setup_tx_desc(tx_ring, skb)) {
|
||||
if (skb_padto(skb, XGENE_MIN_ENET_FRAME_SIZE))
|
||||
return NETDEV_TX_OK;
|
||||
|
||||
count = xgene_enet_setup_tx_desc(tx_ring, skb);
|
||||
if (count <= 0) {
|
||||
dev_kfree_skb_any(skb);
|
||||
return NETDEV_TX_OK;
|
||||
}
|
||||
|
||||
pdata->ring_ops->wr_cmd(tx_ring, 1);
|
||||
pdata->ring_ops->wr_cmd(tx_ring, count);
|
||||
skb_tx_timestamp(skb);
|
||||
tx_ring->tail = (tx_ring->tail + 1) & (tx_ring->slots - 1);
|
||||
|
||||
pdata->stats.tx_packets++;
|
||||
pdata->stats.tx_bytes += skb->len;
|
||||
|
@ -326,7 +504,7 @@ static int xgene_enet_rx_frame(struct xgene_enet_desc_ring *rx_ring,
|
|||
|
||||
/* strip off CRC as HW isn't doing this */
|
||||
datalen = GET_VAL(BUFDATALEN, le64_to_cpu(raw_desc->m1));
|
||||
datalen -= 4;
|
||||
datalen = (datalen & DATALEN_MASK) - 4;
|
||||
prefetch(skb->data - NET_IP_ALIGN);
|
||||
skb_put(skb, datalen);
|
||||
|
||||
|
@ -358,26 +536,41 @@ static int xgene_enet_process_ring(struct xgene_enet_desc_ring *ring,
|
|||
int budget)
|
||||
{
|
||||
struct xgene_enet_pdata *pdata = netdev_priv(ring->ndev);
|
||||
struct xgene_enet_raw_desc *raw_desc;
|
||||
struct xgene_enet_raw_desc *raw_desc, *exp_desc;
|
||||
u16 head = ring->head;
|
||||
u16 slots = ring->slots - 1;
|
||||
int ret, count = 0;
|
||||
int ret, count = 0, processed = 0;
|
||||
|
||||
do {
|
||||
raw_desc = &ring->raw_desc[head];
|
||||
exp_desc = NULL;
|
||||
if (unlikely(xgene_enet_is_desc_slot_empty(raw_desc)))
|
||||
break;
|
||||
|
||||
/* read fpqnum field after dataaddr field */
|
||||
dma_rmb();
|
||||
if (GET_BIT(NV, le64_to_cpu(raw_desc->m0))) {
|
||||
head = (head + 1) & slots;
|
||||
exp_desc = &ring->raw_desc[head];
|
||||
|
||||
if (unlikely(xgene_enet_is_desc_slot_empty(exp_desc))) {
|
||||
head = (head - 1) & slots;
|
||||
break;
|
||||
}
|
||||
dma_rmb();
|
||||
count++;
|
||||
}
|
||||
if (is_rx_desc(raw_desc))
|
||||
ret = xgene_enet_rx_frame(ring, raw_desc);
|
||||
else
|
||||
ret = xgene_enet_tx_completion(ring, raw_desc);
|
||||
xgene_enet_mark_desc_slot_empty(raw_desc);
|
||||
if (exp_desc)
|
||||
xgene_enet_mark_desc_slot_empty(exp_desc);
|
||||
|
||||
head = (head + 1) & slots;
|
||||
count++;
|
||||
processed++;
|
||||
|
||||
if (ret)
|
||||
break;
|
||||
|
@ -393,7 +586,7 @@ static int xgene_enet_process_ring(struct xgene_enet_desc_ring *ring,
|
|||
}
|
||||
}
|
||||
|
||||
return count;
|
||||
return processed;
|
||||
}
|
||||
|
||||
static int xgene_enet_napi(struct napi_struct *napi, const int budget)
|
||||
|
@ -738,12 +931,13 @@ static int xgene_enet_create_desc_rings(struct net_device *ndev)
|
|||
struct xgene_enet_desc_ring *rx_ring, *tx_ring, *cp_ring;
|
||||
struct xgene_enet_desc_ring *buf_pool = NULL;
|
||||
enum xgene_ring_owner owner;
|
||||
dma_addr_t dma_exp_bufs;
|
||||
u8 cpu_bufnum = pdata->cpu_bufnum;
|
||||
u8 eth_bufnum = pdata->eth_bufnum;
|
||||
u8 bp_bufnum = pdata->bp_bufnum;
|
||||
u16 ring_num = pdata->ring_num;
|
||||
u16 ring_id;
|
||||
int ret;
|
||||
int ret, size;
|
||||
|
||||
/* allocate rx descriptor ring */
|
||||
owner = xgene_derive_ring_owner(pdata);
|
||||
|
@ -794,6 +988,15 @@ static int xgene_enet_create_desc_rings(struct net_device *ndev)
|
|||
ret = -ENOMEM;
|
||||
goto err;
|
||||
}
|
||||
|
||||
size = (tx_ring->slots / 2) * sizeof(__le64) * MAX_EXP_BUFFS;
|
||||
tx_ring->exp_bufs = dma_zalloc_coherent(dev, size, &dma_exp_bufs,
|
||||
GFP_KERNEL);
|
||||
if (!tx_ring->exp_bufs) {
|
||||
ret = -ENOMEM;
|
||||
goto err;
|
||||
}
|
||||
|
||||
pdata->tx_ring = tx_ring;
|
||||
|
||||
if (!pdata->cq_cnt) {
|
||||
|
@ -818,6 +1021,16 @@ static int xgene_enet_create_desc_rings(struct net_device *ndev)
|
|||
ret = -ENOMEM;
|
||||
goto err;
|
||||
}
|
||||
|
||||
size = sizeof(dma_addr_t) * MAX_SKB_FRAGS;
|
||||
cp_ring->frag_dma_addr = devm_kcalloc(dev, tx_ring->slots,
|
||||
size, GFP_KERNEL);
|
||||
if (!cp_ring->frag_dma_addr) {
|
||||
devm_kfree(dev, cp_ring->cp_skb);
|
||||
ret = -ENOMEM;
|
||||
goto err;
|
||||
}
|
||||
|
||||
pdata->tx_ring->cp_ring = cp_ring;
|
||||
pdata->tx_ring->dst_ring_num = xgene_enet_dst_ring_num(cp_ring);
|
||||
|
||||
|
@ -905,40 +1118,6 @@ static int xgene_get_port_id_dt(struct device *dev, struct xgene_enet_pdata *pda
|
|||
return ret;
|
||||
}
|
||||
|
||||
static int xgene_get_mac_address(struct device *dev,
|
||||
unsigned char *addr)
|
||||
{
|
||||
int ret;
|
||||
|
||||
ret = device_property_read_u8_array(dev, "local-mac-address", addr, 6);
|
||||
if (ret)
|
||||
ret = device_property_read_u8_array(dev, "mac-address",
|
||||
addr, 6);
|
||||
if (ret)
|
||||
return -ENODEV;
|
||||
|
||||
return ETH_ALEN;
|
||||
}
|
||||
|
||||
static int xgene_get_phy_mode(struct device *dev)
|
||||
{
|
||||
int i, ret;
|
||||
char *modestr;
|
||||
|
||||
ret = device_property_read_string(dev, "phy-connection-type",
|
||||
(const char **)&modestr);
|
||||
if (ret)
|
||||
ret = device_property_read_string(dev, "phy-mode",
|
||||
(const char **)&modestr);
|
||||
if (ret)
|
||||
return -ENODEV;
|
||||
|
||||
for (i = 0; i < PHY_INTERFACE_MODE_MAX; i++) {
|
||||
if (!strcasecmp(modestr, phy_modes(i)))
|
||||
return i;
|
||||
}
|
||||
return -ENODEV;
|
||||
}
|
||||
|
||||
static int xgene_enet_get_resources(struct xgene_enet_pdata *pdata)
|
||||
{
|
||||
|
@ -998,12 +1177,12 @@ static int xgene_enet_get_resources(struct xgene_enet_pdata *pdata)
|
|||
if (ret)
|
||||
return ret;
|
||||
|
||||
if (xgene_get_mac_address(dev, ndev->dev_addr) != ETH_ALEN)
|
||||
if (!device_get_mac_address(dev, ndev->dev_addr, ETH_ALEN))
|
||||
eth_hw_addr_random(ndev);
|
||||
|
||||
memcpy(ndev->perm_addr, ndev->dev_addr, ndev->addr_len);
|
||||
|
||||
pdata->phy_mode = xgene_get_phy_mode(dev);
|
||||
pdata->phy_mode = device_get_phy_mode(dev);
|
||||
if (pdata->phy_mode < 0) {
|
||||
dev_err(dev, "Unable to get phy-connection-type\n");
|
||||
return pdata->phy_mode;
|
||||
|
@ -1207,7 +1386,8 @@ static int xgene_enet_probe(struct platform_device *pdev)
|
|||
xgene_enet_set_ethtool_ops(ndev);
|
||||
ndev->features |= NETIF_F_IP_CSUM |
|
||||
NETIF_F_GSO |
|
||||
NETIF_F_GRO;
|
||||
NETIF_F_GRO |
|
||||
NETIF_F_SG;
|
||||
|
||||
of_id = of_match_device(xgene_enet_of_match, &pdev->dev);
|
||||
if (of_id) {
|
||||
|
@ -1233,6 +1413,12 @@ static int xgene_enet_probe(struct platform_device *pdev)
|
|||
|
||||
xgene_enet_setup_ops(pdata);
|
||||
|
||||
if (pdata->phy_mode == PHY_INTERFACE_MODE_XGMII) {
|
||||
ndev->features |= NETIF_F_TSO;
|
||||
pdata->mss = XGENE_ENET_MSS;
|
||||
}
|
||||
ndev->hw_features = ndev->features;
|
||||
|
||||
ret = register_netdev(ndev);
|
||||
if (ret) {
|
||||
netdev_err(ndev, "Failed to register netdev\n");
|
||||
|
|
|
@ -40,8 +40,12 @@
|
|||
#define XGENE_DRV_VERSION "v1.0"
|
||||
#define XGENE_ENET_MAX_MTU 1536
|
||||
#define SKB_BUFFER_SIZE (XGENE_ENET_MAX_MTU - NET_IP_ALIGN)
|
||||
#define BUFLEN_16K (16 * 1024)
|
||||
#define NUM_PKT_BUF 64
|
||||
#define NUM_BUFPOOL 32
|
||||
#define MAX_EXP_BUFFS 256
|
||||
#define XGENE_ENET_MSS 1448
|
||||
#define XGENE_MIN_ENET_FRAME_SIZE 60
|
||||
|
||||
#define START_CPU_BUFNUM_0 0
|
||||
#define START_ETH_BUFNUM_0 2
|
||||
|
@ -79,6 +83,7 @@ struct xgene_enet_desc_ring {
|
|||
u16 num;
|
||||
u16 head;
|
||||
u16 tail;
|
||||
u16 exp_buf_tail;
|
||||
u16 slots;
|
||||
u16 irq;
|
||||
char irq_name[IRQ_ID_SIZE];
|
||||
|
@ -93,6 +98,7 @@ struct xgene_enet_desc_ring {
|
|||
u8 nbufpool;
|
||||
struct sk_buff *(*rx_skb);
|
||||
struct sk_buff *(*cp_skb);
|
||||
dma_addr_t *frag_dma_addr;
|
||||
enum xgene_enet_ring_cfgsize cfgsize;
|
||||
struct xgene_enet_desc_ring *cp_ring;
|
||||
struct xgene_enet_desc_ring *buf_pool;
|
||||
|
@ -102,6 +108,7 @@ struct xgene_enet_desc_ring {
|
|||
struct xgene_enet_raw_desc *raw_desc;
|
||||
struct xgene_enet_raw_desc16 *raw_desc16;
|
||||
};
|
||||
__le64 *exp_bufs;
|
||||
};
|
||||
|
||||
struct xgene_mac_ops {
|
||||
|
@ -112,6 +119,7 @@ struct xgene_mac_ops {
|
|||
void (*tx_disable)(struct xgene_enet_pdata *pdata);
|
||||
void (*rx_disable)(struct xgene_enet_pdata *pdata);
|
||||
void (*set_mac_addr)(struct xgene_enet_pdata *pdata);
|
||||
void (*set_mss)(struct xgene_enet_pdata *pdata);
|
||||
void (*link_state)(struct work_struct *work);
|
||||
};
|
||||
|
||||
|
@ -170,6 +178,7 @@ struct xgene_enet_pdata {
|
|||
u8 eth_bufnum;
|
||||
u8 bp_bufnum;
|
||||
u16 ring_num;
|
||||
u32 mss;
|
||||
};
|
||||
|
||||
struct xgene_indirect_ctl {
|
||||
|
@ -204,6 +213,9 @@ static inline u64 xgene_enet_get_field_value(int pos, int len, u64 src)
|
|||
#define GET_VAL(field, src) \
|
||||
xgene_enet_get_field_value(field ## _POS, field ## _LEN, src)
|
||||
|
||||
#define GET_BIT(field, src) \
|
||||
xgene_enet_get_field_value(field ## _POS, 1, src)
|
||||
|
||||
static inline struct device *ndev_to_dev(struct net_device *ndev)
|
||||
{
|
||||
return ndev->dev.parent;
|
||||
|
|
|
@ -184,6 +184,11 @@ static void xgene_xgmac_set_mac_addr(struct xgene_enet_pdata *pdata)
|
|||
xgene_enet_wr_mac(pdata, HSTMACADR_MSW_ADDR, addr1);
|
||||
}
|
||||
|
||||
static void xgene_xgmac_set_mss(struct xgene_enet_pdata *pdata)
|
||||
{
|
||||
xgene_enet_wr_csr(pdata, XG_TSIF_MSS_REG0_ADDR, pdata->mss);
|
||||
}
|
||||
|
||||
static u32 xgene_enet_link_status(struct xgene_enet_pdata *pdata)
|
||||
{
|
||||
u32 data;
|
||||
|
@ -204,8 +209,8 @@ static void xgene_xgmac_init(struct xgene_enet_pdata *pdata)
|
|||
data &= ~HSTLENCHK;
|
||||
xgene_enet_wr_mac(pdata, AXGMAC_CONFIG_1, data);
|
||||
|
||||
xgene_enet_wr_mac(pdata, HSTMAXFRAME_LENGTH_ADDR, 0x06000600);
|
||||
xgene_xgmac_set_mac_addr(pdata);
|
||||
xgene_xgmac_set_mss(pdata);
|
||||
|
||||
xgene_enet_rd_csr(pdata, XG_RSIF_CONFIG_REG_ADDR, &data);
|
||||
data |= CFG_RSIF_FPBUFF_TIMEOUT_EN;
|
||||
|
@ -329,6 +334,7 @@ struct xgene_mac_ops xgene_xgmac_ops = {
|
|||
.rx_disable = xgene_xgmac_rx_disable,
|
||||
.tx_disable = xgene_xgmac_tx_disable,
|
||||
.set_mac_addr = xgene_xgmac_set_mac_addr,
|
||||
.set_mss = xgene_xgmac_set_mss,
|
||||
.link_state = xgene_enet_link_state
|
||||
};
|
||||
|
||||
|
|
|
@ -62,7 +62,9 @@
|
|||
#define XCLE_BYPASS_REG0_ADDR 0x0160
|
||||
#define XCLE_BYPASS_REG1_ADDR 0x0164
|
||||
#define XG_CFG_BYPASS_ADDR 0x0204
|
||||
#define XG_CFG_LINK_AGGR_RESUME_0_ADDR 0x0214
|
||||
#define XG_LINK_STATUS_ADDR 0x0228
|
||||
#define XG_TSIF_MSS_REG0_ADDR 0x02a4
|
||||
#define XG_ENET_SPARE_CFG_REG_ADDR 0x040c
|
||||
#define XG_ENET_SPARE_CFG_REG_1_ADDR 0x0410
|
||||
#define XGENET_RX_DV_GATE_REG_0_ADDR 0x0804
|
||||
|
|
|
@ -874,6 +874,8 @@ static void atl1c_clean_tx_ring(struct atl1c_adapter *adapter,
|
|||
atl1c_clean_buffer(pdev, buffer_info);
|
||||
}
|
||||
|
||||
netdev_reset_queue(adapter->netdev);
|
||||
|
||||
/* Zero out Tx-buffers */
|
||||
memset(tpd_ring->desc, 0, sizeof(struct atl1c_tpd_desc) *
|
||||
ring_count);
|
||||
|
@ -1551,6 +1553,7 @@ static bool atl1c_clean_tx_irq(struct atl1c_adapter *adapter,
|
|||
u16 next_to_clean = atomic_read(&tpd_ring->next_to_clean);
|
||||
u16 hw_next_to_clean;
|
||||
u16 reg;
|
||||
unsigned int total_bytes = 0, total_packets = 0;
|
||||
|
||||
reg = type == atl1c_trans_high ? REG_TPD_PRI1_CIDX : REG_TPD_PRI0_CIDX;
|
||||
|
||||
|
@ -1558,12 +1561,18 @@ static bool atl1c_clean_tx_irq(struct atl1c_adapter *adapter,
|
|||
|
||||
while (next_to_clean != hw_next_to_clean) {
|
||||
buffer_info = &tpd_ring->buffer_info[next_to_clean];
|
||||
if (buffer_info->skb) {
|
||||
total_bytes += buffer_info->skb->len;
|
||||
total_packets++;
|
||||
}
|
||||
atl1c_clean_buffer(pdev, buffer_info);
|
||||
if (++next_to_clean == tpd_ring->count)
|
||||
next_to_clean = 0;
|
||||
atomic_set(&tpd_ring->next_to_clean, next_to_clean);
|
||||
}
|
||||
|
||||
netdev_completed_queue(adapter->netdev, total_packets, total_bytes);
|
||||
|
||||
if (netif_queue_stopped(adapter->netdev) &&
|
||||
netif_carrier_ok(adapter->netdev)) {
|
||||
netif_wake_queue(adapter->netdev);
|
||||
|
@ -2256,6 +2265,7 @@ static netdev_tx_t atl1c_xmit_frame(struct sk_buff *skb,
|
|||
spin_unlock_irqrestore(&adapter->tx_lock, flags);
|
||||
dev_kfree_skb_any(skb);
|
||||
} else {
|
||||
netdev_sent_queue(adapter->netdev, skb->len);
|
||||
atl1c_tx_queue(adapter, skb, tpd, type);
|
||||
spin_unlock_irqrestore(&adapter->tx_lock, flags);
|
||||
}
|
||||
|
|
|
@ -139,6 +139,16 @@ config BNX2X_SRIOV
|
|||
Virtualization support in the 578xx and 57712 products. This
|
||||
allows for virtual function acceleration in virtual environments.
|
||||
|
||||
config BNX2X_VXLAN
|
||||
bool "Virtual eXtensible Local Area Network support"
|
||||
default n
|
||||
depends on BNX2X && VXLAN && !(BNX2X=y && VXLAN=m)
|
||||
---help---
|
||||
This enables hardward offload support for VXLAN protocol over the
|
||||
NetXtremeII series adapters.
|
||||
Say Y here if you want to enable hardware offload support for
|
||||
Virtual eXtensible Local Area Network (VXLAN) in the driver.
|
||||
|
||||
config BGMAC
|
||||
tristate "BCMA bus GBit core support"
|
||||
depends on BCMA_HOST_SOC && HAS_DMA && (BCM47XX || ARCH_BCM_5301X)
|
||||
|
|
|
@ -933,6 +933,21 @@ static irqreturn_t bcm_sysport_wol_isr(int irq, void *dev_id)
|
|||
return IRQ_HANDLED;
|
||||
}
|
||||
|
||||
#ifdef CONFIG_NET_POLL_CONTROLLER
|
||||
static void bcm_sysport_poll_controller(struct net_device *dev)
|
||||
{
|
||||
struct bcm_sysport_priv *priv = netdev_priv(dev);
|
||||
|
||||
disable_irq(priv->irq0);
|
||||
bcm_sysport_rx_isr(priv->irq0, priv);
|
||||
enable_irq(priv->irq0);
|
||||
|
||||
disable_irq(priv->irq1);
|
||||
bcm_sysport_tx_isr(priv->irq1, priv);
|
||||
enable_irq(priv->irq1);
|
||||
}
|
||||
#endif
|
||||
|
||||
static struct sk_buff *bcm_sysport_insert_tsb(struct sk_buff *skb,
|
||||
struct net_device *dev)
|
||||
{
|
||||
|
@ -1723,6 +1738,9 @@ static const struct net_device_ops bcm_sysport_netdev_ops = {
|
|||
.ndo_set_features = bcm_sysport_set_features,
|
||||
.ndo_set_rx_mode = bcm_sysport_set_rx_mode,
|
||||
.ndo_set_mac_address = bcm_sysport_change_mac,
|
||||
#ifdef CONFIG_NET_POLL_CONTROLLER
|
||||
.ndo_poll_controller = bcm_sysport_poll_controller,
|
||||
#endif
|
||||
};
|
||||
|
||||
#define REV_FMT "v%2x.%02x"
|
||||
|
|
|
@ -1447,7 +1447,7 @@ static int bgmac_fixed_phy_register(struct bgmac *bgmac)
|
|||
struct phy_device *phy_dev;
|
||||
int err;
|
||||
|
||||
phy_dev = fixed_phy_register(PHY_POLL, &fphy_status, NULL);
|
||||
phy_dev = fixed_phy_register(PHY_POLL, &fphy_status, -1, NULL);
|
||||
if (!phy_dev || IS_ERR(phy_dev)) {
|
||||
bgmac_err(bgmac, "Failed to register fixed PHY device\n");
|
||||
return -ENODEV;
|
||||
|
@ -1549,11 +1549,20 @@ static int bgmac_probe(struct bcma_device *core)
|
|||
struct net_device *net_dev;
|
||||
struct bgmac *bgmac;
|
||||
struct ssb_sprom *sprom = &core->bus->sprom;
|
||||
u8 *mac = core->core_unit ? sprom->et1mac : sprom->et0mac;
|
||||
u8 *mac;
|
||||
int err;
|
||||
|
||||
/* We don't support 2nd, 3rd, ... units, SPROM has to be adjusted */
|
||||
if (core->core_unit > 1) {
|
||||
switch (core->core_unit) {
|
||||
case 0:
|
||||
mac = sprom->et0mac;
|
||||
break;
|
||||
case 1:
|
||||
mac = sprom->et1mac;
|
||||
break;
|
||||
case 2:
|
||||
mac = sprom->et2mac;
|
||||
break;
|
||||
default:
|
||||
pr_err("Unsupported core_unit %d\n", core->core_unit);
|
||||
return -ENOTSUPP;
|
||||
}
|
||||
|
@ -1588,8 +1597,17 @@ static int bgmac_probe(struct bcma_device *core)
|
|||
}
|
||||
bgmac->cmn = core->bus->drv_gmac_cmn.core;
|
||||
|
||||
bgmac->phyaddr = core->core_unit ? sprom->et1phyaddr :
|
||||
sprom->et0phyaddr;
|
||||
switch (core->core_unit) {
|
||||
case 0:
|
||||
bgmac->phyaddr = sprom->et0phyaddr;
|
||||
break;
|
||||
case 1:
|
||||
bgmac->phyaddr = sprom->et1phyaddr;
|
||||
break;
|
||||
case 2:
|
||||
bgmac->phyaddr = sprom->et2phyaddr;
|
||||
break;
|
||||
}
|
||||
bgmac->phyaddr &= BGMAC_PHY_MASK;
|
||||
if (bgmac->phyaddr == BGMAC_PHY_MASK) {
|
||||
bgmac_err(bgmac, "No PHY found\n");
|
||||
|
|
|
@ -1,6 +1,8 @@
|
|||
/* bnx2x.h: Broadcom Everest network driver.
|
||||
/* bnx2x.h: QLogic Everest network driver.
|
||||
*
|
||||
* Copyright (c) 2007-2013 Broadcom Corporation
|
||||
* Copyright (c) 2014 QLogic Corporation
|
||||
* All rights reserved
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License as published by
|
||||
|
@ -30,7 +32,7 @@
|
|||
* (you will need to reboot afterwards) */
|
||||
/* #define BNX2X_STOP_ON_ERROR */
|
||||
|
||||
#define DRV_MODULE_VERSION "1.710.51-0"
|
||||
#define DRV_MODULE_VERSION "1.712.30-0"
|
||||
#define DRV_MODULE_RELDATE "2014/02/10"
|
||||
#define BNX2X_BC_VER 0x040200
|
||||
|
||||
|
@ -1226,6 +1228,10 @@ struct bnx2x_slowpath {
|
|||
struct eth_classify_rules_ramrod_data e2;
|
||||
} mac_rdata;
|
||||
|
||||
union {
|
||||
struct eth_classify_rules_ramrod_data e2;
|
||||
} vlan_rdata;
|
||||
|
||||
union {
|
||||
struct tstorm_eth_mac_filter_config e1x;
|
||||
struct eth_filter_rules_ramrod_data e2;
|
||||
|
@ -1386,6 +1392,8 @@ enum sp_rtnl_flag {
|
|||
BNX2X_SP_RTNL_HYPERVISOR_VLAN,
|
||||
BNX2X_SP_RTNL_TX_STOP,
|
||||
BNX2X_SP_RTNL_GET_DRV_VERSION,
|
||||
BNX2X_SP_RTNL_ADD_VXLAN_PORT,
|
||||
BNX2X_SP_RTNL_DEL_VXLAN_PORT,
|
||||
};
|
||||
|
||||
enum bnx2x_iov_flag {
|
||||
|
@ -1408,6 +1416,9 @@ struct bnx2x_sp_objs {
|
|||
|
||||
/* Queue State object */
|
||||
struct bnx2x_queue_sp_obj q_obj;
|
||||
|
||||
/* VLANs object */
|
||||
struct bnx2x_vlan_mac_obj vlan_obj;
|
||||
};
|
||||
|
||||
struct bnx2x_fp_stats {
|
||||
|
@ -1422,6 +1433,13 @@ enum {
|
|||
SUB_MF_MODE_UNKNOWN = 0,
|
||||
SUB_MF_MODE_UFP,
|
||||
SUB_MF_MODE_NPAR1_DOT_5,
|
||||
SUB_MF_MODE_BD,
|
||||
};
|
||||
|
||||
struct bnx2x_vlan_entry {
|
||||
struct list_head link;
|
||||
u16 vid;
|
||||
bool hw;
|
||||
};
|
||||
|
||||
struct bnx2x {
|
||||
|
@ -1636,6 +1654,8 @@ struct bnx2x {
|
|||
u8 mf_sub_mode;
|
||||
#define IS_MF_UFP(bp) (IS_MF_SD(bp) && \
|
||||
bp->mf_sub_mode == SUB_MF_MODE_UFP)
|
||||
#define IS_MF_BD(bp) (IS_MF_SD(bp) && \
|
||||
bp->mf_sub_mode == SUB_MF_MODE_BD)
|
||||
|
||||
u8 wol;
|
||||
|
||||
|
@ -1860,8 +1880,6 @@ struct bnx2x {
|
|||
int dcb_version;
|
||||
|
||||
/* CAM credit pools */
|
||||
|
||||
/* used only in sriov */
|
||||
struct bnx2x_credit_pool_obj vlans_pool;
|
||||
|
||||
struct bnx2x_credit_pool_obj macs_pool;
|
||||
|
@ -1924,6 +1942,11 @@ struct bnx2x {
|
|||
u16 rx_filter;
|
||||
|
||||
struct bnx2x_link_report_data vf_link_vars;
|
||||
struct list_head vlan_reg;
|
||||
u16 vlan_cnt;
|
||||
u16 vlan_credit;
|
||||
u16 vxlan_dst_port;
|
||||
bool accept_any_vlan;
|
||||
};
|
||||
|
||||
/* Tx queues may be less or equal to Rx queues */
|
||||
|
@ -1951,23 +1974,14 @@ extern int num_queues;
|
|||
#define RSS_IPV6_TCP_CAP_MASK \
|
||||
TSTORM_ETH_FUNCTION_COMMON_CONFIG_RSS_IPV6_TCP_CAPABILITY
|
||||
|
||||
/* func init flags */
|
||||
#define FUNC_FLG_RSS 0x0001
|
||||
#define FUNC_FLG_STATS 0x0002
|
||||
/* removed FUNC_FLG_UNMATCHED 0x0004 */
|
||||
#define FUNC_FLG_TPA 0x0008
|
||||
#define FUNC_FLG_SPQ 0x0010
|
||||
#define FUNC_FLG_LEADING 0x0020 /* PF only */
|
||||
#define FUNC_FLG_LEADING_STATS 0x0040
|
||||
struct bnx2x_func_init_params {
|
||||
/* dma */
|
||||
dma_addr_t fw_stat_map; /* valid iff FUNC_FLG_STATS */
|
||||
dma_addr_t spq_map; /* valid iff FUNC_FLG_SPQ */
|
||||
bool spq_active;
|
||||
dma_addr_t spq_map;
|
||||
u16 spq_prod;
|
||||
|
||||
u16 func_flgs;
|
||||
u16 func_id; /* abs fid */
|
||||
u16 pf_id;
|
||||
u16 spq_prod; /* valid iff FUNC_FLG_SPQ */
|
||||
};
|
||||
|
||||
#define for_each_cnic_queue(bp, var) \
|
||||
|
@ -2077,6 +2091,11 @@ struct bnx2x_func_init_params {
|
|||
int bnx2x_set_mac_one(struct bnx2x *bp, u8 *mac,
|
||||
struct bnx2x_vlan_mac_obj *obj, bool set,
|
||||
int mac_type, unsigned long *ramrod_flags);
|
||||
|
||||
int bnx2x_set_vlan_one(struct bnx2x *bp, u16 vlan,
|
||||
struct bnx2x_vlan_mac_obj *obj, bool set,
|
||||
unsigned long *ramrod_flags);
|
||||
|
||||
/**
|
||||
* bnx2x_del_all_macs - delete all MACs configured for the specific MAC object
|
||||
*
|
||||
|
@ -2481,6 +2500,7 @@ void bnx2x_igu_clear_sb_gen(struct bnx2x *bp, u8 func, u8 idu_sb_id,
|
|||
#define VF_ACQUIRE_THRESH 3
|
||||
#define VF_ACQUIRE_MAC_FILTERS 1
|
||||
#define VF_ACQUIRE_MC_FILTERS 10
|
||||
#define VF_ACQUIRE_VLAN_FILTERS 2 /* VLAN0 + 'real' VLAN */
|
||||
|
||||
#define GOOD_ME_REG(me_reg) (((me_reg) & ME_REG_VF_VALID) && \
|
||||
(!((me_reg) & ME_REG_VF_ERR)))
|
||||
|
@ -2553,6 +2573,10 @@ void bnx2x_notify_link_changed(struct bnx2x *bp);
|
|||
(IS_MF_SD_STORAGE_PERSONALITY_ONLY(bp) || \
|
||||
IS_MF_SI_STORAGE_PERSONALITY_ONLY(bp))
|
||||
|
||||
/* Determines whether BW configuration arrives in 100Mb units or in
|
||||
* percentages from actual physical link speed.
|
||||
*/
|
||||
#define IS_MF_PERCENT_BW(bp) (IS_MF_SI(bp) || IS_MF_UFP(bp) || IS_MF_BD(bp))
|
||||
|
||||
#define SET_FLAG(value, mask, flag) \
|
||||
do {\
|
||||
|
@ -2577,6 +2601,8 @@ void bnx2x_set_local_cmng(struct bnx2x *bp);
|
|||
|
||||
void bnx2x_update_mng_version(struct bnx2x *bp);
|
||||
|
||||
void bnx2x_update_mfw_dump(struct bnx2x *bp);
|
||||
|
||||
#define MCPR_SCRATCH_BASE(bp) \
|
||||
(CHIP_IS_E1x(bp) ? MCP_REG_MCPR_SCRATCH : MCP_A_REG_MCPR_SCRATCH)
|
||||
|
||||
|
@ -2589,4 +2615,9 @@ void bnx2x_set_rx_ts(struct bnx2x *bp, struct sk_buff *skb);
|
|||
#define BNX2X_MAX_PHC_DRIFT 31000000
|
||||
#define BNX2X_PTP_TX_TIMEOUT
|
||||
|
||||
/* Re-configure all previously configured vlan filters.
|
||||
* Meant for implicit re-load flows.
|
||||
*/
|
||||
int bnx2x_vlan_reconfigure_vid(struct bnx2x *bp);
|
||||
|
||||
#endif /* bnx2x.h */
|
||||
|
|
|
@ -1,6 +1,8 @@
|
|||
/* bnx2x_cmn.c: Broadcom Everest network driver.
|
||||
/* bnx2x_cmn.c: QLogic Everest network driver.
|
||||
*
|
||||
* Copyright (c) 2007-2013 Broadcom Corporation
|
||||
* Copyright (c) 2014 QLogic Corporation
|
||||
* All rights reserved
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License as published by
|
||||
|
@ -1188,7 +1190,7 @@ u16 bnx2x_get_mf_speed(struct bnx2x *bp)
|
|||
/* Calculate the current MAX line speed limit for the MF
|
||||
* devices
|
||||
*/
|
||||
if (IS_MF_SI(bp))
|
||||
if (IS_MF_PERCENT_BW(bp))
|
||||
line_speed = (line_speed * maxCfg) / 100;
|
||||
else { /* SD mode */
|
||||
u16 vn_max_rate = maxCfg * 100;
|
||||
|
@ -2103,9 +2105,14 @@ int bnx2x_rss(struct bnx2x *bp, struct bnx2x_rss_config_obj *rss_obj,
|
|||
if (rss_obj->udp_rss_v6)
|
||||
__set_bit(BNX2X_RSS_IPV6_UDP, ¶ms.rss_flags);
|
||||
|
||||
if (!CHIP_IS_E1x(bp))
|
||||
if (!CHIP_IS_E1x(bp)) {
|
||||
/* valid only for TUNN_MODE_VXLAN tunnel mode */
|
||||
__set_bit(BNX2X_RSS_IPV4_VXLAN, ¶ms.rss_flags);
|
||||
__set_bit(BNX2X_RSS_IPV6_VXLAN, ¶ms.rss_flags);
|
||||
|
||||
/* valid only for TUNN_MODE_GRE tunnel mode */
|
||||
__set_bit(BNX2X_RSS_GRE_INNER_HDRS, ¶ms.rss_flags);
|
||||
__set_bit(BNX2X_RSS_TUNN_INNER_HDRS, ¶ms.rss_flags);
|
||||
}
|
||||
} else {
|
||||
__set_bit(BNX2X_RSS_MODE_DISABLED, ¶ms.rss_flags);
|
||||
}
|
||||
|
@ -2510,6 +2517,20 @@ static void bnx2x_bz_fp(struct bnx2x *bp, int index)
|
|||
fp->mode = TPA_MODE_DISABLED;
|
||||
}
|
||||
|
||||
void bnx2x_set_os_driver_state(struct bnx2x *bp, u32 state)
|
||||
{
|
||||
u32 cur;
|
||||
|
||||
if (!IS_MF_BD(bp) || !SHMEM2_HAS(bp, os_driver_state) || IS_VF(bp))
|
||||
return;
|
||||
|
||||
cur = SHMEM2_RD(bp, os_driver_state[BP_FW_MB_IDX(bp)]);
|
||||
DP(NETIF_MSG_IFUP, "Driver state %08x-->%08x\n",
|
||||
cur, state);
|
||||
|
||||
SHMEM2_WR(bp, os_driver_state[BP_FW_MB_IDX(bp)], state);
|
||||
}
|
||||
|
||||
int bnx2x_load_cnic(struct bnx2x *bp)
|
||||
{
|
||||
int i, rc, port = BP_PORT(bp);
|
||||
|
@ -2827,6 +2848,11 @@ int bnx2x_nic_load(struct bnx2x *bp, int load_mode)
|
|||
|
||||
/* Start fast path */
|
||||
|
||||
/* Re-configure vlan filters */
|
||||
rc = bnx2x_vlan_reconfigure_vid(bp);
|
||||
if (rc)
|
||||
LOAD_ERROR_EXIT(bp, load_error3);
|
||||
|
||||
/* Initialize Rx filter. */
|
||||
bnx2x_set_rx_mode_inner(bp);
|
||||
|
||||
|
@ -2873,6 +2899,8 @@ int bnx2x_nic_load(struct bnx2x *bp, int load_mode)
|
|||
/* mark driver is loaded in shmem2 */
|
||||
u32 val;
|
||||
val = SHMEM2_RD(bp, drv_capabilities_flag[BP_FW_MB_IDX(bp)]);
|
||||
val &= ~DRV_FLAGS_MTU_MASK;
|
||||
val |= (bp->dev->mtu << DRV_FLAGS_MTU_SHIFT);
|
||||
SHMEM2_WR(bp, drv_capabilities_flag[BP_FW_MB_IDX(bp)],
|
||||
val | DRV_FLAGS_CAPABILITIES_LOADED_SUPPORTED |
|
||||
DRV_FLAGS_CAPABILITIES_LOADED_L2);
|
||||
|
@ -2885,10 +2913,17 @@ int bnx2x_nic_load(struct bnx2x *bp, int load_mode)
|
|||
return -EBUSY;
|
||||
}
|
||||
|
||||
/* Update driver data for On-Chip MFW dump. */
|
||||
if (IS_PF(bp))
|
||||
bnx2x_update_mfw_dump(bp);
|
||||
|
||||
/* If PMF - send ADMIN DCBX msg to MFW to initiate DCBX FSM */
|
||||
if (bp->port.pmf && (bp->state != BNX2X_STATE_DIAG))
|
||||
bnx2x_dcbx_init(bp, false);
|
||||
|
||||
if (!IS_MF_SD_STORAGE_PERSONALITY_ONLY(bp))
|
||||
bnx2x_set_os_driver_state(bp, OS_DRIVER_STATE_ACTIVE);
|
||||
|
||||
DP(NETIF_MSG_IFUP, "Ending successfully NIC load\n");
|
||||
|
||||
return 0;
|
||||
|
@ -2956,6 +2991,9 @@ int bnx2x_nic_unload(struct bnx2x *bp, int unload_mode, bool keep_link)
|
|||
|
||||
DP(NETIF_MSG_IFUP, "Starting NIC unload\n");
|
||||
|
||||
if (!IS_MF_SD_STORAGE_PERSONALITY_ONLY(bp))
|
||||
bnx2x_set_os_driver_state(bp, OS_DRIVER_STATE_DISABLED);
|
||||
|
||||
/* mark driver is unloaded in shmem2 */
|
||||
if (IS_PF(bp) && SHMEM2_HAS(bp, drv_capabilities_flag)) {
|
||||
u32 val;
|
||||
|
@ -3677,7 +3715,7 @@ static void bnx2x_update_pbds_gso_enc(struct sk_buff *skb,
|
|||
pbd2->fw_ip_hdr_to_payload_w =
|
||||
hlen_w - ((sizeof(struct ipv6hdr)) >> 1);
|
||||
pbd_e2->data.tunnel_data.flags |=
|
||||
ETH_TUNNEL_DATA_IP_HDR_TYPE_OUTER;
|
||||
ETH_TUNNEL_DATA_IPV6_OUTER;
|
||||
}
|
||||
|
||||
pbd2->tcp_send_seq = bswab32(inner_tcp_hdr(skb)->seq);
|
||||
|
@@ -4184,6 +4222,41 @@ netdev_tx_t bnx2x_start_xmit(struct sk_buff *skb, struct net_device *dev)
     return NETDEV_TX_OK;
 }
 
+void bnx2x_get_c2s_mapping(struct bnx2x *bp, u8 *c2s_map, u8 *c2s_default)
+{
+    int mfw_vn = BP_FW_MB_IDX(bp);
+    u32 tmp;
+
+    /* If the shmem shouldn't affect configuration, reflect */
+    if (!IS_MF_BD(bp)) {
+        int i;
+
+        for (i = 0; i < BNX2X_MAX_PRIORITY; i++)
+            c2s_map[i] = i;
+        *c2s_default = 0;
+
+        return;
+    }
+
+    tmp = SHMEM2_RD(bp, c2s_pcp_map_lower[mfw_vn]);
+    tmp = (__force u32)be32_to_cpu((__force __be32)tmp);
+    c2s_map[0] = tmp & 0xff;
+    c2s_map[1] = (tmp >> 8) & 0xff;
+    c2s_map[2] = (tmp >> 16) & 0xff;
+    c2s_map[3] = (tmp >> 24) & 0xff;
+
+    tmp = SHMEM2_RD(bp, c2s_pcp_map_upper[mfw_vn]);
+    tmp = (__force u32)be32_to_cpu((__force __be32)tmp);
+    c2s_map[4] = tmp & 0xff;
+    c2s_map[5] = (tmp >> 8) & 0xff;
+    c2s_map[6] = (tmp >> 16) & 0xff;
+    c2s_map[7] = (tmp >> 24) & 0xff;
+
+    tmp = SHMEM2_RD(bp, c2s_pcp_map_default[mfw_vn]);
+    tmp = (__force u32)be32_to_cpu((__force __be32)tmp);
+    *c2s_default = (tmp >> (8 * mfw_vn)) & 0xff;
+}
+
 /**
  * bnx2x_setup_tc - routine to configure net_device for multi tc
  *
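
The new bnx2x_get_c2s_mapping() above unpacks two 32-bit management-FW words into an eight-entry inner-to-outer PCP map, one byte per inner priority. A stand-alone C sketch of the same unpacking, with hypothetical values standing in for the shmem reads and without the be32_to_cpu fix-up the driver applies first:

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative only: decode two packed 32-bit words (lower = inner
     * priorities 0-3, upper = 4-7) into an 8-entry map, one byte each,
     * mirroring the unpacking done by bnx2x_get_c2s_mapping(). */
    static void decode_c2s_map(uint32_t lower, uint32_t upper, uint8_t map[8])
    {
        int i;

        for (i = 0; i < 4; i++) {
            map[i] = (lower >> (8 * i)) & 0xff;
            map[i + 4] = (upper >> (8 * i)) & 0xff;
        }
    }

    int main(void)
    {
        uint8_t map[8];
        int i;

        /* Hypothetical values in place of the c2s_pcp_map shmem words. */
        decode_c2s_map(0x03020100, 0x07060504, map);
        for (i = 0; i < 8; i++)
            printf("inner PCP %d -> outer PCP %u\n", i, map[i]);
        return 0;
    }
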
@@ -4194,8 +4267,9 @@ netdev_tx_t bnx2x_start_xmit(struct sk_buff *skb, struct net_device *dev)
  */
 int bnx2x_setup_tc(struct net_device *dev, u8 num_tc)
 {
-    int cos, prio, count, offset;
     struct bnx2x *bp = netdev_priv(dev);
+    u8 c2s_map[BNX2X_MAX_PRIORITY], c2s_def;
+    int cos, prio, count, offset;
 
     /* setup tc must be called under rtnl lock */
     ASSERT_RTNL();
@@ -4219,12 +4293,16 @@ int bnx2x_setup_tc(struct net_device *dev, u8 num_tc)
         return -EINVAL;
     }
 
+    bnx2x_get_c2s_mapping(bp, c2s_map, &c2s_def);
+
     /* configure priority to traffic class mapping */
     for (prio = 0; prio < BNX2X_MAX_PRIORITY; prio++) {
-        netdev_set_prio_tc_map(dev, prio, bp->prio_to_cos[prio]);
+        int outer_prio = c2s_map[prio];
+
+        netdev_set_prio_tc_map(dev, prio, bp->prio_to_cos[outer_prio]);
         DP(BNX2X_MSG_SP | NETIF_MSG_IFUP,
            "mapping priority %d to tc %d\n",
-           prio, bp->prio_to_cos[prio]);
+           outer_prio, bp->prio_to_cos[outer_prio]);
     }
 
     /* Use this configuration to differentiate tc0 from other COSes
@@ -4278,6 +4356,9 @@ int bnx2x_change_mac_addr(struct net_device *dev, void *p)
     if (netif_running(dev))
         rc = bnx2x_set_eth_mac(bp, true);
 
+    if (IS_PF(bp) && SHMEM2_HAS(bp, curr_cfg))
+        SHMEM2_WR(bp, curr_cfg, CURR_CFG_MET_OS);
+
     return rc;
 }
 
@@ -4831,6 +4912,9 @@ int bnx2x_change_mtu(struct net_device *dev, int new_mtu)
      */
     dev->mtu = new_mtu;
 
+    if (IS_PF(bp) && SHMEM2_HAS(bp, curr_cfg))
+        SHMEM2_WR(bp, curr_cfg, CURR_CFG_MET_OS);
+
     return bnx2x_reload_if_running(dev);
 }

drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h
@@ -1,6 +1,8 @@
-/* bnx2x_cmn.h: Broadcom Everest network driver.
+/* bnx2x_cmn.h: QLogic Everest network driver.
  *
  * Copyright (c) 2007-2013 Broadcom Corporation
+ * Copyright (c) 2014 QLogic Corporation
+ * All rights reserved
  *
  * This program is free software; you can redistribute it and/or modify
  * it under the terms of the GNU General Public License as published by
@@ -620,6 +622,14 @@ int bnx2x_set_features(struct net_device *dev, netdev_features_t features);
  */
 void bnx2x_tx_timeout(struct net_device *dev);
 
+/** bnx2x_get_c2s_mapping - read inner-to-outer vlan configuration
+ * c2s_map should have BNX2X_MAX_PRIORITY entries.
+ * @bp:		driver handle
+ * @c2s_map:	should have BNX2X_MAX_PRIORITY entries for mapping
+ * @c2s_default: entry for non-tagged configuration
+ */
+void bnx2x_get_c2s_mapping(struct bnx2x *bp, u8 *c2s_map, u8 *c2s_default);
+
 /*********************** Inlines **********************************/
 /*********************** Fast path ********************************/
 static inline void bnx2x_update_fpsb_idx(struct bnx2x_fastpath *fp)
@@ -931,14 +941,35 @@ static inline int bnx2x_func_start(struct bnx2x *bp)
     start_params->mf_mode = bp->mf_mode;
     start_params->sd_vlan_tag = bp->mf_ov;
 
+    /* Configure Ethertype for BD mode */
+    if (IS_MF_BD(bp)) {
+        DP(NETIF_MSG_IFUP, "Configuring ethertype 0x88a8 for BD\n");
+        start_params->sd_vlan_eth_type = ETH_P_8021AD;
+        REG_WR(bp, PRS_REG_VLAN_TYPE_0, ETH_P_8021AD);
+        REG_WR(bp, PBF_REG_VLAN_TYPE_0, ETH_P_8021AD);
+        REG_WR(bp, NIG_REG_LLH_E1HOV_TYPE_1, ETH_P_8021AD);
+
+        bnx2x_get_c2s_mapping(bp, start_params->c2s_pri,
+                              &start_params->c2s_pri_default);
+        start_params->c2s_pri_valid = 1;
+
+        DP(NETIF_MSG_IFUP,
+           "Inner-to-Outer priority: %02x %02x %02x %02x %02x %02x %02x %02x [Default %02x]\n",
+           start_params->c2s_pri[0], start_params->c2s_pri[1],
+           start_params->c2s_pri[2], start_params->c2s_pri[3],
+           start_params->c2s_pri[4], start_params->c2s_pri[5],
+           start_params->c2s_pri[6], start_params->c2s_pri[7],
+           start_params->c2s_pri_default);
+    }
+
     if (CHIP_IS_E2(bp) || CHIP_IS_E3(bp))
         start_params->network_cos_mode = STATIC_COS;
     else /* CHIP_IS_E1X */
         start_params->network_cos_mode = FW_WRR;
 
-    start_params->tunnel_mode = TUNN_MODE_GRE;
-    start_params->gre_tunnel_type = IPGRE_TUNNEL;
-    start_params->inner_gre_rss_en = 1;
+    start_params->vxlan_dst_port = bp->vxlan_dst_port;
+
+    start_params->inner_rss = 1;
 
     if (IS_MF_UFP(bp) && BNX2X_IS_MF_SD_PROTOCOL_FCOE(bp)) {
         start_params->class_fail_ethtype = ETH_P_FIP;
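
In the BD-mode block above, ETH_P_8021AD (0x88a8) is the IEEE 802.1ad service-tag (S-tag) ethertype used for the outer VLAN in QinQ framing, and the c2s map read via bnx2x_get_c2s_mapping() tells the firmware which outer (S-tag) PCP to use for each inner (C-tag) PCP. A stand-alone sketch, not driver code, of the two tags in a double-tagged header:

    #include <stdint.h>
    #include <stdio.h>
    #include <arpa/inet.h>   /* htons(), ntohs() */

    /* Illustrative QinQ tag layout: outer S-tag (0x88a8) followed by
     * inner C-tag (0x8100); TCI packs PCP (3 bits), DEI (1), VID (12). */
    struct vlan_tag {
        uint16_t tpid;
        uint16_t tci;
    } __attribute__((packed));

    int main(void)
    {
        struct vlan_tag s_tag = { htons(0x88a8), htons((5 << 13) | 100) };
        struct vlan_tag c_tag = { htons(0x8100), htons((3 << 13) | 200) };

        printf("outer PCP %u VID %u, inner PCP %u VID %u\n",
               (ntohs(s_tag.tci) >> 13) & 0x7, ntohs(s_tag.tci) & 0xfff,
               (ntohs(c_tag.tci) >> 13) & 0x7, ntohs(c_tag.tci) & 0xfff);
        return 0;
    }
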
@@ -1037,6 +1068,15 @@ static inline void bnx2x_init_vlan_mac_fp_objs(struct bnx2x_fastpath *fp,
                BNX2X_FILTER_MAC_PENDING,
                &bp->sp_state, obj_type,
                &bp->macs_pool);
+
+    if (!CHIP_IS_E1x(bp))
+        bnx2x_init_vlan_obj(bp, &bnx2x_sp_obj(bp, fp).vlan_obj,
+                    fp->cl_id, fp->cid, BP_FUNC(bp),
+                    bnx2x_sp(bp, vlan_rdata),
+                    bnx2x_sp_mapping(bp, vlan_rdata),
+                    BNX2X_FILTER_VLAN_PENDING,
+                    &bp->sp_state, obj_type,
+                    &bp->vlans_pool);
 }
 
 /**
@@ -1096,7 +1136,7 @@ static inline void bnx2x_init_bp_objs(struct bnx2x *bp)
     bnx2x_init_mac_credit_pool(bp, &bp->macs_pool, BP_FUNC(bp),
                    bnx2x_get_path_func_num(bp));
 
-    bnx2x_init_vlan_credit_pool(bp, &bp->vlans_pool, BP_ABS_FUNC(bp)>>1,
+    bnx2x_init_vlan_credit_pool(bp, &bp->vlans_pool, BP_FUNC(bp),
                     bnx2x_get_path_func_num(bp));
 
     /* RSS configuration object */
@@ -1106,6 +1146,8 @@ static inline void bnx2x_init_bp_objs(struct bnx2x *bp)
               bnx2x_sp_mapping(bp, rss_rdata),
               BNX2X_FILTER_RSS_CONF_PENDING, &bp->sp_state,
               BNX2X_OBJ_TYPE_RX);
+
+    bp->vlan_credit = PF_VLAN_CREDIT_E2(bp, bnx2x_get_path_func_num(bp));
 }
 
 static inline u8 bnx2x_fp_qzone_id(struct bnx2x_fastpath *fp)
@@ -1339,4 +1381,23 @@ void bnx2x_squeeze_objects(struct bnx2x *bp);
 void bnx2x_schedule_sp_rtnl(struct bnx2x*, enum sp_rtnl_flag,
                 u32 verbose);
 
+/**
+ * bnx2x_set_os_driver_state - write driver state for management FW usage
+ *
+ * @bp:		driver handle
+ * @state:	OS_DRIVER_STATE_* value reflecting current driver state
+ */
+void bnx2x_set_os_driver_state(struct bnx2x *bp, u32 state);
+
+/**
+ * bnx2x_nvram_read - reads data from nvram [might sleep]
+ *
+ * @bp:		driver handle
+ * @offset:	byte offset in nvram
+ * @ret_buf:	pointer to buffer where data is to be stored
+ * @buf_size:	Length of 'ret_buf' in bytes
+ */
+int bnx2x_nvram_read(struct bnx2x *bp, u32 offset, u8 *ret_buf,
+		     int buf_size);
+
 #endif /* BNX2X_CMN_H */
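
bnx2x_nvram_read() is now exported through this header instead of being static in bnx2x_ethtool.c (see the hunk in that file further down). A hypothetical caller sketch, assuming it lives in a bnx2x source file that already includes the driver's own headers and that bp is a valid device handle:

    /* Hypothetical caller (not from the patch): read 64 bytes of NVRAM
     * starting at byte offset 0x100 and bail out on failure. */
    static int example_read_nvram_block(struct bnx2x *bp)
    {
        u8 buf[64];
        int rc;

        rc = bnx2x_nvram_read(bp, 0x100, buf, sizeof(buf));
        if (rc) {
            BNX2X_ERR("NVRAM read failed, rc %d\n", rc);
            return rc;
        }

        /* buf[] now holds the requested bytes. */
        return 0;
    }
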

drivers/net/ethernet/broadcom/bnx2x/bnx2x_dcb.c
@@ -1,15 +1,17 @@
-/* bnx2x_dcb.c: Broadcom Everest network driver.
+/* bnx2x_dcb.c: QLogic Everest network driver.
  *
  * Copyright 2009-2013 Broadcom Corporation
+ * Copyright 2014 QLogic Corporation
+ * All rights reserved
  *
- * Unless you and Broadcom execute a separate written software license
+ * Unless you and QLogic execute a separate written software license
  * agreement governing use of this software, this software is licensed to you
  * under the terms of the GNU General Public License version 2, available
  * at http://www.gnu.org/licenses/old-licenses/gpl-2.0.html (the "GPL").
  *
  * Notwithstanding the above, under no circumstances may you combine this
- * software in any way with any other Broadcom software provided under a
- * license other than the GPL, without Broadcom's express prior written
+ * software in any way with any other QLogic software provided under a
+ * license other than the GPL, without QLogic's express prior written
  * consent.
  *
  * Maintained by: Ariel Elior <ariel.elior@qlogic.com>
@@ -1850,6 +1852,8 @@ static void bnx2x_dcbx_fw_struct(struct bnx2x *bp,
             if (bp->dcbx_port_params.ets.cos_params[cos].
                 pri_bitmask & pri_bit)
                 tt2cos[pri].cos = cos;
+
+        pfc_fw_cfg->dcb_outer_pri[pri] = ttp[pri];
     }
 
     /* we never want the FW to add a 0 vlan tag */

drivers/net/ethernet/broadcom/bnx2x/bnx2x_dcb.h
@@ -1,15 +1,17 @@
-/* bnx2x_dcb.h: Broadcom Everest network driver.
+/* bnx2x_dcb.h: QLogic Everest network driver.
  *
  * Copyright 2009-2013 Broadcom Corporation
+ * Copyright 2014 QLogic Corporation
+ * All rights reserved
  *
- * Unless you and Broadcom execute a separate written software license
+ * Unless you and QLogic execute a separate written software license
  * agreement governing use of this software, this software is licensed to you
  * under the terms of the GNU General Public License version 2, available
  * at http://www.gnu.org/licenses/old-licenses/gpl-2.0.html (the "GPL").
  *
  * Notwithstanding the above, under no circumstances may you combine this
- * software in any way with any other Broadcom software provided under a
- * license other than the GPL, without Broadcom's express prior written
+ * software in any way with any other QLogic software provided under a
+ * license other than the GPL, without QLogic's express prior written
  * consent.
  *
  * Maintained by: Ariel Elior <ariel.elior@qlogic.com>

drivers/net/ethernet/broadcom/bnx2x/bnx2x_dump.h
@@ -1,15 +1,17 @@
-/* bnx2x_dump.h: Broadcom Everest network driver.
+/* bnx2x_dump.h: QLogic Everest network driver.
  *
  * Copyright (c) 2012-2013 Broadcom Corporation
+ * Copyright (c) 2014 QLogic Corporation
+ * All rights reserved
  *
- * Unless you and Broadcom execute a separate written software license
+ * Unless you and QLogic execute a separate written software license
  * agreement governing use of this software, this software is licensed to you
  * under the terms of the GNU General Public License version 2, available
  * at http://www.gnu.org/licenses/old-licenses/gpl-2.0.html (the "GPL").
  *
  * Notwithstanding the above, under no circumstances may you combine this
- * software in any way with any other Broadcom software provided under a
- * license other than the GPL, without Broadcom's express prior written
+ * software in any way with any other QLogic software provided under a
+ * license other than the GPL, without QLogic's express prior written
  * consent.
  */
 

drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c
@@ -1,6 +1,8 @@
-/* bnx2x_ethtool.c: Broadcom Everest network driver.
+/* bnx2x_ethtool.c: QLogic Everest network driver.
  *
  * Copyright (c) 2007-2013 Broadcom Corporation
+ * Copyright (c) 2014 QLogic Corporation
+ * All rights reserved
  *
  * This program is free software; you can redistribute it and/or modify
  * it under the terms of the GNU General Public License as published by
@@ -1129,6 +1131,9 @@ static int bnx2x_set_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
     } else
         bp->wol = 0;
 
+    if (SHMEM2_HAS(bp, curr_cfg))
+        SHMEM2_WR(bp, curr_cfg, CURR_CFG_MET_OS);
+
     return 0;
 }
 
@@ -1343,8 +1348,8 @@ static int bnx2x_nvram_read_dword(struct bnx2x *bp, u32 offset, __be32 *ret_val,
     return rc;
 }
 
-static int bnx2x_nvram_read(struct bnx2x *bp, u32 offset, u8 *ret_buf,
-                int buf_size)
+int bnx2x_nvram_read(struct bnx2x *bp, u32 offset, u8 *ret_buf,
+             int buf_size)
 {
     int rc;
     u32 cmd_flags;
@@ -3578,17 +3583,8 @@ static int bnx2x_get_ts_info(struct net_device *dev,
 
         info->rx_filters = (1 << HWTSTAMP_FILTER_NONE) |
                    (1 << HWTSTAMP_FILTER_PTP_V1_L4_EVENT) |
-                   (1 << HWTSTAMP_FILTER_PTP_V1_L4_SYNC) |
-                   (1 << HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ) |
                    (1 << HWTSTAMP_FILTER_PTP_V2_L4_EVENT) |
-                   (1 << HWTSTAMP_FILTER_PTP_V2_L4_SYNC) |
-                   (1 << HWTSTAMP_FILTER_PTP_V2_L4_DELAY_REQ) |
-                   (1 << HWTSTAMP_FILTER_PTP_V2_L2_EVENT) |
-                   (1 << HWTSTAMP_FILTER_PTP_V2_L2_SYNC) |
-                   (1 << HWTSTAMP_FILTER_PTP_V2_L2_DELAY_REQ) |
-                   (1 << HWTSTAMP_FILTER_PTP_V2_EVENT) |
-                   (1 << HWTSTAMP_FILTER_PTP_V2_SYNC) |
-                   (1 << HWTSTAMP_FILTER_PTP_V2_DELAY_REQ);
+                   (1 << HWTSTAMP_FILTER_PTP_V2_EVENT);
 
         info->tx_types = (1 << HWTSTAMP_TX_OFF)|(1 << HWTSTAMP_TX_ON);
 
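
With the list above trimmed, the driver advertises only the generic event filters; an application requests one of them (the driver may coerce a more specific filter into a broader one) through the standard SIOCSHWTSTAMP ioctl. A minimal user-space sketch, assuming the interface is named "eth0":

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <linux/net_tstamp.h>
    #include <linux/sockios.h>

    int main(void)
    {
        struct hwtstamp_config cfg;
        struct ifreq ifr;
        int fd;

        fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        memset(&cfg, 0, sizeof(cfg));
        cfg.tx_type = HWTSTAMP_TX_ON;
        cfg.rx_filter = HWTSTAMP_FILTER_PTP_V2_EVENT;

        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);  /* assumed name */
        ifr.ifr_data = (void *)&cfg;

        if (ioctl(fd, SIOCSHWTSTAMP, &ifr) < 0)
            perror("SIOCSHWTSTAMP");
        else
            printf("rx_filter granted: %d\n", cfg.rx_filter);

        close(fd);
        return 0;
    }
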

drivers/net/ethernet/broadcom/bnx2x/bnx2x_fw_defs.h
@@ -1,6 +1,8 @@
-/* bnx2x_fw_defs.h: Broadcom Everest network driver.
+/* bnx2x_fw_defs.h: Qlogic Everest network driver.
  *
  * Copyright (c) 2007-2013 Broadcom Corporation
+ * Copyright (c) 2014 QLogic Corporation
+ * All rights reserved
  *
  * This program is free software; you can redistribute it and/or modify
  * it under the terms of the GNU General Public License as published by
@@ -372,7 +374,7 @@
 #define MAX_COS_NUMBER 4
 #define MAX_TRAFFIC_TYPES 8
 #define MAX_PFC_PRIORITIES 8
-
+#define MAX_VLAN_PRIORITIES 8
 /* used by array traffic_type_to_priority[] to mark traffic type \
 	that is not mapped to priority*/
 #define LLFC_TRAFFIC_TYPE_TO_PRIORITY_UNMAPPED 0xFF

Some files were not shown because too many files have changed in this diff.