Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next-2.6

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next-2.6: (1674 commits)
  qlcnic: adding co maintainer
  ixgbe: add support for active DA cables
  ixgbe: dcb, do not tag tc_prio_control frames
  ixgbe: fix ixgbe_tx_is_paused logic
  ixgbe: always enable vlan strip/insert when DCB is enabled
  ixgbe: remove some redundant code in setting FCoE FIP filter
  ixgbe: fix wrong offset to fc_frame_header in ixgbe_fcoe_ddp
  ixgbe: fix header len when unsplit packet overflows to data buffer
  ipv6: Never schedule DAD timer on dead address
  ipv6: Use POSTDAD state
  ipv6: Use state_lock to protect ifa state
  ipv6: Replace inet6_ifaddr->dead with state
  cxgb4: notify upper drivers if the device is already up when they load
  cxgb4: keep interrupts available when the ports are brought down
  cxgb4: fix initial addition of MAC address
  cnic: Return SPQ credit to bnx2x after ring setup and shutdown.
  cnic: Convert cnic_local_flags to atomic ops.
  can: Fix SJA1000 command register writes on SMP systems
  bridge: fix build for CONFIG_SYSFS disabled
  ARCNET: Limit com20020 PCI ID matches for SOHARD cards
  ...

Fix up various conflicts with pcmcia tree drivers/net/
{pcmcia/3c589_cs.c, wireless/orinoco/orinoco_cs.c and
wireless/orinoco/spectrum_cs.c} and feature removal
(Documentation/feature-removal-schedule.txt).

Also fix a non-content conflict due to pm_qos_requirement getting
renamed in the PM tree (now pm_qos_request) in net/mac80211/scan.c.
Merged by Linus Torvalds on 2010-05-20 21:04:44 -07:00 as commit f8965467f3; 1455 changed files with 95973 additions and 48342 deletions.


@ -0,0 +1,29 @@
rfkill - radio frequency (RF) connector kill switch support
For details on this subsystem, see Documentation/rfkill.txt.
What: /sys/class/rfkill/rfkill[0-9]+/state
Date: 09-Jul-2007
KernelVersion: v2.6.22
Contact: linux-wireless@vger.kernel.org
Description: Current state of the transmitter.
This file is deprecated and scheduled to be removed in 2014,
because it is not possible to express the 'soft and hard block'
state of the rfkill driver.
Values: A numeric value.
0: RFKILL_STATE_SOFT_BLOCKED
transmitter is turned off by software
1: RFKILL_STATE_UNBLOCKED
transmitter is (potentially) active
2: RFKILL_STATE_HARD_BLOCKED
transmitter is forced off by something outside of
the driver's control.
What: /sys/class/rfkill/rfkill[0-9]+/claim
Date: 09-Jul-2007
KernelVersion: v2.6.22
Contact: linux-wireless@vger.kernel.org
Description: This file is deprecated because there is no longer a way to
claim control over just a single rfkill instance.
This file is scheduled to be removed in 2012.
Values: 0: Kernel handles events


@ -0,0 +1,67 @@
rfkill - radio frequency (RF) connector kill switch support
For details on this subsystem, see Documentation/rfkill.txt.
For the deprecated /sys/class/rfkill/*/state and
/sys/class/rfkill/*/claim knobs of this interface, see
Documentation/ABI/obsolete/sysfs-class-rfkill.
What: /sys/class/rfkill
Date: 09-Jul-2007
KernelVersion: v2.6.22
Contact: linux-wireless@vger.kernel.org
Description: The rfkill class subsystem folder.
Each registered rfkill driver is represented by an rfkillX
subfolder (X being an integer >= 0).
What: /sys/class/rfkill/rfkill[0-9]+/name
Date: 09-Jul-2007
KernelVersion: v2.6.22
Contact: linux-wireless@vger.kernel.org
Description: Name assigned by driver to this key (interface or driver name).
Values: arbitrary string.
What: /sys/class/rfkill/rfkill[0-9]+/type
Date: 09-Jul-2007
KernelVersion: v2.6.22
Contact: linux-wireless@vger.kernel.org
Description: Driver type string ("wlan", "bluetooth", etc).
Values: See include/linux/rfkill.h.
What: /sys/class/rfkill/rfkill[0-9]+/persistent
Date: 09-Jul-2007
KernelVersion: v2.6.22
Contact: linux-wireless@vger.kernel.org
Description: Whether the soft blocked state is initialised from non-volatile
storage at startup.
Values: A numeric value.
0: false
1: true
What: /sys/class/rfkill/rfkill[0-9]+/hard
Date: 12-Mar-2010
KernelVersion: v2.6.34
Contact: linux-wireless@vger.kernel.org
Description: Current hardblock state. This file is read only.
Values: A numeric value.
0: inactive
The transmitter is (potentially) active.
1: active
The transmitter is forced off by something outside of
the driver's control.
What: /sys/class/rfkill/rfkill[0-9]+/soft
Date: 12-Mar-2010
KernelVersion: v2.6.34
Contact: linux-wireless@vger.kernel.org
Description: Current softblock state. This file is read and write.
Values: A numeric value.
0: inactive
The transmitter is (potentially) active.
1: active
The transmitter is turned off by software.
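The hard and soft knobs above can be driven with plain file I/O. A minimal
userspace sketch, assuming an rfkill0 instance exists (real code should
enumerate /sys/class/rfkill/) and that the caller has privileges for the
write:

#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/sys/class/rfkill/rfkill0/hard", "r");
	int hard = 0;

	if (f) {
		fscanf(f, "%d", &hard);
		fclose(f);
		printf("hard block: %s\n", hard ? "active" : "inactive");
	}
	/* Writing 1 to 'soft' soft-blocks the transmitter, 0 unblocks. */
	f = fopen("/sys/class/rfkill/rfkill0/soft", "w");
	if (f) {
		fputs("1\n", f);
		fclose(f);
	}
	return 0;
}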


@ -49,7 +49,7 @@ o oprofile 0.9 # oprofiled --version
o udev 081 # udevinfo -V
o grub 0.93 # grub --version
o mcelog 0.6
o iptables 1.4.1 # iptables -V
o iptables 1.4.2 # iptables -V
Kernel compilation


@ -241,16 +241,6 @@ Who: Thomas Gleixner <tglx@linutronix.de>
---------------------------
What (Why):
- xt_recent: the old ipt_recent proc dir
(superseded by /proc/net/xt_recent)
When: January 2009 or Linux 2.7.0, whichever comes first
Why: Superseded by newer revisions or modules
Who: Jan Engelhardt <jengelh@computergmbh.de>
---------------------------
What: GPIO autorequest on gpio_direction_{input,output}() in gpiolib
When: February 2010
Why: All callers should use explicit gpio_request()/gpio_free().
@ -520,6 +510,24 @@ Who: Hans de Goede <hdegoede@redhat.com>
----------------------------
What: sysfs-class-rfkill state file
When: Feb 2014
Files: net/rfkill/core.c
Why: Documented as obsolete since Feb 2010. This file is limited to 3
states while the rfkill drivers can have 4 states.
Who: anybody or Florian Mickler <florian@mickler.org>
----------------------------
What: sysfs-class-rfkill claim file
When: Feb 2012
Files: net/rfkill/core.c
Why: It has not been possible to claim an rfkill driver since 2007. This is
documented as obsolete since Feb 2010.
Who: anybody or Florian Mickler <florian@mickler.org>
----------------------------
What: capifs
When: February 2011
Files: drivers/isdn/capi/capifs.*
@ -579,6 +587,35 @@ Who: Len Brown <len.brown@intel.com>
----------------------------
What: iwlwifi 50XX module parameters
When: 2.6.40
Why: The "..50" module parameters were used to configure 5000 series and
up devices; a different set of module parameters is also available for the
4965 with the same functionality. Consolidate both sets into a single place
in drivers/net/wireless/iwlwifi/iwl-agn.c
Who: Wey-Yi Guy <wey-yi.w.guy@intel.com>
----------------------------
What: iwl4965 alias support
When: 2.6.40
Why: Internal alias support has been present in module-init-tools for some
time; the MODULE_ALIAS("iwl4965") boilerplate aliases can be removed
with no impact.
Who: Wey-Yi Guy <wey-yi.w.guy@intel.com>
---------------------------
What: xt_NOTRACK
Files: net/netfilter/xt_NOTRACK.c
When: April 2011
Why: Superseded by xt_CT
Who: Netfilter developer team <netfilter-devel@vger.kernel.org>
---------------------------
What: video4linux /dev/vtx teletext API support
When: 2.6.35
Files: drivers/media/video/saa5246a.c drivers/media/video/saa5249.c


@ -0,0 +1,212 @@
Linux CAIF
===========
Copyright (C) ST-Ericsson AB 2010
Author: Sjur Brendeland / sjur.brandeland@stericsson.com
License terms: GNU General Public License (GPL) version 2
Introduction
------------
CAIF is a MUX protocol used by ST-Ericsson cellular modems for
communication between Modem and host. The host processes can open virtual AT
channels, initiate GPRS Data connections, Video channels and Utility Channels.
The Utility Channels are general purpose pipes between modem and host.
ST-Ericsson modems support a number of transports between modem
and host. Currently, UART and Loopback are available for Linux.
Architecture:
------------
The implementation of CAIF is divided into:
* CAIF Socket Layer, Kernel API, and Net Device.
* CAIF Core Protocol Implementation
* CAIF Link Layer, implemented as NET devices.
RTNL
!
! +------+ +------+ +------+
! +------+! +------+! +------+!
! ! Sock !! !Kernel!! ! Net !!
! ! API !+ ! API !+ ! Dev !+ <- CAIF Client APIs
! +------+ +------! +------+
! ! ! !
! +----------!----------+
! +------+ <- CAIF Protocol Implementation
+-------> ! CAIF !
! Core !
+------+
+--------!--------+
! !
+------+ +-----+
! ! ! TTY ! <- Link Layer (Net Devices)
+------+ +-----+
Using the Kernel API
----------------------
The Kernel API is used for accessing CAIF channels from the kernel.
The user of the API has to implement two callbacks for receive and
control.
The receive callback gives a CAIF packet as an SKB. The control
callback notifies of channel initialization complete, and
flow-on/flow-off.
struct caif_device caif_dev = {
	.caif_config = {
		.name = "MYDEV",
		.type = CAIF_CHTY_AT,
	},
	.receive_cb = my_receive,
	.control_cb = my_control,
};
caif_add_device(&caif_dev);
caif_transmit(&caif_dev, skb);
See caif_kernel.h for details about the CAIF kernel API.
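A sketch of the two callbacks referenced above. The prototypes and the
CAIF_CONTROL_* values are assumptions for illustration only; the
authoritative definitions are in caif_kernel.h:

/* Hypothetical callback sketches; see caif_kernel.h for the real
 * prototypes and control-event values. */
static void my_receive(struct caif_device *dev, struct sk_buff *skb)
{
	/* A received CAIF packet arrives as an SKB; consume or free it. */
	pr_info("CAIF rx: %u bytes\n", skb->len);
	kfree_skb(skb);
}

static void my_control(struct caif_device *dev, enum caif_control ctrl)
{
	switch (ctrl) {
	case CAIF_CONTROL_DEV_INIT:   /* channel initialization complete */
		break;
	case CAIF_CONTROL_FLOW_ON:    /* OK to call caif_transmit() again */
		break;
	case CAIF_CONTROL_FLOW_OFF:   /* hold off further transmits */
		break;
	default:
		break;
	}
}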
I M P L E M E N T A T I O N
===========================

CAIF Core Protocol Layer
========================
CAIF Core layer implements the CAIF protocol as defined by ST-Ericsson.
It implements the CAIF protocol stack in a layered approach, where
each layer described in the specification is implemented as a separate layer.
The architecture is inspired by the design patterns "Protocol Layer" and
"Protocol Packet".
== CAIF structure ==
The Core CAIF implementation contains:
- Simple implementation of CAIF.
- Layered architecture (a la Streams), each layer in the CAIF
specification is implemented in a separate c-file.
- Clients must implement PHY layer to access physical HW
with receive and transmit functions.
- Clients must call configuration function to add PHY layer.
- Clients must implement CAIF layer to consume/produce
CAIF payload with receive and transmit functions.
- Clients must call configuration function to add and connect the
Client layer.
- When receiving / transmitting CAIF Packets (cfpkt), ownership is passed
to the called function (except for framing layers' receive functions
or if a transmit function returns an error, in which case the caller
must free the packet).
Layered Architecture
--------------------
The CAIF protocol can be divided into two parts: Support functions and Protocol
Implementation. The support functions include:
- CFPKT CAIF Packet. Implementation of CAIF Protocol Packet. The
CAIF Packet has functions for creating, destroying and adding content
and for adding/extracting header and trailers to protocol packets.
- CFLST CAIF list implementation.
- CFGLUE CAIF Glue. Contains OS Specifics, such as memory
allocation, endianness, etc.
The CAIF Protocol implementation contains:
- CFCNFG CAIF Configuration layer. Configures the CAIF Protocol
Stack and provides a Client interface for adding Link-Layer and
Driver interfaces on top of the CAIF Stack.
- CFCTRL CAIF Control layer. Encodes and Decodes control messages
such as enumeration and channel setup. Also matches request and
response messages.
- CFSERVL General CAIF Service Layer functionality; handles flow
control and remote shutdown requests.
- CFVEI CAIF VEI layer. Handles CAIF AT Channels on VEI (Virtual
External Interface). This layer encodes/decodes VEI frames.
- CFDGML CAIF Datagram layer. Handles CAIF Datagram layer (IP
traffic), encodes/decodes Datagram frames.
- CFMUX CAIF Mux layer. Handles multiplexing between multiple
physical bearers and multiple channels such as VEI, Datagram, etc.
The MUX keeps track of the existing CAIF Channels and
Physical Instances and selects the appropriate instance based
on Channel-Id and Physical-ID.
- CFFRML CAIF Framing layer. Handles Framing i.e. Frame length
and frame checksum.
- CFSERL CAIF Serial layer. Handles concatenation/split of frames
into CAIF Frames with correct length.
+---------+
| Config |
| CFCNFG |
+---------+
!
+---------+ +---------+ +---------+
| AT | | Control | | Datagram|
| CFVEIL | | CFCTRL | | CFDGML |
+---------+ +---------+ +---------+
\_____________!______________/
!
+---------+
| MUX |
| |
+---------+
_____!_____
/ \
+---------+ +---------+
| CFFRML | | CFFRML |
| Framing | | Framing |
+---------+ +---------+
! !
+---------+ +---------+
| | | Serial |
| | | CFSERL |
+---------+ +---------+
In this layered approach the following "rules" apply.
- All layers embed the same structure "struct cflayer"
- A layer does not depend on any other layer's private data.
- Layers are stacked by setting the pointers
layer->up , layer->dn
- In order to send data upwards, each layer should do
layer->up->receive(layer->up, packet);
- In order to send data downwards, each layer should do
layer->dn->transmit(layer->dn, packet);
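To make the stacking rules concrete, here is a minimal sketch of a
pass-through layer. It assumes only what the rules above state (the up/dn
pointers and the receive/transmit prototypes); see net/caif/ for the real
struct cflayer definition:

/* Pass-through layer sketch based on the rules above. */
static int passthru_receive(struct cflayer *layr, struct cfpkt *pkt)
{
	/* Send data upwards: ownership of pkt passes to the next layer. */
	return layr->up->receive(layr->up, pkt);
}

static int passthru_transmit(struct cflayer *layr, struct cfpkt *pkt)
{
	/* Send data downwards: ownership of pkt passes to the next layer. */
	return layr->dn->transmit(layr->dn, pkt);
}

static void passthru_stack(struct cflayer *layr,
			   struct cflayer *up, struct cflayer *dn)
{
	/* Layers are stacked by setting the up/dn pointers. */
	layr->up = up;
	layr->dn = dn;
	layr->receive = passthru_receive;
	layr->transmit = passthru_transmit;
}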
Linux Driver Implementation
===========================
Linux GPRS Net Device and CAIF socket are implemented on top of the
CAIF Core protocol. The Net device and CAIF socket have an instance of
'struct cflayer', just like the CAIF Core protocol stack.
Net device and Socket implement the 'receive()' function defined by
'struct cflayer', just like the rest of the CAIF stack. In this way,
transmit and receive of packets are handled as in the rest of the
layers: the 'dn->transmit()' function is called in order to transmit data.
The layer on top of the CAIF Core implementation is
sometimes referred to as the "Client layer".
Configuration of Link Layer
---------------------------
The Link Layer is implemented as Linux net devices (struct net_device).
Payload handling and registration is done using standard Linux mechanisms.
The CAIF Protocol relies on a loss-less link layer without implementing
retransmission. This implies that packet drops must not happen.
Therefore a flow-control mechanism is implemented where the physical
interface can initiate flow stop for all CAIF Channels.


@ -0,0 +1,109 @@
Copyright (C) ST-Ericsson AB 2010
Author: Sjur Brendeland / sjur.brandeland@stericsson.com
License terms: GNU General Public License (GPL) version 2
---------------------------------------------------------
=== Start ===
If you have compiled CAIF for modules do:
$ modprobe crc_ccitt
$ modprobe caif
$ modprobe caif_socket
$ modprobe chnl_net
=== Preparing the setup with a STE modem ===
If you are working on integration of CAIF you should make sure
that the kernel is built with module support.
There are some things that need to be tweaked to get the host TTY correctly
set up to talk to the modem.
Since the CAIF stack is running in the kernel and we want to use the existing
TTY, we are installing our physical serial driver as a line discipline above
the TTY device.
To achieve this we need to install the N_CAIF ldisc from user space.
The benefit is that we can hook up to any TTY.
The use of Start-of-frame-extension (STX) must also be set via the
module parameter "ser_use_stx".
Normally Frame Checksum is always used on UART, but this is also provided
as a module parameter "ser_use_fcs".
$ modprobe caif_serial ser_ttyname=/dev/ttyS0 ser_use_stx=yes
$ ifconfig caif_ttyS0 up
PLEASE NOTE: There is a limitation in Android shell.
It only accepts one argument to insmod/modprobe!
=== Troubleshooting ===
There are debugfs parameters provided for serial communication.
/sys/kernel/debug/caif_serial/<tty-name>/
* ser_state: Prints the bit-mask status where
- 0x02 means SENDING, this is a transient state.
- 0x10 means FLOW_OFF_SENT, i.e. the previous frame has not been sent
and is blocking further send operation. Flow OFF has been propagated
to all CAIF Channels using this TTY.
* tty_status: Prints the bit-mask tty status information
- 0x01 - tty->warned is on.
- 0x02 - tty->low_latency is on.
- 0x04 - tty->packed is on.
- 0x08 - tty->flow_stopped is on.
- 0x10 - tty->hw_stopped is on.
- 0x20 - tty->stopped is on.
* last_tx_msg: Binary blob containing the last transmitted frame.
This can be printed with
$ od --format=x1 /sys/kernel/debug/caif_serial/<tty>/last_tx_msg
The first two tx messages sent look like this. Note: The initial
byte 02 is start of frame extension (STX) used for re-syncing
upon errors.
- Enumeration:
0000000 02 05 00 00 03 01 d2 02
| | | | | |
STX(1) | | | |
Length(2)| | |
Control Channel(1)
Command:Enumeration(1)
Link-ID(1)
Checksum(2)
- Channel Setup:
0000000 02 07 00 00 00 21 a1 00 48 df
| | | | | | | |
STX(1) | | | | | |
Length(2)| | | | |
Control Channel(1)
Command:Channel Setup(1)
Channel Type(1)
Priority and Link-ID(1)
Endpoint(1)
Checksum(2)
* last_rx_msg: Prints the last received frame.
The RX messages for LinkSetup look almost identical but they have the
bit 0x20 set in the command byte, and Channel Setup has added one byte
before Checksum containing Channel ID.
NOTE: Several CAIF Messages might be concatenated. The maximum debug
buffer size is 128 bytes.
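As an illustration of the framing above, a small userspace sketch that
reads one of the debug blobs and decodes the header. The <tty> name
"ttyS0" is an example, and the 16-bit length field is taken as
little-endian, which matches both frames above (05 00 -> 5 bytes follow
in the enumeration frame, 07 00 -> 7 in channel setup):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint8_t buf[128];	/* maximum debug buffer size */
	FILE *f = fopen("/sys/kernel/debug/caif_serial/ttyS0/last_tx_msg",
			"rb");
	size_t n;

	if (!f)
		return 1;
	n = fread(buf, 1, sizeof(buf), f);
	fclose(f);
	if (n >= 3 && buf[0] == 0x02) {	/* STX */
		uint16_t len = buf[1] | (buf[2] << 8);
		printf("STX ok, %u bytes follow the length field\n", len);
	}
	return 0;
}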
=== Error Scenarios ===
- last_tx_msg contains channel setup message and last_rx_msg is empty ->
The host seems to be able to send over the UART; at least the CAIF ldisc
gets notified that sending is completed.
- last_tx_msg contains enumeration message and last_rx_msg is empty ->
The host is not able to send the message over the UART; the tty has not
been able to complete the transmit operation.
- if /sys/kernel/debug/caif_serial/<tty>/tty_status is non-zero, there
might be problems transmitting over the UART.
E.g. if host and modem wiring is not correct, you will typically see
tty_status = 0x10 (hw_stopped) and ser_state = 0x10 (FLOW_OFF_SENT).
You will probably see the enumeration message in last_tx_msg
and an empty last_rx_msg.


@ -588,6 +588,37 @@ ip_local_port_range - 2 INTEGERS
(i.e. by default) range 1024-4999 is enough to issue up to
2000 connections per second to systems supporting timestamps.
ip_local_reserved_ports - list of comma separated ranges
Specify the ports which are reserved for known third-party
applications. These ports will not be used by automatic port
assignments (e.g. when calling connect() or bind() with port
number 0). Explicit port allocation behavior is unchanged.
The format used for both input and output is a comma separated
list of ranges (e.g. "1,2-4,10-10" for ports 1, 2, 3, 4 and
10). Writing to the file will clear all previously reserved
ports and update the current list with the one given in the
input.
Note that ip_local_port_range and ip_local_reserved_ports
settings are independent and both are considered by the kernel
when determining which ports are available for automatic port
assignments.
You can reserve ports which are not in the current
ip_local_port_range, e.g.:
$ cat /proc/sys/net/ipv4/ip_local_port_range
32000 61000
$ cat /proc/sys/net/ipv4/ip_local_reserved_ports
8080,9148
although this is redundant. However, such a setting is useful
if the port range is later changed to a value that will
include the reserved ports.
Default: Empty
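To illustrate what "automatic port assignment" means here, a small
userspace sketch: binding with port 0 makes the kernel pick an ephemeral
port, and that pick honours both ip_local_port_range and
ip_local_reserved_ports (the wildcard address below is illustrative):

#include <stdio.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	struct sockaddr_in a = { .sin_family = AF_INET,
				 .sin_port = 0 };	/* 0: kernel chooses */
	socklen_t alen = sizeof(a);
	int s = socket(AF_INET, SOCK_STREAM, 0);

	a.sin_addr.s_addr = htonl(INADDR_ANY);
	bind(s, (struct sockaddr *)&a, sizeof(a));
	getsockname(s, (struct sockaddr *)&a, &alen);
	/* The chosen port comes from ip_local_port_range minus any
	 * ports listed in ip_local_reserved_ports. */
	printf("kernel picked port %u\n", ntohs(a.sin_port));
	close(s);
	return 0;
}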
ip_nonlocal_bind - BOOLEAN
If set, allows processes to bind() to non-local IP addresses,
which can be quite useful - but may break some applications.


@ -1,44 +1,95 @@
This brief document describes how to use the kernel's PPPoL2TP driver
to provide L2TP functionality. L2TP is a protocol that tunnels one or
more PPP sessions over a UDP tunnel. It is commonly used for VPNs
This document describes how to use the kernel's L2TP drivers to
provide L2TP functionality. L2TP is a protocol that tunnels one or
more sessions over an IP tunnel. It is commonly used for VPNs
(L2TP/IPSec) and by ISPs to tunnel subscriber PPP sessions over an IP
network infrastructure.
network infrastructure. With L2TPv3, it is also useful as a Layer-2
tunneling infrastructure.
Features
========
L2TPv2 (PPP over L2TP (UDP tunnels)).
L2TPv3 ethernet pseudowires.
L2TPv3 PPP pseudowires.
L2TPv3 IP encapsulation.
Netlink sockets for L2TPv3 configuration management.
History
=======
The original pppol2tp driver was introduced in 2.6.23 and provided
L2TPv2 functionality (rfc2661). L2TPv2 is used to tunnel one or more PPP
sessions over a UDP tunnel.
L2TPv3 (rfc3931) changes the protocol to allow different frame types
to be passed over an L2TP tunnel by moving the PPP-specific parts of
the protocol out of the core L2TP packet headers. Each frame type is
known as a pseudowire type. Ethernet, PPP, HDLC, Frame Relay and ATM
pseudowires for L2TP are defined in separate RFC standards. Another
change for L2TPv3 is that it can be carried directly over IP with no
UDP header (UDP is optional). It is also possible to create static
unmanaged L2TPv3 tunnels manually without a control protocol
(userspace daemon) to manage them.
To support L2TPv3, the original pppol2tp driver was split up to
separate the L2TP and PPP functionality. Existing L2TPv2 userspace
apps should be unaffected as the original pppol2tp sockets API is
retained. L2TPv3, however, uses netlink to manage L2TPv3 tunnels and
sessions.
Design
======
The PPPoL2TP driver, drivers/net/pppol2tp.c, provides a mechanism by
which PPP frames carried through an L2TP session are passed through
the kernel's PPP subsystem. The standard PPP daemon, pppd, handles all
PPP interaction with the peer. PPP network interfaces are created for
each local PPP endpoint.
The L2TP protocol separates control and data frames. The L2TP kernel
drivers handle only L2TP data frames; control frames are always
handled by userspace. L2TP control frames carry messages between L2TP
clients/servers and are used to setup / teardown tunnels and
sessions. An L2TP client or server is implemented in userspace.
The L2TP protocol http://www.faqs.org/rfcs/rfc2661.html defines L2TP
control and data frames. L2TP control frames carry messages between
L2TP clients/servers and are used to setup / teardown tunnels and
sessions. An L2TP client or server is implemented in userspace and
will use a regular UDP socket per tunnel. L2TP data frames carry PPP
frames, which may be PPP control or PPP data. The kernel's PPP
Each L2TP tunnel is implemented using a UDP or L2TPIP socket; L2TPIP
provides L2TPv3 IP encapsulation (no UDP) and is implemented using a
new l2tpip socket family. The tunnel socket is typically created by
userspace, though for unmanaged L2TPv3 tunnels, the socket can also be
created by the kernel. Each L2TP session (pseudowire) gets a network
interface instance. In the case of PPP, these interfaces are created
indirectly by pppd using a pppol2tp socket. In the case of ethernet,
the netdevice is created upon a netlink request to create an L2TPv3
ethernet pseudowire.
For PPP, the PPPoL2TP driver, net/l2tp/l2tp_ppp.c, provides a
mechanism by which PPP frames carried through an L2TP session are
passed through the kernel's PPP subsystem. The standard PPP daemon,
pppd, handles all PPP interaction with the peer. PPP network
interfaces are created for each local PPP endpoint. The kernel's PPP
subsystem arranges for PPP control frames to be delivered to pppd,
while data frames are forwarded as usual.
For ethernet, the L2TPETH driver, net/l2tp/l2tp_eth.c, implements a
netdevice driver, managing virtual ethernet devices, one per
pseudowire. These interfaces can be managed using standard Linux tools
such as "ip" and "ifconfig". If only IP frames are passed over the
tunnel, the interface can be given the IP addresses of itself and its
peer. If non-IP frames are to be passed over the tunnel, the interface
can be added to a bridge using brctl. All L2TP datapath protocol
functions are handled by the L2TP core driver.
Each tunnel and session within a tunnel is assigned a unique tunnel_id
and session_id. These ids are carried in the L2TP header of every
control and data packet. The pppol2tp driver uses them to lookup
internal tunnel and/or session contexts. Zero tunnel / session ids are
treated specially - zero ids are never assigned to tunnels or sessions
in the network. In the driver, the tunnel context keeps a pointer to
the tunnel UDP socket. The session context keeps a pointer to the
PPPoL2TP socket, as well as other data that lets the driver interface
to the kernel PPP subsystem.
control and data packet. (Actually, in L2TPv3, the tunnel_id isn't
present in data frames - it is inferred from the IP connection on
which the packet was received.) The L2TP driver uses the ids to lookup
internal tunnel and/or session contexts to determine how to handle the
packet. Zero tunnel / session ids are treated specially - zero ids are
never assigned to tunnels or sessions in the network. In the driver,
the tunnel context keeps a reference to the tunnel UDP or L2TPIP
socket. The session context holds data that lets the driver interface
to the kernel's network frame type subsystems, i.e. PPP, ethernet.
Note that the pppol2tp kernel driver handles only L2TP data frames;
L2TP control frames are simply passed up to userspace in the UDP
tunnel socket. The kernel handles all datapath aspects of the
protocol, including data packet resequencing (if enabled).
Userspace Programming
=====================
There are a number of requirements on the userspace L2TP daemon in
order to use the pppol2tp driver.
For L2TPv2, there are a number of requirements on the userspace L2TP
daemon in order to use the pppol2tp driver.
1. Use a UDP socket per tunnel.
@ -86,6 +137,35 @@ In addition to the standard PPP ioctls, a PPPIOCGL2TPSTATS is provided
to retrieve tunnel and session statistics from the kernel using the
PPPoX socket of the appropriate tunnel or session.
For L2TPv3, userspace must use the netlink API defined in
include/linux/l2tp.h to manage tunnel and session contexts. The
general procedure to create a new L2TP tunnel with one session is:-
1. Open a GENL socket using L2TP_GENL_NAME for configuring the kernel
using netlink.
2. Create a UDP or L2TPIP socket for the tunnel.
3. Create a new L2TP tunnel using a L2TP_CMD_TUNNEL_CREATE
request. Set attributes according to desired tunnel parameters,
referencing the UDP or L2TPIP socket created in the previous step.
4. Create a new L2TP session in the tunnel using a
L2TP_CMD_SESSION_CREATE request.
The tunnel and all of its sessions are closed when the tunnel socket
is closed. The netlink API may also be used to delete sessions and
tunnels. Configuration and status info may be set or read using netlink.
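A sketch of steps 1 and 3 using libnl-3's generic netlink helpers, taking
the step-2 tunnel socket as an already-created and connected fd. The
L2TP_* names are from include/linux/l2tp.h; error handling is elided,
this is not a complete tunnel-creation sequence, and the exact attribute
set required depends on the encapsulation chosen:

#include <netlink/netlink.h>
#include <netlink/genl/genl.h>
#include <netlink/genl/ctrl.h>
#include <linux/l2tp.h>

/* Sketch: create an L2TPv3 UDP-encap tunnel over an existing,
 * already-connected UDP socket 'tfd'. */
static int l2tp_tunnel_create(int tfd)
{
	struct nl_sock *nls = nl_socket_alloc();
	struct nl_msg *msg = nlmsg_alloc();
	int family, err;

	genl_connect(nls);			/* step 1: GENL socket */
	family = genl_ctrl_resolve(nls, L2TP_GENL_NAME);

	genlmsg_put(msg, NL_AUTO_PORT, NL_AUTO_SEQ, family,
		    0, 0, L2TP_CMD_TUNNEL_CREATE, L2TP_GENL_VERSION);
	nla_put_u32(msg, L2TP_ATTR_CONN_ID, 1);	/* tunnel_id */
	nla_put_u32(msg, L2TP_ATTR_PEER_CONN_ID, 1);
	nla_put_u8(msg, L2TP_ATTR_PROTO_VERSION, 3);
	nla_put_u16(msg, L2TP_ATTR_ENCAP_TYPE, L2TP_ENCAPTYPE_UDP);
	nla_put_u32(msg, L2TP_ATTR_FD, tfd);	/* step 2's socket */

	err = nl_send_auto(nls, msg);		/* step 3 */
	nlmsg_free(msg);
	nl_socket_free(nls);
	return err < 0 ? err : 0;
}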
The L2TP driver also supports static (unmanaged) L2TPv3 tunnels, where
there is no L2TP control message exchange with the peer to set up the
tunnel; the tunnel is configured manually at each end of the
tunnel. There is no need for an L2TP userspace application in this
case -- the tunnel socket is created by the kernel and configured
using parameters sent in the L2TP_CMD_TUNNEL_CREATE netlink
request. The "ip" utility of iproute2 has commands for managing static
L2TPv3 tunnels; do "ip l2tp help" for more information.
Debugging
=========
@ -102,6 +182,69 @@ PPPOL2TP_MSG_CONTROL userspace - kernel interface
PPPOL2TP_MSG_SEQ sequence numbers handling
PPPOL2TP_MSG_DATA data packets
If enabled, files under a l2tp debugfs directory can be used to dump
kernel state about L2TP tunnels and sessions. To access it, the
debugfs filesystem must first be mounted.
# mount -t debugfs debugfs /debug
Files under the l2tp directory can then be accessed.
# cat /debug/l2tp/tunnels
The debugfs files should not be used by applications to obtain L2TP
state information because the file format is subject to change. It is
implemented to provide extra debug information to help diagnose
problems. Users should use the netlink API.
/proc/net/pppol2tp is also provided for backwards compatibility with
the original pppol2tp driver. It lists information about L2TPv2
tunnels and sessions only. Its use is discouraged.
Unmanaged L2TPv3 Tunnels
========================
Some commercial L2TP products support unmanaged L2TPv3 ethernet
tunnels, where there is no L2TP control protocol; tunnels are
configured at each side manually. New commands are available in
iproute2's ip utility to support this.
To create an L2TPv3 ethernet pseudowire between local host 192.168.1.1
and peer 192.168.1.2, using IP addresses 10.5.1.1 and 10.5.1.2 for the
tunnel endpoints:-
# modprobe l2tp_eth
# modprobe l2tp_netlink
# ip l2tp add tunnel tunnel_id 1 peer_tunnel_id 1 udp_sport 5000 \
udp_dport 5000 encap udp local 192.168.1.1 remote 192.168.1.2
# ip l2tp add session tunnel_id 1 session_id 1 peer_session_id 1
# ifconfig -a
# ip addr add 10.5.1.2/32 peer 10.5.1.1/32 dev l2tpeth0
# ifconfig l2tpeth0 up
Choose IP addresses to be the address of a local IP interface and that
of the remote system. The IP addresses of the l2tpeth0 interface can be
anything suitable.
Repeat the above at the peer, with ports, tunnel/session ids and IP
addresses reversed. The tunnel and session IDs can be any non-zero
32-bit number, but the values must be reversed at the peer.
Host 1 Host2
udp_sport=5000 udp_sport=5001
udp_dport=5001 udp_dport=5000
tunnel_id=42 tunnel_id=45
peer_tunnel_id=45 peer_tunnel_id=42
session_id=128 session_id=5196755
peer_session_id=5196755 peer_session_id=128
When done at both ends of the tunnel, it should be possible to send
data over the network. e.g.
# ping 10.5.1.1
Sample Userspace Code
=====================
@ -158,12 +301,48 @@ Sample Userspace Code
}
return 0;
Miscellaneous
============
Internal Implementation
=======================
The PPPoL2TP driver was developed as part of the OpenL2TP project by
The driver keeps a struct l2tp_tunnel context per L2TP tunnel and a
struct l2tp_session context for each session. The l2tp_tunnel is
always associated with a UDP or L2TP/IP socket and keeps a list of
sessions in the tunnel. The l2tp_session context keeps kernel state
about the session. It has private data which is used for data specific
to the session type. With L2TPv2, the session always carried PPP
traffic. With L2TPv3, the session can also carry ethernet frames
(ethernet pseudowire) or other data types such as ATM, HDLC or Frame
Relay.
When a tunnel is first opened, the reference count on the socket is
increased using sock_hold(). This ensures that the kernel socket
cannot be removed while L2TP's data structures reference it.
Some L2TP sessions also have a socket (PPP pseudowires) while others
do not (ethernet pseudowires). We can't use the socket reference count
as the reference count for session contexts. The L2TP implementation
therefore has its own internal reference counts on the session
contexts.
To Do
=====
Add L2TP tunnel switching support. This would route tunneled traffic
from one L2TP tunnel into another. Specified in
http://tools.ietf.org/html/draft-ietf-l2tpext-tunnel-switching-08
Add L2TPv3 VLAN pseudowire support.
Add L2TPv3 IP pseudowire support.
Add L2TPv3 ATM pseudowire support.
Miscellaneous
=============
The L2TP drivers were developed as part of the OpenL2TP project by
Katalix Systems Ltd. OpenL2TP is a full-featured L2TP client / server,
designed from the ground up to have the L2TP datapath in the
kernel. The project also implemented the pppol2tp plugin for pppd
which allows pppd to use the kernel driver. Details can be found at
http://openl2tp.sourceforge.net.
http://www.openl2tp.org.


@ -20,23 +20,23 @@ the rest of the skbuff, if any more information does exist.
Packet Layer to Device Driver
-----------------------------
First Byte = 0x00
First Byte = 0x00 (X25_IFACE_DATA)
This indicates that the rest of the skbuff contains data to be transmitted
over the LAPB link. The LAPB link should already exist before any data is
passed down.
First Byte = 0x01
First Byte = 0x01 (X25_IFACE_CONNECT)
Establish the LAPB link. If the link is already established then the connect
confirmation message should be returned as soon as possible.
First Byte = 0x02
First Byte = 0x02 (X25_IFACE_DISCONNECT)
Terminate the LAPB link. If it is already disconnected then the disconnect
confirmation message should be returned as soon as possible.
First Byte = 0x03
First Byte = 0x03 (X25_IFACE_PARAMS)
LAPB parameters. To be defined.
@ -44,22 +44,22 @@ LAPB parameters. To be defined.
Device Driver to Packet Layer
-----------------------------
First Byte = 0x00
First Byte = 0x00 (X25_IFACE_DATA)
This indicates that the rest of the skbuff contains data that has been
received over the LAPB link.
First Byte = 0x01
First Byte = 0x01 (X25_IFACE_CONNECT)
LAPB link has been established. The same message is used for both a LAPB
link connect_confirmation and a connect_indication.
First Byte = 0x02
First Byte = 0x02 (X25_IFACE_DISCONNECT)
LAPB link has been terminated. This same message is used for both a LAPB
link disconnect_confirmation and a disconnect_indication.
First Byte = 0x03
First Byte = 0x03 (X25_IFACE_PARAMS)
LAPB parameters. To be defined.
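For example, a driver indicating link establishment to the packet layer
would queue a one-byte skb, as a sketch following the convention above
(the same pattern appears in the ISDN X.25 interface code later in this
merge; X25_IFACE_CONNECT comes from linux/if_x25.h):

/* Driver -> packet layer: signal that the LAPB link is established. */
static void x25_connect_ind(struct net_device *dev)
{
	struct sk_buff *skb = dev_alloc_skb(1);

	if (!skb)
		return;
	*skb_put(skb, 1) = X25_IFACE_CONNECT;	/* First Byte = 0x01 */
	skb->protocol = x25_type_trans(skb, dev);
	netif_rx(skb);
}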


@ -99,37 +99,15 @@ system. Also, it is possible to switch all rfkill drivers (or all drivers of
a specified type) into a state which also updates the default state for
hotplugged devices.
After an application opens /dev/rfkill, it can read the current state of
all devices, and afterwards can poll the descriptor for hotplug or state
change events.
After an application opens /dev/rfkill, it can read the current state of all
devices. Changes can be obtained either by polling the descriptor for
hotplug or state change events or by listening for uevents emitted by the
rfkill core framework.
Applications must ignore operations (the "op" field) they do not handle;
this allows the API to be extended in the future.
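A minimal reader sketch; struct rfkill_event, the RFKILL_OP_* values and
RFKILL_EVENT_SIZE_V1 come from linux/rfkill.h:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <linux/rfkill.h>

int main(void)
{
	struct rfkill_event ev;
	int fd = open("/dev/rfkill", O_RDONLY);
	ssize_t n;

	if (fd < 0)
		return 1;
	/* The first reads report the current state of every device;
	 * later reads block until a hotplug or state change event. */
	while ((n = read(fd, &ev, sizeof(ev))) > 0) {
		if ((size_t)n < RFKILL_EVENT_SIZE_V1)
			break;
		switch (ev.op) {
		case RFKILL_OP_ADD:
		case RFKILL_OP_CHANGE:
			printf("rfkill%u type=%u soft=%u hard=%u\n",
			       ev.idx, ev.type, ev.soft, ev.hard);
			break;
		default:
			/* Ignore ops we do not handle, so the API can
			 * be extended in the future. */
			break;
		}
	}
	close(fd);
	return 0;
}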
Additionally, each rfkill device is registered in sysfs and emits uevents.
Additionally, each rfkill device is registered in sysfs, where it has the
following attributes:
name: Name assigned by driver to this key (interface or driver name).
type: Driver type string ("wlan", "bluetooth", etc).
persistent: Whether the soft blocked state is initialised from
non-volatile storage at startup.
state: Current state of the transmitter
0: RFKILL_STATE_SOFT_BLOCKED
transmitter is turned off by software
1: RFKILL_STATE_UNBLOCKED
transmitter is (potentially) active
2: RFKILL_STATE_HARD_BLOCKED
transmitter is forced off by something outside of
the driver's control.
This file is deprecated because it can only properly show
three of the four possible states, soft-and-hard-blocked is
missing.
claim: 0: Kernel handles events
This file is deprecated because there no longer is a way to
claim just control over a single rfkill instance.
rfkill devices also issue uevents (with an action of "change"), with the
following environment variables set:
rfkill devices issue uevents (with an action of "change"), with the following
environment variables set:
RFKILL_NAME
RFKILL_STATE
@ -137,3 +115,7 @@ RFKILL_TYPE
The contents of these variables correspond to the "name", "state" and
"type" sysfs files explained above.
For further details consult Documentation/ABI/stable/dev-rfkill and
Documentation/ABI/stable/sysfs-class-rfkill.


@ -84,6 +84,16 @@ netdev_max_backlog
Maximum number of packets, queued on the INPUT side, when the interface
receives packets faster than kernel can process them.
netdev_tstamp_prequeue
----------------------
If set to 0, RX packet timestamps can be sampled after RPS processing, when
the target CPU processes packets. This might delay the timestamps, but
permits distributing the load across several CPUs.
If set to 1 (default), timestamps are sampled as soon as possible, before
queueing.
optmem_max
----------


@ -1521,9 +1521,10 @@ M: Andy Whitcroft <apw@canonical.com>
S: Supported
F: scripts/checkpatch.pl
CISCO 10G ETHERNET DRIVER
CISCO VIC ETHERNET NIC DRIVER
M: Scott Feldman <scofeldm@cisco.com>
M: Joe Eykholt <jeykholt@cisco.com>
M: Vasanthy Kolluri <vkolluri@cisco.com>
M: Roopa Prabhu <roprabhu@cisco.com>
S: Supported
F: drivers/net/enic/
@ -3044,10 +3045,9 @@ F: net/ipv4/netfilter/ipt_MASQUERADE.c
IP1000A 10/100/1000 GIGABIT ETHERNET DRIVER
M: Francois Romieu <romieu@fr.zoreil.com>
M: Sorbica Shieh <sorbica@icplus.com.tw>
M: Jesse Huang <jesse@icplus.com.tw>
L: netdev@vger.kernel.org
S: Maintained
F: drivers/net/ipg.c
F: drivers/net/ipg.*
IPATH DRIVER
M: Ralph Campbell <infinipath@qlogic.com>
@ -3895,7 +3895,6 @@ M: Ramkrishna Vepa <ram.vepa@neterion.com>
M: Rastapur Santosh <santosh.rastapur@neterion.com>
M: Sivakumar Subramani <sivakumar.subramani@neterion.com>
M: Sreenivasa Honnur <sreenivasa.honnur@neterion.com>
M: Anil Murthy <anil.murthy@neterion.com>
L: netdev@vger.kernel.org
W: http://trac.neterion.com/cgi-bin/trac.cgi/wiki/Linux?Anonymous
W: http://trac.neterion.com/cgi-bin/trac.cgi/wiki/X3100Linux?Anonymous
@ -4000,6 +3999,7 @@ F: net/rfkill/
F: net/wireless/
F: include/net/ieee80211*
F: include/linux/wireless.h
F: include/linux/iw_handler.h
F: drivers/net/wireless/
NETWORKING DRIVERS
@ -4631,6 +4631,7 @@ F: drivers/net/qla3xxx.*
QLOGIC QLCNIC (1/10)Gb ETHERNET DRIVER
M: Amit Kumar Salecha <amit.salecha@qlogic.com>
M: Anirban Chakraborty <anirban.chakraborty@qlogic.com>
M: linux-driver@qlogic.com
L: netdev@vger.kernel.org
S: Supported


@ -201,9 +201,9 @@ static struct resource pcm970_sja1000_resources[] = {
};
struct sja1000_platform_data pcm970_sja1000_platform_data = {
.clock = 16000000 / 2,
.ocr = 0x40 | 0x18,
.cdr = 0x40,
.osc_freq = 16000000,
.ocr = OCR_TX1_PULLDOWN | OCR_TX0_PUSHPULL,
.cdr = CDR_CBP,
};
static struct platform_device pcm970_sja1000 = {


@ -530,9 +530,9 @@ static struct resource pcm970_sja1000_resources[] = {
};
struct sja1000_platform_data pcm970_sja1000_platform_data = {
.clock = 16000000 / 2,
.ocr = 0x40 | 0x18,
.cdr = 0x40,
.osc_freq = 16000000,
.ocr = OCR_TX1_PULLDOWN | OCR_TX0_PUSHPULL,
.cdr = CDR_CBP,
};
static struct platform_device pcm970_sja1000 = {


@ -73,7 +73,6 @@ static struct pxa2xx_spi_chip mcp251x_chip_info4 = {
static struct mcp251x_platform_data mcp251x_info = {
.oscillator_frequency = 16E6,
.model = CAN_MCP251X_MCP2515,
.board_specific_setup = NULL,
.power_enable = NULL,
.transceiver_enable = NULL
@ -81,7 +80,7 @@ static struct mcp251x_platform_data mcp251x_info = {
static struct spi_board_info mcp251x_board_info[] = {
{
.modalias = "mcp251x",
.modalias = "mcp2515",
.max_speed_hz = 6500000,
.bus_num = 3,
.chip_select = 0,
@ -90,7 +89,7 @@ static struct spi_board_info mcp251x_board_info[] = {
.irq = gpio_to_irq(ICONTROL_MCP251x_nIRQ1)
},
{
.modalias = "mcp251x",
.modalias = "mcp2515",
.max_speed_hz = 6500000,
.bus_num = 3,
.chip_select = 1,
@ -99,7 +98,7 @@ static struct spi_board_info mcp251x_board_info[] = {
.irq = gpio_to_irq(ICONTROL_MCP251x_nIRQ2)
},
{
.modalias = "mcp251x",
.modalias = "mcp2515",
.max_speed_hz = 6500000,
.bus_num = 4,
.chip_select = 0,
@ -108,7 +107,7 @@ static struct spi_board_info mcp251x_board_info[] = {
.irq = gpio_to_irq(ICONTROL_MCP251x_nIRQ3)
},
{
.modalias = "mcp251x",
.modalias = "mcp2515",
.max_speed_hz = 6500000,
.bus_num = 4,
.chip_select = 1,


@ -414,15 +414,13 @@ static int zeus_mcp2515_transceiver_enable(int enable)
static struct mcp251x_platform_data zeus_mcp2515_pdata = {
.oscillator_frequency = 16*1000*1000,
.model = CAN_MCP251X_MCP2515,
.board_specific_setup = zeus_mcp2515_setup,
.transceiver_enable = zeus_mcp2515_transceiver_enable,
.power_enable = zeus_mcp2515_transceiver_enable,
};
static struct spi_board_info zeus_spi_board_info[] = {
[0] = {
.modalias = "mcp251x",
.modalias = "mcp2515",
.platform_data = &zeus_mcp2515_pdata,
.irq = gpio_to_irq(ZEUS_CAN_GPIO),
.max_speed_hz = 1*1000*1000,


@ -12,6 +12,7 @@
#include <asm/registers.h>
#include <asm/setup.h>
#include <asm/irqflags.h>
#include <asm/cache.h>
#include <asm-generic/cmpxchg.h>
#include <asm-generic/cmpxchg-local.h>
@ -96,4 +97,14 @@ extern struct dentry *of_debugfs_root;
#define arch_align_stack(x) (x)
/*
* MicroBlaze doesn't handle unaligned accesses in hardware.
*
* Based on this we force the IP header alignment in network drivers.
* We also modify NET_SKB_PAD to be a cacheline in size, thus maintaining
* cacheline alignment of buffers.
*/
#define NET_IP_ALIGN 2
#define NET_SKB_PAD L1_CACHE_BYTES
#endif /* _ASM_MICROBLAZE_SYSTEM_H */


@ -83,3 +83,57 @@ static int __init swarm_pata_init(void)
device_initcall(swarm_pata_init);
#endif /* defined(CONFIG_SIBYTE_SWARM) || defined(CONFIG_SIBYTE_LITTLESUR) */
#define sb1250_dev_struct(num) \
static struct resource sb1250_res##num = { \
.name = "SB1250 MAC " __stringify(num), \
.flags = IORESOURCE_MEM, \
.start = A_MAC_CHANNEL_BASE(num), \
.end = A_MAC_CHANNEL_BASE(num + 1) -1, \
};\
static struct platform_device sb1250_dev##num = { \
.name = "sb1250-mac", \
.id = num, \
.resource = &sb1250_res##num, \
.num_resources = 1, \
}
sb1250_dev_struct(0);
sb1250_dev_struct(1);
sb1250_dev_struct(2);
sb1250_dev_struct(3);
static struct platform_device *sb1250_devs[] __initdata = {
&sb1250_dev0,
&sb1250_dev1,
&sb1250_dev2,
&sb1250_dev3,
};
static int __init sb1250_device_init(void)
{
int ret;
/* Set the number of available units based on the SOC type. */
switch (soc_type) {
case K_SYS_SOC_TYPE_BCM1250:
case K_SYS_SOC_TYPE_BCM1250_ALT:
ret = platform_add_devices(sb1250_devs, 3);
break;
case K_SYS_SOC_TYPE_BCM1120:
case K_SYS_SOC_TYPE_BCM1125:
case K_SYS_SOC_TYPE_BCM1125H:
case K_SYS_SOC_TYPE_BCM1250_ALT2: /* Hybrid */
ret = platform_add_devices(sb1250_devs, 2);
break;
case K_SYS_SOC_TYPE_BCM1x55:
case K_SYS_SOC_TYPE_BCM1x80:
ret = platform_add_devices(sb1250_devs, 4);
break;
default:
ret = -ENODEV;
break;
}
return ret;
}
device_initcall(sb1250_device_init);


@ -394,6 +394,7 @@ config ATM_HE_USE_SUNI
config ATM_SOLOS
tristate "Solos ADSL2+ PCI Multiport card driver"
depends on PCI
select FW_LOADER
help
Support for the Solos multiport ADSL2+ card.


@ -68,7 +68,7 @@ static int atmtcp_send_control(struct atm_vcc *vcc,int type,
*(struct atm_vcc **) &new_msg->vcc = vcc;
old_test = test_bit(flag,&vcc->flags);
out_vcc->push(out_vcc,skb);
add_wait_queue(sk_atm(vcc)->sk_sleep, &wait);
add_wait_queue(sk_sleep(sk_atm(vcc)), &wait);
while (test_bit(flag,&vcc->flags) == old_test) {
mb();
out_vcc = PRIV(vcc->dev) ? PRIV(vcc->dev)->vcc : NULL;
@ -80,7 +80,7 @@ static int atmtcp_send_control(struct atm_vcc *vcc,int type,
schedule();
}
set_current_state(TASK_RUNNING);
remove_wait_queue(sk_atm(vcc)->sk_sleep, &wait);
remove_wait_queue(sk_sleep(sk_atm(vcc)), &wait);
return error;
}
@ -105,7 +105,7 @@ static int atmtcp_recv_control(const struct atmtcp_control *msg)
msg->type);
return -EINVAL;
}
wake_up(sk_atm(vcc)->sk_sleep);
wake_up(sk_sleep(sk_atm(vcc)));
return 0;
}


@ -1131,7 +1131,7 @@ DPRINTK("doing direct send\n"); /* @@@ well, this doesn't work anyway */
if (i == -1)
put_dma(tx->index,eni_dev->dma,&j,(unsigned long)
skb->data,
skb->len - skb->data_len);
skb_headlen(skb));
else
put_dma(tx->index,eni_dev->dma,&j,(unsigned long)
skb_shinfo(skb)->frags[i].page + skb_shinfo(skb)->frags[i].page_offset,


@ -2664,8 +2664,8 @@ he_send(struct atm_vcc *vcc, struct sk_buff *skb)
#ifdef USE_SCATTERGATHER
tpd->iovec[slot].addr = pci_map_single(he_dev->pci_dev, skb->data,
skb->len - skb->data_len, PCI_DMA_TODEVICE);
tpd->iovec[slot].len = skb->len - skb->data_len;
skb_headlen(skb), PCI_DMA_TODEVICE);
tpd->iovec[slot].len = skb_headlen(skb);
++slot;
for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {


@ -42,6 +42,8 @@ struct btmrvl_device {
void *card;
struct hci_dev *hcidev;
u8 dev_type;
u8 tx_dnld_rdy;
u8 psmode;
@ -88,8 +90,11 @@ struct btmrvl_private {
#define BT_CMD_HOST_SLEEP_ENABLE 0x5A
#define BT_CMD_MODULE_CFG_REQ 0x5B
/* Sub-commands: Module Bringup/Shutdown Request */
/* Sub-commands: Module Bringup/Shutdown Request/Response */
#define MODULE_BRINGUP_REQ 0xF1
#define MODULE_BROUGHT_UP 0x00
#define MODULE_ALREADY_UP 0x0C
#define MODULE_SHUTDOWN_REQ 0xF2
#define BT_EVENT_POWER_STATE 0x20
@ -123,6 +128,7 @@ struct btmrvl_event {
/* Prototype of global function */
int btmrvl_register_hdev(struct btmrvl_private *priv);
struct btmrvl_private *btmrvl_add_card(void *card);
int btmrvl_remove_card(struct btmrvl_private *priv);


@ -66,7 +66,7 @@ int btmrvl_process_event(struct btmrvl_private *priv, struct sk_buff *skb)
{
struct btmrvl_adapter *adapter = priv->adapter;
struct btmrvl_event *event;
u8 ret = 0;
int ret = 0;
event = (struct btmrvl_event *) skb->data;
if (event->ec != 0xff) {
@ -112,8 +112,17 @@ int btmrvl_process_event(struct btmrvl_private *priv, struct sk_buff *skb)
case BT_CMD_MODULE_CFG_REQ:
if (priv->btmrvl_dev.sendcmdflag &&
event->data[1] == MODULE_BRINGUP_REQ) {
BT_DBG("EVENT:%s", (event->data[2]) ?
"Bring-up failed" : "Bring-up succeed");
BT_DBG("EVENT:%s",
((event->data[2] == MODULE_BROUGHT_UP) ||
(event->data[2] == MODULE_ALREADY_UP)) ?
"Bring-up succeed" : "Bring-up failed");
if (event->length > 3)
priv->btmrvl_dev.dev_type = event->data[3];
else
priv->btmrvl_dev.dev_type = HCI_BREDR;
BT_DBG("dev_type: %d", priv->btmrvl_dev.dev_type);
} else if (priv->btmrvl_dev.sendcmdflag &&
event->data[1] == MODULE_SHUTDOWN_REQ) {
BT_DBG("EVENT:%s", (event->data[2]) ?
@ -522,12 +531,63 @@ static int btmrvl_service_main_thread(void *data)
return 0;
}
struct btmrvl_private *btmrvl_add_card(void *card)
int btmrvl_register_hdev(struct btmrvl_private *priv)
{
struct hci_dev *hdev = NULL;
struct btmrvl_private *priv;
int ret;
hdev = hci_alloc_dev();
if (!hdev) {
BT_ERR("Can not allocate HCI device");
goto err_hdev;
}
priv->btmrvl_dev.hcidev = hdev;
hdev->driver_data = priv;
hdev->bus = HCI_SDIO;
hdev->open = btmrvl_open;
hdev->close = btmrvl_close;
hdev->flush = btmrvl_flush;
hdev->send = btmrvl_send_frame;
hdev->destruct = btmrvl_destruct;
hdev->ioctl = btmrvl_ioctl;
hdev->owner = THIS_MODULE;
btmrvl_send_module_cfg_cmd(priv, MODULE_BRINGUP_REQ);
hdev->dev_type = priv->btmrvl_dev.dev_type;
ret = hci_register_dev(hdev);
if (ret < 0) {
BT_ERR("Can not register HCI device");
goto err_hci_register_dev;
}
#ifdef CONFIG_DEBUG_FS
btmrvl_debugfs_init(hdev);
#endif
return 0;
err_hci_register_dev:
hci_free_dev(hdev);
err_hdev:
/* Stop the thread servicing the interrupts */
kthread_stop(priv->main_thread.task);
btmrvl_free_adapter(priv);
kfree(priv);
return -ENOMEM;
}
EXPORT_SYMBOL_GPL(btmrvl_register_hdev);
struct btmrvl_private *btmrvl_add_card(void *card)
{
struct btmrvl_private *priv;
priv = kzalloc(sizeof(*priv), GFP_KERNEL);
if (!priv) {
BT_ERR("Can not allocate priv");
@ -542,12 +602,6 @@ struct btmrvl_private *btmrvl_add_card(void *card)
btmrvl_init_adapter(priv);
hdev = hci_alloc_dev();
if (!hdev) {
BT_ERR("Can not allocate HCI device");
goto err_hdev;
}
BT_DBG("Starting kthread...");
priv->main_thread.priv = priv;
spin_lock_init(&priv->driver_lock);
@ -556,43 +610,11 @@ struct btmrvl_private *btmrvl_add_card(void *card)
priv->main_thread.task = kthread_run(btmrvl_service_main_thread,
&priv->main_thread, "btmrvl_main_service");
priv->btmrvl_dev.hcidev = hdev;
priv->btmrvl_dev.card = card;
hdev->driver_data = priv;
priv->btmrvl_dev.tx_dnld_rdy = true;
hdev->bus = HCI_SDIO;
hdev->open = btmrvl_open;
hdev->close = btmrvl_close;
hdev->flush = btmrvl_flush;
hdev->send = btmrvl_send_frame;
hdev->destruct = btmrvl_destruct;
hdev->ioctl = btmrvl_ioctl;
hdev->owner = THIS_MODULE;
ret = hci_register_dev(hdev);
if (ret < 0) {
BT_ERR("Can not register HCI device");
goto err_hci_register_dev;
}
#ifdef CONFIG_DEBUG_FS
btmrvl_debugfs_init(hdev);
#endif
return priv;
err_hci_register_dev:
/* Stop the thread servicing the interrupts */
kthread_stop(priv->main_thread.task);
hci_free_dev(hdev);
err_hdev:
btmrvl_free_adapter(priv);
err_adapter:
kfree(priv);


@ -931,7 +931,12 @@ static int btmrvl_sdio_probe(struct sdio_func *func,
priv->hw_host_to_card = btmrvl_sdio_host_to_card;
priv->hw_wakeup_firmware = btmrvl_sdio_wakeup_fw;
btmrvl_send_module_cfg_cmd(priv, MODULE_BRINGUP_REQ);
if (btmrvl_register_hdev(priv)) {
BT_ERR("Register hdev failed!");
ret = -ENODEV;
goto disable_host_int;
}
priv->btmrvl_dev.psmode = 1;
btmrvl_enable_ps(priv);


@ -246,7 +246,7 @@ static int h4_recv(struct hci_uart *hu, void *data, int count)
BT_ERR("Can't allocate mem for new packet");
h4->rx_state = H4_W4_PACKET_TYPE;
h4->rx_count = 0;
return 0;
return -ENOMEM;
}
h4->rx_skb->dev = (void *) hu->hdev;


@ -402,7 +402,7 @@ static int ll_recv(struct hci_uart *hu, void *data, int count)
continue;
case HCILL_W4_EVENT_HDR:
eh = (struct hci_event_hdr *) ll->rx_skb->data;
eh = hci_event_hdr(ll->rx_skb);
BT_DBG("Event header: evt 0x%2.2x plen %d", eh->evt, eh->plen);
@ -410,7 +410,7 @@ static int ll_recv(struct hci_uart *hu, void *data, int count)
continue;
case HCILL_W4_ACL_HDR:
ah = (struct hci_acl_hdr *) ll->rx_skb->data;
ah = hci_acl_hdr(ll->rx_skb);
dlen = __le16_to_cpu(ah->dlen);
BT_DBG("ACL header: dlen %d", dlen);
@ -419,7 +419,7 @@ static int ll_recv(struct hci_uart *hu, void *data, int count)
continue;
case HCILL_W4_SCO_HDR:
sh = (struct hci_sco_hdr *) ll->rx_skb->data;
sh = hci_sco_hdr(ll->rx_skb);
BT_DBG("SCO header: dlen %d", sh->dlen);
@ -491,7 +491,7 @@ static int ll_recv(struct hci_uart *hu, void *data, int count)
BT_ERR("Can't allocate mem for new packet");
ll->rx_state = HCILL_W4_PACKET_TYPE;
ll->rx_count = 0;
return 0;
return -ENOMEM;
}
ll->rx_skb->dev = (void *) hu->hdev;


@ -157,7 +157,7 @@ static inline ssize_t vhci_put_user(struct vhci_data *data,
break;
case HCI_SCODATA_PKT:
data->hdev->stat.cmd_tx++;
data->hdev->stat.sco_tx++;
break;
};


@ -877,7 +877,7 @@ static void nes_netdev_set_multicast_list(struct net_device *netdev)
if (!mc_all_on) {
char *addrs;
int i;
struct dev_mc_list *mcaddr;
struct netdev_hw_addr *ha;
addrs = kmalloc(ETH_ALEN * mc_count, GFP_ATOMIC);
if (!addrs) {
@ -885,9 +885,8 @@ static void nes_netdev_set_multicast_list(struct net_device *netdev)
goto unlock;
}
i = 0;
netdev_for_each_mc_addr(mcaddr, netdev)
memcpy(get_addr(addrs, i++),
mcaddr->dmi_addr, ETH_ALEN);
netdev_for_each_mc_addr(ha, netdev)
memcpy(get_addr(addrs, i++), ha->addr, ETH_ALEN);
perfect_filter_register_address = NES_IDX_PERFECT_FILTER_LOW +
pft_entries_preallocated * 0x8;


@ -768,11 +768,8 @@ void ipoib_mcast_dev_flush(struct net_device *dev)
}
}
static int ipoib_mcast_addr_is_valid(const u8 *addr, unsigned int addrlen,
const u8 *broadcast)
static int ipoib_mcast_addr_is_valid(const u8 *addr, const u8 *broadcast)
{
if (addrlen != INFINIBAND_ALEN)
return 0;
/* reserved QPN, prefix, scope */
if (memcmp(addr, broadcast, 6))
return 0;
@ -787,7 +784,7 @@ void ipoib_mcast_restart_task(struct work_struct *work)
struct ipoib_dev_priv *priv =
container_of(work, struct ipoib_dev_priv, restart_task);
struct net_device *dev = priv->dev;
struct dev_mc_list *mclist;
struct netdev_hw_addr *ha;
struct ipoib_mcast *mcast, *tmcast;
LIST_HEAD(remove_list);
unsigned long flags;
@ -812,15 +809,13 @@ void ipoib_mcast_restart_task(struct work_struct *work)
clear_bit(IPOIB_MCAST_FLAG_FOUND, &mcast->flags);
/* Mark all of the entries that are found or don't exist */
netdev_for_each_mc_addr(mclist, dev) {
netdev_for_each_mc_addr(ha, dev) {
union ib_gid mgid;
if (!ipoib_mcast_addr_is_valid(mclist->dmi_addr,
mclist->dmi_addrlen,
dev->broadcast))
if (!ipoib_mcast_addr_is_valid(ha->addr, dev->broadcast))
continue;
memcpy(mgid.raw, mclist->dmi_addr + 4, sizeof mgid);
memcpy(mgid.raw, ha->addr + 4, sizeof mgid);
mcast = __ipoib_mcast_find(dev, &mgid);
if (!mcast || test_bit(IPOIB_MCAST_FLAG_SENDONLY, &mcast->flags)) {


@ -194,7 +194,7 @@ static int isdn_x25iface_receive(struct concap_proto *cprot, struct sk_buff *skb
if ( ( (ix25_pdata_t*) (cprot->proto_data) )
-> state == WAN_CONNECTED ){
if( skb_push(skb, 1)){
skb -> data[0]=0x00;
skb->data[0] = X25_IFACE_DATA;
skb->protocol = x25_type_trans(skb, cprot->net_dev);
netif_rx(skb);
return 0;
@ -224,7 +224,7 @@ static int isdn_x25iface_connect_ind(struct concap_proto *cprot)
skb = dev_alloc_skb(1);
if( skb ){
*( skb_put(skb, 1) ) = 0x01;
*(skb_put(skb, 1)) = X25_IFACE_CONNECT;
skb->protocol = x25_type_trans(skb, cprot->net_dev);
netif_rx(skb);
return 0;
@ -253,7 +253,7 @@ static int isdn_x25iface_disconn_ind(struct concap_proto *cprot)
*state_p = WAN_DISCONNECTED;
skb = dev_alloc_skb(1);
if( skb ){
*( skb_put(skb, 1) ) = 0x02;
*(skb_put(skb, 1)) = X25_IFACE_DISCONNECT;
skb->protocol = x25_type_trans(skb, cprot->net_dev);
netif_rx(skb);
return 0;
@ -272,9 +272,10 @@ static int isdn_x25iface_xmit(struct concap_proto *cprot, struct sk_buff *skb)
unsigned char firstbyte = skb->data[0];
enum wan_states *state = &((ix25_pdata_t*)cprot->proto_data)->state;
int ret = 0;
IX25DEBUG( "isdn_x25iface_xmit: %s first=%x state=%d \n", MY_DEVNAME(cprot -> net_dev), firstbyte, *state );
IX25DEBUG("isdn_x25iface_xmit: %s first=%x state=%d\n",
MY_DEVNAME(cprot->net_dev), firstbyte, *state);
switch ( firstbyte ){
case 0x00: /* dl_data request */
case X25_IFACE_DATA:
if( *state == WAN_CONNECTED ){
skb_pull(skb, 1);
cprot -> net_dev -> trans_start = jiffies;
@ -285,7 +286,7 @@ static int isdn_x25iface_xmit(struct concap_proto *cprot, struct sk_buff *skb)
}
illegal_state_warn( *state, firstbyte );
break;
case 0x01: /* dl_connect request */
case X25_IFACE_CONNECT:
if( *state == WAN_DISCONNECTED ){
*state = WAN_CONNECTING;
ret = cprot -> dops -> connect_req(cprot);
@ -298,7 +299,7 @@ static int isdn_x25iface_xmit(struct concap_proto *cprot, struct sk_buff *skb)
illegal_state_warn( *state, firstbyte );
}
break;
case 0x02: /* dl_disconnect request */
case X25_IFACE_DISCONNECT:
switch ( *state ){
case WAN_DISCONNECTED:
/* Should not happen. However, give upper layer a
@ -318,7 +319,7 @@ static int isdn_x25iface_xmit(struct concap_proto *cprot, struct sk_buff *skb)
illegal_state_warn( *state, firstbyte );
}
break;
case 0x03: /* changing lapb parameters requested */
case X25_IFACE_PARAMS:
printk(KERN_WARNING "isdn_x25iface_xmit: setting of lapb"
" options not yet supported\n");
break;


@ -1109,14 +1109,14 @@ static int dvb_net_feed_stop(struct net_device *dev)
}
static int dvb_set_mc_filter (struct net_device *dev, struct dev_mc_list *mc)
static int dvb_set_mc_filter(struct net_device *dev, unsigned char *addr)
{
struct dvb_net_priv *priv = netdev_priv(dev);
if (priv->multi_num == DVB_NET_MULTICAST_MAX)
return -ENOMEM;
memcpy(priv->multi_macs[priv->multi_num], mc->dmi_addr, 6);
memcpy(priv->multi_macs[priv->multi_num], addr, ETH_ALEN);
priv->multi_num++;
return 0;
@ -1140,8 +1140,7 @@ static void wq_set_multicast_list (struct work_struct *work)
dprintk("%s: allmulti mode\n", dev->name);
priv->rx_mode = RX_MODE_ALL_MULTI;
} else if (!netdev_mc_empty(dev)) {
int mci;
struct dev_mc_list *mc;
struct netdev_hw_addr *ha;
dprintk("%s: set_mc_list, %d entries\n",
dev->name, netdev_mc_count(dev));
@ -1149,11 +1148,8 @@ static void wq_set_multicast_list (struct work_struct *work)
priv->rx_mode = RX_MODE_MULTI;
priv->multi_num = 0;
for (mci = 0, mc=dev->mc_list;
mci < netdev_mc_count(dev);
mc = mc->next, mci++) {
dvb_set_mc_filter(dev, mc);
}
netdev_for_each_mc_addr(ha, dev)
dvb_set_mc_filter(dev, ha->addr);
}
netif_addr_unlock_bh(dev);


@ -480,7 +480,6 @@ static netdev_tx_t el_start_xmit(struct sk_buff *skb, struct net_device *dev)
/* fire ... Trigger xmit. */
outb(AX_XMIT, AX_CMD);
lp->loading = 0;
dev->trans_start = jiffies;
if (el_debug > 2)
pr_debug(" queued xmit.\n");
dev_kfree_skb(skb);
@ -727,7 +726,6 @@ static void el_receive(struct net_device *dev)
dev->stats.rx_packets++;
dev->stats.rx_bytes += pkt_len;
}
return;
}
/**


@ -380,6 +380,12 @@ out:
return retval;
}
static irqreturn_t el2_probe_interrupt(int irq, void *seen)
{
*(bool *)seen = true;
return IRQ_HANDLED;
}
static int
el2_open(struct net_device *dev)
{
@ -391,23 +397,35 @@ el2_open(struct net_device *dev)
outb(EGACFR_NORM, E33G_GACFR); /* Enable RAM and interrupts. */
do {
retval = request_irq(*irqp, NULL, 0, "bogus", dev);
if (retval >= 0) {
bool seen;
retval = request_irq(*irqp, el2_probe_interrupt, 0,
dev->name, &seen);
if (retval == -EBUSY)
continue;
if (retval < 0)
goto err_disable;
/* Twinkle the interrupt, and check if it's seen. */
unsigned long cookie = probe_irq_on();
seen = false;
smp_wmb();
outb_p(0x04 << ((*irqp == 9) ? 2 : *irqp), E33G_IDCFR);
outb_p(0x00, E33G_IDCFR);
if (*irqp == probe_irq_off(cookie) && /* It's a good IRQ line! */
((retval = request_irq(dev->irq = *irqp,
eip_interrupt, 0,
dev->name, dev)) == 0))
break;
} else {
if (retval != -EBUSY)
return retval;
}
msleep(1);
free_irq(*irqp, el2_probe_interrupt);
if (!seen)
continue;
retval = request_irq(dev->irq = *irqp, eip_interrupt, 0,
dev->name, dev);
if (retval == -EBUSY)
continue;
if (retval < 0)
goto err_disable;
} while (*++irqp);
if (*irqp == 0) {
err_disable:
outb(EGACFR_IRQOFF, E33G_GACFR); /* disable interrupts. */
return -EAGAIN;
}
@ -555,7 +573,6 @@ el2_block_output(struct net_device *dev, int count,
}
blocked:;
outb_p(ei_status.interface_num==0 ? ECNTRL_THIN : ECNTRL_AUI, E33G_CNTRL);
return;
}
/* Read the 4 byte, page aligned 8390 specific header. */
@ -671,7 +688,6 @@ el2_block_input(struct net_device *dev, int count, struct sk_buff *skb, int ring
}
blocked:;
outb_p(ei_status.interface_num == 0 ? ECNTRL_THIN : ECNTRL_AUI, E33G_CNTRL);
return;
}


@ -1055,7 +1055,7 @@ static void elp_timeout(struct net_device *dev)
(stat & ACRF) ? "interrupt" : "command");
if (elp_debug >= 1)
pr_debug("%s: status %#02x\n", dev->name, stat);
dev->trans_start = jiffies;
dev->trans_start = jiffies; /* prevent tx timeout */
dev->stats.tx_dropped++;
netif_wake_queue(dev);
}
@ -1093,11 +1093,6 @@ static netdev_tx_t elp_start_xmit(struct sk_buff *skb, struct net_device *dev)
if (elp_debug >= 3)
pr_debug("%s: packet of length %d sent\n", dev->name, (int) skb->len);
/*
* start the transmit timeout
*/
dev->trans_start = jiffies;
prime_rx(dev);
spin_unlock_irqrestore(&adapter->lock, flags);
netif_start_queue(dev);
@ -1216,7 +1211,7 @@ static int elp_close(struct net_device *dev)
static void elp_set_mc_list(struct net_device *dev)
{
elp_device *adapter = netdev_priv(dev);
struct dev_mc_list *dmi;
struct netdev_hw_addr *ha;
int i;
unsigned long flags;
@ -1231,8 +1226,9 @@ static void elp_set_mc_list(struct net_device *dev)
adapter->tx_pcb.command = CMD_LOAD_MULTICAST_LIST;
adapter->tx_pcb.length = 6 * netdev_mc_count(dev);
i = 0;
netdev_for_each_mc_addr(dmi, dev)
memcpy(adapter->tx_pcb.data.multicast[i++], dmi->dmi_addr, 6);
netdev_for_each_mc_addr(ha, dev)
memcpy(adapter->tx_pcb.data.multicast[i++],
ha->addr, 6);
adapter->got[CMD_LOAD_MULTICAST_LIST] = 0;
if (!send_pcb(dev, &adapter->tx_pcb))
pr_err("%s: couldn't send set_multicast command\n", dev->name);

View File

@ -449,7 +449,6 @@ static int __init el16_probe1(struct net_device *dev, int ioaddr)
pr_debug("%s", version);
lp = netdev_priv(dev);
memset(lp, 0, sizeof(*lp));
spin_lock_init(&lp->lock);
lp->base = ioremap(dev->mem_start, RX_BUF_END);
if (!lp->base) {
@ -505,7 +504,7 @@ static void el16_tx_timeout (struct net_device *dev)
outb (0, ioaddr + SIGNAL_CA); /* Issue channel-attn. */
lp->last_restart = dev->stats.tx_packets;
}
dev->trans_start = jiffies;
dev->trans_start = jiffies; /* prevent tx timeout */
netif_wake_queue (dev);
}
@ -529,7 +528,6 @@ static netdev_tx_t el16_send_packet (struct sk_buff *skb,
hardware_send_packet (dev, buf, skb->len, length - skb->len);
dev->trans_start = jiffies;
/* Enable the 82586 interrupt input. */
outb (0x84, ioaddr + MISC_CTRL);
@ -766,7 +764,6 @@ static void init_82586_mem(struct net_device *dev)
if (net_debug > 4)
pr_debug("%s: Initialized 82586, status %04x.\n", dev->name,
readw(shmem+iSCB_STATUS));
return;
}
static void hardware_send_packet(struct net_device *dev, void *buf, short length, short pad)

View File

@ -807,7 +807,7 @@ el3_tx_timeout (struct net_device *dev)
dev->name, inb(ioaddr + TX_STATUS), inw(ioaddr + EL3_STATUS),
inw(ioaddr + TX_FREE));
dev->stats.tx_errors++;
dev->trans_start = jiffies;
dev->trans_start = jiffies; /* prevent tx timeout */
/* Issue TX_RESET and TX_START commands. */
outw(TxReset, ioaddr + EL3_CMD);
outw(TxEnable, ioaddr + EL3_CMD);
@ -868,7 +868,6 @@ el3_start_xmit(struct sk_buff *skb, struct net_device *dev)
/* ... and the packet rounded to a doubleword. */
outsl(ioaddr + TX_FIFO, skb->data, (skb->len + 3) >> 2);
dev->trans_start = jiffies;
if (inw(ioaddr + TX_FREE) > 1536)
netif_start_queue(dev);
else
@ -1038,7 +1037,6 @@ static void update_stats(struct net_device *dev)
/* Back to window 1, and turn statistics back on. */
EL3WINDOW(1);
outw(StatsEnable, ioaddr + EL3_CMD);
return;
}
static int

View File

@ -958,7 +958,6 @@ static void corkscrew_timer(unsigned long data)
dev->name, media_tbl[dev->if_port].name);
#endif /* AUTOMEDIA */
return;
}
static void corkscrew_timeout(struct net_device *dev)
@ -992,7 +991,7 @@ static void corkscrew_timeout(struct net_device *dev)
if (!(inw(ioaddr + EL3_STATUS) & CmdInProgress))
break;
outw(TxEnable, ioaddr + EL3_CMD);
dev->trans_start = jiffies;
dev->trans_start = jiffies; /* prevent tx timeout */
dev->stats.tx_errors++;
dev->stats.tx_dropped++;
netif_wake_queue(dev);
@ -1055,7 +1054,6 @@ static netdev_tx_t corkscrew_start_xmit(struct sk_buff *skb,
prev_entry->status &= ~0x80000000;
netif_wake_queue(dev);
}
dev->trans_start = jiffies;
return NETDEV_TX_OK;
}
/* Put out the doubleword header... */
@ -1091,7 +1089,6 @@ static netdev_tx_t corkscrew_start_xmit(struct sk_buff *skb,
outw(SetTxThreshold + (1536 >> 2), ioaddr + EL3_CMD);
#endif /* bus master */
dev->trans_start = jiffies;
/* Clear the Tx status stack. */
{
@ -1518,7 +1515,6 @@ static void update_stats(int ioaddr, struct net_device *dev)
/* We change back to window 7 (not 1) with the Vortex. */
EL3WINDOW(7);
return;
}
/* This new version of set_rx_mode() supports v1.4 kernels.

View File

@ -503,7 +503,6 @@ static int __init do_elmc_probe(struct net_device *dev)
break;
}
memset(pr, 0, sizeof(struct priv));
pr->slot = slot;
pr_info("%s: 3Com 3c523 Rev 0x%x at %#lx\n", dev->name, (int) revision,
@ -624,7 +623,7 @@ static int init586(struct net_device *dev)
volatile struct iasetup_cmd_struct *ias_cmd;
volatile struct tdr_cmd_struct *tdr_cmd;
volatile struct mcsetup_cmd_struct *mc_cmd;
struct dev_mc_list *dmi;
struct netdev_hw_addr *ha;
int num_addrs = netdev_mc_count(dev);
ptr = (void *) ((char *) p->scb + sizeof(struct scb_struct));
@ -787,8 +786,9 @@ static int init586(struct net_device *dev)
mc_cmd->cmd_link = 0xffff;
mc_cmd->mc_cnt = num_addrs * 6;
i = 0;
netdev_for_each_mc_addr(dmi, dev)
memcpy((char *) mc_cmd->mc_list[i++], dmi->dmi_addr, 6);
netdev_for_each_mc_addr(ha, dev)
memcpy((char *) mc_cmd->mc_list[i++],
ha->addr, 6);
p->scb->cbl_offset = make16(mc_cmd);
p->scb->cmd = CUC_START;
elmc_id_attn586();
@ -1152,7 +1152,6 @@ static netdev_tx_t elmc_send_packet(struct sk_buff *skb, struct net_device *dev)
p->scb->cmd = CUC_START;
p->xmit_cmds[0]->cmd_status = 0;
elmc_attn586();
dev->trans_start = jiffies;
if (!i) {
dev_kfree_skb(skb);
}
@ -1176,7 +1175,6 @@ static netdev_tx_t elmc_send_packet(struct sk_buff *skb, struct net_device *dev)
p->xmit_cmds[0]->cmd_status = p->nop_cmds[next_nop]->cmd_status = 0;
p->nop_cmds[p->nop_point]->cmd_link = make16((p->xmit_cmds[0]));
dev->trans_start = jiffies;
p->nop_point = next_nop;
dev_kfree_skb(skb);
#endif
@ -1190,7 +1188,6 @@ static netdev_tx_t elmc_send_packet(struct sk_buff *skb, struct net_device *dev)
= make16((p->nop_cmds[next_nop]));
p->nop_cmds[next_nop]->cmd_status = 0;
p->nop_cmds[p->xmit_count]->cmd_link = make16((p->xmit_cmds[p->xmit_count]));
dev->trans_start = jiffies;
p->xmit_count = next_nop;
if (p->xmit_count != p->xmit_last)
netif_wake_queue(dev);

View File

@ -1533,7 +1533,7 @@ static void do_mc32_set_multicast_list(struct net_device *dev, int retry)
{
unsigned char block[62];
unsigned char *bp;
struct dev_mc_list *dmc;
struct netdev_hw_addr *ha;
if(retry==0)
lp->mc_list_valid = 0;
@ -1543,8 +1543,8 @@ static void do_mc32_set_multicast_list(struct net_device *dev, int retry)
block[0]=netdev_mc_count(dev);
bp=block+2;
netdev_for_each_mc_addr(dmc, dev) {
memcpy(bp, dmc->dmi_addr, 6);
netdev_for_each_mc_addr(ha, dev) {
memcpy(bp, ha->addr, 6);
bp+=6;
}
if(mc32_command_nowait(dev, 2, block,

View File

@ -1855,7 +1855,6 @@ leave_media_alone:
mod_timer(&vp->timer, RUN_AT(next_tick));
if (vp->deferred)
iowrite16(FakeIntr, ioaddr + EL3_CMD);
return;
}
static void vortex_tx_timeout(struct net_device *dev)
@ -1917,7 +1916,7 @@ static void vortex_tx_timeout(struct net_device *dev)
/* Issue Tx Enable */
iowrite16(TxEnable, ioaddr + EL3_CMD);
dev->trans_start = jiffies;
dev->trans_start = jiffies; /* prevent tx timeout */
/* Switch to register set 7 for normal use. */
EL3WINDOW(7);
@ -2063,7 +2062,6 @@ vortex_start_xmit(struct sk_buff *skb, struct net_device *dev)
}
}
dev->trans_start = jiffies;
/* Clear the Tx status stack. */
{
@ -2129,8 +2127,8 @@ boomerang_start_xmit(struct sk_buff *skb, struct net_device *dev)
int i;
vp->tx_ring[entry].frag[0].addr = cpu_to_le32(pci_map_single(VORTEX_PCI(vp), skb->data,
skb->len-skb->data_len, PCI_DMA_TODEVICE));
vp->tx_ring[entry].frag[0].length = cpu_to_le32(skb->len-skb->data_len);
skb_headlen(skb), PCI_DMA_TODEVICE));
vp->tx_ring[entry].frag[0].length = cpu_to_le32(skb_headlen(skb));
for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
@ -2174,7 +2172,6 @@ boomerang_start_xmit(struct sk_buff *skb, struct net_device *dev)
}
iowrite16(DownUnstall, ioaddr + EL3_CMD);
spin_unlock_irqrestore(&vp->lock, flags);
dev->trans_start = jiffies;
return NETDEV_TX_OK;
}
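
boomerang_start_xmit() above swaps the open-coded skb->len - skb->data_len for skb_headlen(skb); the two are equal (bytes in the skb's linear head, excluding paged frags), the helper just names the quantity. A sketch of the usual map-head-then-frags shape around it, with demo_map() a hypothetical stand-in for the driver's DMA-mapping call; the frag field names are the ones of this kernel generation.

#include <linux/skbuff.h>
#include <linux/mm.h>

/* Hypothetical stub for the driver's DMA-map call. */
static void demo_map(void *addr, unsigned int len)
{
}

static void demo_map_skb(struct sk_buff *skb)
{
	int i;

	/* linear head: skb_headlen(skb) == skb->len - skb->data_len */
	demo_map(skb->data, skb_headlen(skb));

	for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
		skb_frag_t *frag = &skb_shinfo(skb)->frags[i];

		demo_map(page_address(frag->page) + frag->page_offset,
			 frag->size);
	}
}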
@ -2800,7 +2797,6 @@ static void update_stats(void __iomem *ioaddr, struct net_device *dev)
}
EL3WINDOW(old_window >> 13);
return;
}
static int vortex_nway_reset(struct net_device *dev)
@ -3122,7 +3118,6 @@ static void mdio_write(struct net_device *dev, int phy_id, int location, int val
iowrite16(MDIO_ENB_IN | MDIO_SHIFT_CLK, mdio_addr);
mdio_delay();
}
return;
}
/* ACPI: Advanced Configuration and Power Interface. */

View File

@ -262,7 +262,7 @@ static int lance_reset (struct net_device *dev)
load_csrs (lp);
lance_init_ring (dev);
dev->trans_start = jiffies;
dev->trans_start = jiffies; /* prevent tx timeout */
status = init_restart_lance (lp);
#ifdef DEBUG_DRIVER
printk ("Lance restart=%d\n", status);
@ -526,7 +526,7 @@ void lance_tx_timeout(struct net_device *dev)
{
printk("lance_tx_timeout\n");
lance_reset(dev);
dev->trans_start = jiffies;
dev->trans_start = jiffies; /* prevent tx timeout */
netif_wake_queue (dev);
}
EXPORT_SYMBOL_GPL(lance_tx_timeout);
@ -574,7 +574,6 @@ int lance_start_xmit (struct sk_buff *skb, struct net_device *dev)
outs++;
/* Kick the lance: transmit now */
WRITERDP(lp, LE_C0_INEA | LE_C0_TDMD);
dev->trans_start = jiffies;
dev_kfree_skb (skb);
spin_lock_irqsave (&lp->devlock, flags);
@ -594,7 +593,7 @@ static void lance_load_multicast (struct net_device *dev)
struct lance_private *lp = netdev_priv(dev);
volatile struct lance_init_block *ib = lp->init_block;
volatile u16 *mcast_table = (u16 *)&ib->filter;
struct dev_mc_list *dmi;
struct netdev_hw_addr *ha;
char *addrs;
u32 crc;
@ -609,8 +608,8 @@ static void lance_load_multicast (struct net_device *dev)
ib->filter [1] = 0;
/* Add addresses */
netdev_for_each_mc_addr(dmi, dev) {
addrs = dmi->dmi_addr;
netdev_for_each_mc_addr(ha, dev) {
addrs = ha->addr;
/* multicast address? */
if (!(*addrs & 1))
@ -620,7 +619,6 @@ static void lance_load_multicast (struct net_device *dev)
crc = crc >> 26;
mcast_table [crc >> 4] |= 1 << (crc & 0xf);
}
return;
}

View File

@ -882,7 +882,6 @@ static netdev_tx_t cp_start_xmit (struct sk_buff *skb,
spin_unlock_irqrestore(&cp->lock, intr_flags);
cpw8(TxPoll, NormalTxPoll);
dev->trans_start = jiffies;
return NETDEV_TX_OK;
}
@ -910,11 +909,11 @@ static void __cp_set_rx_mode (struct net_device *dev)
rx_mode = AcceptBroadcast | AcceptMulticast | AcceptMyPhys;
mc_filter[1] = mc_filter[0] = 0xffffffff;
} else {
struct dev_mc_list *mclist;
struct netdev_hw_addr *ha;
rx_mode = AcceptBroadcast | AcceptMyPhys;
mc_filter[1] = mc_filter[0] = 0;
netdev_for_each_mc_addr(mclist, dev) {
int bit_nr = ether_crc(ETH_ALEN, mclist->dmi_addr) >> 26;
netdev_for_each_mc_addr(ha, dev) {
int bit_nr = ether_crc(ETH_ALEN, ha->addr) >> 26;
mc_filter[bit_nr >> 5] |= 1 << (bit_nr & 31);
rx_mode |= AcceptMulticast;
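
__cp_set_rx_mode() above shows the common 64-bit multicast hash these conversions preserve: the bit index is the top six bits of the big-endian Ethernet CRC of the address. Isolated into a sketch; demo_build_mc_hash is hypothetical, while ether_crc() and the iterator are the real helpers.

#include <linux/netdevice.h>
#include <linux/etherdevice.h>
#include <linux/crc32.h>

static void demo_build_mc_hash(struct net_device *dev, u32 mc_filter[2])
{
	struct netdev_hw_addr *ha;

	mc_filter[0] = mc_filter[1] = 0;
	netdev_for_each_mc_addr(ha, dev) {
		int bit_nr = ether_crc(ETH_ALEN, ha->addr) >> 26;

		mc_filter[bit_nr >> 5] |= 1 << (bit_nr & 31);
	}
}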
@ -1225,8 +1224,6 @@ static void cp_tx_timeout(struct net_device *dev)
netif_wake_queue(dev);
spin_unlock_irqrestore(&cp->lock, flags);
return;
}
#ifdef BROKEN

View File

@ -1716,8 +1716,6 @@ static netdev_tx_t rtl8139_start_xmit (struct sk_buff *skb,
RTL_W32_F (TxStatus0 + (entry * sizeof (u32)),
tp->tx_flag | max(len, (unsigned int)ETH_ZLEN));
dev->trans_start = jiffies;
tp->cur_tx++;
if ((tp->cur_tx - NUM_TX_DESC) == tp->dirty_tx)
@ -2503,11 +2501,11 @@ static void __set_rx_mode (struct net_device *dev)
rx_mode = AcceptBroadcast | AcceptMulticast | AcceptMyPhys;
mc_filter[1] = mc_filter[0] = 0xffffffff;
} else {
struct dev_mc_list *mclist;
struct netdev_hw_addr *ha;
rx_mode = AcceptBroadcast | AcceptMyPhys;
mc_filter[1] = mc_filter[0] = 0;
netdev_for_each_mc_addr(mclist, dev) {
int bit_nr = ether_crc(ETH_ALEN, mclist->dmi_addr) >> 26;
netdev_for_each_mc_addr(ha, dev) {
int bit_nr = ether_crc(ETH_ALEN, ha->addr) >> 26;
mc_filter[bit_nr >> 5] |= 1 << (bit_nr & 31);
rx_mode |= AcceptMulticast;

View File

@ -1050,7 +1050,7 @@ static void i596_tx_timeout (struct net_device *dev)
lp->last_restart = dev->stats.tx_packets;
}
dev->trans_start = jiffies;
dev->trans_start = jiffies; /* prevent tx timeout */
netif_wake_queue (dev);
}
@ -1060,7 +1060,6 @@ static netdev_tx_t i596_start_xmit(struct sk_buff *skb, struct net_device *dev)
struct tx_cmd *tx_cmd;
struct i596_tbd *tbd;
short length = skb->len;
dev->trans_start = jiffies;
DEB(DEB_STARTTX,printk(KERN_DEBUG "%s: i596_start_xmit(%x,%p) called\n",
dev->name, skb->len, skb->data));
@ -1542,7 +1541,7 @@ static void set_multicast_list(struct net_device *dev)
}
if (!netdev_mc_empty(dev)) {
struct dev_mc_list *dmi;
struct netdev_hw_addr *ha;
unsigned char *cp;
struct mc_cmd *cmd;
@ -1552,10 +1551,10 @@ static void set_multicast_list(struct net_device *dev)
cmd->cmd.command = CmdMulticastList;
cmd->mc_cnt = cnt * ETH_ALEN;
cp = cmd->mc_addrs;
netdev_for_each_mc_addr(dmi, dev) {
netdev_for_each_mc_addr(ha, dev) {
if (!cnt--)
break;
memcpy(cp, dmi->dmi_addr, ETH_ALEN);
memcpy(cp, ha->addr, ETH_ALEN);
if (i596_debug > 1)
DEB(DEB_MULTI,printk(KERN_INFO "%s: Adding address %pM\n",
dev->name, cp));

View File

@ -483,7 +483,7 @@ config XTENSA_XT2000_SONIC
This is the driver for the onboard card of the Xtensa XT2000 board.
config MIPS_AU1X00_ENET
bool "MIPS AU1000 Ethernet support"
tristate "MIPS AU1000 Ethernet support"
depends on SOC_AU1X00
select PHYLIB
select CRC32
@ -887,6 +887,13 @@ config BFIN_MAC_RMII
help
Use Reduced PHY MII Interface
config BFIN_MAC_USE_HWSTAMP
bool "Use IEEE 1588 hwstamp"
depends on BFIN_MAC && BF518
default y
help
To support the IEEE 1588 Precision Time Protocol (PTP), select y here
config SMC9194
tristate "SMC 9194 support"
depends on NET_VENDOR_SMC && (ISA || MAC && BROKEN)
@ -1453,20 +1460,6 @@ config FORCEDETH
To compile this driver as a module, choose M here. The module
will be called forcedeth.
config FORCEDETH_NAPI
bool "Use Rx Polling (NAPI) (EXPERIMENTAL)"
depends on FORCEDETH && EXPERIMENTAL
help
NAPI is a new driver API designed to reduce CPU and interrupt load
when the driver is receiving lots of packets from the card. It is
still somewhat experimental and thus not yet enabled by default.
If your estimated Rx load is 10kpps or more, or if the card will be
deployed on potentially unfriendly networks (e.g. in a firewall),
then say Y here.
If in doubt, say N.
config CS89x0
tristate "CS89x0 support"
depends on NET_ETHERNET && (ISA || EISA || MACH_IXDP2351 \
@ -1916,6 +1909,7 @@ config FEC
bool "FEC ethernet controller (of ColdFire and some i.MX CPUs)"
depends on M523x || M527x || M5272 || M528x || M520x || M532x || \
MACH_MX27 || ARCH_MX35 || ARCH_MX25 || ARCH_MX5
select PHYLIB
help
Say Y here if you want to use the built-in 10/100 Fast ethernet
controller on some Motorola ColdFire and Freescale i.MX processors.
@ -2434,8 +2428,8 @@ config MV643XX_ETH
config XILINX_LL_TEMAC
tristate "Xilinx LL TEMAC (LocalLink Tri-mode Ethernet MAC) driver"
depends on PPC || MICROBLAZE
select PHYLIB
depends on PPC_DCR_NATIVE
help
This driver supports the Xilinx 10/100/1000 LocalLink TEMAC
core used in Xilinx Spartan and Virtex FPGAs
@ -2618,11 +2612,11 @@ config EHEA
will be called ehea.
config ENIC
tristate "Cisco 10G Ethernet NIC support"
tristate "Cisco VIC Ethernet NIC Support"
depends on PCI && INET
select INET_LRO
help
This enables the support for the Cisco 10G Ethernet card.
This enables the support for the Cisco VIC Ethernet card.
config IXGBE
tristate "Intel(R) 10GbE PCI Express adapters support"
@ -2862,6 +2856,8 @@ source "drivers/ieee802154/Kconfig"
source "drivers/s390/net/Kconfig"
source "drivers/net/caif/Kconfig"
config XEN_NETDEV_FRONTEND
tristate "Xen network device frontend driver"
depends on XEN
@ -3180,17 +3176,12 @@ config PPPOATM
config PPPOL2TP
tristate "PPP over L2TP (EXPERIMENTAL)"
depends on EXPERIMENTAL && PPP && INET
depends on EXPERIMENTAL && L2TP && PPP
help
Support for PPP-over-L2TP socket family. L2TP is a protocol
used by ISPs and enterprises to tunnel PPP traffic over UDP
tunnels. L2TP is replacing PPTP for VPN uses.
This kernel component handles only L2TP data packets: a
userland daemon handles L2TP the control protocol (tunnel
and session setup). One such daemon is OpenL2TP
(http://openl2tp.sourceforge.net/).
config SLIP
tristate "SLIP (serial line) support"
---help---
@ -3277,15 +3268,14 @@ config NET_FC
"SCSI generic support".
config NETCONSOLE
tristate "Network console logging support (EXPERIMENTAL)"
depends on EXPERIMENTAL
tristate "Network console logging support"
---help---
If you want to log kernel messages over the network, enable this.
See <file:Documentation/networking/netconsole.txt> for details.
config NETCONSOLE_DYNAMIC
bool "Dynamic reconfiguration of logging targets (EXPERIMENTAL)"
depends on NETCONSOLE && SYSFS && EXPERIMENTAL
bool "Dynamic reconfiguration of logging targets"
depends on NETCONSOLE && SYSFS
select CONFIGFS_FS
help
This option enables the ability to dynamically reconfigure target

View File

@ -161,7 +161,7 @@ obj-$(CONFIG_PPP_DEFLATE) += ppp_deflate.o
obj-$(CONFIG_PPP_BSDCOMP) += bsd_comp.o
obj-$(CONFIG_PPP_MPPE) += ppp_mppe.o
obj-$(CONFIG_PPPOE) += pppox.o pppoe.o
obj-$(CONFIG_PPPOL2TP) += pppox.o pppol2tp.o
obj-$(CONFIG_PPPOL2TP) += pppox.o
obj-$(CONFIG_SLIP) += slip.o
obj-$(CONFIG_SLHC) += slhc.o
@ -292,5 +292,6 @@ obj-$(CONFIG_VIRTIO_NET) += virtio_net.o
obj-$(CONFIG_SFC) += sfc/
obj-$(CONFIG_WIMAX) += wimax/
obj-$(CONFIG_CAIF) += caif/
obj-$(CONFIG_OCTEON_MGMT_ETHERNET) += octeon/

View File

@ -525,7 +525,7 @@ static inline int lance_reset (struct net_device *dev)
load_csrs (lp);
lance_init_ring (dev);
dev->trans_start = jiffies;
dev->trans_start = jiffies; /* prevent tx timeout */
netif_start_queue(dev);
status = init_restart_lance (lp);
@ -588,7 +588,6 @@ static netdev_tx_t lance_start_xmit (struct sk_buff *skb,
/* Kick the lance: transmit now */
ll->rdp = LE_C0_INEA | LE_C0_TDMD;
dev->trans_start = jiffies;
dev_kfree_skb (skb);
local_irq_restore(flags);
@ -602,7 +601,7 @@ static void lance_load_multicast (struct net_device *dev)
struct lance_private *lp = netdev_priv(dev);
volatile struct lance_init_block *ib = lp->init_block;
volatile u16 *mcast_table = (u16 *)&ib->filter;
struct dev_mc_list *dmi;
struct netdev_hw_addr *ha;
char *addrs;
u32 crc;
@ -617,8 +616,8 @@ static void lance_load_multicast (struct net_device *dev)
ib->filter [1] = 0;
/* Add addresses */
netdev_for_each_mc_addr(dmi, dev) {
addrs = dmi->dmi_addr;
netdev_for_each_mc_addr(ha, dev) {
addrs = ha->addr;
/* multicast address? */
if (!(*addrs & 1))
@ -628,7 +627,6 @@ static void lance_load_multicast (struct net_device *dev)
crc = crc >> 26;
mcast_table [crc >> 4] |= 1 << (crc & 0xf);
}
return;
}
static void lance_set_multicast (struct net_device *dev)

View File

@ -307,8 +307,6 @@ static void ac_reset_8390(struct net_device *dev)
ei_status.txing = 0;
outb(AC_ENABLE, ioaddr + AC_RESET_PORT);
if (ei_debug > 1) printk("reset done\n");
return;
}
/* Grab the 8390 specific header. Similar to the block_input routine, but

View File

@ -661,7 +661,7 @@ static void __devexit acenic_remove_one(struct pci_dev *pdev)
dma_addr_t mapping;
ringp = &ap->skb->rx_std_skbuff[i];
mapping = pci_unmap_addr(ringp, mapping);
mapping = dma_unmap_addr(ringp, mapping);
pci_unmap_page(ap->pdev, mapping,
ACE_STD_BUFSIZE,
PCI_DMA_FROMDEVICE);
@ -681,7 +681,7 @@ static void __devexit acenic_remove_one(struct pci_dev *pdev)
dma_addr_t mapping;
ringp = &ap->skb->rx_mini_skbuff[i];
mapping = pci_unmap_addr(ringp,mapping);
mapping = dma_unmap_addr(ringp,mapping);
pci_unmap_page(ap->pdev, mapping,
ACE_MINI_BUFSIZE,
PCI_DMA_FROMDEVICE);
@ -700,7 +700,7 @@ static void __devexit acenic_remove_one(struct pci_dev *pdev)
dma_addr_t mapping;
ringp = &ap->skb->rx_jumbo_skbuff[i];
mapping = pci_unmap_addr(ringp, mapping);
mapping = dma_unmap_addr(ringp, mapping);
pci_unmap_page(ap->pdev, mapping,
ACE_JUMBO_BUFSIZE,
PCI_DMA_FROMDEVICE);
@ -1683,7 +1683,7 @@ static void ace_load_std_rx_ring(struct ace_private *ap, int nr_bufs)
ACE_STD_BUFSIZE,
PCI_DMA_FROMDEVICE);
ap->skb->rx_std_skbuff[idx].skb = skb;
pci_unmap_addr_set(&ap->skb->rx_std_skbuff[idx],
dma_unmap_addr_set(&ap->skb->rx_std_skbuff[idx],
mapping, mapping);
rd = &ap->rx_std_ring[idx];
@ -1744,7 +1744,7 @@ static void ace_load_mini_rx_ring(struct ace_private *ap, int nr_bufs)
ACE_MINI_BUFSIZE,
PCI_DMA_FROMDEVICE);
ap->skb->rx_mini_skbuff[idx].skb = skb;
pci_unmap_addr_set(&ap->skb->rx_mini_skbuff[idx],
dma_unmap_addr_set(&ap->skb->rx_mini_skbuff[idx],
mapping, mapping);
rd = &ap->rx_mini_ring[idx];
@ -1800,7 +1800,7 @@ static void ace_load_jumbo_rx_ring(struct ace_private *ap, int nr_bufs)
ACE_JUMBO_BUFSIZE,
PCI_DMA_FROMDEVICE);
ap->skb->rx_jumbo_skbuff[idx].skb = skb;
pci_unmap_addr_set(&ap->skb->rx_jumbo_skbuff[idx],
dma_unmap_addr_set(&ap->skb->rx_jumbo_skbuff[idx],
mapping, mapping);
rd = &ap->rx_jumbo_ring[idx];
@ -2013,7 +2013,7 @@ static void ace_rx_int(struct net_device *dev, u32 rxretprd, u32 rxretcsm)
skb = rip->skb;
rip->skb = NULL;
pci_unmap_page(ap->pdev,
pci_unmap_addr(rip, mapping),
dma_unmap_addr(rip, mapping),
mapsize,
PCI_DMA_FROMDEVICE);
skb_put(skb, retdesc->size);
@ -2078,18 +2078,16 @@ static inline void ace_tx_int(struct net_device *dev,
do {
struct sk_buff *skb;
dma_addr_t mapping;
struct tx_ring_info *info;
info = ap->skb->tx_skbuff + idx;
skb = info->skb;
mapping = pci_unmap_addr(info, mapping);
if (mapping) {
pci_unmap_page(ap->pdev, mapping,
pci_unmap_len(info, maplen),
if (dma_unmap_len(info, maplen)) {
pci_unmap_page(ap->pdev, dma_unmap_addr(info, mapping),
dma_unmap_len(info, maplen),
PCI_DMA_TODEVICE);
pci_unmap_addr_set(info, mapping, 0);
dma_unmap_len_set(info, maplen, 0);
}
if (skb) {
@ -2377,14 +2375,12 @@ static int ace_close(struct net_device *dev)
for (i = 0; i < ACE_TX_RING_ENTRIES(ap); i++) {
struct sk_buff *skb;
dma_addr_t mapping;
struct tx_ring_info *info;
info = ap->skb->tx_skbuff + i;
skb = info->skb;
mapping = pci_unmap_addr(info, mapping);
if (mapping) {
if (dma_unmap_len(info, maplen)) {
if (ACE_IS_TIGON_I(ap)) {
/* NB: TIGON_1 is special, tx_ring is in io space */
struct tx_desc __iomem *tx;
@ -2395,10 +2391,10 @@ static int ace_close(struct net_device *dev)
} else
memset(ap->tx_ring + i, 0,
sizeof(struct tx_desc));
pci_unmap_page(ap->pdev, mapping,
pci_unmap_len(info, maplen),
pci_unmap_page(ap->pdev, dma_unmap_addr(info, mapping),
dma_unmap_len(info, maplen),
PCI_DMA_TODEVICE);
pci_unmap_addr_set(info, mapping, 0);
dma_unmap_len_set(info, maplen, 0);
}
if (skb) {
dev_kfree_skb(skb);
@ -2433,8 +2429,8 @@ ace_map_tx_skb(struct ace_private *ap, struct sk_buff *skb,
info = ap->skb->tx_skbuff + idx;
info->skb = tail;
pci_unmap_addr_set(info, mapping, mapping);
pci_unmap_len_set(info, maplen, skb->len);
dma_unmap_addr_set(info, mapping, mapping);
dma_unmap_len_set(info, maplen, skb->len);
return mapping;
}
@ -2553,8 +2549,8 @@ restart:
} else {
info->skb = NULL;
}
pci_unmap_addr_set(info, mapping, mapping);
pci_unmap_len_set(info, maplen, frag->size);
dma_unmap_addr_set(info, mapping, mapping);
dma_unmap_len_set(info, maplen, frag->size);
ace_load_tx_bd(ap, desc, mapping, flagsize, vlan_tag);
}
}
@ -2923,8 +2919,6 @@ static void __devinit ace_clear(struct ace_regs __iomem *regs, u32 dest, int siz
dest += tsize;
size -= tsize;
}
return;
}

View File

@ -589,7 +589,7 @@ struct ace_info {
struct ring_info {
struct sk_buff *skb;
DECLARE_PCI_UNMAP_ADDR(mapping)
DEFINE_DMA_UNMAP_ADDR(mapping);
};
@ -600,8 +600,8 @@ struct ring_info {
*/
struct tx_ring_info {
struct sk_buff *skb;
DECLARE_PCI_UNMAP_ADDR(mapping)
DECLARE_PCI_UNMAP_LEN(maplen)
DEFINE_DMA_UNMAP_ADDR(mapping);
DEFINE_DMA_UNMAP_LEN(maplen);
};
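
acenic's switch from DECLARE_PCI_UNMAP_ADDR()/pci_unmap_addr*() to DEFINE_DMA_UNMAP_ADDR()/dma_unmap_addr*() (plus the _LEN variants) moves it onto the bus-agnostic bookkeeping macros, which compile to nothing on architectures that keep no unmap state; the hunks also start treating a non-zero stored length, rather than the address, as "this entry is mapped". A sketch under those assumptions; demo_ring_info and demo_unmap are hypothetical.

#include <linux/pci.h>
#include <linux/dma-mapping.h>
#include <linux/skbuff.h>

struct demo_ring_info {
	struct sk_buff *skb;
	DEFINE_DMA_UNMAP_ADDR(mapping);
	DEFINE_DMA_UNMAP_LEN(maplen);
};

static void demo_unmap(struct pci_dev *pdev, struct demo_ring_info *info)
{
	if (dma_unmap_len(info, maplen)) {
		pci_unmap_page(pdev, dma_unmap_addr(info, mapping),
			       dma_unmap_len(info, maplen),
			       PCI_DMA_TODEVICE);
		dma_unmap_len_set(info, maplen, 0);
	}
}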

View File

@ -1339,8 +1339,6 @@ static netdev_tx_t amd8111e_start_xmit(struct sk_buff *skb,
writel( VAL1 | TDMD0, lp->mmio + CMD0);
writel( VAL2 | RDMD0,lp->mmio + CMD0);
dev->trans_start = jiffies;
if(amd8111e_tx_queue_avail(lp) < 0){
netif_stop_queue(dev);
}
@ -1376,7 +1374,7 @@ list to the device.
*/
static void amd8111e_set_multicast_list(struct net_device *dev)
{
struct dev_mc_list *mc_ptr;
struct netdev_hw_addr *ha;
struct amd8111e_priv *lp = netdev_priv(dev);
u32 mc_filter[2] ;
int bit_num;
@ -1407,8 +1405,8 @@ static void amd8111e_set_multicast_list(struct net_device *dev)
/* load all the multicast addresses in the logic filter */
lp->options |= OPTION_MULTICAST_ENABLE;
mc_filter[1] = mc_filter[0] = 0;
netdev_for_each_mc_addr(mc_ptr, dev) {
bit_num = (ether_crc_le(ETH_ALEN, mc_ptr->dmi_addr) >> 26) & 0x3f;
netdev_for_each_mc_addr(ha, dev) {
bit_num = (ether_crc_le(ETH_ALEN, ha->addr) >> 26) & 0x3f;
mc_filter[bit_num >> 5] |= 1 << (bit_num & 31);
}
amd8111e_writeq(*(u64*)mc_filter,lp->mmio+ LADRF);

View File

@ -521,7 +521,6 @@ apne_block_output(struct net_device *dev, int count,
outb(ENISR_RDC, nic_base + NE_EN0_ISR); /* Ack intr. */
ei_status.dmaing &= ~0x01;
return;
}
static irqreturn_t apne_interrupt(int irq, void *dev_id)

View File

@ -593,8 +593,6 @@ static void cops_load (struct net_device *dev)
tangent_wait_reset(ioaddr);
inb(ioaddr); /* Clear initial ready signal. */
}
return;
}
/*
@ -701,8 +699,6 @@ static void cops_poll(unsigned long ltdev)
/* poll 20 times per second */
cops_timer.expires = jiffies + HZ/20;
add_timer(&cops_timer);
return;
}
/*
@ -866,7 +862,7 @@ static void cops_timeout(struct net_device *dev)
}
printk(KERN_WARNING "%s: Transmit timed out.\n", dev->name);
cops_jumpstart(dev); /* Restart the card. */
dev->trans_start = jiffies;
dev->trans_start = jiffies; /* prevent tx timeout */
netif_wake_queue(dev);
}
@ -919,7 +915,6 @@ static netdev_tx_t cops_send_packet(struct sk_buff *skb,
/* Done sending packet, update counters and cleanup. */
dev->stats.tx_packets++;
dev->stats.tx_bytes += skb->len;
dev->trans_start = jiffies;
dev_kfree_skb (skb);
return NETDEV_TX_OK;
}

View File

@ -641,7 +641,6 @@ done:
inb_p(base+7);
inb_p(base+7);
}
return;
}

View File

@ -654,7 +654,6 @@ netdev_tx_t arcnet_send_packet(struct sk_buff *skb,
}
}
retval = NETDEV_TX_OK;
dev->trans_start = jiffies;
lp->next_tx = txbuf;
} else {
retval = NETDEV_TX_BUSY;

View File

@ -164,8 +164,8 @@ static DEFINE_PCI_DEVICE_TABLE(com20020pci_id_table) = {
{ 0x1571, 0xa204, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ARC_CAN_10MBIT },
{ 0x1571, 0xa205, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ARC_CAN_10MBIT },
{ 0x1571, 0xa206, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ARC_CAN_10MBIT },
{ 0x10B5, 0x9030, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ARC_CAN_10MBIT },
{ 0x10B5, 0x9050, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ARC_CAN_10MBIT },
{ 0x10B5, 0x9030, 0x10B5, 0x2978, 0, 0, ARC_CAN_10MBIT },
{ 0x10B5, 0x9050, 0x10B5, 0x2273, 0, 0, ARC_CAN_10MBIT },
{ 0x14BA, 0x6000, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ARC_CAN_10MBIT },
{ 0x10B5, 0x2200, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ARC_CAN_10MBIT },
{0,}

View File

@ -677,8 +677,6 @@ static netdev_tx_t ariadne_start_xmit(struct sk_buff *skb,
lance->RAP = CSR0; /* PCnet-ISA Controller Status */
lance->RDP = INEA|TDMD;
dev->trans_start = jiffies;
if (lowb(priv->tx_ring[(entry+1) % TX_RING_SIZE]->TMD1) != 0) {
netif_stop_queue(dev);
priv->tx_full = 1;

View File

@ -383,12 +383,12 @@ static void am79c961_setmulticastlist (struct net_device *dev)
} else if (dev->flags & IFF_ALLMULTI) {
memset(multi_hash, 0xff, sizeof(multi_hash));
} else {
struct dev_mc_list *dmi;
struct netdev_hw_addr *ha;
memset(multi_hash, 0x00, sizeof(multi_hash));
netdev_for_each_mc_addr(dmi, dev)
am79c961_mc_hash(dmi->dmi_addr, multi_hash);
netdev_for_each_mc_addr(ha, dev)
am79c961_mc_hash(ha->addr, multi_hash);
}
spin_lock_irqsave(&priv->chip_lock, flags);
@ -469,7 +469,6 @@ am79c961_sendpacket(struct sk_buff *skb, struct net_device *dev)
spin_lock_irqsave(&priv->chip_lock, flags);
write_rreg (dev->base_addr, CSR0, CSR0_TDMD|CSR0_IENA);
dev->trans_start = jiffies;
spin_unlock_irqrestore(&priv->chip_lock, flags);
/*

View File

@ -557,14 +557,14 @@ static int hash_get_index(__u8 *addr)
*/
static void at91ether_sethashtable(struct net_device *dev)
{
struct dev_mc_list *curr;
struct netdev_hw_addr *ha;
unsigned long mc_filter[2];
unsigned int bitnr;
mc_filter[0] = mc_filter[1] = 0;
netdev_for_each_mc_addr(curr, dev) {
bitnr = hash_get_index(curr->dmi_addr);
netdev_for_each_mc_addr(ha, dev) {
bitnr = hash_get_index(ha->addr);
mc_filter[bitnr >> 5] |= 1 << (bitnr & 31);
}
@ -824,7 +824,6 @@ static int at91ether_start_xmit(struct sk_buff *skb, struct net_device *dev)
/* Set length of the packet in the Transmit Control register */
at91_emac_write(AT91_EMAC_TCR, skb->len);
dev->trans_start = jiffies;
} else {
printk(KERN_ERR "at91_ether.c: at91ether_start_xmit() called, but device is busy!\n");
return NETDEV_TX_BUSY; /* if we return anything but zero, dev.c:1055 calls kfree_skb(skb)

View File

@ -374,8 +374,6 @@ static int ep93xx_xmit(struct sk_buff *skb, struct net_device *dev)
skb->len, DMA_TO_DEVICE);
dev_kfree_skb(skb);
dev->trans_start = jiffies;
spin_lock_irq(&ep->tx_pending_lock);
ep->tx_pending++;
if (ep->tx_pending == TX_QUEUE_ENTRIES)

View File

@ -736,7 +736,6 @@ ether1_sendpacket (struct sk_buff *skb, struct net_device *dev)
local_irq_restore(flags);
/* handle transmit */
dev->trans_start = jiffies;
/* check to see if we have room for a full sized ether frame */
tmp = priv(dev)->tx_head;

View File

@ -529,7 +529,6 @@ ether3_sendpacket(struct sk_buff *skb, struct net_device *dev)
return NETDEV_TX_BUSY; /* unable to queue */
}
dev->trans_start = jiffies;
ptr = 0x600 * priv(dev)->tx_head;
priv(dev)->tx_head = next_ptr;
next_ptr *= 0x600;

View File

@ -708,7 +708,6 @@ static int eth_xmit(struct sk_buff *skb, struct net_device *dev)
/* NPE firmware pads short frames with zeros internally */
wmb();
queue_put_desc(TX_QUEUE(port->id), tx_desc_phys(port, n), desc);
dev->trans_start = jiffies;
if (qmgr_stat_below_low_watermark(txreadyq)) { /* empty */
#if DEBUG_TX
@ -736,7 +735,7 @@ static int eth_xmit(struct sk_buff *skb, struct net_device *dev)
static void eth_set_mcast_list(struct net_device *dev)
{
struct port *port = netdev_priv(dev);
struct dev_mc_list *mclist;
struct netdev_hw_addr *ha;
u8 diffs[ETH_ALEN], *addr;
int i;
@ -749,11 +748,11 @@ static void eth_set_mcast_list(struct net_device *dev)
memset(diffs, 0, ETH_ALEN);
addr = NULL;
netdev_for_each_mc_addr(mclist, dev) {
netdev_for_each_mc_addr(ha, dev) {
if (!addr)
addr = mclist->dmi_addr; /* first MAC address */
addr = ha->addr; /* first MAC address */
for (i = 0; i < ETH_ALEN; i++)
diffs[i] |= addr[i] ^ mclist->dmi_addr[i];
diffs[i] |= addr[i] ^ ha->addr[i];
}
for (i = 0; i < ETH_ALEN; i++) {
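
The eth_set_mcast_list() hunk above reduces the whole multicast list to one address/mask pair: the first address serves as the base, and every bit position where a later address differs is accumulated into diffs. The truncated tail presumably writes the base address and ~diffs (the bits on which all addresses agree) into the hardware's match/mask registers; that last step is an assumption here. A sketch mirroring the accumulation:

#include <linux/netdevice.h>
#include <linux/etherdevice.h>

/* Hypothetical helper mirroring the diffs accumulation in the hunk. */
static void demo_mc_addr_mask(struct net_device *dev,
			      u8 addr[ETH_ALEN], u8 mask[ETH_ALEN])
{
	struct netdev_hw_addr *ha;
	u8 diffs[ETH_ALEN] = { 0 };
	const u8 *base = NULL;
	int i;

	netdev_for_each_mc_addr(ha, dev) {
		if (!base)
			base = ha->addr;	/* first address is the base */
		for (i = 0; i < ETH_ALEN; i++)
			diffs[i] |= base[i] ^ ha->addr[i];
	}

	for (i = 0; i < ETH_ALEN; i++) {
		addr[i] = base ? base[i] : 0;
		mask[i] = ~diffs[i];	/* assumption: mask = agreement bits */
	}
}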

View File

@ -332,16 +332,16 @@ ks8695_init_partial_multicast(struct ks8695_priv *ksp,
{
u32 low, high;
int i;
struct dev_mc_list *dmi;
struct netdev_hw_addr *ha;
i = 0;
netdev_for_each_mc_addr(dmi, ndev) {
netdev_for_each_mc_addr(ha, ndev) {
/* Ran out of space in chip? */
BUG_ON(i == KS8695_NR_ADDRESSES);
low = (dmi->dmi_addr[2] << 24) | (dmi->dmi_addr[3] << 16) |
(dmi->dmi_addr[4] << 8) | (dmi->dmi_addr[5]);
high = (dmi->dmi_addr[0] << 8) | (dmi->dmi_addr[1]);
low = (ha->addr[2] << 24) | (ha->addr[3] << 16) |
(ha->addr[4] << 8) | (ha->addr[5]);
high = (ha->addr[0] << 8) | (ha->addr[1]);
ks8695_writereg(ksp, KS8695_AAL_(i), low);
ks8695_writereg(ksp, KS8695_AAH_(i), AAH_E | high);
@ -1302,8 +1302,6 @@ ks8695_start_xmit(struct sk_buff *skb, struct net_device *ndev)
if (++ksp->tx_ring_used == MAX_TX_DESC)
netif_stop_queue(ndev);
ndev->trans_start = jiffies;
/* Kick the TX DMA in case it decided to go IDLE */
ks8695_writereg(ksp, KS8695_DTSC, 0);
@ -1472,7 +1470,6 @@ ks8695_probe(struct platform_device *pdev)
/* Configure our private structure a little */
ksp = netdev_priv(ndev);
memset(ksp, 0, sizeof(struct ks8695_priv));
ksp->dev = &pdev->dev;
ksp->ndev = ndev;

View File

@ -483,7 +483,7 @@ static void w90p910_reset_mac(struct net_device *dev)
w90p910_init_desc(dev);
dev->trans_start = jiffies;
dev->trans_start = jiffies; /* prevent tx timeout */
ether->cur_tx = 0x0;
ether->finish_tx = 0x0;
ether->cur_rx = 0x0;
@ -497,7 +497,7 @@ static void w90p910_reset_mac(struct net_device *dev)
w90p910_trigger_tx(dev);
w90p910_trigger_rx(dev);
dev->trans_start = jiffies;
dev->trans_start = jiffies; /* prevent tx timeout */
if (netif_queue_stopped(dev))
netif_wake_queue(dev);
@ -634,8 +634,6 @@ static int w90p910_send_frame(struct net_device *dev,
txbd = &ether->tdesc->desclist[ether->cur_tx];
dev->trans_start = jiffies;
if (txbd->mode & TX_OWEN_DMA)
netif_stop_queue(dev);
@ -744,7 +742,6 @@ static void netdev_rx(struct net_device *dev)
return;
}
skb->dev = dev;
skb_reserve(skb, 2);
skb_put(skb, length);
skb_copy_to_linear_data(skb, data, length);

View File

@ -583,7 +583,7 @@ static void net_tx_timeout (struct net_device *dev)
outb (0x00, ioaddr + TX_START);
outb (0x03, ioaddr + COL16CNTL);
dev->trans_start = jiffies;
dev->trans_start = jiffies; /* prevent tx timeout */
lp->tx_started = 0;
lp->tx_queue_ready = 1;
@ -636,7 +636,6 @@ static netdev_tx_t net_send_packet (struct sk_buff *skb,
outb (0x80 | lp->tx_queue, ioaddr + TX_START);
lp->tx_queue = 0;
lp->tx_queue_len = 0;
dev->trans_start = jiffies;
lp->tx_started = 1;
netif_start_queue (dev);
} else if (lp->tx_queue_len < 4096 - 1502)
@ -796,7 +795,6 @@ net_rx(struct net_device *dev)
printk("%s: Exint Rx packet with mode %02x after %d ticks.\n",
dev->name, inb(ioaddr + RX_MODE), i);
}
return;
}
/* The inverse routine to net_open(). */
@ -847,12 +845,12 @@ set_rx_mode(struct net_device *dev)
memset(mc_filter, 0x00, sizeof(mc_filter));
outb(1, ioaddr + RX_MODE); /* Ignore almost all multicasts. */
} else {
struct dev_mc_list *mclist;
struct netdev_hw_addr *ha;
memset(mc_filter, 0, sizeof(mc_filter));
netdev_for_each_mc_addr(mclist, dev) {
netdev_for_each_mc_addr(ha, dev) {
unsigned int bit =
ether_crc_le(ETH_ALEN, mclist->dmi_addr) >> 26;
ether_crc_le(ETH_ALEN, ha->addr) >> 26;
mc_filter[bit >> 3] |= (1 << bit);
}
outb(0x02, ioaddr + RX_MODE); /* Use normal mode. */
@ -870,7 +868,6 @@ set_rx_mode(struct net_device *dev)
outw(saved_bank, ioaddr + CONFIG_0);
}
spin_unlock_irqrestore (&lp->lock, flags);
return;
}
#ifdef MODULE

View File

@ -767,7 +767,7 @@ static void lance_tx_timeout (struct net_device *dev)
/* lance_restart, essentially */
lance_init_ring(dev);
REGA( CSR0 ) = CSR0_INEA | CSR0_INIT | CSR0_STRT;
dev->trans_start = jiffies;
dev->trans_start = jiffies; /* prevent tx timeout */
netif_wake_queue(dev);
}
@ -836,7 +836,6 @@ static int lance_start_xmit( struct sk_buff *skb, struct net_device *dev )
/* Trigger an immediate send poll. */
DREG = CSR0_INEA | CSR0_TDMD;
dev->trans_start = jiffies;
if ((MEM->tx_head[(entry+1) & TX_RING_MOD_MASK].flag & TMD1_OWN) ==
TMD1_OWN_HOST)

View File

@ -263,8 +263,6 @@ static void atl1c_get_wol(struct net_device *netdev,
wol->wolopts |= WAKE_MAGIC;
if (adapter->wol & AT_WUFC_LNKC)
wol->wolopts |= WAKE_PHY;
return;
}
static int atl1c_set_wol(struct net_device *netdev, struct ethtool_wolinfo *wol)

View File

@ -317,8 +317,6 @@ static void atl1c_common_task(struct work_struct *work)
if (adapter->work_event & ATL1C_WORK_EVENT_LINK_CHANGE)
atl1c_check_link_status(adapter);
return;
}
@ -354,7 +352,7 @@ static void atl1c_set_multi(struct net_device *netdev)
{
struct atl1c_adapter *adapter = netdev_priv(netdev);
struct atl1c_hw *hw = &adapter->hw;
struct dev_mc_list *mc_ptr;
struct netdev_hw_addr *ha;
u32 mac_ctrl_data;
u32 hash_value;
@ -377,8 +375,8 @@ static void atl1c_set_multi(struct net_device *netdev)
AT_WRITE_REG_ARRAY(hw, REG_RX_HASH_TABLE, 1, 0);
/* comoute mc addresses' hash value ,and put it into hash table */
netdev_for_each_mc_addr(mc_ptr, netdev) {
hash_value = atl1c_hash_mc_addr(hw, mc_ptr->dmi_addr);
netdev_for_each_mc_addr(ha, netdev) {
hash_value = atl1c_hash_mc_addr(hw, ha->addr);
atl1c_hash_set(hw, hash_value);
}
}
@ -1817,7 +1815,6 @@ rrs_checked:
atl1c_clean_rfd(rfd_ring, rrs, rfd_num);
skb_put(skb, length - ETH_FCS_LEN);
skb->protocol = eth_type_trans(skb, netdev);
skb->dev = netdev;
atl1c_rx_checksum(adapter, skb, rrs);
if (unlikely(adapter->vlgrp) && rrs->word3 & RRS_VLAN_INS) {
u16 vlan;

View File

@ -338,8 +338,6 @@ static void atl1e_get_wol(struct net_device *netdev,
wol->wolopts |= WAKE_MAGIC;
if (adapter->wol & AT_WUFC_LNKC)
wol->wolopts |= WAKE_PHY;
return;
}
static int atl1e_set_wol(struct net_device *netdev, struct ethtool_wolinfo *wol)

View File

@ -284,7 +284,7 @@ static void atl1e_set_multi(struct net_device *netdev)
{
struct atl1e_adapter *adapter = netdev_priv(netdev);
struct atl1e_hw *hw = &adapter->hw;
struct dev_mc_list *mc_ptr;
struct netdev_hw_addr *ha;
u32 mac_ctrl_data = 0;
u32 hash_value;
@ -307,8 +307,8 @@ static void atl1e_set_multi(struct net_device *netdev)
AT_WRITE_REG_ARRAY(hw, REG_RX_HASH_TABLE, 1, 0);
/* comoute mc addresses' hash value ,and put it into hash table */
netdev_for_each_mc_addr(mc_ptr, netdev) {
hash_value = atl1e_hash_mc_addr(hw, mc_ptr->dmi_addr);
netdev_for_each_mc_addr(ha, netdev) {
hash_value = atl1e_hash_mc_addr(hw, ha->addr);
atl1e_hash_set(hw, hash_value);
}
}
@ -707,8 +707,6 @@ static void atl1e_init_ring_resources(struct atl1e_adapter *adapter)
adapter->ring_vir_addr = NULL;
adapter->rx_ring.desc = NULL;
rwlock_init(&adapter->tx_ring.tx_lock);
return;
}
/*
@ -905,8 +903,6 @@ static inline void atl1e_configure_des_ring(const struct atl1e_adapter *adapter)
AT_WRITE_REG(hw, REG_HOST_RXFPAGE_SIZE, rx_ring->page_size);
/* Load all of base address above */
AT_WRITE_REG(hw, REG_LOAD_PTR, 1);
return;
}
static inline void atl1e_configure_tx(struct atl1e_adapter *adapter)
@ -950,7 +946,6 @@ static inline void atl1e_configure_tx(struct atl1e_adapter *adapter)
(((u16)hw->tpd_burst & TXQ_CTRL_NUM_TPD_BURST_MASK)
<< TXQ_CTRL_NUM_TPD_BURST_SHIFT)
| TXQ_CTRL_ENH_MODE | TXQ_CTRL_EN);
return;
}
static inline void atl1e_configure_rx(struct atl1e_adapter *adapter)
@ -1004,7 +999,6 @@ static inline void atl1e_configure_rx(struct atl1e_adapter *adapter)
RXQ_CTRL_CUT_THRU_EN | RXQ_CTRL_EN;
AT_WRITE_REG(hw, REG_RXQ_CTRL, rxq_ctrl_data);
return;
}
static inline void atl1e_configure_dma(struct atl1e_adapter *adapter)
@ -1024,7 +1018,6 @@ static inline void atl1e_configure_dma(struct atl1e_adapter *adapter)
<< DMA_CTRL_DMAW_DLY_CNT_SHIFT;
AT_WRITE_REG(hw, REG_DMA_CTRL, dma_ctrl_data);
return;
}
static void atl1e_setup_mac_ctrl(struct atl1e_adapter *adapter)
@ -1428,7 +1421,6 @@ static void atl1e_clean_rx_irq(struct atl1e_adapter *adapter, u8 que,
"Memory squeeze, deferring packet\n");
goto skip_pkt;
}
skb->dev = netdev;
memcpy(skb->data, (u8 *)(prrs + 1), packet_size);
skb_put(skb, packet_size);
skb->protocol = eth_type_trans(skb, netdev);
@ -1680,7 +1672,7 @@ static void atl1e_tx_map(struct atl1e_adapter *adapter,
{
struct atl1e_tpd_desc *use_tpd = NULL;
struct atl1e_tx_buffer *tx_buffer = NULL;
u16 buf_len = skb->len - skb->data_len;
u16 buf_len = skb_headlen(skb);
u16 map_len = 0;
u16 mapped_len = 0;
u16 hdr_len = 0;

View File

@ -1830,8 +1830,6 @@ static void atl1_rx_checksum(struct atl1_adapter *adapter,
adapter->hw_csum_good++;
return;
}
return;
}
/*
@ -2347,7 +2345,7 @@ static netdev_tx_t atl1_xmit_frame(struct sk_buff *skb,
{
struct atl1_adapter *adapter = netdev_priv(netdev);
struct atl1_tpd_ring *tpd_ring = &adapter->tpd_ring;
int len = skb->len;
int len;
int tso;
int count = 1;
int ret_val;
@ -2359,7 +2357,7 @@ static netdev_tx_t atl1_xmit_frame(struct sk_buff *skb,
unsigned int f;
unsigned int proto_hdr_len;
len -= skb->data_len;
len = skb_headlen(skb);
if (unlikely(skb->len <= 0)) {
dev_kfree_skb_any(skb);
@ -3390,7 +3388,6 @@ static void atl1_get_wol(struct net_device *netdev,
wol->wolopts = 0;
if (adapter->wol & ATLX_WUFC_MAG)
wol->wolopts |= WAKE_MAGIC;
return;
}
static int atl1_set_wol(struct net_device *netdev,

View File

@ -136,7 +136,7 @@ static void atl2_set_multi(struct net_device *netdev)
{
struct atl2_adapter *adapter = netdev_priv(netdev);
struct atl2_hw *hw = &adapter->hw;
struct dev_mc_list *mc_ptr;
struct netdev_hw_addr *ha;
u32 rctl;
u32 hash_value;
@ -158,8 +158,8 @@ static void atl2_set_multi(struct net_device *netdev)
ATL2_WRITE_REG_ARRAY(hw, REG_RX_HASH_TABLE, 1, 0);
/* comoute mc addresses' hash value ,and put it into hash table */
netdev_for_each_mc_addr(mc_ptr, netdev) {
hash_value = atl2_hash_mc_addr(hw, mc_ptr->dmi_addr);
netdev_for_each_mc_addr(ha, netdev) {
hash_value = atl2_hash_mc_addr(hw, ha->addr);
atl2_hash_set(hw, hash_value);
}
}
@ -422,7 +422,6 @@ static void atl2_intr_rx(struct atl2_adapter *adapter)
netdev->stats.rx_dropped++;
break;
}
skb->dev = netdev;
memcpy(skb->data, rxd->packet, rx_size);
skb_put(skb, rx_size);
skb->protocol = eth_type_trans(skb, netdev);
@ -893,7 +892,6 @@ static netdev_tx_t atl2_xmit_frame(struct sk_buff *skb,
(adapter->txd_write_ptr >> 2));
mmiowb();
netdev->trans_start = jiffies;
dev_kfree_skb_any(skb);
return NETDEV_TX_OK;
}

View File

@ -123,7 +123,7 @@ static void atlx_set_multi(struct net_device *netdev)
{
struct atlx_adapter *adapter = netdev_priv(netdev);
struct atlx_hw *hw = &adapter->hw;
struct dev_mc_list *mc_ptr;
struct netdev_hw_addr *ha;
u32 rctl;
u32 hash_value;
@ -144,8 +144,8 @@ static void atlx_set_multi(struct net_device *netdev)
iowrite32(0, (hw->hw_addr + REG_RX_HASH_TABLE) + (1 << 2));
/* compute mc addresses' hash value ,and put it into hash table */
netdev_for_each_mc_addr(mc_ptr, netdev) {
hash_value = atlx_hash_mc_addr(hw, mc_ptr->dmi_addr);
netdev_for_each_mc_addr(ha, netdev) {
hash_value = atlx_hash_mc_addr(hw, ha->addr);
atlx_hash_set(hw, hash_value);
}
}

View File

@ -547,7 +547,7 @@ static void tx_timeout(struct net_device *dev)
dev->stats.tx_errors++;
/* Try to restart the adapter. */
hardware_init(dev);
dev->trans_start = jiffies;
dev->trans_start = jiffies; /* prevent tx timeout */
netif_wake_queue(dev);
dev->stats.tx_errors++;
}
@ -586,7 +586,6 @@ static netdev_tx_t atp_send_packet(struct sk_buff *skb,
write_reg(ioaddr, IMR, ISR_RxOK | ISR_TxErr | ISR_TxOK);
write_reg_high(ioaddr, IMR, ISRh_RxErr);
dev->trans_start = jiffies;
dev_kfree_skb (skb);
return NETDEV_TX_OK;
}
@ -803,7 +802,6 @@ static void net_rx(struct net_device *dev)
done:
write_reg(ioaddr, CMR1, CMR1_NextPkt);
lp->last_rx_time = jiffies;
return;
}
static void read_block(long ioaddr, int length, unsigned char *p, int data_mode)
@ -882,11 +880,11 @@ static void set_rx_mode_8012(struct net_device *dev)
memset(mc_filter, 0xff, sizeof(mc_filter));
new_mode = CMR2h_Normal;
} else {
struct dev_mc_list *mclist;
struct netdev_hw_addr *ha;
memset(mc_filter, 0, sizeof(mc_filter));
netdev_for_each_mc_addr(mclist, dev) {
int filterbit = ether_crc_le(ETH_ALEN, mclist->dmi_addr) & 0x3f;
netdev_for_each_mc_addr(ha, dev) {
int filterbit = ether_crc_le(ETH_ALEN, ha->addr) & 0x3f;
mc_filter[filterbit >> 5] |= 1 << (filterbit & 31);
}
new_mode = CMR2h_Normal;

View File

@ -75,14 +75,19 @@ static int au1000_debug = 5;
static int au1000_debug = 3;
#endif
#define AU1000_DEF_MSG_ENABLE (NETIF_MSG_DRV | \
NETIF_MSG_PROBE | \
NETIF_MSG_LINK)
#define DRV_NAME "au1000_eth"
#define DRV_VERSION "1.6"
#define DRV_VERSION "1.7"
#define DRV_AUTHOR "Pete Popov <ppopov@embeddedalley.com>"
#define DRV_DESC "Au1xxx on-chip Ethernet driver"
MODULE_AUTHOR(DRV_AUTHOR);
MODULE_DESCRIPTION(DRV_DESC);
MODULE_LICENSE("GPL");
MODULE_VERSION(DRV_VERSION);
/*
* Theory of operation
@ -148,7 +153,7 @@ struct au1000_private *au_macs[NUM_ETH_INTERFACES];
* specific irq-map
*/
static void enable_mac(struct net_device *dev, int force_reset)
static void au1000_enable_mac(struct net_device *dev, int force_reset)
{
unsigned long flags;
struct au1000_private *aup = netdev_priv(dev);
@ -182,8 +187,7 @@ static int au1000_mdio_read(struct net_device *dev, int phy_addr, int reg)
while (*mii_control_reg & MAC_MII_BUSY) {
mdelay(1);
if (--timedout == 0) {
printk(KERN_ERR "%s: read_MII busy timeout!!\n",
dev->name);
netdev_err(dev, "read_MII busy timeout!!\n");
return -1;
}
}
@ -197,8 +201,7 @@ static int au1000_mdio_read(struct net_device *dev, int phy_addr, int reg)
while (*mii_control_reg & MAC_MII_BUSY) {
mdelay(1);
if (--timedout == 0) {
printk(KERN_ERR "%s: mdio_read busy timeout!!\n",
dev->name);
netdev_err(dev, "mdio_read busy timeout!!\n");
return -1;
}
}
@ -217,8 +220,7 @@ static void au1000_mdio_write(struct net_device *dev, int phy_addr,
while (*mii_control_reg & MAC_MII_BUSY) {
mdelay(1);
if (--timedout == 0) {
printk(KERN_ERR "%s: mdio_write busy timeout!!\n",
dev->name);
netdev_err(dev, "mdio_write busy timeout!!\n");
return;
}
}
@ -236,7 +238,7 @@ static int au1000_mdiobus_read(struct mii_bus *bus, int phy_addr, int regnum)
* _NOT_ hold (e.g. when PHY is accessed through other MAC's MII bus) */
struct net_device *const dev = bus->priv;
enable_mac(dev, 0); /* make sure the MAC associated with this
au1000_enable_mac(dev, 0); /* make sure the MAC associated with this
* mii_bus is enabled */
return au1000_mdio_read(dev, phy_addr, regnum);
}
@ -246,7 +248,7 @@ static int au1000_mdiobus_write(struct mii_bus *bus, int phy_addr, int regnum,
{
struct net_device *const dev = bus->priv;
enable_mac(dev, 0); /* make sure the MAC associated with this
au1000_enable_mac(dev, 0); /* make sure the MAC associated with this
* mii_bus is enabled */
au1000_mdio_write(dev, phy_addr, regnum, value);
return 0;
@ -256,28 +258,26 @@ static int au1000_mdiobus_reset(struct mii_bus *bus)
{
struct net_device *const dev = bus->priv;
enable_mac(dev, 0); /* make sure the MAC associated with this
au1000_enable_mac(dev, 0); /* make sure the MAC associated with this
* mii_bus is enabled */
return 0;
}
static void hard_stop(struct net_device *dev)
static void au1000_hard_stop(struct net_device *dev)
{
struct au1000_private *aup = netdev_priv(dev);
if (au1000_debug > 4)
printk(KERN_INFO "%s: hard stop\n", dev->name);
netif_dbg(aup, drv, dev, "hard stop\n");
aup->mac->control &= ~(MAC_RX_ENABLE | MAC_TX_ENABLE);
au_sync_delay(10);
}
static void enable_rx_tx(struct net_device *dev)
static void au1000_enable_rx_tx(struct net_device *dev)
{
struct au1000_private *aup = netdev_priv(dev);
if (au1000_debug > 4)
printk(KERN_INFO "%s: enable_rx_tx\n", dev->name);
netif_dbg(aup, hw, dev, "enable_rx_tx\n");
aup->mac->control |= (MAC_RX_ENABLE | MAC_TX_ENABLE);
au_sync_delay(10);
@ -297,16 +297,15 @@ au1000_adjust_link(struct net_device *dev)
spin_lock_irqsave(&aup->lock, flags);
if (phydev->link && (aup->old_speed != phydev->speed)) {
// speed changed
/* speed changed */
switch (phydev->speed) {
case SPEED_10:
case SPEED_100:
break;
default:
printk(KERN_WARNING
"%s: Speed (%d) is not 10/100 ???\n",
dev->name, phydev->speed);
netdev_warn(dev, "Speed (%d) is not 10/100 ???\n",
phydev->speed);
break;
}
@ -316,10 +315,10 @@ au1000_adjust_link(struct net_device *dev)
}
if (phydev->link && (aup->old_duplex != phydev->duplex)) {
// duplex mode changed
/* duplex mode changed */
/* switching duplex mode requires to disable rx and tx! */
hard_stop(dev);
au1000_hard_stop(dev);
if (DUPLEX_FULL == phydev->duplex)
aup->mac->control = ((aup->mac->control
@ -331,14 +330,14 @@ au1000_adjust_link(struct net_device *dev)
| MAC_DISABLE_RX_OWN);
au_sync_delay(1);
enable_rx_tx(dev);
au1000_enable_rx_tx(dev);
aup->old_duplex = phydev->duplex;
status_change = 1;
}
if (phydev->link != aup->old_link) {
// link state changed
/* link state changed */
if (!phydev->link) {
/* link went down */
@ -354,15 +353,15 @@ au1000_adjust_link(struct net_device *dev)
if (status_change) {
if (phydev->link)
printk(KERN_INFO "%s: link up (%d/%s)\n",
dev->name, phydev->speed,
netdev_info(dev, "link up (%d/%s)\n",
phydev->speed,
DUPLEX_FULL == phydev->duplex ? "Full" : "Half");
else
printk(KERN_INFO "%s: link down\n", dev->name);
netdev_info(dev, "link down\n");
}
}
static int mii_probe (struct net_device *dev)
static int au1000_mii_probe (struct net_device *dev)
{
struct au1000_private *const aup = netdev_priv(dev);
struct phy_device *phydev = NULL;
@ -373,8 +372,7 @@ static int mii_probe (struct net_device *dev)
if (aup->phy_addr)
phydev = aup->mii_bus->phy_map[aup->phy_addr];
else
printk (KERN_INFO DRV_NAME ":%s: using PHY-less setup\n",
dev->name);
netdev_info(dev, "using PHY-less setup\n");
return 0;
} else {
int phy_addr;
@ -391,7 +389,7 @@ static int mii_probe (struct net_device *dev)
/* try harder to find a PHY */
if (!phydev && (aup->mac_id == 1)) {
/* no PHY found, maybe we have a dual PHY? */
printk (KERN_INFO DRV_NAME ": no PHY found on MAC1, "
dev_info(&dev->dev, ": no PHY found on MAC1, "
"let's see if it's attached to MAC0...\n");
/* find the first (lowest address) non-attached PHY on
@ -417,7 +415,7 @@ static int mii_probe (struct net_device *dev)
}
if (!phydev) {
printk (KERN_ERR DRV_NAME ":%s: no PHY found\n", dev->name);
netdev_err(dev, "no PHY found\n");
return -1;
}
@ -428,7 +426,7 @@ static int mii_probe (struct net_device *dev)
0, PHY_INTERFACE_MODE_MII);
if (IS_ERR(phydev)) {
printk(KERN_ERR "%s: Could not attach to PHY\n", dev->name);
netdev_err(dev, "Could not attach to PHY\n");
return PTR_ERR(phydev);
}
@ -449,8 +447,8 @@ static int mii_probe (struct net_device *dev)
aup->old_duplex = -1;
aup->phy_dev = phydev;
printk(KERN_INFO "%s: attached PHY driver [%s] "
"(mii_bus:phy_addr=%s, irq=%d)\n", dev->name,
netdev_info(dev, "attached PHY driver [%s] "
"(mii_bus:phy_addr=%s, irq=%d)\n",
phydev->drv->name, dev_name(&phydev->dev), phydev->irq);
return 0;
@ -462,7 +460,7 @@ static int mii_probe (struct net_device *dev)
* has the virtual and dma address of a buffer suitable for
* both, receive and transmit operations.
*/
static db_dest_t *GetFreeDB(struct au1000_private *aup)
static db_dest_t *au1000_GetFreeDB(struct au1000_private *aup)
{
db_dest_t *pDB;
pDB = aup->pDBfree;
@ -473,7 +471,7 @@ static db_dest_t *GetFreeDB(struct au1000_private *aup)
return pDB;
}
void ReleaseDB(struct au1000_private *aup, db_dest_t *pDB)
void au1000_ReleaseDB(struct au1000_private *aup, db_dest_t *pDB)
{
db_dest_t *pDBfree = aup->pDBfree;
if (pDBfree)
@ -481,12 +479,12 @@ void ReleaseDB(struct au1000_private *aup, db_dest_t *pDB)
aup->pDBfree = pDB;
}
static void reset_mac_unlocked(struct net_device *dev)
static void au1000_reset_mac_unlocked(struct net_device *dev)
{
struct au1000_private *const aup = netdev_priv(dev);
int i;
hard_stop(dev);
au1000_hard_stop(dev);
*aup->enable = MAC_EN_CLOCK_ENABLE;
au_sync_delay(2);
@ -507,18 +505,17 @@ static void reset_mac_unlocked(struct net_device *dev)
}
static void reset_mac(struct net_device *dev)
static void au1000_reset_mac(struct net_device *dev)
{
struct au1000_private *const aup = netdev_priv(dev);
unsigned long flags;
if (au1000_debug > 4)
printk(KERN_INFO "%s: reset mac, aup %x\n",
dev->name, (unsigned)aup);
netif_dbg(aup, hw, dev, "reset mac, aup %x\n",
(unsigned)aup);
spin_lock_irqsave(&aup->lock, flags);
reset_mac_unlocked (dev);
au1000_reset_mac_unlocked (dev);
spin_unlock_irqrestore(&aup->lock, flags);
}
@ -529,7 +526,7 @@ static void reset_mac(struct net_device *dev)
* these are not descriptors sitting in memory.
*/
static void
setup_hw_rings(struct au1000_private *aup, u32 rx_base, u32 tx_base)
au1000_setup_hw_rings(struct au1000_private *aup, u32 rx_base, u32 tx_base)
{
int i;
@ -582,11 +579,25 @@ au1000_get_drvinfo(struct net_device *dev, struct ethtool_drvinfo *info)
info->regdump_len = 0;
}
static void au1000_set_msglevel(struct net_device *dev, u32 value)
{
struct au1000_private *aup = netdev_priv(dev);
aup->msg_enable = value;
}
static u32 au1000_get_msglevel(struct net_device *dev)
{
struct au1000_private *aup = netdev_priv(dev);
return aup->msg_enable;
}
static const struct ethtool_ops au1000_ethtool_ops = {
.get_settings = au1000_get_settings,
.set_settings = au1000_set_settings,
.get_drvinfo = au1000_get_drvinfo,
.get_link = ethtool_op_get_link,
.get_msglevel = au1000_get_msglevel,
.set_msglevel = au1000_set_msglevel,
};
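
The au1000 conversion pairs two things: raw printk()s carrying dev->name become netdev_err()/netdev_info()/netif_dbg(), and the new msg_enable field is exposed through ethtool's get/set_msglevel, so netif_dbg() output can be switched per category at runtime (e.g. ethtool -s ethX msglvl ...). A minimal sketch, assuming a private struct with msg_enable; demo_priv and demo_open are hypothetical.

#include <linux/netdevice.h>

struct demo_priv {
	u32 msg_enable;		/* NETIF_MSG_* bits, ethtool-settable */
};

static int demo_open(struct net_device *dev)
{
	struct demo_priv *priv = netdev_priv(dev);

	/* prints only when NETIF_MSG_DRV is set in priv->msg_enable */
	netif_dbg(priv, drv, dev, "open: dev=%p\n", dev);
	return 0;
}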
@ -606,11 +617,10 @@ static int au1000_init(struct net_device *dev)
int i;
u32 control;
if (au1000_debug > 4)
printk("%s: au1000_init\n", dev->name);
netif_dbg(aup, hw, dev, "au1000_init\n");
/* bring the device out of reset */
enable_mac(dev, 1);
au1000_enable_mac(dev, 1);
spin_lock_irqsave(&aup->lock, flags);
@ -649,7 +659,7 @@ static int au1000_init(struct net_device *dev)
return 0;
}
static inline void update_rx_stats(struct net_device *dev, u32 status)
static inline void au1000_update_rx_stats(struct net_device *dev, u32 status)
{
struct net_device_stats *ps = &dev->stats;
@ -667,8 +677,7 @@ static inline void update_rx_stats(struct net_device *dev, u32 status)
ps->rx_crc_errors++;
if (status & RX_COLL)
ps->collisions++;
}
else
} else
ps->rx_bytes += status & RX_FRAME_LEN_MASK;
}
@ -685,15 +694,14 @@ static int au1000_rx(struct net_device *dev)
db_dest_t *pDB;
u32 frmlen;
if (au1000_debug > 5)
printk("%s: au1000_rx head %d\n", dev->name, aup->rx_head);
netif_dbg(aup, rx_status, dev, "au1000_rx head %d\n", aup->rx_head);
prxd = aup->rx_dma_ring[aup->rx_head];
buff_stat = prxd->buff_stat;
while (buff_stat & RX_T_DONE) {
status = prxd->status;
pDB = aup->rx_db_inuse[aup->rx_head];
update_rx_stats(dev, status);
au1000_update_rx_stats(dev, status);
if (!(status & RX_ERROR)) {
/* good frame */
@ -701,9 +709,7 @@ static int au1000_rx(struct net_device *dev)
frmlen -= 4; /* Remove FCS */
skb = dev_alloc_skb(frmlen + 2);
if (skb == NULL) {
printk(KERN_ERR
"%s: Memory squeeze, dropping packet.\n",
dev->name);
netdev_err(dev, "Memory squeeze, dropping packet.\n");
dev->stats.rx_dropped++;
continue;
}
@ -713,8 +719,7 @@ static int au1000_rx(struct net_device *dev)
skb_put(skb, frmlen);
skb->protocol = eth_type_trans(skb, dev);
netif_rx(skb); /* pass the packet to upper layers */
}
else {
} else {
if (au1000_debug > 4) {
if (status & RX_MISSED_FRAME)
printk("rx miss\n");
@ -747,7 +752,7 @@ static int au1000_rx(struct net_device *dev)
return 0;
}
static void update_tx_stats(struct net_device *dev, u32 status)
static void au1000_update_tx_stats(struct net_device *dev, u32 status)
{
struct au1000_private *aup = netdev_priv(dev);
struct net_device_stats *ps = &dev->stats;
@ -760,8 +765,7 @@ static void update_tx_stats(struct net_device *dev, u32 status)
ps->tx_errors++;
ps->tx_aborted_errors++;
}
}
else {
} else {
ps->tx_errors++;
ps->tx_aborted_errors++;
if (status & (TX_NO_CARRIER | TX_LOSS_CARRIER))
@ -783,7 +787,7 @@ static void au1000_tx_ack(struct net_device *dev)
ptxd = aup->tx_dma_ring[aup->tx_tail];
while (ptxd->buff_stat & TX_T_DONE) {
update_tx_stats(dev, ptxd->status);
au1000_update_tx_stats(dev, ptxd->status);
ptxd->buff_stat &= ~TX_T_DONE;
ptxd->len = 0;
au_sync();
@ -817,18 +821,18 @@ static int au1000_open(struct net_device *dev)
int retval;
struct au1000_private *aup = netdev_priv(dev);
if (au1000_debug > 4)
printk("%s: open: dev=%p\n", dev->name, dev);
netif_dbg(aup, drv, dev, "open: dev=%p\n", dev);
if ((retval = request_irq(dev->irq, au1000_interrupt, 0,
dev->name, dev))) {
printk(KERN_ERR "%s: unable to get IRQ %d\n",
dev->name, dev->irq);
retval = request_irq(dev->irq, au1000_interrupt, 0,
dev->name, dev);
if (retval) {
netdev_err(dev, "unable to get IRQ %d\n", dev->irq);
return retval;
}
if ((retval = au1000_init(dev))) {
printk(KERN_ERR "%s: error in au1000_init\n", dev->name);
retval = au1000_init(dev);
if (retval) {
netdev_err(dev, "error in au1000_init\n");
free_irq(dev->irq, dev);
return retval;
}
@ -841,8 +845,7 @@ static int au1000_open(struct net_device *dev)
netif_start_queue(dev);
if (au1000_debug > 4)
printk("%s: open: Initialization done.\n", dev->name);
netif_dbg(aup, drv, dev, "open: Initialization done.\n");
return 0;
}
@ -852,15 +855,14 @@ static int au1000_close(struct net_device *dev)
unsigned long flags;
struct au1000_private *const aup = netdev_priv(dev);
if (au1000_debug > 4)
printk("%s: close: dev=%p\n", dev->name, dev);
netif_dbg(aup, drv, dev, "close: dev=%p\n", dev);
if (aup->phy_dev)
phy_stop(aup->phy_dev);
spin_lock_irqsave(&aup->lock, flags);
reset_mac_unlocked (dev);
au1000_reset_mac_unlocked (dev);
/* stop the device */
netif_stop_queue(dev);
@ -884,9 +886,8 @@ static netdev_tx_t au1000_tx(struct sk_buff *skb, struct net_device *dev)
db_dest_t *pDB;
int i;
if (au1000_debug > 5)
printk("%s: tx: aup %x len=%d, data=%p, head %d\n",
dev->name, (unsigned)aup, skb->len,
netif_dbg(aup, tx_queued, dev, "tx: aup %x len=%d, data=%p, head %d\n",
(unsigned)aup, skb->len,
skb->data, aup->tx_head);
ptxd = aup->tx_dma_ring[aup->tx_head];
@ -896,9 +897,8 @@ static netdev_tx_t au1000_tx(struct sk_buff *skb, struct net_device *dev)
netif_stop_queue(dev);
aup->tx_full = 1;
return NETDEV_TX_BUSY;
}
else if (buff_stat & TX_T_DONE) {
update_tx_stats(dev, ptxd->status);
} else if (buff_stat & TX_T_DONE) {
au1000_update_tx_stats(dev, ptxd->status);
ptxd->len = 0;
}
@ -914,8 +914,7 @@ static netdev_tx_t au1000_tx(struct sk_buff *skb, struct net_device *dev)
((char *)pDB->vaddr)[i] = 0;
}
ptxd->len = ETH_ZLEN;
}
else
} else
ptxd->len = skb->len;
ps->tx_packets++;
@ -925,7 +924,6 @@ static netdev_tx_t au1000_tx(struct sk_buff *skb, struct net_device *dev)
au_sync();
dev_kfree_skb(skb);
aup->tx_head = (aup->tx_head + 1) & (NUM_TX_DMA - 1);
dev->trans_start = jiffies;
return NETDEV_TX_OK;
}
@ -935,10 +933,10 @@ static netdev_tx_t au1000_tx(struct sk_buff *skb, struct net_device *dev)
*/
static void au1000_tx_timeout(struct net_device *dev)
{
printk(KERN_ERR "%s: au1000_tx_timeout: dev=%p\n", dev->name, dev);
reset_mac(dev);
netdev_err(dev, "au1000_tx_timeout: dev=%p\n", dev);
au1000_reset_mac(dev);
au1000_init(dev);
dev->trans_start = jiffies;
dev->trans_start = jiffies; /* prevent tx timeout */
netif_wake_queue(dev);
}
@ -946,8 +944,7 @@ static void au1000_multicast_list(struct net_device *dev)
{
struct au1000_private *aup = netdev_priv(dev);
if (au1000_debug > 4)
printk("%s: au1000_multicast_list: flags=%x\n", dev->name, dev->flags);
netif_dbg(aup, drv, dev, "au1000_multicast_list: flags=%x\n", dev->flags);
if (dev->flags & IFF_PROMISC) { /* Set promiscuous. */
aup->mac->control |= MAC_PROMISCUOUS;
@ -955,14 +952,14 @@ static void au1000_multicast_list(struct net_device *dev)
netdev_mc_count(dev) > MULTICAST_FILTER_LIMIT) {
aup->mac->control |= MAC_PASS_ALL_MULTI;
aup->mac->control &= ~MAC_PROMISCUOUS;
printk(KERN_INFO "%s: Pass all multicast\n", dev->name);
netdev_info(dev, "Pass all multicast\n");
} else {
struct dev_mc_list *mclist;
struct netdev_hw_addr *ha;
u32 mc_filter[2]; /* Multicast hash filter */
mc_filter[1] = mc_filter[0] = 0;
netdev_for_each_mc_addr(mclist, dev)
set_bit(ether_crc(ETH_ALEN, mclist->dmi_addr)>>26,
netdev_for_each_mc_addr(ha, dev)
set_bit(ether_crc(ETH_ALEN, ha->addr)>>26,
(long *)mc_filter);
aup->mac->multi_hash_high = mc_filter[1];
aup->mac->multi_hash_low = mc_filter[0];
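For reference, the two hash words written above are built from the top six bits of the big-endian Ethernet CRC of each multicast address. A minimal standalone sketch of the same computation (ether_crc_be is a local stand-in for the kernel's ether_crc):

#include <stdint.h>

/* Big-endian CRC-32 over the 6-byte MAC address (poly 0x04C11DB7,
 * init ~0), as the kernel's ether_crc() computes it. */
static uint32_t ether_crc_be(int len, const uint8_t *data)
{
	uint32_t crc = 0xFFFFFFFFu;
	int i, bit;

	for (i = 0; i < len; i++) {
		crc ^= (uint32_t)data[i] << 24;
		for (bit = 0; bit < 8; bit++)
			crc = (crc & 0x80000000u) ?
			      (crc << 1) ^ 0x04C11DB7u : crc << 1;
	}
	return crc;
}

/* Bits 31..26 of the CRC select one of 64 filter bits, split across
 * the multi_hash_high/low register pair. */
static void hash_mc_addr(uint32_t mc_filter[2], const uint8_t addr[6])
{
	unsigned int bitnr = ether_crc_be(6, addr) >> 26;	/* 0..63 */

	mc_filter[bitnr >> 5] |= 1u << (bitnr & 31);
}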
@ -975,9 +972,11 @@ static int au1000_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
{
struct au1000_private *aup = netdev_priv(dev);
if (!netif_running(dev)) return -EINVAL;
if (!netif_running(dev))
return -EINVAL;
if (!aup->phy_dev) return -EINVAL; // PHY not controllable
if (!aup->phy_dev)
return -EINVAL; /* PHY not controllable */
return phy_mii_ioctl(aup->phy_dev, if_mii(rq), cmd);
}
@ -996,7 +995,7 @@ static const struct net_device_ops au1000_netdev_ops = {
static int __devinit au1000_probe(struct platform_device *pdev)
{
static unsigned version_printed = 0;
static unsigned version_printed;
struct au1000_private *aup = NULL;
struct au1000_eth_platform_data *pd;
struct net_device *dev = NULL;
@ -1007,40 +1006,40 @@ static int __devinit au1000_probe(struct platform_device *pdev)
base = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (!base) {
printk(KERN_ERR DRV_NAME ": failed to retrieve base register\n");
dev_err(&pdev->dev, "failed to retrieve base register\n");
err = -ENODEV;
goto out;
}
macen = platform_get_resource(pdev, IORESOURCE_MEM, 1);
if (!macen) {
printk(KERN_ERR DRV_NAME ": failed to retrieve MAC Enable register\n");
dev_err(&pdev->dev, "failed to retrieve MAC Enable register\n");
err = -ENODEV;
goto out;
}
irq = platform_get_irq(pdev, 0);
if (irq < 0) {
printk(KERN_ERR DRV_NAME ": failed to retrieve IRQ\n");
dev_err(&pdev->dev, "failed to retrieve IRQ\n");
err = -ENODEV;
goto out;
}
if (!request_mem_region(base->start, resource_size(base), pdev->name)) {
printk(KERN_ERR DRV_NAME ": failed to request memory region for base registers\n");
dev_err(&pdev->dev, "failed to request memory region for base registers\n");
err = -ENXIO;
goto out;
}
if (!request_mem_region(macen->start, resource_size(macen), pdev->name)) {
printk(KERN_ERR DRV_NAME ": failed to request memory region for MAC enable register\n");
dev_err(&pdev->dev, "failed to request memory region for MAC enable register\n");
err = -ENXIO;
goto err_request;
}
dev = alloc_etherdev(sizeof(struct au1000_private));
if (!dev) {
printk(KERN_ERR "%s: alloc_etherdev failed\n", DRV_NAME);
dev_err(&pdev->dev, "alloc_etherdev failed\n");
err = -ENOMEM;
goto err_alloc;
}
@ -1050,6 +1049,7 @@ static int __devinit au1000_probe(struct platform_device *pdev)
aup = netdev_priv(dev);
spin_lock_init(&aup->lock);
aup->msg_enable = (au1000_debug < 4 ? AU1000_DEF_MSG_ENABLE : au1000_debug);
/* Allocate the data buffers */
/* Snooping works fine with eth on all au1xxx */
@ -1057,7 +1057,7 @@ static int __devinit au1000_probe(struct platform_device *pdev)
(NUM_TX_BUFFS + NUM_RX_BUFFS),
&aup->dma_addr, 0);
if (!aup->vaddr) {
printk(KERN_ERR DRV_NAME ": failed to allocate data buffers\n");
dev_err(&pdev->dev, "failed to allocate data buffers\n");
err = -ENOMEM;
goto err_vaddr;
}
@ -1065,7 +1065,7 @@ static int __devinit au1000_probe(struct platform_device *pdev)
/* aup->mac is the base address of the MAC's registers */
aup->mac = (volatile mac_reg_t *)ioremap_nocache(base->start, resource_size(base));
if (!aup->mac) {
printk(KERN_ERR DRV_NAME ": failed to ioremap MAC registers\n");
dev_err(&pdev->dev, "failed to ioremap MAC registers\n");
err = -ENXIO;
goto err_remap1;
}
@ -1073,7 +1073,7 @@ static int __devinit au1000_probe(struct platform_device *pdev)
/* Setup some variables for quick register address access */
aup->enable = (volatile u32 *)ioremap_nocache(macen->start, resource_size(macen));
if (!aup->enable) {
printk(KERN_ERR DRV_NAME ": failed to ioremap MAC enable register\n");
dev_err(&pdev->dev, "failed to ioremap MAC enable register\n");
err = -ENXIO;
goto err_remap2;
}
@ -1083,14 +1083,13 @@ static int __devinit au1000_probe(struct platform_device *pdev)
if (prom_get_ethernet_addr(ethaddr) == 0)
memcpy(au1000_mac_addr, ethaddr, sizeof(au1000_mac_addr));
else {
printk(KERN_INFO "%s: No MAC address found\n",
dev->name);
netdev_info(dev, "No MAC address found\n");
/* Use the hard coded MAC addresses */
}
setup_hw_rings(aup, MAC0_RX_DMA_ADDR, MAC0_TX_DMA_ADDR);
au1000_setup_hw_rings(aup, MAC0_RX_DMA_ADDR, MAC0_TX_DMA_ADDR);
} else if (pdev->id == 1)
setup_hw_rings(aup, MAC1_RX_DMA_ADDR, MAC1_TX_DMA_ADDR);
au1000_setup_hw_rings(aup, MAC1_RX_DMA_ADDR, MAC1_TX_DMA_ADDR);
/*
* Assign to the Ethernet ports two consecutive MAC addresses
@ -1104,7 +1103,7 @@ static int __devinit au1000_probe(struct platform_device *pdev)
pd = pdev->dev.platform_data;
if (!pd) {
printk(KERN_INFO DRV_NAME ": no platform_data passed, PHY search on MAC0\n");
dev_info(&pdev->dev, "no platform_data passed, PHY search on MAC0\n");
aup->phy1_search_mac0 = 1;
} else {
aup->phy_static_config = pd->phy_static_config;
@ -1116,7 +1115,7 @@ static int __devinit au1000_probe(struct platform_device *pdev)
}
if (aup->phy_busid && aup->phy_busid > 0) {
printk(KERN_ERR DRV_NAME ": MAC0-associated PHY attached 2nd MACs MII"
dev_err(&pdev->dev, "MAC0-associated PHY attached 2nd MACs MII"
"bus not supported yet\n");
err = -ENODEV;
goto err_mdiobus_alloc;
@ -1124,7 +1123,7 @@ static int __devinit au1000_probe(struct platform_device *pdev)
aup->mii_bus = mdiobus_alloc();
if (aup->mii_bus == NULL) {
printk(KERN_ERR DRV_NAME ": failed to allocate mdiobus structure\n");
dev_err(&pdev->dev, "failed to allocate mdiobus structure\n");
err = -ENOMEM;
goto err_mdiobus_alloc;
}
@ -1148,11 +1147,11 @@ static int __devinit au1000_probe(struct platform_device *pdev)
err = mdiobus_register(aup->mii_bus);
if (err) {
printk(KERN_ERR DRV_NAME " failed to register MDIO bus\n");
dev_err(&pdev->dev, "failed to register MDIO bus\n");
goto err_mdiobus_reg;
}
if (mii_probe(dev) != 0)
if (au1000_mii_probe(dev) != 0)
goto err_out;
pDBfree = NULL;
@ -1168,7 +1167,7 @@ static int __devinit au1000_probe(struct platform_device *pdev)
aup->pDBfree = pDBfree;
for (i = 0; i < NUM_RX_DMA; i++) {
pDB = GetFreeDB(aup);
pDB = au1000_GetFreeDB(aup);
if (!pDB) {
goto err_out;
}
@ -1176,7 +1175,7 @@ static int __devinit au1000_probe(struct platform_device *pdev)
aup->rx_db_inuse[i] = pDB;
}
for (i = 0; i < NUM_TX_DMA; i++) {
pDB = GetFreeDB(aup);
pDB = au1000_GetFreeDB(aup);
if (!pDB) {
goto err_out;
}
@ -1195,17 +1194,16 @@ static int __devinit au1000_probe(struct platform_device *pdev)
* The boot code uses the ethernet controller, so reset it to start
* fresh. au1000_init() expects that the device is in reset state.
*/
reset_mac(dev);
au1000_reset_mac(dev);
err = register_netdev(dev);
if (err) {
printk(KERN_ERR DRV_NAME "%s: Cannot register net device, aborting.\n",
dev->name);
netdev_err(dev, "Cannot register net device, aborting.\n");
goto err_out;
}
printk("%s: Au1xx0 Ethernet found at 0x%lx, irq %d\n",
dev->name, (unsigned long)base->start, irq);
netdev_info(dev, "Au1xx0 Ethernet found at 0x%lx, irq %d\n",
(unsigned long)base->start, irq);
if (version_printed++ == 0)
printk("%s version %s %s\n", DRV_NAME, DRV_VERSION, DRV_AUTHOR);
@ -1217,15 +1215,15 @@ err_out:
/* here we should have a valid dev plus aup-> register addresses
* so we can reset the mac properly. */
reset_mac(dev);
au1000_reset_mac(dev);
for (i = 0; i < NUM_RX_DMA; i++) {
if (aup->rx_db_inuse[i])
ReleaseDB(aup, aup->rx_db_inuse[i]);
au1000_ReleaseDB(aup, aup->rx_db_inuse[i]);
}
for (i = 0; i < NUM_TX_DMA; i++) {
if (aup->tx_db_inuse[i])
ReleaseDB(aup, aup->tx_db_inuse[i]);
au1000_ReleaseDB(aup, aup->tx_db_inuse[i]);
}
err_mdiobus_reg:
mdiobus_free(aup->mii_bus);
@ -1261,11 +1259,11 @@ static int __devexit au1000_remove(struct platform_device *pdev)
for (i = 0; i < NUM_RX_DMA; i++)
if (aup->rx_db_inuse[i])
ReleaseDB(aup, aup->rx_db_inuse[i]);
au1000_ReleaseDB(aup, aup->rx_db_inuse[i]);
for (i = 0; i < NUM_TX_DMA; i++)
if (aup->tx_db_inuse[i])
ReleaseDB(aup, aup->tx_db_inuse[i]);
au1000_ReleaseDB(aup, aup->tx_db_inuse[i]);
dma_free_noncoherent(NULL, MAX_BUF_SIZE *
(NUM_TX_BUFFS + NUM_RX_BUFFS),

@ -35,7 +35,7 @@
#define NUM_TX_BUFFS 4
#define MAX_BUF_SIZE 2048
#define ETH_TX_TIMEOUT HZ/4
#define ETH_TX_TIMEOUT (HZ/4)
#define MAC_MIN_PKT_SIZE 64
#define MULTICAST_FILTER_LIMIT 64
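The new parentheses around HZ/4 are not cosmetic: a macro body expands textually, so an unguarded HZ/4 can bind differently inside a larger expression. A compile-and-run illustration (the HZ value here is hypothetical):

#include <assert.h>

#define HZ 250
#define TX_TIMEOUT_OLD HZ/4	/* expands unguarded */
#define TX_TIMEOUT_NEW (HZ/4)

int main(void)
{
	/* "300 % TX_TIMEOUT_OLD" parses as (300 % HZ) / 4,
	 * not 300 % (HZ / 4). */
	assert(300 % TX_TIMEOUT_OLD == (300 % HZ) / 4);	/* 12 */
	assert(300 % TX_TIMEOUT_NEW == 300 % (HZ / 4));	/* 52 */
	return 0;
}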
@ -125,4 +125,6 @@ struct au1000_private {
dma_addr_t dma_addr; /* dma address of rx/tx buffers */
spinlock_t lock; /* Serialise access to device */
u32 msg_enable;
};

@ -303,7 +303,6 @@ static void ax_block_output(struct net_device *dev, int count,
ei_outb(ENISR_RDC, nic_base + EN0_ISR); /* Ack intr. */
ei_status.dmaing &= ~0x01;
return;
}
/* definitions for accessing MII/EEPROM interface */

@ -1014,8 +1014,6 @@ static netdev_tx_t b44_start_xmit(struct sk_buff *skb, struct net_device *dev)
if (TX_BUFFS_AVAIL(bp) < 1)
netif_stop_queue(dev);
dev->trans_start = jiffies;
out_unlock:
spin_unlock_irqrestore(&bp->lock, flags);
@ -1681,15 +1679,15 @@ static struct net_device_stats *b44_get_stats(struct net_device *dev)
static int __b44_load_mcast(struct b44 *bp, struct net_device *dev)
{
struct dev_mc_list *mclist;
struct netdev_hw_addr *ha;
int i, num_ents;
num_ents = min_t(int, netdev_mc_count(dev), B44_MCAST_TABLE_SIZE);
i = 0;
netdev_for_each_mc_addr(mclist, dev) {
netdev_for_each_mc_addr(ha, dev) {
if (i == num_ents)
break;
__b44_cam_write(bp, mclist->dmi_addr, i++ + 1);
__b44_cam_write(bp, ha->addr, i++ + 1);
}
return i+1;
}

@ -341,11 +341,9 @@ static int bcm_enet_receive_queue(struct net_device *dev, int budget)
}
skb_put(skb, len);
skb->dev = dev;
skb->protocol = eth_type_trans(skb, dev);
priv->stats.rx_packets++;
priv->stats.rx_bytes += len;
dev->last_rx = jiffies;
netif_receive_skb(skb);
} while (--budget > 0);
@ -567,7 +565,6 @@ static int bcm_enet_start_xmit(struct sk_buff *skb, struct net_device *dev)
priv->stats.tx_bytes += skb->len;
priv->stats.tx_packets++;
dev->trans_start = jiffies;
ret = NETDEV_TX_OK;
out_unlock:
@ -605,7 +602,7 @@ static int bcm_enet_set_mac_address(struct net_device *dev, void *p)
static void bcm_enet_set_multicast_list(struct net_device *dev)
{
struct bcm_enet_priv *priv;
struct dev_mc_list *mc_list;
struct netdev_hw_addr *ha;
u32 val;
int i;
@ -633,14 +630,14 @@ static void bcm_enet_set_multicast_list(struct net_device *dev)
}
i = 0;
netdev_for_each_mc_addr(mc_list, dev) {
netdev_for_each_mc_addr(ha, dev) {
u8 *dmi_addr;
u32 tmp;
if (i == 3)
break;
/* update perfect match registers */
dmi_addr = mc_list->dmi_addr;
dmi_addr = ha->addr;
tmp = (dmi_addr[2] << 24) | (dmi_addr[3] << 16) |
(dmi_addr[4] << 8) | dmi_addr[5];
enet_writel(priv, tmp, ENET_PML_REG(i + 1));
@ -960,7 +957,9 @@ static int bcm_enet_open(struct net_device *dev)
/* all set, enable mac and interrupts, start dma engine and
* kick rx dma channel */
wmb();
enet_writel(priv, ENET_CTL_ENABLE_MASK, ENET_CTL_REG);
val = enet_readl(priv, ENET_CTL_REG);
val |= ENET_CTL_ENABLE_MASK;
enet_writel(priv, val, ENET_CTL_REG);
enet_dma_writel(priv, ENETDMA_CFG_EN_MASK, ENETDMA_CFG_REG);
enet_dma_writel(priv, ENETDMA_CHANCFG_EN_MASK,
ENETDMA_CHANCFG_REG(priv->rx_chan));
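The enable path above now reads ENET_CTL, ORs in ENET_CTL_ENABLE_MASK, and writes the result back instead of storing the mask alone, so the register's other bits survive. The generic read-modify-write idiom, sketched:

#include <stdint.h>

/* OR bits into a memory-mapped register without clobbering the
 * rest of its contents. */
static inline void reg_set_bits(volatile uint32_t *reg, uint32_t mask)
{
	uint32_t val = *reg;	/* read   */

	val |= mask;		/* modify */
	*reg = val;		/* write  */
}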
@ -1647,7 +1646,6 @@ static int __devinit bcm_enet_probe(struct platform_device *pdev)
if (!dev)
return -ENOMEM;
priv = netdev_priv(dev);
memset(priv, 0, sizeof(*priv));
ret = compute_hw_mtu(priv, dev->mtu);
if (ret)

@ -84,6 +84,8 @@ static inline char *nic_name(struct pci_dev *pdev)
#define FW_VER_LEN 32
#define BE_MAX_VF 32
struct be_dma_mem {
void *va;
dma_addr_t dma;
@ -207,7 +209,7 @@ struct be_tx_obj {
/* Struct to remember the pages posted for rx frags */
struct be_rx_page_info {
struct page *page;
dma_addr_t bus;
DEFINE_DMA_UNMAP_ADDR(bus);
u16 page_offset;
bool last_page_user;
};
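The switch from a plain dma_addr_t to DEFINE_DMA_UNMAP_ADDR() lets the stored bus address compile away on configurations that never need it at unmap time. A simplified model of the scheme (not the kernel's exact definitions, which key off CONFIG_NEED_DMA_MAP_STATE):

#ifdef NEED_DMA_MAP_STATE
#define DEFINE_DMA_UNMAP_ADDR(name)		unsigned long long name
#define dma_unmap_addr(ptr, name)		((ptr)->name)
#define dma_unmap_addr_set(ptr, name, val)	(((ptr)->name) = (val))
#else
#define DEFINE_DMA_UNMAP_ADDR(name)		/* field vanishes */
#define dma_unmap_addr(ptr, name)		(0)
#define dma_unmap_addr_set(ptr, name, val)	do { } while (0)
#endif

/* When the field vanishes, only a stray semicolon remains in the
 * struct (an extension GCC tolerates), and the struct shrinks. */
struct rx_page_info_model {
	void *page;
	DEFINE_DMA_UNMAP_ADDR(bus);
};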
@ -281,8 +283,15 @@ struct be_adapter {
u8 port_type;
u8 transceiver;
u8 generation; /* BladeEngine ASIC generation */
bool sriov_enabled;
u32 vf_if_handle[BE_MAX_VF];
u32 vf_pmac_id[BE_MAX_VF];
u8 base_eq_id;
};
#define be_physfn(adapter) (!adapter->pdev->is_virtfn)
/* BladeEngine Generation numbers */
#define BE_GEN2 2
#define BE_GEN3 3

@ -843,7 +843,8 @@ int be_cmd_q_destroy(struct be_adapter *adapter, struct be_queue_info *q,
* Uses mbox
*/
int be_cmd_if_create(struct be_adapter *adapter, u32 cap_flags, u32 en_flags,
u8 *mac, bool pmac_invalid, u32 *if_handle, u32 *pmac_id)
u8 *mac, bool pmac_invalid, u32 *if_handle, u32 *pmac_id,
u32 domain)
{
struct be_mcc_wrb *wrb;
struct be_cmd_req_if_create *req;
@ -860,6 +861,7 @@ int be_cmd_if_create(struct be_adapter *adapter, u32 cap_flags, u32 en_flags,
be_cmd_hdr_prepare(&req->hdr, CMD_SUBSYSTEM_COMMON,
OPCODE_COMMON_NTWK_INTERFACE_CREATE, sizeof(*req));
req->hdr.domain = domain;
req->capability_flags = cpu_to_le32(cap_flags);
req->enable_flags = cpu_to_le32(en_flags);
req->pmac_invalid = pmac_invalid;
@ -1111,6 +1113,10 @@ int be_cmd_promiscuous_config(struct be_adapter *adapter, u8 port_num, bool en)
be_cmd_hdr_prepare(&req->hdr, CMD_SUBSYSTEM_ETH,
OPCODE_ETH_PROMISCUOUS, sizeof(*req));
/* In FW versions X.102.149/X.101.487 and later,
* only the port setting associated with the
* issuing pci function will take effect
*/
if (port_num)
req->port1_promiscuous = en;
else
@ -1157,13 +1163,13 @@ int be_cmd_multicast_set(struct be_adapter *adapter, u32 if_id,
req->interface_id = if_id;
if (netdev) {
int i;
struct dev_mc_list *mc;
struct netdev_hw_addr *ha;
req->num_mac = cpu_to_le16(netdev_mc_count(netdev));
i = 0;
netdev_for_each_mc_addr(mc, netdev)
memcpy(req->mac[i].byte, mc->dmi_addr, ETH_ALEN);
netdev_for_each_mc_addr(ha, netdev)
memcpy(req->mac[i++].byte, ha->addr, ETH_ALEN);
} else {
req->promiscuous = 1;
}

@ -878,7 +878,7 @@ extern int be_cmd_pmac_add(struct be_adapter *adapter, u8 *mac_addr,
extern int be_cmd_pmac_del(struct be_adapter *adapter, u32 if_id, u32 pmac_id);
extern int be_cmd_if_create(struct be_adapter *adapter, u32 cap_flags,
u32 en_flags, u8 *mac, bool pmac_invalid,
u32 *if_handle, u32 *pmac_id);
u32 *if_handle, u32 *pmac_id, u32 domain);
extern int be_cmd_if_destroy(struct be_adapter *adapter, u32 if_handle);
extern int be_cmd_eq_create(struct be_adapter *adapter,
struct be_queue_info *eq, int eq_delay);

@ -276,8 +276,6 @@ be_get_ethtool_stats(struct net_device *netdev,
data[i] = (et_stats[i].size == sizeof(u64)) ?
*(u64 *)p: *(u32 *)p;
}
return;
}
static void
@ -466,7 +464,6 @@ be_get_wol(struct net_device *netdev, struct ethtool_wolinfo *wol)
else
wol->wolopts = 0;
memset(&wol->sopass, 0, sizeof(wol->sopass));
return;
}
static int

@ -99,6 +99,9 @@
/* Number of entries posted */
#define DB_MCCQ_NUM_POSTED_SHIFT (16) /* bits 16 - 29 */
/********** SRIOV VF PCICFG OFFSET ********/
#define SRIOV_VF_PCICFG_OFFSET (4096)
/* Flashrom related descriptors */
#define IMAGE_TYPE_FIRMWARE 160
#define IMAGE_TYPE_BOOTCODE 224

@ -26,8 +26,11 @@ MODULE_AUTHOR("ServerEngines Corporation");
MODULE_LICENSE("GPL");
static unsigned int rx_frag_size = 2048;
static unsigned int num_vfs;
module_param(rx_frag_size, uint, S_IRUGO);
module_param(num_vfs, uint, S_IRUGO);
MODULE_PARM_DESC(rx_frag_size, "Size of a fragment that holds rcvd data.");
MODULE_PARM_DESC(num_vfs, "Number of PCI VFs to initialize");
static DEFINE_PCI_DEVICE_TABLE(be_dev_ids) = {
{ PCI_DEVICE(BE_VENDOR_ID, BE_DEVICE_ID1) },
@ -138,12 +141,19 @@ static int be_mac_addr_set(struct net_device *netdev, void *p)
if (!is_valid_ether_addr(addr->sa_data))
return -EADDRNOTAVAIL;
/* MAC addr configuration will be done in hardware for VFs
* by their corresponding PFs. Just copy to netdev addr here
*/
if (!be_physfn(adapter))
goto netdev_addr;
status = be_cmd_pmac_del(adapter, adapter->if_handle, adapter->pmac_id);
if (status)
return status;
status = be_cmd_pmac_add(adapter, (u8 *)addr->sa_data,
adapter->if_handle, &adapter->pmac_id);
netdev_addr:
if (!status)
memcpy(netdev->dev_addr, addr->sa_data, netdev->addr_len);
@ -386,26 +396,48 @@ static void wrb_fill_hdr(struct be_eth_hdr_wrb *hdr, struct sk_buff *skb,
AMAP_SET_BITS(struct amap_eth_hdr_wrb, len, hdr, len);
}
static void unmap_tx_frag(struct pci_dev *pdev, struct be_eth_wrb *wrb,
bool unmap_single)
{
dma_addr_t dma;
be_dws_le_to_cpu(wrb, sizeof(*wrb));
dma = (u64)wrb->frag_pa_hi << 32 | (u64)wrb->frag_pa_lo;
if (wrb->frag_len) {
if (unmap_single)
pci_unmap_single(pdev, dma, wrb->frag_len,
PCI_DMA_TODEVICE);
else
pci_unmap_page(pdev, dma, wrb->frag_len,
PCI_DMA_TODEVICE);
}
}
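unmap_tx_frag() above reassembles the mapping from the two 32-bit descriptor words; the split/join convention as a tiny self-contained sketch (struct and helper names invented for the example):

#include <stdint.h>

struct wrb_frag { uint32_t frag_pa_hi, frag_pa_lo; };

/* Split a 64-bit DMA address across the two descriptor fields... */
static void wrb_set_dma(struct wrb_frag *w, uint64_t dma)
{
	w->frag_pa_hi = (uint32_t)(dma >> 32);
	w->frag_pa_lo = (uint32_t)dma;
}

/* ...and join them back, exactly as unmap_tx_frag() does. */
static uint64_t wrb_get_dma(const struct wrb_frag *w)
{
	return ((uint64_t)w->frag_pa_hi << 32) | w->frag_pa_lo;
}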
static int make_tx_wrbs(struct be_adapter *adapter,
struct sk_buff *skb, u32 wrb_cnt, bool dummy_wrb)
{
u64 busaddr;
u32 i, copied = 0;
dma_addr_t busaddr;
int i, copied = 0;
struct pci_dev *pdev = adapter->pdev;
struct sk_buff *first_skb = skb;
struct be_queue_info *txq = &adapter->tx_obj.q;
struct be_eth_wrb *wrb;
struct be_eth_hdr_wrb *hdr;
bool map_single = false;
u16 map_head;
hdr = queue_head_node(txq);
atomic_add(wrb_cnt, &txq->used);
queue_head_inc(txq);
map_head = txq->head;
if (skb->len > skb->data_len) {
int len = skb->len - skb->data_len;
int len = skb_headlen(skb);
busaddr = pci_map_single(pdev, skb->data, len,
PCI_DMA_TODEVICE);
if (pci_dma_mapping_error(pdev, busaddr))
goto dma_err;
map_single = true;
wrb = queue_head_node(txq);
wrb_fill(wrb, busaddr, len);
be_dws_cpu_to_le(wrb, sizeof(*wrb));
@ -419,6 +451,8 @@ static int make_tx_wrbs(struct be_adapter *adapter,
busaddr = pci_map_page(pdev, frag->page,
frag->page_offset,
frag->size, PCI_DMA_TODEVICE);
if (pci_dma_mapping_error(pdev, busaddr))
goto dma_err;
wrb = queue_head_node(txq);
wrb_fill(wrb, busaddr, frag->size);
be_dws_cpu_to_le(wrb, sizeof(*wrb));
@ -438,6 +472,16 @@ static int make_tx_wrbs(struct be_adapter *adapter,
be_dws_cpu_to_le(hdr, sizeof(*hdr));
return copied;
dma_err:
txq->head = map_head;
while (copied) {
wrb = queue_head_node(txq);
unmap_tx_frag(pdev, wrb, map_single);
map_single = false;
copied -= wrb->frag_len;
queue_head_inc(txq);
}
return 0;
}
static netdev_tx_t be_xmit(struct sk_buff *skb,
@ -462,6 +506,7 @@ static netdev_tx_t be_xmit(struct sk_buff *skb,
* *BEFORE* ringing the tx doorbell, so that we serialize the
* tx compls of the current transmit, which will wake up the queue
*/
atomic_add(wrb_cnt, &txq->used);
if ((BE_MAX_TX_FRAG_COUNT + atomic_read(&txq->used)) >=
txq->len) {
netif_stop_queue(netdev);
@ -541,6 +586,9 @@ static void be_vlan_add_vid(struct net_device *netdev, u16 vid)
{
struct be_adapter *adapter = netdev_priv(netdev);
if (!be_physfn(adapter))
return;
adapter->vlan_tag[vid] = 1;
adapter->vlans_added++;
if (adapter->vlans_added <= (adapter->max_vlans + 1))
@ -551,6 +599,9 @@ static void be_vlan_rem_vid(struct net_device *netdev, u16 vid)
{
struct be_adapter *adapter = netdev_priv(netdev);
if (!be_physfn(adapter))
return;
adapter->vlan_tag[vid] = 0;
vlan_group_set_device(adapter->vlan_grp, vid, NULL);
adapter->vlans_added--;
@ -588,6 +639,28 @@ done:
return;
}
static int be_set_vf_mac(struct net_device *netdev, int vf, u8 *mac)
{
struct be_adapter *adapter = netdev_priv(netdev);
int status;
if (!adapter->sriov_enabled)
return -EPERM;
if (!is_valid_ether_addr(mac) || (vf >= num_vfs))
return -EINVAL;
status = be_cmd_pmac_del(adapter, adapter->vf_if_handle[vf],
adapter->vf_pmac_id[vf]);
status = be_cmd_pmac_add(adapter, mac, adapter->vf_if_handle[vf],
&adapter->vf_pmac_id[vf]);
if (status)
dev_err(&adapter->pdev->dev, "MAC %pM set on VF %d Failed\n",
mac, vf);
return status;
}
static void be_rx_rate_update(struct be_adapter *adapter)
{
struct be_drvr_stats *stats = drvr_stats(adapter);
@ -647,7 +720,7 @@ get_rx_page_info(struct be_adapter *adapter, u16 frag_idx)
BUG_ON(!rx_page_info->page);
if (rx_page_info->last_page_user) {
pci_unmap_page(adapter->pdev, pci_unmap_addr(rx_page_info, bus),
pci_unmap_page(adapter->pdev, dma_unmap_addr(rx_page_info, bus),
adapter->big_page_size, PCI_DMA_FROMDEVICE);
rx_page_info->last_page_user = false;
}
@ -757,7 +830,6 @@ static void skb_fill_rx_data(struct be_adapter *adapter,
done:
be_rx_stats_update(adapter, pktsize, num_rcvd);
return;
}
/* Process the RX completion indicated by rxcp when GRO is disabled */
@ -791,7 +863,6 @@ static void be_rx_compl_process(struct be_adapter *adapter,
skb->truesize = skb->len + sizeof(struct sk_buff);
skb->protocol = eth_type_trans(skb, adapter->netdev);
skb->dev = adapter->netdev;
vlanf = AMAP_GET_BITS(struct amap_eth_rx_compl, vtp, rxcp);
vtm = AMAP_GET_BITS(struct amap_eth_rx_compl, vtm, rxcp);
@ -812,8 +883,6 @@ static void be_rx_compl_process(struct be_adapter *adapter,
} else {
netif_receive_skb(skb);
}
return;
}
/* Process the RX completion indicated by rxcp when GRO is enabled */
@ -893,7 +962,6 @@ static void be_rx_compl_process_gro(struct be_adapter *adapter,
}
be_rx_stats_update(adapter, pkt_size, num_rcvd);
return;
}
static struct be_eth_rx_compl *be_rx_compl_get(struct be_adapter *adapter)
@ -959,7 +1027,7 @@ static void be_post_rx_frags(struct be_adapter *adapter)
}
page_offset = page_info->page_offset;
page_info->page = pagep;
pci_unmap_addr_set(page_info, bus, page_dmaaddr);
dma_unmap_addr_set(page_info, bus, page_dmaaddr);
frag_dmaaddr = page_dmaaddr + page_info->page_offset;
rxd = queue_head_node(rxq);
@ -987,8 +1055,6 @@ static void be_post_rx_frags(struct be_adapter *adapter)
/* Let be_worker replenish when memory is available */
adapter->rx_post_starved = true;
}
return;
}
static struct be_eth_tx_compl *be_tx_compl_get(struct be_queue_info *tx_cq)
@ -1012,35 +1078,26 @@ static void be_tx_compl_process(struct be_adapter *adapter, u16 last_index)
struct be_eth_wrb *wrb;
struct sk_buff **sent_skbs = adapter->tx_obj.sent_skb_list;
struct sk_buff *sent_skb;
u64 busaddr;
u16 cur_index, num_wrbs = 0;
u16 cur_index, num_wrbs = 1; /* account for hdr wrb */
bool unmap_skb_hdr = true;
cur_index = txq->tail;
sent_skb = sent_skbs[cur_index];
sent_skb = sent_skbs[txq->tail];
BUG_ON(!sent_skb);
sent_skbs[cur_index] = NULL;
wrb = queue_tail_node(txq);
be_dws_le_to_cpu(wrb, sizeof(*wrb));
busaddr = ((u64)wrb->frag_pa_hi << 32) | (u64)wrb->frag_pa_lo;
if (busaddr != 0) {
pci_unmap_single(adapter->pdev, busaddr,
wrb->frag_len, PCI_DMA_TODEVICE);
}
num_wrbs++;
sent_skbs[txq->tail] = NULL;
/* skip header wrb */
queue_tail_inc(txq);
while (cur_index != last_index) {
do {
cur_index = txq->tail;
wrb = queue_tail_node(txq);
be_dws_le_to_cpu(wrb, sizeof(*wrb));
busaddr = ((u64)wrb->frag_pa_hi << 32) | (u64)wrb->frag_pa_lo;
if (busaddr != 0) {
pci_unmap_page(adapter->pdev, busaddr,
wrb->frag_len, PCI_DMA_TODEVICE);
}
unmap_tx_frag(adapter->pdev, wrb, (unmap_skb_hdr &&
skb_headlen(sent_skb)));
unmap_skb_hdr = false;
num_wrbs++;
queue_tail_inc(txq);
}
} while (cur_index != last_index);
atomic_sub(num_wrbs, &txq->used);
@ -1255,6 +1312,8 @@ static int be_tx_queues_create(struct be_adapter *adapter)
/* Ask BE to create Tx Event queue */
if (be_cmd_eq_create(adapter, eq, adapter->tx_eq.cur_eqd))
goto tx_eq_free;
adapter->base_eq_id = adapter->tx_eq.q.id;
/* Alloc TX eth compl queue */
cq = &adapter->tx_obj.cq;
if (be_queue_alloc(adapter, cq, TX_CQ_LEN,
@ -1382,7 +1441,7 @@ rx_eq_free:
/* There are 8 evt ids per func. Returns the evt id's bit number */
static inline int be_evt_bit_get(struct be_adapter *adapter, u32 eq_id)
{
return eq_id % 8;
return eq_id - adapter->base_eq_id;
}
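The fix drops the assumption that a function's event-queue ids start at a multiple of 8 and instead offsets from the first id the function was handed (base_eq_id, recorded when the TX EQ is created). A small worked example; the concrete id values are hypothetical:

#include <assert.h>

static int evt_bit(unsigned int base_eq_id, unsigned int eq_id)
{
	return (int)(eq_id - base_eq_id);
}

int main(void)
{
	/* suppose firmware handed this function ids 20, 21, 22 */
	unsigned int base = 20;

	assert(evt_bit(base, 22) == 2);	/* relative bit, correct */
	assert(22 % 8 == 6);		/* old formula picks bit 6 */
	return 0;
}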
static irqreturn_t be_intx(int irq, void *dev)
@ -1557,7 +1616,27 @@ static void be_msix_enable(struct be_adapter *adapter)
BE_NUM_MSIX_VECTORS);
if (status == 0)
adapter->msix_enabled = true;
return;
}
static void be_sriov_enable(struct be_adapter *adapter)
{
#ifdef CONFIG_PCI_IOV
int status;
if (be_physfn(adapter) && num_vfs) {
status = pci_enable_sriov(adapter->pdev, num_vfs);
adapter->sriov_enabled = status ? false : true;
}
#endif
}
static void be_sriov_disable(struct be_adapter *adapter)
{
#ifdef CONFIG_PCI_IOV
if (adapter->sriov_enabled) {
pci_disable_sriov(adapter->pdev);
adapter->sriov_enabled = false;
}
#endif
}
static inline int be_msix_vec_get(struct be_adapter *adapter, u32 eq_id)
@ -1617,6 +1696,9 @@ static int be_irq_register(struct be_adapter *adapter)
status = be_msix_register(adapter);
if (status == 0)
goto done;
/* INTx is not supported for VF */
if (!be_physfn(adapter))
return status;
}
/* INTx */
@ -1651,7 +1733,6 @@ static void be_irq_unregister(struct be_adapter *adapter)
be_free_irq(adapter, &adapter->rx_eq);
done:
adapter->isr_registered = false;
return;
}
static int be_open(struct net_device *netdev)
@ -1690,14 +1771,17 @@ static int be_open(struct net_device *netdev)
goto ret_sts;
be_link_status_update(adapter, link_up);
if (be_physfn(adapter))
status = be_vid_config(adapter);
if (status)
goto ret_sts;
if (be_physfn(adapter)) {
status = be_cmd_set_flow_control(adapter,
adapter->tx_fc, adapter->rx_fc);
if (status)
goto ret_sts;
}
schedule_delayed_work(&adapter->work, msecs_to_jiffies(100));
ret_sts:
@ -1745,22 +1829,48 @@ static int be_setup_wol(struct be_adapter *adapter, bool enable)
static int be_setup(struct be_adapter *adapter)
{
struct net_device *netdev = adapter->netdev;
u32 cap_flags, en_flags;
u32 cap_flags, en_flags, vf = 0;
int status;
u8 mac[ETH_ALEN];
cap_flags = BE_IF_FLAGS_UNTAGGED | BE_IF_FLAGS_BROADCAST |
BE_IF_FLAGS_MCAST_PROMISCUOUS |
cap_flags = en_flags = BE_IF_FLAGS_UNTAGGED | BE_IF_FLAGS_BROADCAST;
if (be_physfn(adapter)) {
cap_flags |= BE_IF_FLAGS_MCAST_PROMISCUOUS |
BE_IF_FLAGS_PROMISCUOUS |
BE_IF_FLAGS_PASS_L3L4_ERRORS;
en_flags = BE_IF_FLAGS_UNTAGGED | BE_IF_FLAGS_BROADCAST |
BE_IF_FLAGS_PASS_L3L4_ERRORS;
en_flags |= BE_IF_FLAGS_PASS_L3L4_ERRORS;
}
status = be_cmd_if_create(adapter, cap_flags, en_flags,
netdev->dev_addr, false/* pmac_invalid */,
&adapter->if_handle, &adapter->pmac_id);
&adapter->if_handle, &adapter->pmac_id, 0);
if (status != 0)
goto do_none;
if (be_physfn(adapter)) {
while (vf < num_vfs) {
cap_flags = en_flags = BE_IF_FLAGS_UNTAGGED
| BE_IF_FLAGS_BROADCAST;
status = be_cmd_if_create(adapter, cap_flags, en_flags,
mac, true, &adapter->vf_if_handle[vf],
NULL, vf+1);
if (status) {
dev_err(&adapter->pdev->dev,
"Interface Create failed for VF %d\n", vf);
goto if_destroy;
}
vf++;
}
} else {
status = be_cmd_mac_addr_query(adapter, mac,
MAC_ADDRESS_TYPE_NETWORK, false, adapter->if_handle);
if (!status) {
memcpy(adapter->netdev->dev_addr, mac, ETH_ALEN);
memcpy(adapter->netdev->perm_addr, mac, ETH_ALEN);
}
}
status = be_tx_queues_create(adapter);
if (status != 0)
goto if_destroy;
@ -1782,6 +1892,9 @@ rx_qs_destroy:
tx_qs_destroy:
be_tx_queues_destroy(adapter);
if_destroy:
for (vf = 0; vf < num_vfs; vf++)
if (adapter->vf_if_handle[vf])
be_cmd_if_destroy(adapter, adapter->vf_if_handle[vf]);
be_cmd_if_destroy(adapter, adapter->if_handle);
do_none:
return status;
@ -2061,6 +2174,7 @@ static struct net_device_ops be_netdev_ops = {
.ndo_vlan_rx_register = be_vlan_register,
.ndo_vlan_rx_add_vid = be_vlan_add_vid,
.ndo_vlan_rx_kill_vid = be_vlan_rem_vid,
.ndo_set_vf_mac = be_set_vf_mac
};
static void be_netdev_init(struct net_device *netdev)
@ -2102,37 +2216,48 @@ static void be_unmap_pci_bars(struct be_adapter *adapter)
iounmap(adapter->csr);
if (adapter->db)
iounmap(adapter->db);
if (adapter->pcicfg)
if (adapter->pcicfg && be_physfn(adapter))
iounmap(adapter->pcicfg);
}
static int be_map_pci_bars(struct be_adapter *adapter)
{
u8 __iomem *addr;
int pcicfg_reg;
int pcicfg_reg, db_reg;
if (be_physfn(adapter)) {
addr = ioremap_nocache(pci_resource_start(adapter->pdev, 2),
pci_resource_len(adapter->pdev, 2));
if (addr == NULL)
return -ENOMEM;
adapter->csr = addr;
}
addr = ioremap_nocache(pci_resource_start(adapter->pdev, 4),
128 * 1024);
if (adapter->generation == BE_GEN2) {
pcicfg_reg = 1;
db_reg = 4;
} else {
pcicfg_reg = 0;
if (be_physfn(adapter))
db_reg = 4;
else
db_reg = 0;
}
addr = ioremap_nocache(pci_resource_start(adapter->pdev, db_reg),
pci_resource_len(adapter->pdev, db_reg));
if (addr == NULL)
goto pci_map_err;
adapter->db = addr;
if (adapter->generation == BE_GEN2)
pcicfg_reg = 1;
else
pcicfg_reg = 0;
addr = ioremap_nocache(pci_resource_start(adapter->pdev, pcicfg_reg),
if (be_physfn(adapter)) {
addr = ioremap_nocache(
pci_resource_start(adapter->pdev, pcicfg_reg),
pci_resource_len(adapter->pdev, pcicfg_reg));
if (addr == NULL)
goto pci_map_err;
adapter->pcicfg = addr;
} else
adapter->pcicfg = adapter->db + SRIOV_VF_PCICFG_OFFSET;
return 0;
pci_map_err:
@ -2246,6 +2371,8 @@ static void __devexit be_remove(struct pci_dev *pdev)
be_ctrl_cleanup(adapter);
be_sriov_disable(adapter);
be_msix_disable(adapter);
pci_set_drvdata(pdev, NULL);
@ -2270,8 +2397,11 @@ static int be_get_config(struct be_adapter *adapter)
return status;
memset(mac, 0, ETH_ALEN);
if (be_physfn(adapter)) {
status = be_cmd_mac_addr_query(adapter, mac,
MAC_ADDRESS_TYPE_NETWORK, true /*permanent */, 0);
if (status)
return status;
@ -2280,6 +2410,7 @@ static int be_get_config(struct be_adapter *adapter)
memcpy(adapter->netdev->dev_addr, mac, ETH_ALEN);
memcpy(adapter->netdev->perm_addr, mac, ETH_ALEN);
}
if (adapter->cap & 0x400)
adapter->max_vlans = BE_NUM_VLANS_SUPPORTED/4;
@ -2296,6 +2427,7 @@ static int __devinit be_probe(struct pci_dev *pdev,
struct be_adapter *adapter;
struct net_device *netdev;
status = pci_enable_device(pdev);
if (status)
goto do_none;
@ -2344,21 +2476,25 @@ static int __devinit be_probe(struct pci_dev *pdev,
}
}
be_sriov_enable(adapter);
status = be_ctrl_init(adapter);
if (status)
goto free_netdev;
/* sync up with fw's ready state */
if (be_physfn(adapter)) {
status = be_cmd_POST(adapter);
if (status)
goto ctrl_clean;
/* tell fw we're ready to fire cmds */
status = be_cmd_fw_init(adapter);
status = be_cmd_reset_function(adapter);
if (status)
goto ctrl_clean;
}
status = be_cmd_reset_function(adapter);
/* tell fw we're ready to fire cmds */
status = be_cmd_fw_init(adapter);
if (status)
goto ctrl_clean;
@ -2391,6 +2527,7 @@ ctrl_clean:
be_ctrl_cleanup(adapter);
free_netdev:
be_msix_disable(adapter);
be_sriov_disable(adapter);
free_netdev(adapter->netdev);
pci_set_drvdata(pdev, NULL);
rel_reg:
@ -2474,8 +2611,6 @@ static void be_shutdown(struct pci_dev *pdev)
be_setup_wol(adapter, true);
pci_disable_device(pdev);
return;
}
static pci_ers_result_t be_eeh_err_detected(struct pci_dev *pdev,
@ -2557,7 +2692,6 @@ static void be_eeh_resume(struct pci_dev *pdev)
return;
err:
dev_err(&adapter->pdev->dev, "EEH resume failed\n");
return;
}
static struct pci_error_handlers be_eeh_handlers = {
@ -2587,6 +2721,13 @@ static int __init be_init_module(void)
rx_frag_size = 2048;
}
if (num_vfs > 32) {
printk(KERN_WARNING DRV_NAME
" : Module param num_vfs must not be greater than 32."
"Using 32\n");
num_vfs = 32;
}
return pci_register_driver(&be_driver);
}
module_init(be_init_module);

@ -33,6 +33,7 @@
#include <asm/dma.h>
#include <linux/dma-mapping.h>
#include <asm/div64.h>
#include <asm/dpmc.h>
#include <asm/blackfin.h>
#include <asm/cacheflush.h>
@ -80,9 +81,6 @@ static u16 pin_req[] = P_RMII0;
static u16 pin_req[] = P_MII0;
#endif
static void bfin_mac_disable(void);
static void bfin_mac_enable(void);
static void desc_list_free(void)
{
struct net_dma_desc_rx *r;
@ -202,6 +200,11 @@ static int desc_list_init(void)
goto init_error;
}
skb_reserve(new_skb, NET_IP_ALIGN);
/* Invalidate the data cache over the skb->data range when it is a
 * write-back cache, to keep stale cache lines from overwriting the
 * new data arriving via DMA.
 */
blackfin_dcache_invalidate_range((unsigned long)new_skb->head,
(unsigned long)new_skb->end);
r->skb = new_skb;
/*
@ -254,7 +257,7 @@ init_error:
* MII operations
*/
/* Wait until the previous MDC/MDIO transaction has completed */
static void bfin_mdio_poll(void)
static int bfin_mdio_poll(void)
{
int timeout_cnt = MAX_TIMEOUT_CNT;
@ -264,22 +267,30 @@ static void bfin_mdio_poll(void)
if (timeout_cnt-- < 0) {
printk(KERN_ERR DRV_NAME
": wait MDC/MDIO transaction to complete timeout\n");
break;
return -ETIMEDOUT;
}
}
return 0;
}
/* Read an off-chip register in a PHY through the MDC/MDIO port */
static int bfin_mdiobus_read(struct mii_bus *bus, int phy_addr, int regnum)
{
bfin_mdio_poll();
int ret;
ret = bfin_mdio_poll();
if (ret)
return ret;
/* read mode */
bfin_write_EMAC_STAADD(SET_PHYAD((u16) phy_addr) |
SET_REGAD((u16) regnum) |
STABUSY);
bfin_mdio_poll();
ret = bfin_mdio_poll();
if (ret)
return ret;
return (int) bfin_read_EMAC_STADAT();
}
@ -288,7 +299,11 @@ static int bfin_mdiobus_read(struct mii_bus *bus, int phy_addr, int regnum)
static int bfin_mdiobus_write(struct mii_bus *bus, int phy_addr, int regnum,
u16 value)
{
bfin_mdio_poll();
int ret;
ret = bfin_mdio_poll();
if (ret)
return ret;
bfin_write_EMAC_STADAT((u32) value);
@ -298,9 +313,7 @@ static int bfin_mdiobus_write(struct mii_bus *bus, int phy_addr, int regnum,
STAOP |
STABUSY);
bfin_mdio_poll();
return 0;
return bfin_mdio_poll();
}
static int bfin_mdiobus_reset(struct mii_bus *bus)
@ -458,6 +471,14 @@ static int mii_probe(struct net_device *dev)
* Ethtool support
*/
/*
* interrupt routine for magic packet wakeup
*/
static irqreturn_t bfin_mac_wake_interrupt(int irq, void *dev_id)
{
return IRQ_HANDLED;
}
static int
bfin_mac_ethtool_getsettings(struct net_device *dev, struct ethtool_cmd *cmd)
{
@ -492,11 +513,57 @@ static void bfin_mac_ethtool_getdrvinfo(struct net_device *dev,
strcpy(info->bus_info, dev_name(&dev->dev));
}
static void bfin_mac_ethtool_getwol(struct net_device *dev,
struct ethtool_wolinfo *wolinfo)
{
struct bfin_mac_local *lp = netdev_priv(dev);
wolinfo->supported = WAKE_MAGIC;
wolinfo->wolopts = lp->wol;
}
static int bfin_mac_ethtool_setwol(struct net_device *dev,
struct ethtool_wolinfo *wolinfo)
{
struct bfin_mac_local *lp = netdev_priv(dev);
int rc;
if (wolinfo->wolopts & (WAKE_MAGICSECURE |
WAKE_UCAST |
WAKE_MCAST |
WAKE_BCAST |
WAKE_ARP))
return -EOPNOTSUPP;
lp->wol = wolinfo->wolopts;
if (lp->wol && !lp->irq_wake_requested) {
/* register wake irq handler */
rc = request_irq(IRQ_MAC_WAKEDET, bfin_mac_wake_interrupt,
IRQF_DISABLED, "EMAC_WAKE", dev);
if (rc)
return rc;
lp->irq_wake_requested = true;
}
if (!lp->wol && lp->irq_wake_requested) {
free_irq(IRQ_MAC_WAKEDET, dev);
lp->irq_wake_requested = false;
}
/* Make sure the PHY driver doesn't suspend */
device_init_wakeup(&dev->dev, lp->wol);
return 0;
}
static const struct ethtool_ops bfin_mac_ethtool_ops = {
.get_settings = bfin_mac_ethtool_getsettings,
.set_settings = bfin_mac_ethtool_setsettings,
.get_link = ethtool_op_get_link,
.get_drvinfo = bfin_mac_ethtool_getdrvinfo,
.get_wol = bfin_mac_ethtool_getwol,
.set_wol = bfin_mac_ethtool_setwol,
};
/**************************************************************************/
@ -509,10 +576,11 @@ void setup_system_regs(struct net_device *dev)
* Configure checksum support and rcve frame word alignment
*/
sysctl = bfin_read_EMAC_SYSCTL();
#if defined(BFIN_MAC_CSUM_OFFLOAD)
sysctl |= RXDWA | RXCKS;
#else
sysctl |= RXDWA;
#if defined(BFIN_MAC_CSUM_OFFLOAD)
sysctl |= RXCKS;
#else
sysctl &= ~RXCKS;
#endif
bfin_write_EMAC_SYSCTL(sysctl);
@ -551,6 +619,309 @@ static int bfin_mac_set_mac_address(struct net_device *dev, void *p)
return 0;
}
#ifdef CONFIG_BFIN_MAC_USE_HWSTAMP
#define bfin_mac_hwtstamp_is_none(cfg) ((cfg) == HWTSTAMP_FILTER_NONE)
static int bfin_mac_hwtstamp_ioctl(struct net_device *netdev,
struct ifreq *ifr, int cmd)
{
struct hwtstamp_config config;
struct bfin_mac_local *lp = netdev_priv(netdev);
u16 ptpctl;
u32 ptpfv1, ptpfv2, ptpfv3, ptpfoff;
if (copy_from_user(&config, ifr->ifr_data, sizeof(config)))
return -EFAULT;
pr_debug("%s config flag:0x%x, tx_type:0x%x, rx_filter:0x%x\n",
__func__, config.flags, config.tx_type, config.rx_filter);
/* reserved for future extensions */
if (config.flags)
return -EINVAL;
if ((config.tx_type != HWTSTAMP_TX_OFF) &&
(config.tx_type != HWTSTAMP_TX_ON))
return -ERANGE;
ptpctl = bfin_read_EMAC_PTP_CTL();
switch (config.rx_filter) {
case HWTSTAMP_FILTER_NONE:
/*
* Don't allow any timestamping
*/
ptpfv3 = 0xFFFFFFFF;
bfin_write_EMAC_PTP_FV3(ptpfv3);
break;
case HWTSTAMP_FILTER_PTP_V1_L4_EVENT:
case HWTSTAMP_FILTER_PTP_V1_L4_SYNC:
case HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ:
/*
* Clear the five comparison mask bits (bits[12:8]) in EMAC_PTP_CTL
* to enable all the field matches.
*/
ptpctl &= ~0x1F00;
bfin_write_EMAC_PTP_CTL(ptpctl);
/*
* Keep the default values of the EMAC_PTP_FOFF register.
*/
ptpfoff = 0x4A24170C;
bfin_write_EMAC_PTP_FOFF(ptpfoff);
/*
* Keep the default values of the EMAC_PTP_FV1 and EMAC_PTP_FV2
* registers.
*/
ptpfv1 = 0x11040800;
bfin_write_EMAC_PTP_FV1(ptpfv1);
ptpfv2 = 0x0140013F;
bfin_write_EMAC_PTP_FV2(ptpfv2);
/*
* The default value (0xFFFC) allows the timestamping of both
* received Sync messages and Delay_Req messages.
*/
ptpfv3 = 0xFFFFFFFC;
bfin_write_EMAC_PTP_FV3(ptpfv3);
config.rx_filter = HWTSTAMP_FILTER_PTP_V1_L4_EVENT;
break;
case HWTSTAMP_FILTER_PTP_V2_L4_EVENT:
case HWTSTAMP_FILTER_PTP_V2_L4_SYNC:
case HWTSTAMP_FILTER_PTP_V2_DELAY_REQ:
/* Clear all five comparison mask bits (bits[12:8]) in the
* EMAC_PTP_CTL register to enable all the field matches.
*/
ptpctl &= ~0x1F00;
bfin_write_EMAC_PTP_CTL(ptpctl);
/*
* Keep the default values of the EMAC_PTP_FOFF register, except set
* the PTPCOF field to 0x2A.
*/
ptpfoff = 0x2A24170C;
bfin_write_EMAC_PTP_FOFF(ptpfoff);
/*
* Keep the default values of the EMAC_PTP_FV1 and EMAC_PTP_FV2
* registers.
*/
ptpfv1 = 0x11040800;
bfin_write_EMAC_PTP_FV1(ptpfv1);
ptpfv2 = 0x0140013F;
bfin_write_EMAC_PTP_FV2(ptpfv2);
/*
* To allow the timestamping of Pdelay_Req and Pdelay_Resp, set
* the value to 0xFFF0.
*/
ptpfv3 = 0xFFFFFFF0;
bfin_write_EMAC_PTP_FV3(ptpfv3);
config.rx_filter = HWTSTAMP_FILTER_PTP_V2_L4_EVENT;
break;
case HWTSTAMP_FILTER_PTP_V2_L2_EVENT:
case HWTSTAMP_FILTER_PTP_V2_L2_SYNC:
case HWTSTAMP_FILTER_PTP_V2_L2_DELAY_REQ:
/*
* Clear bits 8 and 12 of the EMAC_PTP_CTL register to enable only the
* EFTM and PTPCM field comparison.
*/
ptpctl &= ~0x1100;
bfin_write_EMAC_PTP_CTL(ptpctl);
/*
* Keep the default values of all the fields of the EMAC_PTP_FOFF
* register, except set the PTPCOF field to 0x0E.
*/
ptpfoff = 0x0E24170C;
bfin_write_EMAC_PTP_FOFF(ptpfoff);
/*
* Program bits [15:0] of the EMAC_PTP_FV1 register to 0x88F7, which
* corresponds to PTP messages on the MAC layer.
*/
ptpfv1 = 0x110488F7;
bfin_write_EMAC_PTP_FV1(ptpfv1);
ptpfv2 = 0x0140013F;
bfin_write_EMAC_PTP_FV2(ptpfv2);
/*
* To allow the timestamping of Pdelay_Req and Pdelay_Resp
* messages, set the value to 0xFFF0.
*/
ptpfv3 = 0xFFFFFFF0;
bfin_write_EMAC_PTP_FV3(ptpfv3);
config.rx_filter = HWTSTAMP_FILTER_PTP_V2_L2_EVENT;
break;
default:
return -ERANGE;
}
if (config.tx_type == HWTSTAMP_TX_OFF &&
bfin_mac_hwtstamp_is_none(config.rx_filter)) {
ptpctl &= ~PTP_EN;
bfin_write_EMAC_PTP_CTL(ptpctl);
SSYNC();
} else {
ptpctl |= PTP_EN;
bfin_write_EMAC_PTP_CTL(ptpctl);
/*
* clear any existing timestamp
*/
bfin_read_EMAC_PTP_RXSNAPLO();
bfin_read_EMAC_PTP_RXSNAPHI();
bfin_read_EMAC_PTP_TXSNAPLO();
bfin_read_EMAC_PTP_TXSNAPHI();
/*
* Set registers so that rollover occurs soon to test this.
*/
bfin_write_EMAC_PTP_TIMELO(0x00000000);
bfin_write_EMAC_PTP_TIMEHI(0xFF800000);
SSYNC();
lp->compare.last_update = 0;
timecounter_init(&lp->clock,
&lp->cycles,
ktime_to_ns(ktime_get_real()));
timecompare_update(&lp->compare, 0);
}
lp->stamp_cfg = config;
return copy_to_user(ifr->ifr_data, &config, sizeof(config)) ?
-EFAULT : 0;
}
static void bfin_dump_hwtamp(char *s, ktime_t *hw, ktime_t *ts, struct timecompare *cmp)
{
ktime_t sys = ktime_get_real();
pr_debug("%s %s hardware:%d,%d transform system:%d,%d system:%d,%d, cmp:%lld, %lld\n",
__func__, s, hw->tv.sec, hw->tv.nsec, ts->tv.sec, ts->tv.nsec, sys.tv.sec,
sys.tv.nsec, cmp->offset, cmp->skew);
}
static void bfin_tx_hwtstamp(struct net_device *netdev, struct sk_buff *skb)
{
struct bfin_mac_local *lp = netdev_priv(netdev);
union skb_shared_tx *shtx = skb_tx(skb);
if (shtx->hardware) {
int timeout_cnt = MAX_TIMEOUT_CNT;
/* When doing time stamping, keep the connection to the socket
* a while longer
*/
shtx->in_progress = 1;
/*
* The timestamping is done at the EMAC module's MII/RMII interface
* when the module sees the Start of Frame of an event message packet. This
* interface is the closest possible place to the physical Ethernet transmission
* medium, providing the best timing accuracy.
*/
while ((!(bfin_read_EMAC_PTP_ISTAT() & TXTL)) && (--timeout_cnt))
udelay(1);
if (timeout_cnt == 0)
printk(KERN_ERR DRV_NAME
": fails to timestamp the TX packet\n");
else {
struct skb_shared_hwtstamps shhwtstamps;
u64 ns;
u64 regval;
regval = bfin_read_EMAC_PTP_TXSNAPLO();
regval |= (u64)bfin_read_EMAC_PTP_TXSNAPHI() << 32;
memset(&shhwtstamps, 0, sizeof(shhwtstamps));
ns = timecounter_cyc2time(&lp->clock,
regval);
timecompare_update(&lp->compare, ns);
shhwtstamps.hwtstamp = ns_to_ktime(ns);
shhwtstamps.syststamp =
timecompare_transform(&lp->compare, ns);
skb_tstamp_tx(skb, &shhwtstamps);
bfin_dump_hwtamp("TX", &shhwtstamps.hwtstamp, &shhwtstamps.syststamp, &lp->compare);
}
}
}
static void bfin_rx_hwtstamp(struct net_device *netdev, struct sk_buff *skb)
{
struct bfin_mac_local *lp = netdev_priv(netdev);
u32 valid;
u64 regval, ns;
struct skb_shared_hwtstamps *shhwtstamps;
if (bfin_mac_hwtstamp_is_none(lp->stamp_cfg.rx_filter))
return;
valid = bfin_read_EMAC_PTP_ISTAT() & RXEL;
if (!valid)
return;
shhwtstamps = skb_hwtstamps(skb);
regval = bfin_read_EMAC_PTP_RXSNAPLO();
regval |= (u64)bfin_read_EMAC_PTP_RXSNAPHI() << 32;
ns = timecounter_cyc2time(&lp->clock, regval);
timecompare_update(&lp->compare, ns);
memset(shhwtstamps, 0, sizeof(*shhwtstamps));
shhwtstamps->hwtstamp = ns_to_ktime(ns);
shhwtstamps->syststamp = timecompare_transform(&lp->compare, ns);
bfin_dump_hwtamp("RX", &shhwtstamps->hwtstamp, &shhwtstamps->syststamp, &lp->compare);
}
/*
* bfin_read_clock - read raw cycle counter (to be used by time counter)
*/
static cycle_t bfin_read_clock(const struct cyclecounter *tc)
{
u64 stamp;
stamp = bfin_read_EMAC_PTP_TIMELO();
stamp |= (u64)bfin_read_EMAC_PTP_TIMEHI() << 32ULL;
return stamp;
}
#define PTP_CLK 25000000
static void bfin_mac_hwtstamp_init(struct net_device *netdev)
{
struct bfin_mac_local *lp = netdev_priv(netdev);
u64 append;
/* Initialize hardware timer */
append = PTP_CLK * (1ULL << 32);
do_div(append, get_sclk());
bfin_write_EMAC_PTP_ADDEND((u32)append);
memset(&lp->cycles, 0, sizeof(lp->cycles));
lp->cycles.read = bfin_read_clock;
lp->cycles.mask = CLOCKSOURCE_MASK(64);
lp->cycles.mult = 1000000000 / PTP_CLK;
lp->cycles.shift = 0;
/* Synchronize our NIC clock against system wall clock */
memset(&lp->compare, 0, sizeof(lp->compare));
lp->compare.source = &lp->clock;
lp->compare.target = ktime_get_real;
lp->compare.num_samples = 10;
/* Initialize hwstamp config */
lp->stamp_cfg.rx_filter = HWTSTAMP_FILTER_NONE;
lp->stamp_cfg.tx_type = HWTSTAMP_TX_OFF;
}
#else
# define bfin_mac_hwtstamp_is_none(cfg) 0
# define bfin_mac_hwtstamp_init(dev)
# define bfin_mac_hwtstamp_ioctl(dev, ifr, cmd) (-EOPNOTSUPP)
# define bfin_rx_hwtstamp(dev, skb)
# define bfin_tx_hwtstamp(dev, skb)
#endif
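For the hardware-timer setup above: EMAC_PTP_ADDEND holds the PTP-to-system clock ratio in 32-bit fixed point, i.e. 2^32 * PTP_CLK / sclk. A worked example, assuming a hypothetical 125 MHz SCLK:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define PTP_CLK 25000000ULL

int main(void)
{
	uint64_t sclk = 125000000;	/* assumed SCLK */
	uint32_t addend = (uint32_t)((PTP_CLK << 32) / sclk);

	/* 2^32 * 25/125 = 2^32 / 5 */
	printf("EMAC_PTP_ADDEND = 0x%08" PRIX32 "\n", addend);	/* 0x33333333 */
	return 0;
}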
static void adjust_tx_list(void)
{
int timeout_cnt = MAX_TIMEOUT_CNT;
@ -608,18 +979,32 @@ static int bfin_mac_hard_start_xmit(struct sk_buff *skb,
{
u16 *data;
u32 data_align = (unsigned long)(skb->data) & 0x3;
union skb_shared_tx *shtx = skb_tx(skb);
current_tx_ptr->skb = skb;
if (data_align == 0x2) {
/* move skb->data to current_tx_ptr payload */
data = (u16 *)(skb->data) - 1;
*data = (u16)(skb->len);
/*
* When transmitting an Ethernet packet, the PTP_TSYNC module requires
* a DMA_Length_Word field associated with the packet. The lower 12 bits
* of this field are the length of the packet payload in bytes and the higher
* 4 bits are the timestamping enable field.
*/
if (shtx->hardware)
*data |= 0x1000;
current_tx_ptr->desc_a.start_addr = (u32)data;
/* this is important! */
blackfin_dcache_flush_range((u32)data,
(u32)((u8 *)data + skb->len + 4));
} else {
*((u16 *)(current_tx_ptr->packet)) = (u16)(skb->len);
/* enable timestamping for the sent packet */
if (shtx->hardware)
*((u16 *)(current_tx_ptr->packet)) |= 0x1000;
memcpy((u8 *)(current_tx_ptr->packet + 2), skb->data,
skb->len);
current_tx_ptr->desc_a.start_addr =
@ -653,20 +1038,42 @@ static int bfin_mac_hard_start_xmit(struct sk_buff *skb,
out:
adjust_tx_list();
bfin_tx_hwtstamp(dev, skb);
current_tx_ptr = current_tx_ptr->next;
dev->trans_start = jiffies;
dev->stats.tx_packets++;
dev->stats.tx_bytes += (skb->len);
return NETDEV_TX_OK;
}
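Restating the DMA_Length_Word handling above as a helper: bits [11:0] carry the payload length, and the driver sets bit 12 of the upper timestamp-enable field to request a TX timestamp. The mask names are invented for the sketch:

#include <stdint.h>

#define TX_LEN_MASK	0x0FFFu	/* bits [11:0]: payload length  */
#define TX_TSTAMP_EN	0x1000u	/* bit 12: request TX timestamp */

static uint16_t dma_length_word(uint16_t len, int want_tstamp)
{
	uint16_t w = len & TX_LEN_MASK;

	if (want_tstamp)
		w |= TX_TSTAMP_EN;
	return w;
}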
#define IP_HEADER_OFF 0
#define RX_ERROR_MASK (RX_LONG | RX_ALIGN | RX_CRC | RX_LEN | \
RX_FRAG | RX_ADDR | RX_DMAO | RX_PHY | RX_LATE | RX_RANGE)
static void bfin_mac_rx(struct net_device *dev)
{
struct sk_buff *skb, *new_skb;
unsigned short len;
struct bfin_mac_local *lp __maybe_unused = netdev_priv(dev);
#if defined(BFIN_MAC_CSUM_OFFLOAD)
unsigned int i;
unsigned char fcs[ETH_FCS_LEN + 1];
#endif
/* check if the frame status word reports an error condition,
 * in which case we simply drop the packet
 */
if (current_rx_ptr->status.status_word & RX_ERROR_MASK) {
printk(KERN_NOTICE DRV_NAME
": rx: receive error - packet dropped\n");
dev->stats.rx_dropped++;
goto out;
}
/* allocate a new skb for next time receive */
skb = current_rx_ptr->skb;
new_skb = dev_alloc_skb(PKT_BUF_SZ + NET_IP_ALIGN);
if (!new_skb) {
printk(KERN_NOTICE DRV_NAME
@ -676,34 +1083,59 @@ static void bfin_mac_rx(struct net_device *dev)
}
/* reserve 2 bytes for RXDWA padding */
skb_reserve(new_skb, NET_IP_ALIGN);
current_rx_ptr->skb = new_skb;
current_rx_ptr->desc_a.start_addr = (unsigned long)new_skb->data - 2;
/* Invalidate the data cache over the skb->data range when it is a
 * write-back cache, to keep stale cache lines from overwriting the
 * new data arriving via DMA.
 */
blackfin_dcache_invalidate_range((unsigned long)new_skb->head,
(unsigned long)new_skb->end);
current_rx_ptr->skb = new_skb;
current_rx_ptr->desc_a.start_addr = (unsigned long)new_skb->data - 2;
len = (unsigned short)((current_rx_ptr->status.status_word) & RX_FRLEN);
/* Deduct the Ethernet FCS length from the Ethernet payload length */
len -= ETH_FCS_LEN;
skb_put(skb, len);
blackfin_dcache_invalidate_range((unsigned long)skb->head,
(unsigned long)skb->tail);
skb->protocol = eth_type_trans(skb, dev);
bfin_rx_hwtstamp(dev, skb);
#if defined(BFIN_MAC_CSUM_OFFLOAD)
/* Checksum offloading only works for IPv4 packets with the standard IP header
* length of 20 bytes, because the blackfin MAC checksum calculation is
* based on that assumption. We must NOT use the calculated checksum if the
* IP version or header length breaks that assumption.
*/
if (skb->data[IP_HEADER_OFF] == 0x45) {
skb->csum = current_rx_ptr->status.ip_payload_csum;
/*
* Remove the Ethernet FCS from the hardware-generated IP payload checksum.
* The IP checksum uses 16-bit one's-complement arithmetic, in which
* removing a value from a checksum is the same as adding its inversion.
* If the IP payload length is odd, the inverted FCS must also
* start at an odd address, so leave the first byte zero.
*/
if (skb->len % 2) {
fcs[0] = 0;
for (i = 0; i < ETH_FCS_LEN; i++)
fcs[i + 1] = ~skb->data[skb->len + i];
skb->csum = csum_partial(fcs, ETH_FCS_LEN + 1, skb->csum);
} else {
for (i = 0; i < ETH_FCS_LEN; i++)
fcs[i] = ~skb->data[skb->len + i];
skb->csum = csum_partial(fcs, ETH_FCS_LEN, skb->csum);
}
skb->ip_summed = CHECKSUM_COMPLETE;
}
#endif
netif_rx(skb);
dev->stats.rx_packets++;
dev->stats.rx_bytes += len;
out:
current_rx_ptr->status.status_word = 0x00000000;
current_rx_ptr = current_rx_ptr->next;
out:
return;
}
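The FCS-stripping arithmetic above leans on a one's-complement identity: x + ~x = 0xFFFF, which is the second representation of zero, so adding the inversion of a value removes it from a checksum. A self-checking sketch:

#include <assert.h>
#include <stdint.h>

/* 16-bit one's-complement addition with end-around carry, the
 * primitive underlying the IP checksum. */
static uint16_t ones_add(uint16_t a, uint16_t b)
{
	uint32_t s = (uint32_t)a + b;

	return (uint16_t)((s & 0xFFFF) + (s >> 16));
}

int main(void)
{
	uint16_t sum = 0x1234, x = 0xBEEF;

	/* fold x in, then cancel it by adding its inversion */
	assert(ones_add(ones_add(sum, x), (uint16_t)~x) == sum);
	return 0;
}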
/* interrupt routine to handle rx and error signal */
@ -755,8 +1187,9 @@ static void bfin_mac_disable(void)
/*
* Enable Interrupts, Receive, and Transmit
*/
static void bfin_mac_enable(void)
static int bfin_mac_enable(void)
{
int ret;
u32 opmode;
pr_debug("%s: %s\n", DRV_NAME, __func__);
@ -766,7 +1199,9 @@ static void bfin_mac_enable(void)
bfin_write_DMA1_CONFIG(rx_list_head->desc_a.config);
/* Wait MII done */
bfin_mdio_poll();
ret = bfin_mdio_poll();
if (ret)
return ret;
/* We enable only RX here */
/* ASTP : Enable Automatic Pad Stripping
@ -790,6 +1225,8 @@ static void bfin_mac_enable(void)
#endif
/* Turn on the EMAC rx */
bfin_write_EMAC_OPMODE(opmode);
return 0;
}
/* Our watchdog timed out. Called by the networking layer */
@ -805,21 +1242,21 @@ static void bfin_mac_timeout(struct net_device *dev)
bfin_mac_enable();
/* We can accept TX packets again */
dev->trans_start = jiffies;
dev->trans_start = jiffies; /* prevent tx timeout */
netif_wake_queue(dev);
}
static void bfin_mac_multicast_hash(struct net_device *dev)
{
u32 emac_hashhi, emac_hashlo;
struct dev_mc_list *dmi;
struct netdev_hw_addr *ha;
char *addrs;
u32 crc;
emac_hashhi = emac_hashlo = 0;
netdev_for_each_mc_addr(dmi, dev) {
addrs = dmi->dmi_addr;
netdev_for_each_mc_addr(ha, dev) {
addrs = ha->addr;
/* skip non-multicast addresses */
if (!(*addrs & 1))
@ -836,8 +1273,6 @@ static void bfin_mac_multicast_hash(struct net_device *dev)
bfin_write_EMAC_HASHHI(emac_hashhi);
bfin_write_EMAC_HASHLO(emac_hashlo);
return;
}
/*
@ -853,7 +1288,7 @@ static void bfin_mac_set_multicast_list(struct net_device *dev)
if (dev->flags & IFF_PROMISC) {
printk(KERN_INFO "%s: set to promisc mode\n", dev->name);
sysctl = bfin_read_EMAC_OPMODE();
sysctl |= RAF;
sysctl |= PR;
bfin_write_EMAC_OPMODE(sysctl);
} else if (dev->flags & IFF_ALLMULTI) {
/* accept all multicast */
@ -874,6 +1309,16 @@ static void bfin_mac_set_multicast_list(struct net_device *dev)
}
}
static int bfin_mac_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd)
{
switch (cmd) {
case SIOCSHWTSTAMP:
return bfin_mac_hwtstamp_ioctl(netdev, ifr, cmd);
default:
return -EOPNOTSUPP;
}
}
/*
* this puts the device in an inactive state
*/
@ -894,7 +1339,7 @@ static void bfin_mac_shutdown(struct net_device *dev)
static int bfin_mac_open(struct net_device *dev)
{
struct bfin_mac_local *lp = netdev_priv(dev);
int retval;
int ret;
pr_debug("%s: %s\n", dev->name, __func__);
/*
@ -908,18 +1353,21 @@ static int bfin_mac_open(struct net_device *dev)
}
/* initial rx and tx list */
retval = desc_list_init();
if (retval)
return retval;
ret = desc_list_init();
if (ret)
return ret;
phy_start(lp->phydev);
phy_write(lp->phydev, MII_BMCR, BMCR_RESET);
setup_system_regs(dev);
setup_mac_addr(dev->dev_addr);
bfin_mac_disable();
bfin_mac_enable();
ret = bfin_mac_enable();
if (ret)
return ret;
pr_debug("hardware init finished\n");
netif_start_queue(dev);
netif_carrier_on(dev);
@ -958,6 +1406,7 @@ static const struct net_device_ops bfin_mac_netdev_ops = {
.ndo_set_mac_address = bfin_mac_set_mac_address,
.ndo_tx_timeout = bfin_mac_timeout,
.ndo_set_multicast_list = bfin_mac_set_multicast_list,
.ndo_do_ioctl = bfin_mac_ioctl,
.ndo_validate_addr = eth_validate_addr,
.ndo_change_mtu = eth_change_mtu,
#ifdef CONFIG_NET_POLL_CONTROLLER
@ -1017,6 +1466,11 @@ static int __devinit bfin_mac_probe(struct platform_device *pdev)
}
pd = pdev->dev.platform_data;
lp->mii_bus = platform_get_drvdata(pd);
if (!lp->mii_bus) {
dev_err(&pdev->dev, "Cannot get mii_bus!\n");
rc = -ENODEV;
goto out_err_mii_bus_probe;
}
lp->mii_bus->priv = ndev;
rc = mii_probe(ndev);
@ -1049,6 +1503,8 @@ static int __devinit bfin_mac_probe(struct platform_device *pdev)
goto out_err_reg_ndev;
}
bfin_mac_hwtstamp_init(ndev);
/* now, print out the card info, in a short format.. */
dev_info(&pdev->dev, "%s, Version %s\n", DRV_DESC, DRV_VERSION);
@ -1060,6 +1516,7 @@ out_err_request_irq:
out_err_mii_probe:
mdiobus_unregister(lp->mii_bus);
mdiobus_free(lp->mii_bus);
out_err_mii_bus_probe:
peripheral_free_list(pin_req);
out_err_probe_mac:
platform_set_drvdata(pdev, NULL);
@ -1092,9 +1549,16 @@ static int __devexit bfin_mac_remove(struct platform_device *pdev)
static int bfin_mac_suspend(struct platform_device *pdev, pm_message_t mesg)
{
struct net_device *net_dev = platform_get_drvdata(pdev);
struct bfin_mac_local *lp = netdev_priv(net_dev);
if (lp->wol) {
bfin_write_EMAC_OPMODE((bfin_read_EMAC_OPMODE() & ~TE) | RE);
bfin_write_EMAC_WKUP_CTL(MPKE);
enable_irq_wake(IRQ_MAC_WAKEDET);
} else {
if (netif_running(net_dev))
bfin_mac_close(net_dev);
}
return 0;
}
@ -1102,9 +1566,16 @@ static int bfin_mac_suspend(struct platform_device *pdev, pm_message_t mesg)
static int bfin_mac_resume(struct platform_device *pdev)
{
struct net_device *net_dev = platform_get_drvdata(pdev);
struct bfin_mac_local *lp = netdev_priv(net_dev);
if (lp->wol) {
bfin_write_EMAC_OPMODE(bfin_read_EMAC_OPMODE() | TE);
bfin_write_EMAC_WKUP_CTL(0);
disable_irq_wake(IRQ_MAC_WAKEDET);
} else {
if (netif_running(net_dev))
bfin_mac_open(net_dev);
}
return 0;
}

@ -7,6 +7,12 @@
*
* Licensed under the GPL-2 or later.
*/
#ifndef _BFIN_MAC_H_
#define _BFIN_MAC_H_
#include <linux/net_tstamp.h>
#include <linux/clocksource.h>
#include <linux/timecompare.h>
#define BFIN_MAC_CSUM_OFFLOAD
@ -60,6 +66,9 @@ struct bfin_mac_local {
unsigned char Mac[6]; /* MAC address of the board */
spinlock_t lock;
int wol; /* Wake On Lan */
int irq_wake_requested;
/* MII and PHY stuffs */
int old_link; /* used by bf537_adjust_link */
int old_speed;
@ -67,6 +76,15 @@ struct bfin_mac_local {
struct phy_device *phydev;
struct mii_bus *mii_bus;
#if defined(CONFIG_BFIN_MAC_USE_HWSTAMP)
struct cyclecounter cycles;
struct timecounter clock;
struct timecompare compare;
struct hwtstamp_config stamp_cfg;
#endif
};
extern void bfin_get_ether_addr(char *addr);
#endif

@ -167,7 +167,6 @@ static inline void
dbdma_st32(volatile __u32 __iomem *a, unsigned long x)
{
__asm__ volatile( "stwbrx %0,0,%1" : : "r" (x), "r" (a) : "memory");
return;
}
static inline unsigned long
@ -382,8 +381,6 @@ bmac_init_registers(struct net_device *dev)
bmwrite(dev, RXCFG, RxCRCNoStrip | RxHashFilterEnable | RxRejectOwnPackets);
bmwrite(dev, INTDISABLE, EnableNormal);
return;
}
#if 0
@ -972,7 +969,7 @@ bmac_remove_multi(struct net_device *dev,
*/
static void bmac_set_multicast(struct net_device *dev)
{
struct dev_mc_list *dmi;
struct netdev_hw_addr *ha;
struct bmac_data *bp = netdev_priv(dev);
int num_addrs = netdev_mc_count(dev);
unsigned short rx_cfg;
@ -1001,8 +998,8 @@ static void bmac_set_multicast(struct net_device *dev)
rx_cfg = bmac_rx_on(dev, 0, 0);
XXDEBUG(("bmac: multi disabled, rx_cfg=%#08x\n", rx_cfg));
} else {
netdev_for_each_mc_addr(dmi, dev)
bmac_addhash(bp, dmi->dmi_addr);
netdev_for_each_mc_addr(ha, dev)
bmac_addhash(bp, ha->addr);
bmac_update_hash_table_mask(dev, bp);
rx_cfg = bmac_rx_on(dev, 1, 0);
XXDEBUG(("bmac: multi enabled, rx_cfg=%#08x\n", rx_cfg));
@ -1016,7 +1013,7 @@ static void bmac_set_multicast(struct net_device *dev)
static void bmac_set_multicast(struct net_device *dev)
{
struct dev_mc_list *dmi;
struct netdev_hw_addr *ha;
char *addrs;
int i;
unsigned short rx_cfg;
@ -1040,8 +1037,8 @@ static void bmac_set_multicast(struct net_device *dev)
for(i = 0; i < 4; i++) hash_table[i] = 0;
netdev_for_each_mc_addr(dmi, dev) {
addrs = dmi->dmi_addr;
netdev_for_each_mc_addr(ha, dev) {
addrs = ha->addr;
if(!(*addrs & 1))
continue;

@ -58,11 +58,11 @@
#include "bnx2_fw.h"
#define DRV_MODULE_NAME "bnx2"
#define DRV_MODULE_VERSION "2.0.9"
#define DRV_MODULE_RELDATE "April 27, 2010"
#define DRV_MODULE_VERSION "2.0.15"
#define DRV_MODULE_RELDATE "May 4, 2010"
#define FW_MIPS_FILE_06 "bnx2/bnx2-mips-06-5.0.0.j6.fw"
#define FW_RV2P_FILE_06 "bnx2/bnx2-rv2p-06-5.0.0.j3.fw"
#define FW_MIPS_FILE_09 "bnx2/bnx2-mips-09-5.0.0.j9.fw"
#define FW_MIPS_FILE_09 "bnx2/bnx2-mips-09-5.0.0.j15.fw"
#define FW_RV2P_FILE_09_Ax "bnx2/bnx2-rv2p-09ax-5.0.0.j10.fw"
#define FW_RV2P_FILE_09 "bnx2/bnx2-rv2p-09-5.0.0.j10.fw"
@ -656,19 +656,11 @@ bnx2_netif_stop(struct bnx2 *bp, bool stop_cnic)
if (stop_cnic)
bnx2_cnic_stop(bp);
if (netif_running(bp->dev)) {
int i;
bnx2_napi_disable(bp);
netif_tx_disable(bp->dev);
/* prevent tx timeout */
for (i = 0; i < bp->dev->num_tx_queues; i++) {
struct netdev_queue *txq;
txq = netdev_get_tx_queue(bp->dev, i);
txq->trans_start = jiffies;
}
}
bnx2_disable_int_sync(bp);
netif_carrier_off(bp->dev); /* prevent tx timeout */
}
static void
@ -677,6 +669,10 @@ bnx2_netif_start(struct bnx2 *bp, bool start_cnic)
if (atomic_dec_and_test(&bp->intr_sem)) {
if (netif_running(bp->dev)) {
netif_tx_wake_all_queues(bp->dev);
spin_lock_bh(&bp->phy_lock);
if (bp->link_up)
netif_carrier_on(bp->dev);
spin_unlock_bh(&bp->phy_lock);
bnx2_napi_enable(bp);
bnx2_enable_int(bp);
if (start_cnic)
@ -2672,7 +2668,7 @@ bnx2_alloc_rx_page(struct bnx2 *bp, struct bnx2_rx_ring_info *rxr, u16 index)
}
rx_pg->page = page;
pci_unmap_addr_set(rx_pg, mapping, mapping);
dma_unmap_addr_set(rx_pg, mapping, mapping);
rxbd->rx_bd_haddr_hi = (u64) mapping >> 32;
rxbd->rx_bd_haddr_lo = (u64) mapping & 0xffffffff;
return 0;
@ -2687,7 +2683,7 @@ bnx2_free_rx_page(struct bnx2 *bp, struct bnx2_rx_ring_info *rxr, u16 index)
if (!page)
return;
pci_unmap_page(bp->pdev, pci_unmap_addr(rx_pg, mapping), PAGE_SIZE,
pci_unmap_page(bp->pdev, dma_unmap_addr(rx_pg, mapping), PAGE_SIZE,
PCI_DMA_FROMDEVICE);
__free_page(page);
@ -2719,7 +2715,8 @@ bnx2_alloc_rx_skb(struct bnx2 *bp, struct bnx2_rx_ring_info *rxr, u16 index)
}
rx_buf->skb = skb;
pci_unmap_addr_set(rx_buf, mapping, mapping);
rx_buf->desc = (struct l2_fhdr *) skb->data;
dma_unmap_addr_set(rx_buf, mapping, mapping);
rxbd->rx_bd_haddr_hi = (u64) mapping >> 32;
rxbd->rx_bd_haddr_lo = (u64) mapping & 0xffffffff;
@ -2818,7 +2815,7 @@ bnx2_tx_int(struct bnx2 *bp, struct bnx2_napi *bnapi, int budget)
}
}
pci_unmap_single(bp->pdev, pci_unmap_addr(tx_buf, mapping),
pci_unmap_single(bp->pdev, dma_unmap_addr(tx_buf, mapping),
skb_headlen(skb), PCI_DMA_TODEVICE);
tx_buf->skb = NULL;
@ -2828,7 +2825,7 @@ bnx2_tx_int(struct bnx2 *bp, struct bnx2_napi *bnapi, int budget)
sw_cons = NEXT_TX_BD(sw_cons);
pci_unmap_page(bp->pdev,
pci_unmap_addr(
dma_unmap_addr(
&txr->tx_buf_ring[TX_RING_IDX(sw_cons)],
mapping),
skb_shinfo(skb)->frags[i].size,
@ -2910,8 +2907,8 @@ bnx2_reuse_rx_skb_pages(struct bnx2 *bp, struct bnx2_rx_ring_info *rxr,
if (prod != cons) {
prod_rx_pg->page = cons_rx_pg->page;
cons_rx_pg->page = NULL;
pci_unmap_addr_set(prod_rx_pg, mapping,
pci_unmap_addr(cons_rx_pg, mapping));
dma_unmap_addr_set(prod_rx_pg, mapping,
dma_unmap_addr(cons_rx_pg, mapping));
prod_bd->rx_bd_haddr_hi = cons_bd->rx_bd_haddr_hi;
prod_bd->rx_bd_haddr_lo = cons_bd->rx_bd_haddr_lo;
@ -2935,18 +2932,19 @@ bnx2_reuse_rx_skb(struct bnx2 *bp, struct bnx2_rx_ring_info *rxr,
prod_rx_buf = &rxr->rx_buf_ring[prod];
pci_dma_sync_single_for_device(bp->pdev,
pci_unmap_addr(cons_rx_buf, mapping),
dma_unmap_addr(cons_rx_buf, mapping),
BNX2_RX_OFFSET + BNX2_RX_COPY_THRESH, PCI_DMA_FROMDEVICE);
rxr->rx_prod_bseq += bp->rx_buf_use_size;
prod_rx_buf->skb = skb;
prod_rx_buf->desc = (struct l2_fhdr *) skb->data;
if (cons == prod)
return;
pci_unmap_addr_set(prod_rx_buf, mapping,
pci_unmap_addr(cons_rx_buf, mapping));
dma_unmap_addr_set(prod_rx_buf, mapping,
dma_unmap_addr(cons_rx_buf, mapping));
cons_bd = &rxr->rx_desc_ring[RX_RING(cons)][RX_IDX(cons)];
prod_bd = &rxr->rx_desc_ring[RX_RING(prod)][RX_IDX(prod)];
@ -3019,7 +3017,7 @@ bnx2_rx_skb(struct bnx2 *bp, struct bnx2_rx_ring_info *rxr, struct sk_buff *skb,
/* Don't unmap yet. If we're unable to allocate a new
* page, we need to recycle the page and the DMA addr.
*/
mapping_old = pci_unmap_addr(rx_pg, mapping);
mapping_old = dma_unmap_addr(rx_pg, mapping);
if (i == pages - 1)
frag_len -= 4;
@ -3074,6 +3072,7 @@ bnx2_rx_int(struct bnx2 *bp, struct bnx2_napi *bnapi, int budget)
u16 hw_cons, sw_cons, sw_ring_cons, sw_prod, sw_ring_prod;
struct l2_fhdr *rx_hdr;
int rx_pkt = 0, pg_ring_used = 0;
struct pci_dev *pdev = bp->pdev;
hw_cons = bnx2_get_hw_rx_cons(bnapi);
sw_cons = rxr->rx_cons;
@ -3086,7 +3085,7 @@ bnx2_rx_int(struct bnx2 *bp, struct bnx2_napi *bnapi, int budget)
while (sw_cons != hw_cons) {
unsigned int len, hdr_len;
u32 status;
struct sw_bd *rx_buf;
struct sw_bd *rx_buf, *next_rx_buf;
struct sk_buff *skb;
dma_addr_t dma_addr;
u16 vtag = 0;
@ -3097,16 +3096,23 @@ bnx2_rx_int(struct bnx2 *bp, struct bnx2_napi *bnapi, int budget)
rx_buf = &rxr->rx_buf_ring[sw_ring_cons];
skb = rx_buf->skb;
prefetchw(skb);
if (!get_dma_ops(&pdev->dev)->sync_single_for_cpu) {
next_rx_buf =
&rxr->rx_buf_ring[
RX_RING_IDX(NEXT_RX_BD(sw_cons))];
prefetch(next_rx_buf->desc);
}
rx_buf->skb = NULL;
dma_addr = pci_unmap_addr(rx_buf, mapping);
dma_addr = dma_unmap_addr(rx_buf, mapping);
pci_dma_sync_single_for_cpu(bp->pdev, dma_addr,
BNX2_RX_OFFSET + BNX2_RX_COPY_THRESH,
PCI_DMA_FROMDEVICE);
rx_hdr = (struct l2_fhdr *) skb->data;
rx_hdr = rx_buf->desc;
len = rx_hdr->l2_fhdr_pkt_len;
status = rx_hdr->l2_fhdr_status;
@ -3207,10 +3213,10 @@ bnx2_rx_int(struct bnx2 *bp, struct bnx2_napi *bnapi, int budget)
#ifdef BCM_VLAN
if (hw_vlan)
vlan_hwaccel_receive_skb(skb, bp->vlgrp, vtag);
vlan_gro_receive(&bnapi->napi, bp->vlgrp, vtag, skb);
else
#endif
netif_receive_skb(skb);
napi_gro_receive(&bnapi->napi, skb);
rx_pkt++;
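Alongside the DMA-state rename, this hunk moves receive completion from netif_receive_skb()/vlan_hwaccel_receive_skb() to the GRO entry points, letting the stack coalesce TCP segments before the protocol layers see them (the matching NETIF_F_GRO feature bit is set in the probe hunk further down). The resulting pattern, condensed into a sketch:

	#include <linux/netdevice.h>
	#include <linux/if_vlan.h>

	/* Sketch: hand a completed RX skb to the stack through GRO.
	 * Must run in NAPI poll context; vlgrp mirrors the driver's
	 * VLAN group pointer.
	 */
	static void rx_complete_sketch(struct napi_struct *napi,
				       struct vlan_group *vlgrp,
				       struct sk_buff *skb, u16 vtag,
				       bool hw_vlan)
	{
	#ifdef BCM_VLAN
		if (hw_vlan && vlgrp) {
			vlan_gro_receive(napi, vlgrp, vtag, skb);
			return;
		}
	#endif
		napi_gro_receive(napi, skb);
	}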
@ -3548,7 +3554,6 @@ bnx2_set_rx_mode(struct net_device *dev)
}
else {
/* Accept one or more multicast(s). */
struct dev_mc_list *mclist;
u32 mc_filter[NUM_MC_HASH_REGISTERS];
u32 regidx;
u32 bit;
@ -3556,8 +3561,8 @@ bnx2_set_rx_mode(struct net_device *dev)
memset(mc_filter, 0, 4 * NUM_MC_HASH_REGISTERS);
netdev_for_each_mc_addr(mclist, dev) {
crc = ether_crc_le(ETH_ALEN, mclist->dmi_addr);
netdev_for_each_mc_addr(ha, dev) {
crc = ether_crc_le(ETH_ALEN, ha->addr);
bit = crc & 0xff;
regidx = (bit & 0xe0) >> 5;
bit &= 0x1f;
@ -5318,7 +5323,7 @@ bnx2_free_tx_skbs(struct bnx2 *bp)
}
pci_unmap_single(bp->pdev,
pci_unmap_addr(tx_buf, mapping),
dma_unmap_addr(tx_buf, mapping),
skb_headlen(skb),
PCI_DMA_TODEVICE);
@ -5329,7 +5334,7 @@ bnx2_free_tx_skbs(struct bnx2 *bp)
for (k = 0; k < last; k++, j++) {
tx_buf = &txr->tx_buf_ring[TX_RING_IDX(j)];
pci_unmap_page(bp->pdev,
pci_unmap_addr(tx_buf, mapping),
dma_unmap_addr(tx_buf, mapping),
skb_shinfo(skb)->frags[k].size,
PCI_DMA_TODEVICE);
}
@ -5359,7 +5364,7 @@ bnx2_free_rx_skbs(struct bnx2 *bp)
continue;
pci_unmap_single(bp->pdev,
pci_unmap_addr(rx_buf, mapping),
dma_unmap_addr(rx_buf, mapping),
bp->rx_buf_use_size,
PCI_DMA_FROMDEVICE);
@ -5765,11 +5770,11 @@ bnx2_run_loopback(struct bnx2 *bp, int loopback_mode)
rx_buf = &rxr->rx_buf_ring[rx_start_idx];
rx_skb = rx_buf->skb;
rx_hdr = (struct l2_fhdr *) rx_skb->data;
rx_hdr = rx_buf->desc;
skb_reserve(rx_skb, BNX2_RX_OFFSET);
pci_dma_sync_single_for_cpu(bp->pdev,
pci_unmap_addr(rx_buf, mapping),
dma_unmap_addr(rx_buf, mapping),
bp->rx_buf_size, PCI_DMA_FROMDEVICE);
if (rx_hdr->l2_fhdr_status &
@ -6292,14 +6297,23 @@ static void
bnx2_dump_state(struct bnx2 *bp)
{
struct net_device *dev = bp->dev;
u32 mcp_p0, mcp_p1;
netdev_err(dev, "DEBUG: intr_sem[%x]\n", atomic_read(&bp->intr_sem));
netdev_err(dev, "DEBUG: EMAC_TX_STATUS[%08x] RPM_MGMT_PKT_CTRL[%08x]\n",
netdev_err(dev, "DEBUG: EMAC_TX_STATUS[%08x] EMAC_RX_STATUS[%08x]\n",
REG_RD(bp, BNX2_EMAC_TX_STATUS),
REG_RD(bp, BNX2_EMAC_RX_STATUS));
netdev_err(dev, "DEBUG: RPM_MGMT_PKT_CTRL[%08x]\n",
REG_RD(bp, BNX2_RPM_MGMT_PKT_CTRL));
if (CHIP_NUM(bp) == CHIP_NUM_5709) {
mcp_p0 = BNX2_MCP_STATE_P0;
mcp_p1 = BNX2_MCP_STATE_P1;
} else {
mcp_p0 = BNX2_MCP_STATE_P0_5708;
mcp_p1 = BNX2_MCP_STATE_P1_5708;
}
netdev_err(dev, "DEBUG: MCP_STATE_P0[%08x] MCP_STATE_P1[%08x]\n",
bnx2_reg_rd_ind(bp, BNX2_MCP_STATE_P0),
bnx2_reg_rd_ind(bp, BNX2_MCP_STATE_P1));
bnx2_reg_rd_ind(bp, mcp_p0), bnx2_reg_rd_ind(bp, mcp_p1));
netdev_err(dev, "DEBUG: HC_STATS_INTERRUPT_STATUS[%08x]\n",
REG_RD(bp, BNX2_HC_STATS_INTERRUPT_STATUS));
if (bp->flags & BNX2_FLAG_USING_MSIX)
@ -6429,7 +6443,7 @@ bnx2_start_xmit(struct sk_buff *skb, struct net_device *dev)
tx_buf = &txr->tx_buf_ring[ring_prod];
tx_buf->skb = skb;
pci_unmap_addr_set(tx_buf, mapping, mapping);
dma_unmap_addr_set(tx_buf, mapping, mapping);
txbd = &txr->tx_desc_ring[ring_prod];
@ -6454,7 +6468,7 @@ bnx2_start_xmit(struct sk_buff *skb, struct net_device *dev)
len, PCI_DMA_TODEVICE);
if (pci_dma_mapping_error(bp->pdev, mapping))
goto dma_error;
pci_unmap_addr_set(&txr->tx_buf_ring[ring_prod], mapping,
dma_unmap_addr_set(&txr->tx_buf_ring[ring_prod], mapping,
mapping);
txbd->tx_bd_haddr_hi = (u64) mapping >> 32;
@ -6491,7 +6505,7 @@ dma_error:
ring_prod = TX_RING_IDX(prod);
tx_buf = &txr->tx_buf_ring[ring_prod];
tx_buf->skb = NULL;
pci_unmap_single(bp->pdev, pci_unmap_addr(tx_buf, mapping),
pci_unmap_single(bp->pdev, dma_unmap_addr(tx_buf, mapping),
skb_headlen(skb), PCI_DMA_TODEVICE);
/* unmap remaining mapped pages */
@ -6499,7 +6513,7 @@ dma_error:
prod = NEXT_TX_BD(prod);
ring_prod = TX_RING_IDX(prod);
tx_buf = &txr->tx_buf_ring[ring_prod];
pci_unmap_page(bp->pdev, pci_unmap_addr(tx_buf, mapping),
pci_unmap_page(bp->pdev, dma_unmap_addr(tx_buf, mapping),
skb_shinfo(skb)->frags[i].size,
PCI_DMA_TODEVICE);
}
@ -8297,7 +8311,7 @@ bnx2_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
memcpy(dev->dev_addr, bp->mac_addr, 6);
memcpy(dev->perm_addr, bp->mac_addr, 6);
dev->features |= NETIF_F_IP_CSUM | NETIF_F_SG;
dev->features |= NETIF_F_IP_CSUM | NETIF_F_SG | NETIF_F_GRO;
vlan_features_add(dev, NETIF_F_IP_CSUM | NETIF_F_SG);
if (CHIP_NUM(bp) == CHIP_NUM_5709) {
dev->features |= NETIF_F_IPV6_CSUM;

drivers/net/bnx2.h

@ -6347,6 +6347,8 @@ struct l2_fhdr {
#define BNX2_MCP_SCRATCH 0x00160000
#define BNX2_MCP_STATE_P1 0x0016f9c8
#define BNX2_MCP_STATE_P0 0x0016fdc8
#define BNX2_MCP_STATE_P1_5708 0x001699c8
#define BNX2_MCP_STATE_P0_5708 0x00169dc8
#define BNX2_SHM_HDR_SIGNATURE BNX2_MCP_SCRATCH
#define BNX2_SHM_HDR_SIGNATURE_SIG_MASK 0xffff0000
@ -6551,17 +6553,18 @@ struct l2_fhdr {
struct sw_bd {
struct sk_buff *skb;
DECLARE_PCI_UNMAP_ADDR(mapping)
struct l2_fhdr *desc;
DEFINE_DMA_UNMAP_ADDR(mapping);
};
struct sw_pg {
struct page *page;
DECLARE_PCI_UNMAP_ADDR(mapping)
DEFINE_DMA_UNMAP_ADDR(mapping);
};
struct sw_tx_bd {
struct sk_buff *skb;
DECLARE_PCI_UNMAP_ADDR(mapping)
DEFINE_DMA_UNMAP_ADDR(mapping);
unsigned short is_gso;
unsigned short nr_frags;
};
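The struct changes above are the header side of the pci_unmap_addr → dma_unmap_addr conversion: DEFINE_DMA_UNMAP_ADDR() declares a field that exists only when the platform actually needs unmap state saved, and the accessor macros compile to nothing otherwise. A minimal self-contained sketch of the idiom:

	#include <linux/dma-mapping.h>

	struct example_buf {
		void *cpu_addr;
		DEFINE_DMA_UNMAP_ADDR(mapping);	/* empty if unmap state unneeded */
	};

	static void example_map_unmap(struct device *dev,
				      struct example_buf *buf, size_t len)
	{
		dma_addr_t mapping = dma_map_single(dev, buf->cpu_addr, len,
						    DMA_FROM_DEVICE);

		dma_unmap_addr_set(buf, mapping, mapping);	/* stash for later */
		/* ... device owns the buffer here ... */
		dma_unmap_single(dev, dma_unmap_addr(buf, mapping), len,
				 DMA_FROM_DEVICE);
	}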

drivers/net/bnx2x.h

@ -24,17 +24,26 @@
#define BCM_VLAN 1
#endif
#if defined(CONFIG_CNIC) || defined(CONFIG_CNIC_MODULE)
#define BCM_CNIC 1
#include "cnic_if.h"
#endif
#define BNX2X_MULTI_QUEUE
#define BNX2X_NEW_NAPI
#if defined(CONFIG_CNIC) || defined(CONFIG_CNIC_MODULE)
#define BCM_CNIC 1
#include "cnic_if.h"
#endif
#ifdef BCM_CNIC
#define BNX2X_MIN_MSIX_VEC_CNT 3
#define BNX2X_MSIX_VEC_FP_START 2
#else
#define BNX2X_MIN_MSIX_VEC_CNT 2
#define BNX2X_MSIX_VEC_FP_START 1
#endif
#include <linux/mdio.h>
#include "bnx2x_reg.h"
#include "bnx2x_fw_defs.h"
@ -85,6 +94,11 @@ do { \
##__args); \
} while (0)
#define BNX2X_ERROR(__fmt, __args...) do { \
pr_err("[%s:%d]" __fmt, __func__, __LINE__, ##__args); \
} while (0)
/* before we have a dev->name use dev_info() */
#define BNX2X_DEV_INFO(__fmt, __args...) \
do { \
@ -155,15 +169,21 @@ do { \
#define SHMEM2_RD(bp, field) REG_RD(bp, SHMEM2_ADDR(bp, field))
#define SHMEM2_WR(bp, field, val) REG_WR(bp, SHMEM2_ADDR(bp, field), val)
#define MF_CFG_RD(bp, field) SHMEM_RD(bp, mf_cfg.field)
#define MF_CFG_WR(bp, field, val) SHMEM_WR(bp, mf_cfg.field, val)
#define EMAC_RD(bp, reg) REG_RD(bp, emac_base + reg)
#define EMAC_WR(bp, reg, val) REG_WR(bp, emac_base + reg, val)
#define AEU_IN_ATTN_BITS_PXPPCICLOCKCLIENT_PARITY_ERROR \
AEU_INPUTS_ATTN_BITS_PXPPCICLOCKCLIENT_PARITY_ERROR
/* fast path */
struct sw_rx_bd {
struct sk_buff *skb;
DECLARE_PCI_UNMAP_ADDR(mapping)
DEFINE_DMA_UNMAP_ADDR(mapping);
};
struct sw_tx_bd {
@ -176,7 +196,7 @@ struct sw_tx_bd {
struct sw_rx_page {
struct page *page;
DECLARE_PCI_UNMAP_ADDR(mapping)
DEFINE_DMA_UNMAP_ADDR(mapping);
};
union db_prod {
@ -261,7 +281,7 @@ struct bnx2x_eth_q_stats {
u32 hw_csum_err;
};
#define BNX2X_NUM_Q_STATS 11
#define BNX2X_NUM_Q_STATS 13
#define Q_STATS_OFFSET32(stat_name) \
(offsetof(struct bnx2x_eth_q_stats, stat_name) / 4)
@ -767,7 +787,7 @@ struct bnx2x_eth_stats {
u32 nig_timer_max;
};
#define BNX2X_NUM_STATS 41
#define BNX2X_NUM_STATS 43
#define STATS_OFFSET32(stat_name) \
(offsetof(struct bnx2x_eth_stats, stat_name) / 4)
@ -818,6 +838,12 @@ struct attn_route {
u32 sig[4];
};
typedef enum {
BNX2X_RECOVERY_DONE,
BNX2X_RECOVERY_INIT,
BNX2X_RECOVERY_WAIT,
} bnx2x_recovery_state_t;
struct bnx2x {
/* Fields used in the tx and intr/napi performance paths
* are grouped together in the beginning of the structure
@ -835,6 +861,9 @@ struct bnx2x {
struct pci_dev *pdev;
atomic_t intr_sem;
bnx2x_recovery_state_t recovery_state;
int is_leader;
#ifdef BCM_CNIC
struct msix_entry msix_table[MAX_CONTEXT+2];
#else
@ -842,7 +871,6 @@ struct bnx2x {
#endif
#define INT_MODE_INTx 1
#define INT_MODE_MSI 2
#define INT_MODE_MSIX 3
int tx_ring_size;
@ -924,8 +952,7 @@ struct bnx2x {
int mrrs;
struct delayed_work sp_task;
struct work_struct reset_task;
struct delayed_work reset_task;
struct timer_list timer;
int current_interval;
@ -961,6 +988,8 @@ struct bnx2x {
u16 rx_quick_cons_trip;
u16 rx_ticks_int;
u16 rx_ticks;
/* Maximal coalescing timeout in us */
#define BNX2X_MAX_COALESCE_TOUT (0xf0*12)
u32 lin_cnt;
@ -1075,6 +1104,7 @@ struct bnx2x {
#define INIT_CSEM_INT_TABLE_DATA(bp) (bp->csem_int_table_data)
#define INIT_CSEM_PRAM_DATA(bp) (bp->csem_pram_data)
char fw_ver[32];
const struct firmware *firmware;
};
@ -1125,6 +1155,7 @@ static inline u32 reg_poll(struct bnx2x *bp, u32 reg, u32 expected, int ms,
#define LOAD_DIAG 2
#define UNLOAD_NORMAL 0
#define UNLOAD_CLOSE 1
#define UNLOAD_RECOVERY 2
/* DMAE command defines */
@ -1152,7 +1183,7 @@ static inline u32 reg_poll(struct bnx2x *bp, u32 reg, u32 expected, int ms,
#define DMAE_CMD_E1HVN_SHIFT DMAE_COMMAND_E1HVN_SHIFT
#define DMAE_LEN32_RD_MAX 0x80
#define DMAE_LEN32_WR_MAX 0x400
#define DMAE_LEN32_WR_MAX(bp) (CHIP_IS_E1(bp) ? 0x400 : 0x2000)
#define DMAE_COMP_VAL 0xe0d0d0ae
@ -1294,8 +1325,12 @@ static inline u32 reg_poll(struct bnx2x *bp, u32 reg, u32 expected, int ms,
AEU_INPUTS_ATTN_BITS_IGU_PARITY_ERROR | \
AEU_INPUTS_ATTN_BITS_MISC_PARITY_ERROR)
#define HW_PRTY_ASSERT_SET_3 (AEU_INPUTS_ATTN_BITS_MCP_LATCHED_ROM_PARITY | \
AEU_INPUTS_ATTN_BITS_MCP_LATCHED_UMP_RX_PARITY | \
AEU_INPUTS_ATTN_BITS_MCP_LATCHED_UMP_TX_PARITY | \
AEU_INPUTS_ATTN_BITS_MCP_LATCHED_SCPAD_PARITY)
#define MULTI_FLAGS(bp) \
#define RSS_FLAGS(bp) \
(TSTORM_ETH_FUNCTION_COMMON_CONFIG_RSS_IPV4_CAPABILITY | \
TSTORM_ETH_FUNCTION_COMMON_CONFIG_RSS_IPV4_TCP_CAPABILITY | \
TSTORM_ETH_FUNCTION_COMMON_CONFIG_RSS_IPV6_CAPABILITY | \
@ -1333,6 +1368,9 @@ static inline u32 reg_poll(struct bnx2x *bp, u32 reg, u32 expected, int ms,
#define PXP2_REG_PXP2_INT_STS PXP2_REG_PXP2_INT_STS_0
#endif
#define BNX2X_VPD_LEN 128
#define VENDOR_ID_LEN 4
/* MISC_REG_RESET_REG - this is here for the hsi to work don't touch */
#endif /* bnx2x.h */

File diff suppressed because it is too large.

drivers/net/bnx2x_reg.h

@ -766,6 +766,8 @@
#define MCP_REG_MCPR_NVM_SW_ARB 0x86420
#define MCP_REG_MCPR_NVM_WRITE 0x86408
#define MCP_REG_MCPR_SCRATCH 0xa0000
#define MISC_AEU_GENERAL_MASK_REG_AEU_NIG_CLOSE_MASK (0x1<<1)
#define MISC_AEU_GENERAL_MASK_REG_AEU_PXP_CLOSE_MASK (0x1<<0)
/* [R 32] read first 32 bit after inversion of function 0. mapped as
follows: [0] NIG attention for function0; [1] NIG attention for
function1; [2] GPIO1 mcp; [3] GPIO2 mcp; [4] GPIO3 mcp; [5] GPIO4 mcp;
@ -1249,6 +1251,8 @@
#define MISC_REG_E1HMF_MODE 0xa5f8
/* [RW 32] Debug only: spare RW register reset by core reset */
#define MISC_REG_GENERIC_CR_0 0xa460
/* [RW 32] Debug only: spare RW register reset by por reset */
#define MISC_REG_GENERIC_POR_1 0xa474
/* [RW 32] GPIO. [31-28] FLOAT port 0; [27-24] FLOAT port 0; When any of
these bits is written as a '1'; the corresponding SPIO bit will turn off
it's drivers and become an input. This is the reset state of all GPIO
@ -1438,7 +1442,7 @@
(~misc_registers_sw_timer_cfg_4.sw_timer_cfg_4[1] ) is set */
#define MISC_REG_SW_TIMER_RELOAD_VAL_4 0xa2fc
/* [RW 32] the value of the counter for sw timers1-8. there are 8 addresses
in this register. addres 0 - timer 1; address - timer 2<EFBFBD>address 7 -
in this register. addres 0 - timer 1; address 1 - timer 2, ... address 7 -
timer 8 */
#define MISC_REG_SW_TIMER_VAL 0xa5c0
/* [RW 1] Set by the MCP to remember if one or more of the drivers is/are
@ -2407,10 +2411,16 @@
/* [R 8] debug only: A bit mask for all PSWHST arbiter clients. '1' means
this client is waiting for the arbiter. */
#define PXP_REG_HST_CLIENTS_WAITING_TO_ARB 0x103008
/* [RW 1] When 1; doorbells are discarded and not passed to doorbell queue
block. Should be used for close the gates. */
#define PXP_REG_HST_DISCARD_DOORBELLS 0x1030a4
/* [R 1] debug only: '1' means this PSWHST is discarding doorbells. This bit
should update accoring to 'hst_discard_doorbells' register when the state
machine is idle */
#define PXP_REG_HST_DISCARD_DOORBELLS_STATUS 0x1030a0
/* [RW 1] When 1; new internal writes arriving to the block are discarded.
Should be used for close the gates. */
#define PXP_REG_HST_DISCARD_INTERNAL_WRITES 0x1030a8
/* [R 6] debug only: A bit mask for all PSWHST internal write clients. '1'
means this PSWHST is discarding inputs from this client. Each bit should
update accoring to 'hst_discard_internal_writes' register when the state
@ -4422,11 +4432,21 @@
#define MISC_REGISTERS_GPIO_PORT_SHIFT 4
#define MISC_REGISTERS_GPIO_SET_POS 8
#define MISC_REGISTERS_RESET_REG_1_CLEAR 0x588
#define MISC_REGISTERS_RESET_REG_1_RST_HC (0x1<<29)
#define MISC_REGISTERS_RESET_REG_1_RST_NIG (0x1<<7)
#define MISC_REGISTERS_RESET_REG_1_RST_PXP (0x1<<26)
#define MISC_REGISTERS_RESET_REG_1_RST_PXPV (0x1<<27)
#define MISC_REGISTERS_RESET_REG_1_SET 0x584
#define MISC_REGISTERS_RESET_REG_2_CLEAR 0x598
#define MISC_REGISTERS_RESET_REG_2_RST_BMAC0 (0x1<<0)
#define MISC_REGISTERS_RESET_REG_2_RST_EMAC0_HARD_CORE (0x1<<14)
#define MISC_REGISTERS_RESET_REG_2_RST_EMAC1_HARD_CORE (0x1<<15)
#define MISC_REGISTERS_RESET_REG_2_RST_GRC (0x1<<4)
#define MISC_REGISTERS_RESET_REG_2_RST_MCP_N_HARD_CORE_RST_B (0x1<<6)
#define MISC_REGISTERS_RESET_REG_2_RST_MCP_N_RESET_REG_HARD_CORE (0x1<<5)
#define MISC_REGISTERS_RESET_REG_2_RST_MDIO (0x1<<13)
#define MISC_REGISTERS_RESET_REG_2_RST_MISC_CORE (0x1<<11)
#define MISC_REGISTERS_RESET_REG_2_RST_RBCN (0x1<<9)
#define MISC_REGISTERS_RESET_REG_2_SET 0x594
#define MISC_REGISTERS_RESET_REG_3_CLEAR 0x5a8
#define MISC_REGISTERS_RESET_REG_3_MISC_NIG_MUX_SERDES0_IDDQ (0x1<<1)
@ -4454,6 +4474,7 @@
#define HW_LOCK_RESOURCE_GPIO 1
#define HW_LOCK_RESOURCE_MDIO 0
#define HW_LOCK_RESOURCE_PORT0_ATT_MASK 3
#define HW_LOCK_RESOURCE_RESERVED_08 8
#define HW_LOCK_RESOURCE_SPIO 2
#define HW_LOCK_RESOURCE_UNDI 5
#define PRS_FLAG_OVERETH_IPV4 1
@ -4474,6 +4495,10 @@
#define AEU_INPUTS_ATTN_BITS_GPIO3_FUNCTION_0 (1<<5)
#define AEU_INPUTS_ATTN_BITS_GPIO3_FUNCTION_1 (1<<9)
#define AEU_INPUTS_ATTN_BITS_IGU_PARITY_ERROR (1<<12)
#define AEU_INPUTS_ATTN_BITS_MCP_LATCHED_ROM_PARITY (1<<28)
#define AEU_INPUTS_ATTN_BITS_MCP_LATCHED_SCPAD_PARITY (1<<31)
#define AEU_INPUTS_ATTN_BITS_MCP_LATCHED_UMP_RX_PARITY (1<<29)
#define AEU_INPUTS_ATTN_BITS_MCP_LATCHED_UMP_TX_PARITY (1<<30)
#define AEU_INPUTS_ATTN_BITS_MISC_HW_INTERRUPT (1<<15)
#define AEU_INPUTS_ATTN_BITS_MISC_PARITY_ERROR (1<<14)
#define AEU_INPUTS_ATTN_BITS_PARSER_PARITY_ERROR (1<<20)

drivers/net/bonding/bond_ipv6.c

@ -37,7 +37,6 @@
static void bond_glean_dev_ipv6(struct net_device *dev, struct in6_addr *addr)
{
struct inet6_dev *idev;
struct inet6_ifaddr *ifa;
if (!dev)
return;
@ -47,10 +46,12 @@ static void bond_glean_dev_ipv6(struct net_device *dev, struct in6_addr *addr)
return;
read_lock_bh(&idev->lock);
ifa = idev->addr_list;
if (ifa)
if (!list_empty(&idev->addr_list)) {
struct inet6_ifaddr *ifa
= list_first_entry(&idev->addr_list,
struct inet6_ifaddr, if_list);
ipv6_addr_copy(addr, &ifa->addr);
else
} else
ipv6_addr_set(addr, 0, 0, 0, 0);
read_unlock_bh(&idev->lock);
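This fixes bond_ipv6 for the inet6_ifaddr rework noted in the merge log: addresses now hang off idev->addr_list via a list_head (if_list) rather than a hand-rolled next pointer, so the first entry is fetched with list_first_entry() under the idev lock. For callers that need every address rather than just the first, the equivalent walk is:

	#include <net/if_inet6.h>
	#include <linux/list.h>

	/* Sketch: iterate all IPv6 addresses on a device using the new
	 * if_list linkage, under the same lock as the code above.
	 */
	static void for_each_v6_addr_sketch(struct inet6_dev *idev)
	{
		struct inet6_ifaddr *ifa;

		read_lock_bh(&idev->lock);
		list_for_each_entry(ifa, &idev->addr_list, if_list) {
			/* use ifa->addr here */
		}
		read_unlock_bh(&idev->lock);
	}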

drivers/net/bonding/bond_main.c

@ -59,6 +59,7 @@
#include <linux/uaccess.h>
#include <linux/errno.h>
#include <linux/netdevice.h>
#include <linux/netpoll.h>
#include <linux/inetdevice.h>
#include <linux/igmp.h>
#include <linux/etherdevice.h>
@ -430,6 +431,17 @@ int bond_dev_queue_xmit(struct bonding *bond, struct sk_buff *skb,
}
skb->priority = 1;
#ifdef CONFIG_NET_POLL_CONTROLLER
if (unlikely(bond->dev->priv_flags & IFF_IN_NETPOLL)) {
struct netpoll *np = bond->dev->npinfo->netpoll;
slave_dev->npinfo = bond->dev->npinfo;
np->real_dev = np->dev = skb->dev;
slave_dev->priv_flags |= IFF_IN_NETPOLL;
netpoll_send_skb(np, skb);
slave_dev->priv_flags &= ~IFF_IN_NETPOLL;
np->dev = bond->dev;
} else
#endif
dev_queue_xmit(skb);
return 0;
@ -761,32 +773,6 @@ static int bond_check_dev_link(struct bonding *bond,
/*----------------------------- Multicast list ------------------------------*/
/*
* Returns 0 if dmi1 and dmi2 are the same, non-0 otherwise
*/
static inline int bond_is_dmi_same(const struct dev_mc_list *dmi1,
const struct dev_mc_list *dmi2)
{
return memcmp(dmi1->dmi_addr, dmi2->dmi_addr, dmi1->dmi_addrlen) == 0 &&
dmi1->dmi_addrlen == dmi2->dmi_addrlen;
}
/*
* returns dmi entry if found, NULL otherwise
*/
static struct dev_mc_list *bond_mc_list_find_dmi(struct dev_mc_list *dmi,
struct dev_mc_list *mc_list)
{
struct dev_mc_list *idmi;
for (idmi = mc_list; idmi; idmi = idmi->next) {
if (bond_is_dmi_same(dmi, idmi))
return idmi;
}
return NULL;
}
/*
* Push the promiscuity flag down to appropriate slaves
*/
@ -839,18 +825,18 @@ static int bond_set_allmulti(struct bonding *bond, int inc)
* Add a Multicast address to slaves
* according to mode
*/
static void bond_mc_add(struct bonding *bond, void *addr, int alen)
static void bond_mc_add(struct bonding *bond, void *addr)
{
if (USES_PRIMARY(bond->params.mode)) {
/* write lock already acquired */
if (bond->curr_active_slave)
dev_mc_add(bond->curr_active_slave->dev, addr, alen, 0);
dev_mc_add(bond->curr_active_slave->dev, addr);
} else {
struct slave *slave;
int i;
bond_for_each_slave(bond, slave, i)
dev_mc_add(slave->dev, addr, alen, 0);
dev_mc_add(slave->dev, addr);
}
}
@ -858,18 +844,17 @@ static void bond_mc_add(struct bonding *bond, void *addr, int alen)
* Remove a multicast address from slave
* according to mode
*/
static void bond_mc_delete(struct bonding *bond, void *addr, int alen)
static void bond_mc_del(struct bonding *bond, void *addr)
{
if (USES_PRIMARY(bond->params.mode)) {
/* write lock already acquired */
if (bond->curr_active_slave)
dev_mc_delete(bond->curr_active_slave->dev, addr,
alen, 0);
dev_mc_del(bond->curr_active_slave->dev, addr);
} else {
struct slave *slave;
int i;
bond_for_each_slave(bond, slave, i) {
dev_mc_delete(slave->dev, addr, alen, 0);
dev_mc_del(slave->dev, addr);
}
}
}
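dev_mc_add() and dev_mc_delete() (now dev_mc_del()) lose their address-length and glbl arguments in this series: addresses are implicitly dev->addr_len bytes and reference counting is handled inside the hw-addr core. The whole per-address API collapses to two calls, sketched here:

	#include <linux/netdevice.h>

	/* Sketch: join or leave one multicast group on a slave device
	 * with the new two-argument calls used throughout this diff.
	 */
	static void mc_toggle_sketch(struct net_device *slave_dev,
				     unsigned char *addr, bool join)
	{
		if (join)
			dev_mc_add(slave_dev, addr);
		else
			dev_mc_del(slave_dev, addr);
	}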
@ -895,50 +880,6 @@ static void bond_resend_igmp_join_requests(struct bonding *bond)
rcu_read_unlock();
}
/*
* Totally destroys the mc_list in bond
*/
static void bond_mc_list_destroy(struct bonding *bond)
{
struct dev_mc_list *dmi;
dmi = bond->mc_list;
while (dmi) {
bond->mc_list = dmi->next;
kfree(dmi);
dmi = bond->mc_list;
}
bond->mc_list = NULL;
}
/*
* Copy all the Multicast addresses from src to the bonding device dst
*/
static int bond_mc_list_copy(struct dev_mc_list *mc_list, struct bonding *bond,
gfp_t gfp_flag)
{
struct dev_mc_list *dmi, *new_dmi;
for (dmi = mc_list; dmi; dmi = dmi->next) {
new_dmi = kmalloc(sizeof(struct dev_mc_list), gfp_flag);
if (!new_dmi) {
/* FIXME: Potential memory leak !!! */
return -ENOMEM;
}
new_dmi->next = bond->mc_list;
bond->mc_list = new_dmi;
new_dmi->dmi_addrlen = dmi->dmi_addrlen;
memcpy(new_dmi->dmi_addr, dmi->dmi_addr, dmi->dmi_addrlen);
new_dmi->dmi_users = dmi->dmi_users;
new_dmi->dmi_gusers = dmi->dmi_gusers;
}
return 0;
}
/*
* flush all members of flush->mc_list from device dev->mc_list
*/
@ -946,16 +887,16 @@ static void bond_mc_list_flush(struct net_device *bond_dev,
struct net_device *slave_dev)
{
struct bonding *bond = netdev_priv(bond_dev);
struct dev_mc_list *dmi;
struct netdev_hw_addr *ha;
for (dmi = bond_dev->mc_list; dmi; dmi = dmi->next)
dev_mc_delete(slave_dev, dmi->dmi_addr, dmi->dmi_addrlen, 0);
netdev_for_each_mc_addr(ha, bond_dev)
dev_mc_del(slave_dev, ha->addr);
if (bond->params.mode == BOND_MODE_8023AD) {
/* del lacpdu mc addr from mc list */
u8 lacpdu_multicast[ETH_ALEN] = MULTICAST_LACPDU_ADDR;
dev_mc_delete(slave_dev, lacpdu_multicast, ETH_ALEN, 0);
dev_mc_del(slave_dev, lacpdu_multicast);
}
}
@ -969,7 +910,7 @@ static void bond_mc_list_flush(struct net_device *bond_dev,
static void bond_mc_swap(struct bonding *bond, struct slave *new_active,
struct slave *old_active)
{
struct dev_mc_list *dmi;
struct netdev_hw_addr *ha;
if (!USES_PRIMARY(bond->params.mode))
/* nothing to do - mc list is already up-to-date on
@ -984,9 +925,8 @@ static void bond_mc_swap(struct bonding *bond, struct slave *new_active,
if (bond->dev->flags & IFF_ALLMULTI)
dev_set_allmulti(old_active->dev, -1);
for (dmi = bond->dev->mc_list; dmi; dmi = dmi->next)
dev_mc_delete(old_active->dev, dmi->dmi_addr,
dmi->dmi_addrlen, 0);
netdev_for_each_mc_addr(ha, bond->dev)
dev_mc_del(old_active->dev, ha->addr);
}
if (new_active) {
@ -997,9 +937,8 @@ static void bond_mc_swap(struct bonding *bond, struct slave *new_active,
if (bond->dev->flags & IFF_ALLMULTI)
dev_set_allmulti(new_active->dev, 1);
for (dmi = bond->dev->mc_list; dmi; dmi = dmi->next)
dev_mc_add(new_active->dev, dmi->dmi_addr,
dmi->dmi_addrlen, 0);
netdev_for_each_mc_addr(ha, bond->dev)
dev_mc_add(new_active->dev, ha->addr);
bond_resend_igmp_join_requests(bond);
}
}
@ -1329,6 +1268,61 @@ static void bond_detach_slave(struct bonding *bond, struct slave *slave)
bond->slave_cnt--;
}
#ifdef CONFIG_NET_POLL_CONTROLLER
/*
* You must hold read lock on bond->lock before calling this.
*/
static bool slaves_support_netpoll(struct net_device *bond_dev)
{
struct bonding *bond = netdev_priv(bond_dev);
struct slave *slave;
int i = 0;
bool ret = true;
bond_for_each_slave(bond, slave, i) {
if ((slave->dev->priv_flags & IFF_DISABLE_NETPOLL) ||
!slave->dev->netdev_ops->ndo_poll_controller)
ret = false;
}
return i != 0 && ret;
}
static void bond_poll_controller(struct net_device *bond_dev)
{
struct net_device *dev = bond_dev->npinfo->netpoll->real_dev;
if (dev != bond_dev)
netpoll_poll_dev(dev);
}
static void bond_netpoll_cleanup(struct net_device *bond_dev)
{
struct bonding *bond = netdev_priv(bond_dev);
struct slave *slave;
const struct net_device_ops *ops;
int i;
read_lock(&bond->lock);
bond_dev->npinfo = NULL;
bond_for_each_slave(bond, slave, i) {
if (slave->dev) {
ops = slave->dev->netdev_ops;
if (ops->ndo_netpoll_cleanup)
ops->ndo_netpoll_cleanup(slave->dev);
else
slave->dev->npinfo = NULL;
}
}
read_unlock(&bond->lock);
}
#else
static void bond_netpoll_cleanup(struct net_device *bond_dev)
{
}
#endif
/*---------------------------------- IOCTL ----------------------------------*/
static int bond_sethwaddr(struct net_device *bond_dev,
@ -1411,7 +1405,7 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev)
struct bonding *bond = netdev_priv(bond_dev);
const struct net_device_ops *slave_ops = slave_dev->netdev_ops;
struct slave *new_slave = NULL;
struct dev_mc_list *dmi;
struct netdev_hw_addr *ha;
struct sockaddr addr;
int link_reporting;
int old_features = bond_dev->features;
@ -1485,14 +1479,27 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev)
bond_dev->name,
bond_dev->type, slave_dev->type);
netdev_bonding_change(bond_dev, NETDEV_BONDING_OLDTYPE);
res = netdev_bonding_change(bond_dev,
NETDEV_PRE_TYPE_CHANGE);
res = notifier_to_errno(res);
if (res) {
pr_err("%s: refused to change device type\n",
bond_dev->name);
res = -EBUSY;
goto err_undo_flags;
}
/* Flush unicast and multicast addresses */
dev_uc_flush(bond_dev);
dev_mc_flush(bond_dev);
if (slave_dev->type != ARPHRD_ETHER)
bond_setup_by_slave(bond_dev, slave_dev);
else
ether_setup(bond_dev);
netdev_bonding_change(bond_dev, NETDEV_BONDING_NEWTYPE);
netdev_bonding_change(bond_dev,
NETDEV_POST_TYPE_CHANGE);
}
} else if (bond_dev->type != slave_dev->type) {
pr_err("%s ether type (%d) is different from other slaves (%d), can not enslave it.\n",
@ -1593,9 +1600,8 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev)
netif_addr_lock_bh(bond_dev);
/* upload master's mc_list to new slave */
for (dmi = bond_dev->mc_list; dmi; dmi = dmi->next)
dev_mc_add(slave_dev, dmi->dmi_addr,
dmi->dmi_addrlen, 0);
netdev_for_each_mc_addr(ha, bond_dev)
dev_mc_add(slave_dev, ha->addr);
netif_addr_unlock_bh(bond_dev);
}
@ -1603,7 +1609,7 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev)
/* add lacpdu mc addr to mc list */
u8 lacpdu_multicast[ETH_ALEN] = MULTICAST_LACPDU_ADDR;
dev_mc_add(slave_dev, lacpdu_multicast, ETH_ALEN, 0);
dev_mc_add(slave_dev, lacpdu_multicast);
}
bond_add_vlans_on_slave(bond, slave_dev);
@ -1735,6 +1741,18 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev)
bond_set_carrier(bond);
#ifdef CONFIG_NET_POLL_CONTROLLER
if (slaves_support_netpoll(bond_dev)) {
bond_dev->priv_flags &= ~IFF_DISABLE_NETPOLL;
if (bond_dev->npinfo)
slave_dev->npinfo = bond_dev->npinfo;
} else if (!(bond_dev->priv_flags & IFF_DISABLE_NETPOLL)) {
bond_dev->priv_flags |= IFF_DISABLE_NETPOLL;
pr_info("New slave device %s does not support netpoll\n",
slave_dev->name);
pr_info("Disabling netpoll support for %s\n", bond_dev->name);
}
#endif
read_unlock(&bond->lock);
res = bond_create_slave_symlinks(bond_dev, slave_dev);
@ -1801,6 +1819,7 @@ int bond_release(struct net_device *bond_dev, struct net_device *slave_dev)
return -EINVAL;
}
netdev_bonding_change(bond_dev, NETDEV_BONDING_DESLAVE);
write_lock_bh(&bond->lock);
slave = bond_get_slave_by_dev(bond, slave_dev);
@ -1929,6 +1948,17 @@ int bond_release(struct net_device *bond_dev, struct net_device *slave_dev)
netdev_set_master(slave_dev, NULL);
#ifdef CONFIG_NET_POLL_CONTROLLER
read_lock_bh(&bond->lock);
if (slaves_support_netpoll(bond_dev))
bond_dev->priv_flags &= ~IFF_DISABLE_NETPOLL;
read_unlock_bh(&bond->lock);
if (slave_dev->netdev_ops->ndo_netpoll_cleanup)
slave_dev->netdev_ops->ndo_netpoll_cleanup(slave_dev);
else
slave_dev->npinfo = NULL;
#endif
/* close slave before restoring its mac address */
dev_close(slave_dev);
@ -3905,10 +3935,24 @@ static int bond_do_ioctl(struct net_device *bond_dev, struct ifreq *ifr, int cmd
return res;
}
static bool bond_addr_in_mc_list(unsigned char *addr,
struct netdev_hw_addr_list *list,
int addrlen)
{
struct netdev_hw_addr *ha;
netdev_hw_addr_list_for_each(ha, list)
if (!memcmp(ha->addr, addr, addrlen))
return true;
return false;
}
static void bond_set_multicast_list(struct net_device *bond_dev)
{
struct bonding *bond = netdev_priv(bond_dev);
struct dev_mc_list *dmi;
struct netdev_hw_addr *ha;
bool found;
/*
* Do promisc before checking multicast_mode
@ -3943,20 +3987,25 @@ static void bond_set_multicast_list(struct net_device *bond_dev)
bond->flags = bond_dev->flags;
/* looking for addresses to add to slaves' mc list */
for (dmi = bond_dev->mc_list; dmi; dmi = dmi->next) {
if (!bond_mc_list_find_dmi(dmi, bond->mc_list))
bond_mc_add(bond, dmi->dmi_addr, dmi->dmi_addrlen);
netdev_for_each_mc_addr(ha, bond_dev) {
found = bond_addr_in_mc_list(ha->addr, &bond->mc_list,
bond_dev->addr_len);
if (!found)
bond_mc_add(bond, ha->addr);
}
/* looking for addresses to delete from slaves' list */
for (dmi = bond->mc_list; dmi; dmi = dmi->next) {
if (!bond_mc_list_find_dmi(dmi, bond_dev->mc_list))
bond_mc_delete(bond, dmi->dmi_addr, dmi->dmi_addrlen);
netdev_hw_addr_list_for_each(ha, &bond->mc_list) {
found = bond_addr_in_mc_list(ha->addr, &bond_dev->mc,
bond_dev->addr_len);
if (!found)
bond_mc_del(bond, ha->addr);
}
/* save master's multicast list */
bond_mc_list_destroy(bond);
bond_mc_list_copy(bond_dev->mc_list, bond, GFP_ATOMIC);
__hw_addr_flush(&bond->mc_list);
__hw_addr_add_multiple(&bond->mc_list, &bond_dev->mc,
bond_dev->addr_len, NETDEV_HW_ADDR_T_MULTICAST);
read_unlock(&bond->lock);
}
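With bond->mc_list now a netdev_hw_addr_list instead of a hand-rolled dev_mc_list chain, the copy/compare/teardown helpers deleted earlier in this file reduce to the generic __hw_addr_* calls. The lifecycle used above, pulled together into one sketch:

	#include <linux/netdevice.h>

	/* Sketch: keep a private shadow of a device's multicast list,
	 * mirroring the init/copy/flush calls added in this file.
	 */
	static int shadow_mc_list_sketch(struct netdev_hw_addr_list *shadow,
					 struct net_device *dev)
	{
		__hw_addr_init(shadow);		/* start empty */
		/* snapshot dev->mc; entries compare over addr_len bytes */
		return __hw_addr_add_multiple(shadow, &dev->mc, dev->addr_len,
					      NETDEV_HW_ADDR_T_MULTICAST);
	}

	static void shadow_mc_teardown_sketch(struct netdev_hw_addr_list *shadow)
	{
		__hw_addr_flush(shadow);	/* frees every entry */
	}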
@ -4448,6 +4497,10 @@ static const struct net_device_ops bond_netdev_ops = {
.ndo_vlan_rx_register = bond_vlan_rx_register,
.ndo_vlan_rx_add_vid = bond_vlan_rx_add_vid,
.ndo_vlan_rx_kill_vid = bond_vlan_rx_kill_vid,
#ifdef CONFIG_NET_POLL_CONTROLLER
.ndo_netpoll_cleanup = bond_netpoll_cleanup,
.ndo_poll_controller = bond_poll_controller,
#endif
};
static void bond_destructor(struct net_device *bond_dev)
@ -4541,6 +4594,8 @@ static void bond_uninit(struct net_device *bond_dev)
{
struct bonding *bond = netdev_priv(bond_dev);
bond_netpoll_cleanup(bond_dev);
/* Release the bonded slaves */
bond_release_all(bond_dev);
@ -4550,9 +4605,7 @@ static void bond_uninit(struct net_device *bond_dev)
bond_remove_proc_entry(bond);
netif_addr_lock_bh(bond_dev);
bond_mc_list_destroy(bond);
netif_addr_unlock_bh(bond_dev);
__hw_addr_flush(&bond->mc_list);
}
/*------------------------- Module initialization ---------------------------*/
@ -4924,6 +4977,8 @@ static int bond_init(struct net_device *bond_dev)
list_add_tail(&bond->bond_list, &bn->dev_list);
bond_prepare_sysfs_group(bond);
__hw_addr_init(&bond->mc_list);
return 0;
}

drivers/net/bonding/bonding.h

@ -202,7 +202,7 @@ struct bonding {
char proc_file_name[IFNAMSIZ];
#endif /* CONFIG_PROC_FS */
struct list_head bond_list;
struct dev_mc_list *mc_list;
struct netdev_hw_addr_list mc_list;
int (*xmit_hash_policy)(struct sk_buff *, int);
__be32 master_ip;
u16 flags;

drivers/net/caif/Kconfig (new file)

@ -0,0 +1,17 @@
#
# CAIF physical drivers
#
if CAIF
comment "CAIF transport drivers"
config CAIF_TTY
tristate "CAIF TTY transport driver"
default n
---help---
The CAIF TTY transport driver is a Line Discipline (ldisc)
identified as N_CAIF. When this ldisc is opened from user space
it will redirect the TTY's traffic into the CAIF stack.
endif # CAIF

drivers/net/caif/Makefile (new file)

@ -0,0 +1,12 @@
ifeq ($(CONFIG_CAIF_DEBUG),1)
CAIF_DBG_FLAGS := -DDEBUG
endif
KBUILD_EXTRA_SYMBOLS=net/caif/Module.symvers
ccflags-y := $(CAIF_FLAGS) $(CAIF_DBG_FLAGS)
clean-dirs:= .tmp_versions
clean-files:= Module.symvers modules.order *.cmd *~ \
# Serial interface
obj-$(CONFIG_CAIF_TTY) += caif_serial.o

Some files were not shown because too many files have changed in this diff.