Merge branch 'master' of master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6

* 'master' of master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6: (183 commits)
  [TG3]: Update version to 3.78.
  [TG3]: Add missing NVRAM strapping.
  [TG3]: Enable auto MDI.
  [TG3]: Fix the polarity bit.
  [TG3]: Fix irq_sync race condition.
  [NET_SCHED]: ematch: module autoloading
  [TCP]: tcp probe wraparound handling and other changes
  [RTNETLINK]: rtnl_link: allow specifying initial device address
  [RTNETLINK]: rtnl_link API simplification
  [VLAN]: Fix MAC address handling
  [ETH]: Validate address in eth_mac_addr
  [NET]: Fix races in net_rx_action vs netpoll.
  [AF_UNIX]: Rewrite garbage collector, fixes race.
  [NETFILTER]: {ip, nf}_conntrack_sctp: fix remotely triggerable NULL ptr dereference (CVE-2007-2876)
  [NET]: Make all initialized struct seq_operations const.
  [UDP]: Fix length check.
  [IPV6]: Remove unneeded pointer idev from addrconf_cleanup().
  [DECNET]: Another unnecessary net/tcp.h inclusion in net/dn.h
  [IPV6]: Make IPV6_{RECV,2292}RTHDR boolean options.
  [IPV6]: Do not send RH0 anymore.
  ...

Fixed up trivial conflict in Documentation/feature-removal-schedule.txt
manually.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Linus Torvalds 2007-07-12 13:31:22 -07:00
commit e1bd2ac5a6
412 changed files with 11330 additions and 7421 deletions


@ -262,25 +262,6 @@ Who: Richard Purdie <rpurdie@rpsys.net>
---------------------------
What: Multipath cached routing support in ipv4
When: in 2.6.23
Why: Code was merged, then submitter immediately disappeared leaving
us with no maintainer and lots of bugs. The code should not have
been merged in the first place, and many aspects of its
implementation are blocking more critical core networking
development. It's marked EXPERIMENTAL and no distribution
enables it because it causes obscure crashes due to unfixable bugs
(interfaces don't return errors so memory allocation can't be
handled, calling contexts of these interfaces make handling
errors impossible too because they get called after we've
totally committed to creating a route object, for example).
This problem has existed for years and no forward progress
has ever been made, and nobody steps up to try and salvage
this code, so we're going to finally just get rid of it.
Who: David S. Miller <davem@davemloft.net>
---------------------------
What: read_dev_chars(), read_conf_data{,_lpm}() (s390 common I/O layer)
When: December 2007
Why: These functions are a leftover from 2.4 times. They have several
@ -337,3 +318,11 @@ Who: Jean Delvare <khali@linux-fr.org>
---------------------------
What: iptables SAME target
When: 1.1. 2008
Files: net/ipv4/netfilter/ipt_SAME.c, include/linux/netfilter_ipv4/ipt_SAME.h
Why: Obsolete for multiple years now, NAT core provides the same behaviour.
Unfixably broken wrt. 32/64 bit cleanness.
Who: Patrick McHardy <kaber@trash.net>
---------------------------


@ -874,8 +874,7 @@ accept_redirects - BOOLEAN
accept_source_route - INTEGER
Accept source routing (routing extension header).
> 0: Accept routing header.
= 0: Accept only routing header type 2.
>= 0: Accept only routing header type 2.
< 0: Do not accept routing header.
Default: 0


@ -0,0 +1,169 @@
This brief document describes how to use the kernel's PPPoL2TP driver
to provide L2TP functionality. L2TP is a protocol that tunnels one or
more PPP sessions over a UDP tunnel. It is commonly used for VPNs
(L2TP/IPSec) and by ISPs to tunnel subscriber PPP sessions over an IP
network infrastructure.

Design
======

The PPPoL2TP driver, drivers/net/pppol2tp.c, provides a mechanism by
which PPP frames carried through an L2TP session are passed through
the kernel's PPP subsystem. The standard PPP daemon, pppd, handles all
PPP interaction with the peer. PPP network interfaces are created for
each local PPP endpoint.
The L2TP protocol (RFC 2661, http://www.faqs.org/rfcs/rfc2661.html) defines
control and data frames. L2TP control frames carry messages between
L2TP clients/servers and are used to set up / tear down tunnels and
sessions. An L2TP client or server is implemented in userspace and
will use a regular UDP socket per tunnel. L2TP data frames carry PPP
frames, which may be PPP control or PPP data. The kernel's PPP
subsystem arranges for PPP control frames to be delivered to pppd,
while data frames are forwarded as usual.
Each tunnel and session within a tunnel is assigned a unique tunnel_id
and session_id. These ids are carried in the L2TP header of every
control and data packet. The pppol2tp driver uses them to look up
internal tunnel and/or session contexts. Zero tunnel / session ids are
treated specially - zero ids are never assigned to tunnels or sessions
in the network. In the driver, the tunnel context keeps a pointer to
the tunnel UDP socket. The session context keeps a pointer to the
PPPoL2TP socket, as well as other data that lets the driver interface
to the kernel PPP subsystem.
Note that the pppol2tp kernel driver handles only L2TP data frames;
L2TP control frames are simply passed up to userspace in the UDP
tunnel socket. The kernel handles all datapath aspects of the
protocol, including data packet resequencing (if enabled).
There are a number of requirements on the userspace L2TP daemon in
order to use the pppol2tp driver.
1. Use a UDP socket per tunnel.
2. Create a single PPPoL2TP socket per tunnel bound to a special null
session id. This is used only for communicating with the driver but
must remain open while the tunnel is active. Opening this tunnel
management socket causes the driver to mark the tunnel socket as an
L2TP UDP encapsulation socket and flags it for use by the
referenced tunnel id. This hooks up the UDP receive path via
udp_encap_rcv() in net/ipv4/udp.c. PPP data frames are never passed
in this special PPPoX socket.
3. Create a PPPoL2TP socket per L2TP session. This is typically done
by starting pppd with the pppol2tp plugin and appropriate
arguments. A PPPoL2TP tunnel management socket (Step 2) must be
created before the first PPPoL2TP session socket is created.
When creating PPPoL2TP sockets, the application provides information
to the driver about the socket in a socket connect() call. Source and
destination tunnel and session ids are provided, as well as the file
descriptor of a UDP socket. See struct pppol2tp_addr in
include/linux/if_ppp.h. Note that zero tunnel / session ids are
treated specially. When creating the per-tunnel PPPoL2TP management
socket in Step 2 above, zero source and destination session ids are
specified, which tells the driver to prepare the supplied UDP file
descriptor for use as an L2TP tunnel socket.
Userspace may control behavior of the tunnel or session using
setsockopt and ioctl on the PPPoX socket. The following socket
options are supported:
DEBUG     - bitmask of debug message categories. See below.
SENDSEQ   - 0 => don't send packets with sequence numbers
            1 => send packets with sequence numbers
RECVSEQ   - 0 => receive packet sequence numbers are optional
            1 => drop receive packets without sequence numbers
LNSMODE   - 0 => act as LAC.
            1 => act as LNS.
REORDERTO - reorder timeout (in millisecs). If 0, don't try to reorder.
Only the DEBUG option is supported by the special tunnel management
PPPoX socket.
In addition to the standard PPP ioctls, a PPPIOCGL2TPSTATS ioctl is
provided to retrieve tunnel and session statistics from the kernel using the
PPPoX socket of the appropriate tunnel or session.
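
For illustration, a sketch of reading those statistics (this assumes the
PPPIOCGL2TPSTATS ioctl and struct pppol2tp_ioc_stats as declared in the
kernel headers; check the field names against your headers):

	#include <sys/ioctl.h>
	#include <linux/if_ppp.h>	/* PPPIOCGL2TPSTATS (assumed location) */
	#include <linux/if_pppol2tp.h>	/* struct pppol2tp_ioc_stats (assumed) */

	struct pppol2tp_ioc_stats stats;

	memset(&stats, 0, sizeof(stats));
	if (ioctl(session_fd, PPPIOCGL2TPSTATS, &stats) < 0)
		perror("PPPIOCGL2TPSTATS");
	else
		printf("tx/rx packets: %llu/%llu\n",
		       (unsigned long long) stats.tx_packets,
		       (unsigned long long) stats.rx_packets);
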
Debugging
=========
The driver supports a flexible debug scheme where kernel trace
messages may be optionally enabled per tunnel and per session. Care is
needed when debugging a live system since the messages are not
rate-limited and a busy system could be swamped. Userspace uses
setsockopt on the PPPoX socket to set a debug mask.
The following debug mask bits are available:
PPPOL2TP_MSG_DEBUG    verbose debug (if compiled in)
PPPOL2TP_MSG_CONTROL  userspace - kernel interface
PPPOL2TP_MSG_SEQ      sequence numbers handling
PPPOL2TP_MSG_DATA     data packets
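
As a sketch (assuming the SOL_PPPOL2TP socket level and the
PPPOL2TP_SO_DEBUG option name from the kernel headers), enabling control
and sequence-number tracing on a session might look like:

	int mask = PPPOL2TP_MSG_CONTROL | PPPOL2TP_MSG_SEQ;

	if (setsockopt(session_fd, SOL_PPPOL2TP, PPPOL2TP_SO_DEBUG,
		       &mask, sizeof(mask)) < 0)
		perror("setsockopt(PPPOL2TP_SO_DEBUG)");
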
Sample Userspace Code
=====================
1. Create tunnel management PPPoX socket

	kernel_fd = socket(AF_PPPOX, SOCK_DGRAM, PX_PROTO_OL2TP);
	if (kernel_fd >= 0) {
		struct sockaddr_pppol2tp sax;
		struct sockaddr_in const *peer_addr;

		peer_addr = l2tp_tunnel_get_peer_addr(tunnel);
		memset(&sax, 0, sizeof(sax));
		sax.sa_family = AF_PPPOX;
		sax.sa_protocol = PX_PROTO_OL2TP;
		sax.pppol2tp.fd = udp_fd;	/* fd of tunnel UDP socket */
		sax.pppol2tp.addr.sin_addr.s_addr = peer_addr->sin_addr.s_addr;
		sax.pppol2tp.addr.sin_port = peer_addr->sin_port;
		sax.pppol2tp.addr.sin_family = AF_INET;
		sax.pppol2tp.s_tunnel = tunnel_id;
		sax.pppol2tp.s_session = 0;	/* special case: mgmt socket */
		sax.pppol2tp.d_tunnel = 0;
		sax.pppol2tp.d_session = 0;	/* special case: mgmt socket */

		if (connect(kernel_fd, (struct sockaddr *)&sax, sizeof(sax)) < 0) {
			perror("connect failed");
			result = -errno;
			goto err;
		}
	}

2. Create session PPPoX data socket

	struct sockaddr_pppol2tp sax;
	int ret;

	/* Note, the target socket must be bound already, else it will not
	 * be ready.
	 */

	memset(&sax, 0, sizeof(sax));
	sax.sa_family = AF_PPPOX;
	sax.sa_protocol = PX_PROTO_OL2TP;
	sax.pppol2tp.fd = tunnel_fd;
	sax.pppol2tp.addr.sin_addr.s_addr = addr->sin_addr.s_addr;
	sax.pppol2tp.addr.sin_port = addr->sin_port;
	sax.pppol2tp.addr.sin_family = AF_INET;
	sax.pppol2tp.s_tunnel = tunnel_id;
	sax.pppol2tp.s_session = session_id;
	sax.pppol2tp.d_tunnel = peer_tunnel_id;
	sax.pppol2tp.d_session = peer_session_id;

	/* session_fd is the fd of the session's PPPoL2TP socket.
	 * tunnel_fd is the fd of the tunnel UDP socket.
	 */
	ret = connect(session_fd, (struct sockaddr *)&sax, sizeof(sax));
	if (ret < 0)
		return -errno;

	return 0;

Miscellaneous
=============
The PPPoL2TP driver was developed as part of the OpenL2TP project by
Katalix Systems Ltd. OpenL2TP is a full-featured L2TP client / server,
designed from the ground up to have the L2TP datapath in the
kernel. The project also implemented the pppol2tp plugin for pppd
which allows pppd to use the kernel driver. Details can be found at
http://openl2tp.sourceforge.net.


@ -0,0 +1,111 @@
HOWTO for multiqueue network device support
===========================================
Section 1: Base driver requirements for implementing multiqueue support
Section 2: Qdisc support for multiqueue devices
Section 3: Brief howto using PRIO or RR for multiqueue devices

Intro: Kernel support for multiqueue devices
--------------------------------------------
Kernel support for multiqueue devices is only an API that is presented to the
netdevice layer for base drivers to implement. This feature is part of the
core networking stack, and all network devices will be running on the
multiqueue-aware stack. If a base driver only has one queue, then these
changes are transparent to that driver.

Section 1: Base driver requirements for implementing multiqueue support
-----------------------------------------------------------------------
Base drivers are required to use the new alloc_etherdev_mq() or
alloc_netdev_mq() functions to allocate the subqueues for the device. The
underlying kernel API will take care of the allocation and deallocation of
the subqueue memory, as well as netdev configuration of where the queues
exist in memory.
The base driver will also need to manage the queues as it does the global
netdev->queue_lock today. Therefore base drivers should use the
netif_{start|stop|wake}_subqueue() functions to manage each queue while the
device is still operational. netdev->queue_lock is still used when the device
comes online or when it's completely shut down (unregister_netdev(), etc.).
Finally, the base driver should indicate that it is a multiqueue device. The
feature flag NETIF_F_MULTI_QUEUE should be added to the netdev->features
bitmap on device initialization. Below is an example from e1000:

#ifdef CONFIG_E1000_MQ
	if ( (adapter->hw.mac.type == e1000_82571) ||
	     (adapter->hw.mac.type == e1000_82572) ||
	     (adapter->hw.mac.type == e1000_80003es2lan))
		netdev->features |= NETIF_F_MULTI_QUEUE;
#endif

Section 2: Qdisc support for multiqueue devices
-----------------------------------------------
Currently two qdiscs support multiqueue devices: a new round-robin qdisc,
sch_rr, and sch_prio. The qdisc is responsible for classifying skbs to
bands and queues, and will store the queue mapping in skb->queue_mapping.
Use this field in the base driver to determine which queue to send the skb
to.
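
To make that concrete, a hypothetical driver transmit routine (all my_*
names are invented for illustration; only skb->queue_mapping and
netif_stop_subqueue() come from the API described above) might look like:

	static int my_xmit(struct sk_buff *skb, struct net_device *dev)
	{
		struct my_priv *priv = netdev_priv(dev);
		int queue = skb->queue_mapping;	/* set by sch_prio/sch_rr */
		struct my_tx_ring *ring = &priv->tx_ring[queue];

		/* Post the skb to the hardware queue the qdisc chose. */
		my_post_to_ring(ring, skb);

		/* Flow-control just this queue, not the whole device. */
		if (my_ring_full(ring))
			netif_stop_subqueue(dev, queue);

		return NETDEV_TX_OK;
	}
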
sch_rr has been added for hardware that doesn't want scheduling policies from
software, so it's a straight round-robin qdisc. It uses the same syntax and
classification priomap that sch_prio uses, so it should be intuitive to
configure for people who've used sch_prio.
The PRIO qdisc naturally plugs into a multiqueue device. If PRIO has been
built with NET_SCH_PRIO_MQ, then upon load, it will make sure the number of
bands requested is equal to the number of queues on the hardware. If they
are equal, it sets up a one-to-one mapping between the queues and bands. If
they're not equal, it will not load the qdisc. This is the same behavior
for RR. Once the association is made, any skb that is classified will have
skb->queue_mapping set, which will allow the driver to properly queue skbs
to multiple queues.

Section 3: Brief howto using PRIO and RR for multiqueue devices
---------------------------------------------------------------
The userspace command 'tc', part of the iproute2 package, is used to configure
qdiscs. To add the PRIO qdisc to your network device, assuming the device is
called eth0, run the following command:

	# tc qdisc add dev eth0 root handle 1: prio bands 4 multiqueue

This will create 4 bands, 0 being highest priority, and associate those bands
to the queues on your NIC. Assuming eth0 has 4 Tx queues, the band mapping
would look like:
band 0 => queue 0
band 1 => queue 1
band 2 => queue 2
band 3 => queue 3
Traffic will begin flowing through each queue if your TOS values are assigning
traffic across the various bands. For example, ssh traffic will always try to
go out band 0 based on TOS -> Linux priority conversion (realtime traffic),
so it will be sent out queue 0. ICMP traffic (pings) falls into the "normal"
traffic classification, which is band 1. Therefore pings will be sent out
queue 1 on the NIC.
Note the use of the multiqueue keyword. This is only in versions of iproute2
that support multiqueue networking devices; if this is omitted when loading
a qdisc onto a multiqueue device, the qdisc will load and operate the same
as if it were loaded onto a single-queue device (i.e. it sends all traffic to
queue 0).
Alternatively, band allocation can be left to the qdisc by using the
multiqueue option and specifying 0 bands. In this case, the qdisc will
allocate the number of bands to equal the number of queues that the device
reports, and bring the qdisc online.
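
For example, the following invocation (a sketch mirroring the command
above; the exact syntax depends on the iproute2 version) would request
automatic band allocation:

	# tc qdisc add dev eth0 root handle 1: prio bands 0 multiqueue
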
The behavior of tc filters remains the same: a tc filter will override
TOS-based priority classification.
Author: Peter P. Waskiewicz Jr. <peter.p.waskiewicz.jr@intel.com>


@ -20,6 +20,30 @@ private data which gets freed when the network device is freed. If
separately allocated data is attached to the network device
(dev->priv) then it is up to the module exit handler to free that.

MTU
===

Each network device has a Maximum Transmission Unit (MTU). The MTU does
not include any link layer protocol overhead. Upper layer protocols must
not pass a socket buffer (skb) to a device to transmit with more data
than the MTU. Since the MTU excludes the link layer header, on Ethernet
with the standard MTU of 1500 bytes the actual skb will contain up to
1514 bytes because of the Ethernet header. Devices should allow for the
4 byte VLAN header as well.
Segmentation Offload (GSO, TSO) is an exception to this rule. The
upper layer protocol may pass a large socket buffer to the device
transmit routine, and the device will break that up into separate
packets based on the current MTU.
MTU is symmetrical and applies both to receive and transmit. A device
must be able to receive at least the maximum size packet allowed by
the MTU. A network device may use the MTU as a mechanism to size receive
buffers, but the device should allow packets with a VLAN header. With the
standard Ethernet MTU of 1500 bytes, the device should allow up to
1518 byte packets (1500 + 14 header + 4 tag). The device may
drop, truncate, or pass up oversize packets, but dropping oversize
packets is preferred.
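
As a minimal sketch of that sizing rule (ETH_HLEN and VLAN_HLEN are the
standard 14- and 4-byte lengths from <linux/if_ether.h> and
<linux/if_vlan.h>):

	/* Largest frame the device must accept for a given MTU. */
	unsigned int rx_buf_len = dev->mtu + ETH_HLEN + VLAN_HLEN;
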
struct net_device synchronization rules
=======================================
@ -43,16 +67,17 @@ dev->get_stats:
dev->hard_start_xmit:
Synchronization: netif_tx_lock spinlock.
When the driver sets NETIF_F_LLTX in dev->features this will be
called without holding netif_tx_lock. In this case the driver
has to lock by itself when needed. It is recommended to use a try lock
for this and return -1 when the spin lock fails.
for this and return NETDEV_TX_LOCKED when the spin lock fails.
The locking there should also properly protect against
set_multicast_list
Context: Process with BHs disabled or BH (timer).
Notes: netif_queue_stopped() is guaranteed false
Interrupts must be enabled when calling hard_start_xmit.
(Interrupts must also be enabled when enabling the BH handler.)
set_multicast_list.
Context: Process with BHs disabled or BH (timer),
will be called with interrupts disabled by netconsole.
Return codes:
o NETDEV_TX_OK everything ok.
o NETDEV_TX_BUSY Cannot transmit packet, try later
@ -74,4 +99,5 @@ dev->poll:
Synchronization: __LINK_STATE_RX_SCHED bit in dev->state. See
dev_close code and comments in net/core/dev.c for more info.
Context: softirq
will be called with interrupts disabled by netconsole.


@ -2903,6 +2903,11 @@ P: Michal Ostrowski
M: mostrows@speakeasy.net
S: Maintained
PPP OVER L2TP
P: James Chapman
M: jchapman@katalix.com
S: Maintained
PREEMPTIBLE KERNEL
P: Robert Love
M: rml@tech9.net


@ -477,9 +477,9 @@ for (;;) {
}
else {
skb_put(skb,pkt_len-4); /* Make room */
eth_copy_and_sum(skb,
skb_copy_to_linear_data(skb,
(unsigned char *)__va(bdp->cbd_bufaddr),
pkt_len-4, 0);
pkt_len-4);
skb->protocol=eth_type_trans(skb,dev);
netif_rx(skb);
}


@ -734,9 +734,9 @@ for (;;) {
}
else {
skb_put(skb,pkt_len); /* Make room */
eth_copy_and_sum(skb,
skb_copy_to_linear_data(skb,
(unsigned char *)__va(bdp->cbd_bufaddr),
pkt_len, 0);
pkt_len);
skb->protocol=eth_type_trans(skb,dev);
netif_rx(skb);
}


@ -506,9 +506,9 @@ for (;;) {
}
else {
skb_put(skb,pkt_len-4); /* Make room */
eth_copy_and_sum(skb,
skb_copy_to_linear_data(skb,
cep->rx_vaddr[bdp - cep->rx_bd_base],
pkt_len-4, 0);
pkt_len-4);
skb->protocol=eth_type_trans(skb,dev);
netif_rx(skb);
}


@ -725,7 +725,7 @@ while (!(bdp->cbd_sc & BD_ENET_RX_EMPTY)) {
fep->stats.rx_dropped++;
} else {
skb_put(skb,pkt_len-4); /* Make room */
eth_copy_and_sum(skb, data, pkt_len-4, 0);
skb_copy_to_linear_data(skb, data, pkt_len-4);
skb->protocol=eth_type_trans(skb,dev);
netif_rx(skb);
}


@ -199,7 +199,6 @@ static void hci_usb_tx_complete(struct urb *urb);
#define __pending_q(husb, type) (&husb->pending_q[type-1])
#define __completed_q(husb, type) (&husb->completed_q[type-1])
#define __transmit_q(husb, type) (&husb->transmit_q[type-1])
#define __reassembly(husb, type) (husb->reassembly[type-1])
static inline struct _urb *__get_completed(struct hci_usb *husb, int type)
{
@ -429,12 +428,6 @@ static void hci_usb_unlink_urbs(struct hci_usb *husb)
kfree(urb->transfer_buffer);
_urb_free(_urb);
}
/* Release reassembly buffers */
if (husb->reassembly[i]) {
kfree_skb(husb->reassembly[i]);
husb->reassembly[i] = NULL;
}
}
}
@ -671,83 +664,6 @@ static int hci_usb_send_frame(struct sk_buff *skb)
return 0;
}
static inline int __recv_frame(struct hci_usb *husb, int type, void *data, int count)
{
BT_DBG("%s type %d data %p count %d", husb->hdev->name, type, data, count);
husb->hdev->stat.byte_rx += count;
while (count) {
struct sk_buff *skb = __reassembly(husb, type);
struct { int expect; } *scb;
int len = 0;
if (!skb) {
/* Start of the frame */
switch (type) {
case HCI_EVENT_PKT:
if (count >= HCI_EVENT_HDR_SIZE) {
struct hci_event_hdr *h = data;
len = HCI_EVENT_HDR_SIZE + h->plen;
} else
return -EILSEQ;
break;
case HCI_ACLDATA_PKT:
if (count >= HCI_ACL_HDR_SIZE) {
struct hci_acl_hdr *h = data;
len = HCI_ACL_HDR_SIZE + __le16_to_cpu(h->dlen);
} else
return -EILSEQ;
break;
#ifdef CONFIG_BT_HCIUSB_SCO
case HCI_SCODATA_PKT:
if (count >= HCI_SCO_HDR_SIZE) {
struct hci_sco_hdr *h = data;
len = HCI_SCO_HDR_SIZE + h->dlen;
} else
return -EILSEQ;
break;
#endif
}
BT_DBG("new packet len %d", len);
skb = bt_skb_alloc(len, GFP_ATOMIC);
if (!skb) {
BT_ERR("%s no memory for the packet", husb->hdev->name);
return -ENOMEM;
}
skb->dev = (void *) husb->hdev;
bt_cb(skb)->pkt_type = type;
__reassembly(husb, type) = skb;
scb = (void *) skb->cb;
scb->expect = len;
} else {
/* Continuation */
scb = (void *) skb->cb;
len = scb->expect;
}
len = min(len, count);
memcpy(skb_put(skb, len), data, len);
scb->expect -= len;
if (!scb->expect) {
/* Complete frame */
__reassembly(husb, type) = NULL;
bt_cb(skb)->pkt_type = type;
hci_recv_frame(skb);
}
count -= len; data += len;
}
return 0;
}
static void hci_usb_rx_complete(struct urb *urb)
{
struct _urb *_urb = container_of(urb, struct _urb, urb);
@ -776,7 +692,7 @@ static void hci_usb_rx_complete(struct urb *urb)
urb->iso_frame_desc[i].actual_length);
if (!urb->iso_frame_desc[i].status)
__recv_frame(husb, _urb->type,
hci_recv_fragment(husb->hdev, _urb->type,
urb->transfer_buffer + urb->iso_frame_desc[i].offset,
urb->iso_frame_desc[i].actual_length);
}
@ -784,7 +700,7 @@ static void hci_usb_rx_complete(struct urb *urb)
;
#endif
} else {
err = __recv_frame(husb, _urb->type, urb->transfer_buffer, count);
err = hci_recv_fragment(husb->hdev, _urb->type, urb->transfer_buffer, count);
if (err < 0) {
BT_ERR("%s corrupted packet: type %d count %d",
husb->hdev->name, _urb->type, count);


@ -116,7 +116,6 @@ struct hci_usb {
__u8 ctrl_req;
struct sk_buff_head transmit_q[4];
struct sk_buff *reassembly[4]; /* Reassembly buffers */
rwlock_t completion_lock;


@ -180,11 +180,6 @@ static inline ssize_t vhci_put_user(struct vhci_data *data,
return total;
}
static loff_t vhci_llseek(struct file *file, loff_t offset, int origin)
{
return -ESPIPE;
}
static ssize_t vhci_read(struct file *file,
char __user *buf, size_t count, loff_t *pos)
{
@ -334,7 +329,6 @@ static int vhci_fasync(int fd, struct file *file, int on)
static const struct file_operations vhci_fops = {
.owner = THIS_MODULE,
.llseek = vhci_llseek,
.read = vhci_read,
.write = vhci_write,
.poll = vhci_poll,


@ -990,7 +990,7 @@ static void elmc_rcv_int(struct net_device *dev)
if (skb != NULL) {
skb_reserve(skb, 2); /* 16 byte alignment */
skb_put(skb,totlen);
eth_copy_and_sum(skb, (char *) p->base+(unsigned long) rbd->buffer,totlen,0);
skb_copy_to_linear_data(skb, (char *) p->base+(unsigned long) rbd->buffer,totlen);
skb->protocol = eth_type_trans(skb, dev);
netif_rx(skb);
dev->last_rx = jiffies;


@ -333,9 +333,9 @@ static int lance_rx (struct net_device *dev)
skb_reserve (skb, 2); /* 16 byte align */
skb_put (skb, len); /* make room */
eth_copy_and_sum(skb,
skb_copy_to_linear_data(skb,
(unsigned char *)&(ib->rx_buf [lp->rx_new][0]),
len, 0);
len);
skb->protocol = eth_type_trans (skb, dev);
netif_rx (skb);
dev->last_rx = jiffies;


@ -2017,7 +2017,7 @@ no_early_rx:
#if RX_BUF_IDX == 3
wrap_copy(skb, rx_ring, ring_offset+4, pkt_size);
#else
eth_copy_and_sum (skb, &rx_ring[ring_offset + 4], pkt_size, 0);
skb_copy_to_linear_data (skb, &rx_ring[ring_offset + 4], pkt_size);
#endif
skb_put (skb, pkt_size);


@ -25,6 +25,14 @@ menuconfig NETDEVICES
# that for each of the symbols.
if NETDEVICES
config NETDEVICES_MULTIQUEUE
bool "Netdevice multiple hardware queue support"
---help---
Say Y here if you want to allow the network stack to use multiple
hardware TX queues on an ethernet device.
Most people will say N here.
config IFB
tristate "Intermediate Functional Block support"
depends on NET_CLS_ACT
@ -2784,6 +2792,19 @@ config PPPOATM
which can lead to bad results if the ATM peer loses state and
changes its encapsulation unilaterally.
config PPPOL2TP
tristate "PPP over L2TP (EXPERIMENTAL)"
depends on EXPERIMENTAL && PPP
help
Support for PPP-over-L2TP socket family. L2TP is a protocol
used by ISPs and enterprises to tunnel PPP traffic over UDP
tunnels. L2TP is replacing PPTP for VPN uses.
This kernel component handles only L2TP data packets: a
userland daemon handles the L2TP control protocol (tunnel
and session setup). One such daemon is OpenL2TP
(http://openl2tp.sourceforge.net/).
config SLIP
tristate "SLIP (serial line) support"
---help---


@ -121,6 +121,7 @@ obj-$(CONFIG_PPP_DEFLATE) += ppp_deflate.o
obj-$(CONFIG_PPP_BSDCOMP) += bsd_comp.o
obj-$(CONFIG_PPP_MPPE) += ppp_mppe.o
obj-$(CONFIG_PPPOE) += pppox.o pppoe.o
obj-$(CONFIG_PPPOL2TP) += pppox.o pppol2tp.o
obj-$(CONFIG_SLIP) += slip.o
obj-$(CONFIG_SLHC) += slhc.o


@ -322,9 +322,9 @@ static int lance_rx (struct net_device *dev)
skb_reserve (skb, 2); /* 16 byte align */
skb_put (skb, len); /* make room */
eth_copy_and_sum(skb,
skb_copy_to_linear_data(skb,
(unsigned char *)&(ib->rx_buf [lp->rx_new][0]),
len, 0);
len);
skb->protocol = eth_type_trans (skb, dev);
netif_rx (skb);
dev->last_rx = jiffies;


@ -746,7 +746,7 @@ static int ariadne_rx(struct net_device *dev)
skb_reserve(skb,2); /* 16 byte align */
skb_put(skb,pkt_len); /* Make room */
eth_copy_and_sum(skb, (char *)priv->rx_buff[entry], pkt_len,0);
skb_copy_to_linear_data(skb, (char *)priv->rx_buff[entry], pkt_len);
skb->protocol=eth_type_trans(skb,dev);
#if 0
printk(KERN_DEBUG "RX pkt type 0x%04x from ",


@ -258,7 +258,7 @@ static int ep93xx_rx(struct net_device *dev, int *budget)
skb_reserve(skb, 2);
dma_sync_single(NULL, ep->descs->rdesc[entry].buf_addr,
length, DMA_FROM_DEVICE);
eth_copy_and_sum(skb, ep->rx_buf[entry], length, 0);
skb_copy_to_linear_data(skb, ep->rx_buf[entry], length);
skb_put(skb, length);
skb->protocol = eth_type_trans(skb, dev);


@ -1205,8 +1205,8 @@ static int au1000_rx(struct net_device *dev)
continue;
}
skb_reserve(skb, 2); /* 16 byte IP header align */
eth_copy_and_sum(skb,
(unsigned char *)pDB->vaddr, frmlen, 0);
skb_copy_to_linear_data(skb,
(unsigned char *)pDB->vaddr, frmlen);
skb_put(skb, frmlen);
skb->protocol = eth_type_trans(skb, dev);
netif_rx(skb); /* pass the packet to upper layers */


@ -40,7 +40,6 @@
#define BCM_VLAN 1
#endif
#include <net/ip.h>
#include <net/tcp.h>
#include <net/checksum.h>
#include <linux/workqueue.h>
#include <linux/crc32.h>
@ -54,8 +53,8 @@
#define DRV_MODULE_NAME "bnx2"
#define PFX DRV_MODULE_NAME ": "
#define DRV_MODULE_VERSION "1.5.11"
#define DRV_MODULE_RELDATE "June 4, 2007"
#define DRV_MODULE_VERSION "1.6.2"
#define DRV_MODULE_RELDATE "July 6, 2007"
#define RUN_AT(x) (jiffies + (x))
@ -550,6 +549,9 @@ bnx2_report_fw_link(struct bnx2 *bp)
{
u32 fw_link_status = 0;
if (bp->phy_flags & REMOTE_PHY_CAP_FLAG)
return;
if (bp->link_up) {
u32 bmsr;
@ -601,12 +603,21 @@ bnx2_report_fw_link(struct bnx2 *bp)
REG_WR_IND(bp, bp->shmem_base + BNX2_LINK_STATUS, fw_link_status);
}
static char *
bnx2_xceiver_str(struct bnx2 *bp)
{
return ((bp->phy_port == PORT_FIBRE) ? "SerDes" :
((bp->phy_flags & PHY_SERDES_FLAG) ? "Remote Copper" :
"Copper"));
}
static void
bnx2_report_link(struct bnx2 *bp)
{
if (bp->link_up) {
netif_carrier_on(bp->dev);
printk(KERN_INFO PFX "%s NIC Link is Up, ", bp->dev->name);
printk(KERN_INFO PFX "%s NIC %s Link is Up, ", bp->dev->name,
bnx2_xceiver_str(bp));
printk("%d Mbps ", bp->line_speed);
@ -630,7 +641,8 @@ bnx2_report_link(struct bnx2 *bp)
}
else {
netif_carrier_off(bp->dev);
printk(KERN_ERR PFX "%s NIC Link is Down\n", bp->dev->name);
printk(KERN_ERR PFX "%s NIC %s Link is Down\n", bp->dev->name,
bnx2_xceiver_str(bp));
}
bnx2_report_fw_link(bp);
@ -1100,6 +1112,9 @@ bnx2_set_link(struct bnx2 *bp)
return 0;
}
if (bp->phy_flags & REMOTE_PHY_CAP_FLAG)
return 0;
link_up = bp->link_up;
bnx2_enable_bmsr1(bp);
@ -1210,12 +1225,74 @@ bnx2_phy_get_pause_adv(struct bnx2 *bp)
return adv;
}
static int bnx2_fw_sync(struct bnx2 *, u32, int);
static int
bnx2_setup_serdes_phy(struct bnx2 *bp)
bnx2_setup_remote_phy(struct bnx2 *bp, u8 port)
{
u32 speed_arg = 0, pause_adv;
pause_adv = bnx2_phy_get_pause_adv(bp);
if (bp->autoneg & AUTONEG_SPEED) {
speed_arg |= BNX2_NETLINK_SET_LINK_ENABLE_AUTONEG;
if (bp->advertising & ADVERTISED_10baseT_Half)
speed_arg |= BNX2_NETLINK_SET_LINK_SPEED_10HALF;
if (bp->advertising & ADVERTISED_10baseT_Full)
speed_arg |= BNX2_NETLINK_SET_LINK_SPEED_10FULL;
if (bp->advertising & ADVERTISED_100baseT_Half)
speed_arg |= BNX2_NETLINK_SET_LINK_SPEED_100HALF;
if (bp->advertising & ADVERTISED_100baseT_Full)
speed_arg |= BNX2_NETLINK_SET_LINK_SPEED_100FULL;
if (bp->advertising & ADVERTISED_1000baseT_Full)
speed_arg |= BNX2_NETLINK_SET_LINK_SPEED_1GFULL;
if (bp->advertising & ADVERTISED_2500baseX_Full)
speed_arg |= BNX2_NETLINK_SET_LINK_SPEED_2G5FULL;
} else {
if (bp->req_line_speed == SPEED_2500)
speed_arg = BNX2_NETLINK_SET_LINK_SPEED_2G5FULL;
else if (bp->req_line_speed == SPEED_1000)
speed_arg = BNX2_NETLINK_SET_LINK_SPEED_1GFULL;
else if (bp->req_line_speed == SPEED_100) {
if (bp->req_duplex == DUPLEX_FULL)
speed_arg = BNX2_NETLINK_SET_LINK_SPEED_100FULL;
else
speed_arg = BNX2_NETLINK_SET_LINK_SPEED_100HALF;
} else if (bp->req_line_speed == SPEED_10) {
if (bp->req_duplex == DUPLEX_FULL)
speed_arg = BNX2_NETLINK_SET_LINK_SPEED_10FULL;
else
speed_arg = BNX2_NETLINK_SET_LINK_SPEED_10HALF;
}
}
if (pause_adv & (ADVERTISE_1000XPAUSE | ADVERTISE_PAUSE_CAP))
speed_arg |= BNX2_NETLINK_SET_LINK_FC_SYM_PAUSE;
if (pause_adv & (ADVERTISE_1000XPSE_ASYM | ADVERTISE_1000XPSE_ASYM))
speed_arg |= BNX2_NETLINK_SET_LINK_FC_ASYM_PAUSE;
if (port == PORT_TP)
speed_arg |= BNX2_NETLINK_SET_LINK_PHY_APP_REMOTE |
BNX2_NETLINK_SET_LINK_ETH_AT_WIRESPEED;
REG_WR_IND(bp, bp->shmem_base + BNX2_DRV_MB_ARG0, speed_arg);
spin_unlock_bh(&bp->phy_lock);
bnx2_fw_sync(bp, BNX2_DRV_MSG_CODE_CMD_SET_LINK, 0);
spin_lock_bh(&bp->phy_lock);
return 0;
}
static int
bnx2_setup_serdes_phy(struct bnx2 *bp, u8 port)
{
u32 adv, bmcr;
u32 new_adv = 0;
if (bp->phy_flags & REMOTE_PHY_CAP_FLAG)
return (bnx2_setup_remote_phy(bp, port));
if (!(bp->autoneg & AUTONEG_SPEED)) {
u32 new_bmcr;
int force_link_down = 0;
@ -1323,7 +1400,9 @@ bnx2_setup_serdes_phy(struct bnx2 *bp)
}
#define ETHTOOL_ALL_FIBRE_SPEED \
(ADVERTISED_1000baseT_Full)
(bp->phy_flags & PHY_2_5G_CAPABLE_FLAG) ? \
(ADVERTISED_2500baseX_Full | ADVERTISED_1000baseT_Full) :\
(ADVERTISED_1000baseT_Full)
#define ETHTOOL_ALL_COPPER_SPEED \
(ADVERTISED_10baseT_Half | ADVERTISED_10baseT_Full | \
@ -1335,6 +1414,188 @@ bnx2_setup_serdes_phy(struct bnx2 *bp)
#define PHY_ALL_1000_SPEED (ADVERTISE_1000HALF | ADVERTISE_1000FULL)
static void
bnx2_set_default_remote_link(struct bnx2 *bp)
{
u32 link;
if (bp->phy_port == PORT_TP)
link = REG_RD_IND(bp, bp->shmem_base + BNX2_RPHY_COPPER_LINK);
else
link = REG_RD_IND(bp, bp->shmem_base + BNX2_RPHY_SERDES_LINK);
if (link & BNX2_NETLINK_SET_LINK_ENABLE_AUTONEG) {
bp->req_line_speed = 0;
bp->autoneg |= AUTONEG_SPEED;
bp->advertising = ADVERTISED_Autoneg;
if (link & BNX2_NETLINK_SET_LINK_SPEED_10HALF)
bp->advertising |= ADVERTISED_10baseT_Half;
if (link & BNX2_NETLINK_SET_LINK_SPEED_10FULL)
bp->advertising |= ADVERTISED_10baseT_Full;
if (link & BNX2_NETLINK_SET_LINK_SPEED_100HALF)
bp->advertising |= ADVERTISED_100baseT_Half;
if (link & BNX2_NETLINK_SET_LINK_SPEED_100FULL)
bp->advertising |= ADVERTISED_100baseT_Full;
if (link & BNX2_NETLINK_SET_LINK_SPEED_1GFULL)
bp->advertising |= ADVERTISED_1000baseT_Full;
if (link & BNX2_NETLINK_SET_LINK_SPEED_2G5FULL)
bp->advertising |= ADVERTISED_2500baseX_Full;
} else {
bp->autoneg = 0;
bp->advertising = 0;
bp->req_duplex = DUPLEX_FULL;
if (link & BNX2_NETLINK_SET_LINK_SPEED_10) {
bp->req_line_speed = SPEED_10;
if (link & BNX2_NETLINK_SET_LINK_SPEED_10HALF)
bp->req_duplex = DUPLEX_HALF;
}
if (link & BNX2_NETLINK_SET_LINK_SPEED_100) {
bp->req_line_speed = SPEED_100;
if (link & BNX2_NETLINK_SET_LINK_SPEED_100HALF)
bp->req_duplex = DUPLEX_HALF;
}
if (link & BNX2_NETLINK_SET_LINK_SPEED_1GFULL)
bp->req_line_speed = SPEED_1000;
if (link & BNX2_NETLINK_SET_LINK_SPEED_2G5FULL)
bp->req_line_speed = SPEED_2500;
}
}
static void
bnx2_set_default_link(struct bnx2 *bp)
{
if (bp->phy_flags & REMOTE_PHY_CAP_FLAG)
return bnx2_set_default_remote_link(bp);
bp->autoneg = AUTONEG_SPEED | AUTONEG_FLOW_CTRL;
bp->req_line_speed = 0;
if (bp->phy_flags & PHY_SERDES_FLAG) {
u32 reg;
bp->advertising = ETHTOOL_ALL_FIBRE_SPEED | ADVERTISED_Autoneg;
reg = REG_RD_IND(bp, bp->shmem_base + BNX2_PORT_HW_CFG_CONFIG);
reg &= BNX2_PORT_HW_CFG_CFG_DFLT_LINK_MASK;
if (reg == BNX2_PORT_HW_CFG_CFG_DFLT_LINK_1G) {
bp->autoneg = 0;
bp->req_line_speed = bp->line_speed = SPEED_1000;
bp->req_duplex = DUPLEX_FULL;
}
} else
bp->advertising = ETHTOOL_ALL_COPPER_SPEED | ADVERTISED_Autoneg;
}
static void
bnx2_send_heart_beat(struct bnx2 *bp)
{
u32 msg;
u32 addr;
spin_lock(&bp->indirect_lock);
msg = (u32) (++bp->fw_drv_pulse_wr_seq & BNX2_DRV_PULSE_SEQ_MASK);
addr = bp->shmem_base + BNX2_DRV_PULSE_MB;
REG_WR(bp, BNX2_PCICFG_REG_WINDOW_ADDRESS, addr);
REG_WR(bp, BNX2_PCICFG_REG_WINDOW, msg);
spin_unlock(&bp->indirect_lock);
}
static void
bnx2_remote_phy_event(struct bnx2 *bp)
{
u32 msg;
u8 link_up = bp->link_up;
u8 old_port;
msg = REG_RD_IND(bp, bp->shmem_base + BNX2_LINK_STATUS);
if (msg & BNX2_LINK_STATUS_HEART_BEAT_EXPIRED)
bnx2_send_heart_beat(bp);
msg &= ~BNX2_LINK_STATUS_HEART_BEAT_EXPIRED;
if ((msg & BNX2_LINK_STATUS_LINK_UP) == BNX2_LINK_STATUS_LINK_DOWN)
bp->link_up = 0;
else {
u32 speed;
bp->link_up = 1;
speed = msg & BNX2_LINK_STATUS_SPEED_MASK;
bp->duplex = DUPLEX_FULL;
switch (speed) {
case BNX2_LINK_STATUS_10HALF:
bp->duplex = DUPLEX_HALF;
case BNX2_LINK_STATUS_10FULL:
bp->line_speed = SPEED_10;
break;
case BNX2_LINK_STATUS_100HALF:
bp->duplex = DUPLEX_HALF;
case BNX2_LINK_STATUS_100BASE_T4:
case BNX2_LINK_STATUS_100FULL:
bp->line_speed = SPEED_100;
break;
case BNX2_LINK_STATUS_1000HALF:
bp->duplex = DUPLEX_HALF;
case BNX2_LINK_STATUS_1000FULL:
bp->line_speed = SPEED_1000;
break;
case BNX2_LINK_STATUS_2500HALF:
bp->duplex = DUPLEX_HALF;
case BNX2_LINK_STATUS_2500FULL:
bp->line_speed = SPEED_2500;
break;
default:
bp->line_speed = 0;
break;
}
spin_lock(&bp->phy_lock);
bp->flow_ctrl = 0;
if ((bp->autoneg & (AUTONEG_SPEED | AUTONEG_FLOW_CTRL)) !=
(AUTONEG_SPEED | AUTONEG_FLOW_CTRL)) {
if (bp->duplex == DUPLEX_FULL)
bp->flow_ctrl = bp->req_flow_ctrl;
} else {
if (msg & BNX2_LINK_STATUS_TX_FC_ENABLED)
bp->flow_ctrl |= FLOW_CTRL_TX;
if (msg & BNX2_LINK_STATUS_RX_FC_ENABLED)
bp->flow_ctrl |= FLOW_CTRL_RX;
}
old_port = bp->phy_port;
if (msg & BNX2_LINK_STATUS_SERDES_LINK)
bp->phy_port = PORT_FIBRE;
else
bp->phy_port = PORT_TP;
if (old_port != bp->phy_port)
bnx2_set_default_link(bp);
spin_unlock(&bp->phy_lock);
}
if (bp->link_up != link_up)
bnx2_report_link(bp);
bnx2_set_mac_link(bp);
}
static int
bnx2_set_remote_link(struct bnx2 *bp)
{
u32 evt_code;
evt_code = REG_RD_IND(bp, bp->shmem_base + BNX2_FW_EVT_CODE_MB);
switch (evt_code) {
case BNX2_FW_EVT_CODE_LINK_EVENT:
bnx2_remote_phy_event(bp);
break;
case BNX2_FW_EVT_CODE_SW_TIMER_EXPIRATION_EVENT:
default:
bnx2_send_heart_beat(bp);
break;
}
return 0;
}
static int
bnx2_setup_copper_phy(struct bnx2 *bp)
{
@ -1433,13 +1694,13 @@ bnx2_setup_copper_phy(struct bnx2 *bp)
}
static int
bnx2_setup_phy(struct bnx2 *bp)
bnx2_setup_phy(struct bnx2 *bp, u8 port)
{
if (bp->loopback == MAC_LOOPBACK)
return 0;
if (bp->phy_flags & PHY_SERDES_FLAG) {
return (bnx2_setup_serdes_phy(bp));
return (bnx2_setup_serdes_phy(bp, port));
}
else {
return (bnx2_setup_copper_phy(bp));
@ -1659,6 +1920,9 @@ bnx2_init_phy(struct bnx2 *bp)
REG_WR(bp, BNX2_EMAC_ATTENTION_ENA, BNX2_EMAC_ATTENTION_ENA_LINK);
if (bp->phy_flags & REMOTE_PHY_CAP_FLAG)
goto setup_phy;
bnx2_read_phy(bp, MII_PHYSID1, &val);
bp->phy_id = val << 16;
bnx2_read_phy(bp, MII_PHYSID2, &val);
@ -1676,7 +1940,9 @@ bnx2_init_phy(struct bnx2 *bp)
rc = bnx2_init_copper_phy(bp);
}
bnx2_setup_phy(bp);
setup_phy:
if (!rc)
rc = bnx2_setup_phy(bp, bp->phy_port);
return rc;
}
@ -1984,6 +2250,9 @@ bnx2_phy_int(struct bnx2 *bp)
bnx2_set_link(bp);
spin_unlock(&bp->phy_lock);
}
if (bnx2_phy_event_is_set(bp, STATUS_ATTN_BITS_TIMER_ABORT))
bnx2_set_remote_link(bp);
}
static void
@ -2297,6 +2566,7 @@ bnx2_interrupt(int irq, void *dev_instance)
{
struct net_device *dev = dev_instance;
struct bnx2 *bp = netdev_priv(dev);
struct status_block *sblk = bp->status_blk;
/* When using INTx, it is possible for the interrupt to arrive
* at the CPU before the status block posted prior to the
@ -2304,7 +2574,7 @@ bnx2_interrupt(int irq, void *dev_instance)
* When using MSI, the MSI message will always complete after
* the status block write.
*/
if ((bp->status_blk->status_idx == bp->last_status_idx) &&
if ((sblk->status_idx == bp->last_status_idx) &&
(REG_RD(bp, BNX2_PCICFG_MISC_STATUS) &
BNX2_PCICFG_MISC_STATUS_INTA_VALUE))
return IRQ_NONE;
@ -2313,16 +2583,25 @@ bnx2_interrupt(int irq, void *dev_instance)
BNX2_PCICFG_INT_ACK_CMD_USE_INT_HC_PARAM |
BNX2_PCICFG_INT_ACK_CMD_MASK_INT);
/* Read back to deassert IRQ immediately to avoid too many
* spurious interrupts.
*/
REG_RD(bp, BNX2_PCICFG_INT_ACK_CMD);
/* Return here if interrupt is shared and is disabled. */
if (unlikely(atomic_read(&bp->intr_sem) != 0))
return IRQ_HANDLED;
netif_rx_schedule(dev);
if (netif_rx_schedule_prep(dev)) {
bp->last_status_idx = sblk->status_idx;
__netif_rx_schedule(dev);
}
return IRQ_HANDLED;
}
#define STATUS_ATTN_EVENTS STATUS_ATTN_BITS_LINK_STATE
#define STATUS_ATTN_EVENTS (STATUS_ATTN_BITS_LINK_STATE | \
STATUS_ATTN_BITS_TIMER_ABORT)
static inline int
bnx2_has_work(struct bnx2 *bp)
@ -3562,6 +3841,36 @@ nvram_write_end:
return rc;
}
static void
bnx2_init_remote_phy(struct bnx2 *bp)
{
u32 val;
bp->phy_flags &= ~REMOTE_PHY_CAP_FLAG;
if (!(bp->phy_flags & PHY_SERDES_FLAG))
return;
val = REG_RD_IND(bp, bp->shmem_base + BNX2_FW_CAP_MB);
if ((val & BNX2_FW_CAP_SIGNATURE_MASK) != BNX2_FW_CAP_SIGNATURE)
return;
if (val & BNX2_FW_CAP_REMOTE_PHY_CAPABLE) {
if (netif_running(bp->dev)) {
val = BNX2_DRV_ACK_CAP_SIGNATURE |
BNX2_FW_CAP_REMOTE_PHY_CAPABLE;
REG_WR_IND(bp, bp->shmem_base + BNX2_DRV_ACK_CAP_MB,
val);
}
bp->phy_flags |= REMOTE_PHY_CAP_FLAG;
val = REG_RD_IND(bp, bp->shmem_base + BNX2_LINK_STATUS);
if (val & BNX2_LINK_STATUS_SERDES_LINK)
bp->phy_port = PORT_FIBRE;
else
bp->phy_port = PORT_TP;
}
}
static int
bnx2_reset_chip(struct bnx2 *bp, u32 reset_code)
{
@ -3642,6 +3951,12 @@ bnx2_reset_chip(struct bnx2 *bp, u32 reset_code)
if (rc)
return rc;
spin_lock_bh(&bp->phy_lock);
bnx2_init_remote_phy(bp);
if (bp->phy_flags & REMOTE_PHY_CAP_FLAG)
bnx2_set_default_remote_link(bp);
spin_unlock_bh(&bp->phy_lock);
if (CHIP_ID(bp) == CHIP_ID_5706_A0) {
/* Adjust the voltage regulator to two steps lower. The default
* of this register is 0x0000000e. */
@ -3826,7 +4141,7 @@ bnx2_init_chip(struct bnx2 *bp)
rc = bnx2_fw_sync(bp, BNX2_DRV_MSG_DATA_WAIT2 | BNX2_DRV_MSG_CODE_RESET,
0);
REG_WR(bp, BNX2_MISC_ENABLE_SET_BITS, 0x5ffffff);
REG_WR(bp, BNX2_MISC_ENABLE_SET_BITS, BNX2_MISC_ENABLE_DEFAULT);
REG_RD(bp, BNX2_MISC_ENABLE_SET_BITS);
udelay(20);
@ -4069,8 +4384,8 @@ bnx2_init_nic(struct bnx2 *bp)
spin_lock_bh(&bp->phy_lock);
bnx2_init_phy(bp);
spin_unlock_bh(&bp->phy_lock);
bnx2_set_link(bp);
spin_unlock_bh(&bp->phy_lock);
return 0;
}
@ -4600,6 +4915,9 @@ bnx2_5706_serdes_timer(struct bnx2 *bp)
static void
bnx2_5708_serdes_timer(struct bnx2 *bp)
{
if (bp->phy_flags & REMOTE_PHY_CAP_FLAG)
return;
if ((bp->phy_flags & PHY_2_5G_CAPABLE_FLAG) == 0) {
bp->serdes_an_pending = 0;
return;
@ -4631,7 +4949,6 @@ static void
bnx2_timer(unsigned long data)
{
struct bnx2 *bp = (struct bnx2 *) data;
u32 msg;
if (!netif_running(bp->dev))
return;
@ -4639,8 +4956,7 @@ bnx2_timer(unsigned long data)
if (atomic_read(&bp->intr_sem) != 0)
goto bnx2_restart_timer;
msg = (u32) ++bp->fw_drv_pulse_wr_seq;
REG_WR_IND(bp, bp->shmem_base + BNX2_DRV_PULSE_MB, msg);
bnx2_send_heart_beat(bp);
bp->stats_blk->stat_FwRxDrop = REG_RD_IND(bp, BNX2_FW_RX_DROP_COUNT);
@ -5083,17 +5399,25 @@ static int
bnx2_get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
{
struct bnx2 *bp = netdev_priv(dev);
int support_serdes = 0, support_copper = 0;
cmd->supported = SUPPORTED_Autoneg;
if (bp->phy_flags & PHY_SERDES_FLAG) {
if (bp->phy_flags & REMOTE_PHY_CAP_FLAG) {
support_serdes = 1;
support_copper = 1;
} else if (bp->phy_port == PORT_FIBRE)
support_serdes = 1;
else
support_copper = 1;
if (support_serdes) {
cmd->supported |= SUPPORTED_1000baseT_Full |
SUPPORTED_FIBRE;
if (bp->phy_flags & PHY_2_5G_CAPABLE_FLAG)
cmd->supported |= SUPPORTED_2500baseX_Full;
cmd->port = PORT_FIBRE;
}
else {
if (support_copper) {
cmd->supported |= SUPPORTED_10baseT_Half |
SUPPORTED_10baseT_Full |
SUPPORTED_100baseT_Half |
@ -5101,9 +5425,10 @@ bnx2_get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
SUPPORTED_1000baseT_Full |
SUPPORTED_TP;
cmd->port = PORT_TP;
}
spin_lock_bh(&bp->phy_lock);
cmd->port = bp->phy_port;
cmd->advertising = bp->advertising;
if (bp->autoneg & AUTONEG_SPEED) {
@ -5121,6 +5446,7 @@ bnx2_get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
cmd->speed = -1;
cmd->duplex = -1;
}
spin_unlock_bh(&bp->phy_lock);
cmd->transceiver = XCVR_INTERNAL;
cmd->phy_address = bp->phy_addr;
@ -5136,6 +5462,15 @@ bnx2_set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
u8 req_duplex = bp->req_duplex;
u16 req_line_speed = bp->req_line_speed;
u32 advertising = bp->advertising;
int err = -EINVAL;
spin_lock_bh(&bp->phy_lock);
if (cmd->port != PORT_TP && cmd->port != PORT_FIBRE)
goto err_out_unlock;
if (cmd->port != bp->phy_port && !(bp->phy_flags & REMOTE_PHY_CAP_FLAG))
goto err_out_unlock;
if (cmd->autoneg == AUTONEG_ENABLE) {
autoneg |= AUTONEG_SPEED;
@ -5148,44 +5483,41 @@ bnx2_set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
(cmd->advertising == ADVERTISED_100baseT_Half) ||
(cmd->advertising == ADVERTISED_100baseT_Full)) {
if (bp->phy_flags & PHY_SERDES_FLAG)
return -EINVAL;
if (cmd->port == PORT_FIBRE)
goto err_out_unlock;
advertising = cmd->advertising;
} else if (cmd->advertising == ADVERTISED_2500baseX_Full) {
if (!(bp->phy_flags & PHY_2_5G_CAPABLE_FLAG))
return -EINVAL;
} else if (cmd->advertising == ADVERTISED_1000baseT_Full) {
if (!(bp->phy_flags & PHY_2_5G_CAPABLE_FLAG) ||
(cmd->port == PORT_TP))
goto err_out_unlock;
} else if (cmd->advertising == ADVERTISED_1000baseT_Full)
advertising = cmd->advertising;
}
else if (cmd->advertising == ADVERTISED_1000baseT_Half) {
return -EINVAL;
}
else if (cmd->advertising == ADVERTISED_1000baseT_Half)
goto err_out_unlock;
else {
if (bp->phy_flags & PHY_SERDES_FLAG) {
if (cmd->port == PORT_FIBRE)
advertising = ETHTOOL_ALL_FIBRE_SPEED;
}
else {
else
advertising = ETHTOOL_ALL_COPPER_SPEED;
}
}
advertising |= ADVERTISED_Autoneg;
}
else {
if (bp->phy_flags & PHY_SERDES_FLAG) {
if (cmd->port == PORT_FIBRE) {
if ((cmd->speed != SPEED_1000 &&
cmd->speed != SPEED_2500) ||
(cmd->duplex != DUPLEX_FULL))
return -EINVAL;
goto err_out_unlock;
if (cmd->speed == SPEED_2500 &&
!(bp->phy_flags & PHY_2_5G_CAPABLE_FLAG))
return -EINVAL;
}
else if (cmd->speed == SPEED_1000) {
return -EINVAL;
goto err_out_unlock;
}
else if (cmd->speed == SPEED_1000 || cmd->speed == SPEED_2500)
goto err_out_unlock;
autoneg &= ~AUTONEG_SPEED;
req_line_speed = cmd->speed;
req_duplex = cmd->duplex;
@ -5197,13 +5529,12 @@ bnx2_set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
bp->req_line_speed = req_line_speed;
bp->req_duplex = req_duplex;
spin_lock_bh(&bp->phy_lock);
bnx2_setup_phy(bp);
err = bnx2_setup_phy(bp, cmd->port);
err_out_unlock:
spin_unlock_bh(&bp->phy_lock);
return 0;
return err;
}
static void
@ -5214,11 +5545,7 @@ bnx2_get_drvinfo(struct net_device *dev, struct ethtool_drvinfo *info)
strcpy(info->driver, DRV_MODULE_NAME);
strcpy(info->version, DRV_MODULE_VERSION);
strcpy(info->bus_info, pci_name(bp->pdev));
info->fw_version[0] = ((bp->fw_ver & 0xff000000) >> 24) + '0';
info->fw_version[2] = ((bp->fw_ver & 0xff0000) >> 16) + '0';
info->fw_version[4] = ((bp->fw_ver & 0xff00) >> 8) + '0';
info->fw_version[1] = info->fw_version[3] = '.';
info->fw_version[5] = 0;
strcpy(info->fw_version, bp->fw_version);
}
#define BNX2_REGDUMP_LEN (32 * 1024)
@ -5330,6 +5657,14 @@ bnx2_nway_reset(struct net_device *dev)
spin_lock_bh(&bp->phy_lock);
if (bp->phy_flags & REMOTE_PHY_CAP_FLAG) {
int rc;
rc = bnx2_setup_remote_phy(bp, bp->phy_port);
spin_unlock_bh(&bp->phy_lock);
return rc;
}
/* Force a link down visible on the other side */
if (bp->phy_flags & PHY_SERDES_FLAG) {
bnx2_write_phy(bp, bp->mii_bmcr, BMCR_LOOPBACK);
@ -5543,7 +5878,7 @@ bnx2_set_pauseparam(struct net_device *dev, struct ethtool_pauseparam *epause)
spin_lock_bh(&bp->phy_lock);
bnx2_setup_phy(bp);
bnx2_setup_phy(bp, bp->phy_port);
spin_unlock_bh(&bp->phy_lock);
@ -5939,6 +6274,9 @@ bnx2_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
case SIOCGMIIREG: {
u32 mii_regval;
if (bp->phy_flags & REMOTE_PHY_CAP_FLAG)
return -EOPNOTSUPP;
if (!netif_running(dev))
return -EAGAIN;
@ -5955,6 +6293,9 @@ bnx2_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
if (!capable(CAP_NET_ADMIN))
return -EPERM;
if (bp->phy_flags & REMOTE_PHY_CAP_FLAG)
return -EOPNOTSUPP;
if (!netif_running(dev))
return -EAGAIN;
@ -6116,7 +6457,7 @@ bnx2_init_board(struct pci_dev *pdev, struct net_device *dev)
{
struct bnx2 *bp;
unsigned long mem_len;
int rc;
int rc, i, j;
u32 reg;
u64 dma_mask, persist_dma_mask;
@ -6273,7 +6614,35 @@ bnx2_init_board(struct pci_dev *pdev, struct net_device *dev)
goto err_out_unmap;
}
bp->fw_ver = REG_RD_IND(bp, bp->shmem_base + BNX2_DEV_INFO_BC_REV);
reg = REG_RD_IND(bp, bp->shmem_base + BNX2_DEV_INFO_BC_REV);
for (i = 0, j = 0; i < 3; i++) {
u8 num, k, skip0;
num = (u8) (reg >> (24 - (i * 8)));
for (k = 100, skip0 = 1; k >= 1; num %= k, k /= 10) {
if (num >= k || !skip0 || k == 1) {
bp->fw_version[j++] = (num / k) + '0';
skip0 = 0;
}
}
if (i != 2)
bp->fw_version[j++] = '.';
}
reg = REG_RD_IND(bp, bp->shmem_base + BNX2_BC_STATE_CONDITION);
reg &= BNX2_CONDITION_MFW_RUN_MASK;
if (reg != BNX2_CONDITION_MFW_RUN_UNKNOWN &&
reg != BNX2_CONDITION_MFW_RUN_NONE) {
int i;
u32 addr = REG_RD_IND(bp, bp->shmem_base + BNX2_MFW_VER_PTR);
bp->fw_version[j++] = ' ';
for (i = 0; i < 3; i++) {
reg = REG_RD_IND(bp, addr + i * 4);
reg = swab32(reg);
memcpy(&bp->fw_version[j], &reg, 4);
j += 4;
}
}
reg = REG_RD_IND(bp, bp->shmem_base + BNX2_PORT_HW_CFG_MAC_UPPER);
bp->mac_addr[0] = (u8) (reg >> 8);
@ -6315,7 +6684,9 @@ bnx2_init_board(struct pci_dev *pdev, struct net_device *dev)
else if (CHIP_BOND_ID(bp) & CHIP_BOND_ID_SERDES_BIT)
bp->phy_flags |= PHY_SERDES_FLAG;
bp->phy_port = PORT_TP;
if (bp->phy_flags & PHY_SERDES_FLAG) {
bp->phy_port = PORT_FIBRE;
bp->flags |= NO_WOL_FLAG;
if (CHIP_NUM(bp) != CHIP_NUM_5706) {
bp->phy_addr = 2;
@ -6324,6 +6695,8 @@ bnx2_init_board(struct pci_dev *pdev, struct net_device *dev)
if (reg & BNX2_SHARED_HW_CFG_PHY_2_5G)
bp->phy_flags |= PHY_2_5G_CAPABLE_FLAG;
}
bnx2_init_remote_phy(bp);
} else if (CHIP_NUM(bp) == CHIP_NUM_5706 ||
CHIP_NUM(bp) == CHIP_NUM_5708)
bp->phy_flags |= PHY_CRC_FIX_FLAG;
@ -6374,23 +6747,7 @@ bnx2_init_board(struct pci_dev *pdev, struct net_device *dev)
}
}
bp->autoneg = AUTONEG_SPEED | AUTONEG_FLOW_CTRL;
bp->req_line_speed = 0;
if (bp->phy_flags & PHY_SERDES_FLAG) {
bp->advertising = ETHTOOL_ALL_FIBRE_SPEED | ADVERTISED_Autoneg;
reg = REG_RD_IND(bp, bp->shmem_base + BNX2_PORT_HW_CFG_CONFIG);
reg &= BNX2_PORT_HW_CFG_CFG_DFLT_LINK_MASK;
if (reg == BNX2_PORT_HW_CFG_CFG_DFLT_LINK_1G) {
bp->autoneg = 0;
bp->req_line_speed = bp->line_speed = SPEED_1000;
bp->req_duplex = DUPLEX_FULL;
}
}
else {
bp->advertising = ETHTOOL_ALL_COPPER_SPEED | ADVERTISED_Autoneg;
}
bnx2_set_default_link(bp);
bp->req_flow_ctrl = FLOW_CTRL_RX | FLOW_CTRL_TX;
init_timer(&bp->timer);
@ -6490,10 +6847,10 @@ bnx2_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
memcpy(dev->perm_addr, bp->mac_addr, 6);
bp->name = board_info[ent->driver_data].name;
dev->features |= NETIF_F_IP_CSUM | NETIF_F_SG;
if (CHIP_NUM(bp) == CHIP_NUM_5709)
dev->features |= NETIF_F_HW_CSUM | NETIF_F_SG;
else
dev->features |= NETIF_F_IP_CSUM | NETIF_F_SG;
dev->features |= NETIF_F_IPV6_CSUM;
#ifdef BCM_VLAN
dev->features |= NETIF_F_HW_VLAN_TX | NETIF_F_HW_VLAN_RX;
#endif


@ -6338,6 +6338,8 @@ struct l2_fhdr {
#define RX_COPY_THRESH 92
#define BNX2_MISC_ENABLE_DEFAULT 0x7ffffff
#define DMA_READ_CHANS 5
#define DMA_WRITE_CHANS 3
@ -6537,6 +6539,7 @@ struct bnx2 {
#define PHY_INT_MODE_AUTO_POLLING_FLAG 0x100
#define PHY_INT_MODE_LINK_READY_FLAG 0x200
#define PHY_DIS_EARLY_DAC_FLAG 0x400
#define REMOTE_PHY_CAP_FLAG 0x800
u32 mii_bmcr;
u32 mii_bmsr;
@ -6625,6 +6628,7 @@ struct bnx2 {
u16 req_line_speed;
u8 req_duplex;
u8 phy_port;
u8 link_up;
u16 line_speed;
@ -6656,7 +6660,7 @@ struct bnx2 {
u32 shmem_base;
u32 fw_ver;
char fw_version[32];
int pm_cap;
int pcix_cap;
@ -6770,7 +6774,7 @@ struct fw_info {
* the firmware has timed out, the driver will assume there is no firmware
* running and there won't be any firmware-driver synchronization during a
* driver reset. */
#define FW_ACK_TIME_OUT_MS 100
#define FW_ACK_TIME_OUT_MS 1000
#define BNX2_DRV_RESET_SIGNATURE 0x00000000
@ -6788,6 +6792,7 @@ struct fw_info {
#define BNX2_DRV_MSG_CODE_DIAG 0x07000000
#define BNX2_DRV_MSG_CODE_SUSPEND_NO_WOL 0x09000000
#define BNX2_DRV_MSG_CODE_UNLOAD_LNK_DN 0x0b000000
#define BNX2_DRV_MSG_CODE_CMD_SET_LINK 0x10000000
#define BNX2_DRV_MSG_DATA 0x00ff0000
#define BNX2_DRV_MSG_DATA_WAIT0 0x00010000
@ -6836,6 +6841,7 @@ struct fw_info {
#define BNX2_LINK_STATUS_SERDES_LINK (1<<20)
#define BNX2_LINK_STATUS_PARTNER_AD_2500FULL (1<<21)
#define BNX2_LINK_STATUS_PARTNER_AD_2500HALF (1<<22)
#define BNX2_LINK_STATUS_HEART_BEAT_EXPIRED (1<<31)
#define BNX2_DRV_PULSE_MB 0x00000010
#define BNX2_DRV_PULSE_SEQ_MASK 0x00007fff
@ -6845,6 +6851,30 @@ struct fw_info {
* This is used for debugging. */
#define BNX2_DRV_MSG_DATA_PULSE_CODE_ALWAYS_ALIVE 0x00080000
#define BNX2_DRV_MB_ARG0 0x00000014
#define BNX2_NETLINK_SET_LINK_SPEED_10HALF (1<<0)
#define BNX2_NETLINK_SET_LINK_SPEED_10FULL (1<<1)
#define BNX2_NETLINK_SET_LINK_SPEED_10 \
(BNX2_NETLINK_SET_LINK_SPEED_10HALF | \
BNX2_NETLINK_SET_LINK_SPEED_10FULL)
#define BNX2_NETLINK_SET_LINK_SPEED_100HALF (1<<2)
#define BNX2_NETLINK_SET_LINK_SPEED_100FULL (1<<3)
#define BNX2_NETLINK_SET_LINK_SPEED_100 \
(BNX2_NETLINK_SET_LINK_SPEED_100HALF | \
BNX2_NETLINK_SET_LINK_SPEED_100FULL)
#define BNX2_NETLINK_SET_LINK_SPEED_1GHALF (1<<4)
#define BNX2_NETLINK_SET_LINK_SPEED_1GFULL (1<<5)
#define BNX2_NETLINK_SET_LINK_SPEED_2G5HALF (1<<6)
#define BNX2_NETLINK_SET_LINK_SPEED_2G5FULL (1<<7)
#define BNX2_NETLINK_SET_LINK_SPEED_10GHALF (1<<8)
#define BNX2_NETLINK_SET_LINK_SPEED_10GFULL (1<<9)
#define BNX2_NETLINK_SET_LINK_ENABLE_AUTONEG (1<<10)
#define BNX2_NETLINK_SET_LINK_PHY_APP_REMOTE (1<<11)
#define BNX2_NETLINK_SET_LINK_FC_SYM_PAUSE (1<<12)
#define BNX2_NETLINK_SET_LINK_FC_ASYM_PAUSE (1<<13)
#define BNX2_NETLINK_SET_LINK_ETH_AT_WIRESPEED (1<<14)
#define BNX2_NETLINK_SET_LINK_PHY_RESET (1<<15)
#define BNX2_DEV_INFO_SIGNATURE 0x00000020
#define BNX2_DEV_INFO_SIGNATURE_MAGIC 0x44564900
#define BNX2_DEV_INFO_SIGNATURE_MAGIC_MASK 0xffffff00
@ -7006,6 +7036,8 @@ struct fw_info {
#define BNX2_PORT_FEATURE_MBA_VLAN_TAG_MASK 0xffff
#define BNX2_PORT_FEATURE_MBA_VLAN_ENABLE 0x10000
#define BNX2_MFW_VER_PTR 0x00000014c
#define BNX2_BC_STATE_RESET_TYPE 0x000001c0
#define BNX2_BC_STATE_RESET_TYPE_SIG 0x00005254
#define BNX2_BC_STATE_RESET_TYPE_SIG_MASK 0x0000ffff
@ -7059,12 +7091,42 @@ struct fw_info {
#define BNX2_BC_STATE_ERR_NO_RXP (BNX2_BC_STATE_SIGN | 0x0600)
#define BNX2_BC_STATE_ERR_TOO_MANY_RBUF (BNX2_BC_STATE_SIGN | 0x0700)
#define BNX2_BC_STATE_CONDITION 0x000001c8
#define BNX2_CONDITION_MFW_RUN_UNKNOWN 0x00000000
#define BNX2_CONDITION_MFW_RUN_IPMI 0x00002000
#define BNX2_CONDITION_MFW_RUN_UMP 0x00004000
#define BNX2_CONDITION_MFW_RUN_NCSI 0x00006000
#define BNX2_CONDITION_MFW_RUN_NONE 0x0000e000
#define BNX2_CONDITION_MFW_RUN_MASK 0x0000e000
#define BNX2_BC_STATE_DEBUG_CMD 0x1dc
#define BNX2_BC_STATE_BC_DBG_CMD_SIGNATURE 0x42440000
#define BNX2_BC_STATE_BC_DBG_CMD_SIGNATURE_MASK 0xffff0000
#define BNX2_BC_STATE_BC_DBG_CMD_LOOP_CNT_MASK 0xffff
#define BNX2_BC_STATE_BC_DBG_CMD_LOOP_INFINITE 0xffff
#define BNX2_FW_EVT_CODE_MB 0x354
#define BNX2_FW_EVT_CODE_SW_TIMER_EXPIRATION_EVENT 0x00000000
#define BNX2_FW_EVT_CODE_LINK_EVENT 0x00000001
#define BNX2_DRV_ACK_CAP_MB 0x364
#define BNX2_DRV_ACK_CAP_SIGNATURE 0x35450000
#define BNX2_CAPABILITY_SIGNATURE_MASK 0xFFFF0000
#define BNX2_FW_CAP_MB 0x368
#define BNX2_FW_CAP_SIGNATURE 0xaa550000
#define BNX2_FW_ACK_DRV_SIGNATURE 0x52500000
#define BNX2_FW_CAP_SIGNATURE_MASK 0xffff0000
#define BNX2_FW_CAP_REMOTE_PHY_CAPABLE 0x00000001
#define BNX2_FW_CAP_REMOTE_PHY_PRESENT 0x00000002
#define BNX2_RPHY_SIGNATURE 0x36c
#define BNX2_RPHY_LOAD_SIGNATURE 0x5a5a5a5a
#define BNX2_RPHY_FLAGS 0x370
#define BNX2_RPHY_SERDES_LINK 0x374
#define BNX2_RPHY_COPPER_LINK 0x378
#define HOST_VIEW_SHMEM_BASE 0x167c00
#endif


@ -866,9 +866,9 @@ receive_packet (struct net_device *dev)
PCI_DMA_FROMDEVICE);
/* 16 byte align the IP header */
skb_reserve (skb, 2);
eth_copy_and_sum (skb,
skb_copy_to_linear_data (skb,
np->rx_skbuff[entry]->data,
pkt_len, 0);
pkt_len);
skb_put (skb, pkt_len);
pci_dma_sync_single_for_device(np->pdev,
desc->fraginfo &


@ -34,11 +34,12 @@
#include <linux/etherdevice.h>
#include <linux/init.h>
#include <linux/moduleparam.h>
#include <linux/rtnetlink.h>
#include <net/rtnetlink.h>
static int numdummies = 1;
static int dummy_xmit(struct sk_buff *skb, struct net_device *dev);
static struct net_device_stats *dummy_get_stats(struct net_device *dev);
static int dummy_set_address(struct net_device *dev, void *p)
{
@ -56,13 +57,13 @@ static void set_multicast_list(struct net_device *dev)
{
}
static void __init dummy_setup(struct net_device *dev)
static void dummy_setup(struct net_device *dev)
{
/* Initialize the device structure. */
dev->get_stats = dummy_get_stats;
dev->hard_start_xmit = dummy_xmit;
dev->set_multicast_list = set_multicast_list;
dev->set_mac_address = dummy_set_address;
dev->destructor = free_netdev;
/* Fill in device structure with ethernet-generic values. */
ether_setup(dev);
@ -76,77 +77,80 @@ static void __init dummy_setup(struct net_device *dev)
static int dummy_xmit(struct sk_buff *skb, struct net_device *dev)
{
struct net_device_stats *stats = netdev_priv(dev);
stats->tx_packets++;
stats->tx_bytes+=skb->len;
dev->stats.tx_packets++;
dev->stats.tx_bytes += skb->len;
dev_kfree_skb(skb);
return 0;
}
static struct net_device_stats *dummy_get_stats(struct net_device *dev)
static int dummy_validate(struct nlattr *tb[], struct nlattr *data[])
{
return netdev_priv(dev);
if (tb[IFLA_ADDRESS]) {
if (nla_len(tb[IFLA_ADDRESS]) != ETH_ALEN)
return -EINVAL;
if (!is_valid_ether_addr(nla_data(tb[IFLA_ADDRESS])))
return -EADDRNOTAVAIL;
}
return 0;
}
static struct net_device **dummies;
static struct rtnl_link_ops dummy_link_ops __read_mostly = {
.kind = "dummy",
.setup = dummy_setup,
.validate = dummy_validate,
};
/* Number of dummy devices to be set up by this module. */
module_param(numdummies, int, 0);
MODULE_PARM_DESC(numdummies, "Number of dummy pseudo devices");
static int __init dummy_init_one(int index)
static int __init dummy_init_one(void)
{
struct net_device *dev_dummy;
int err;
dev_dummy = alloc_netdev(sizeof(struct net_device_stats),
"dummy%d", dummy_setup);
dev_dummy = alloc_netdev(0, "dummy%d", dummy_setup);
if (!dev_dummy)
return -ENOMEM;
if ((err = register_netdev(dev_dummy))) {
free_netdev(dev_dummy);
dev_dummy = NULL;
} else {
dummies[index] = dev_dummy;
}
err = dev_alloc_name(dev_dummy, dev_dummy->name);
if (err < 0)
goto err;
dev_dummy->rtnl_link_ops = &dummy_link_ops;
err = register_netdevice(dev_dummy);
if (err < 0)
goto err;
return 0;
err:
free_netdev(dev_dummy);
return err;
}
static void dummy_free_one(int index)
{
unregister_netdev(dummies[index]);
free_netdev(dummies[index]);
}
static int __init dummy_init_module(void)
{
int i, err = 0;
dummies = kmalloc(numdummies * sizeof(void *), GFP_KERNEL);
if (!dummies)
return -ENOMEM;
rtnl_lock();
err = __rtnl_link_register(&dummy_link_ops);
for (i = 0; i < numdummies && !err; i++)
err = dummy_init_one(i);
if (err) {
i--;
while (--i >= 0)
dummy_free_one(i);
}
err = dummy_init_one();
if (err < 0)
__rtnl_link_unregister(&dummy_link_ops);
rtnl_unlock();
return err;
}
static void __exit dummy_cleanup_module(void)
{
int i;
for (i = 0; i < numdummies; i++)
dummy_free_one(i);
kfree(dummies);
rtnl_link_unregister(&dummy_link_ops);
}
module_init(dummy_init_module);
module_exit(dummy_cleanup_module);
MODULE_LICENSE("GPL");
MODULE_ALIAS_RTNL_LINK("dummy");


@ -1801,7 +1801,7 @@ speedo_rx(struct net_device *dev)
#if 1 || USE_IP_CSUM
/* Packet is in one chunk -- we can copy + cksum. */
eth_copy_and_sum(skb, sp->rx_skbuff[entry]->data, pkt_len, 0);
skb_copy_to_linear_data(skb, sp->rx_skbuff[entry]->data, pkt_len);
skb_put(skb, pkt_len);
#else
skb_copy_from_linear_data(sp->rx_skbuff[entry],


@ -1201,7 +1201,7 @@ static int epic_rx(struct net_device *dev, int budget)
ep->rx_ring[entry].bufaddr,
ep->rx_buf_sz,
PCI_DMA_FROMDEVICE);
eth_copy_and_sum(skb, ep->rx_skbuff[entry]->data, pkt_len, 0);
skb_copy_to_linear_data(skb, ep->rx_skbuff[entry]->data, pkt_len);
skb_put(skb, pkt_len);
pci_dma_sync_single_for_device(ep->pci_dev,
ep->rx_ring[entry].bufaddr,


@ -1727,8 +1727,8 @@ static int netdev_rx(struct net_device *dev)
/* Call copy + cksum if available. */
#if ! defined(__alpha__)
eth_copy_and_sum(skb,
np->cur_rx->skbuff->data, pkt_len, 0);
skb_copy_to_linear_data(skb,
np->cur_rx->skbuff->data, pkt_len);
skb_put(skb, pkt_len);
#else
memcpy(skb_put(skb, pkt_len),


@ -648,7 +648,7 @@ while (!((status = bdp->cbd_sc) & BD_ENET_RX_EMPTY)) {
fep->stats.rx_dropped++;
} else {
skb_put(skb,pkt_len-4); /* Make room */
eth_copy_and_sum(skb, data, pkt_len-4, 0);
skb_copy_to_linear_data(skb, data, pkt_len-4);
skb->protocol=eth_type_trans(skb,dev);
netif_rx(skb);
}


@ -1575,8 +1575,8 @@ static int hamachi_rx(struct net_device *dev)
PCI_DMA_FROMDEVICE);
/* Call copy + cksum if available. */
#if 1 || USE_IP_COPYSUM
eth_copy_and_sum(skb,
hmp->rx_skbuff[entry]->data, pkt_len, 0);
skb_copy_to_linear_data(skb,
hmp->rx_skbuff[entry]->data, pkt_len);
skb_put(skb, pkt_len);
#else
memcpy(skb_put(skb, pkt_len), hmp->rx_ring_dma


@ -136,13 +136,14 @@ resched:
}
static void __init ifb_setup(struct net_device *dev)
static void ifb_setup(struct net_device *dev)
{
/* Initialize the device structure. */
dev->get_stats = ifb_get_stats;
dev->hard_start_xmit = ifb_xmit;
dev->open = &ifb_open;
dev->stop = &ifb_close;
dev->destructor = free_netdev;
/* Fill in device structure with ethernet-generic values. */
ether_setup(dev);
@ -197,12 +198,6 @@ static struct net_device_stats *ifb_get_stats(struct net_device *dev)
return stats;
}
static struct net_device **ifbs;
/* Number of ifb devices to be set up by this module. */
module_param(numifbs, int, 0);
MODULE_PARM_DESC(numifbs, "Number of ifb devices");
static int ifb_close(struct net_device *dev)
{
struct ifb_private *dp = netdev_priv(dev);
@ -226,6 +221,28 @@ static int ifb_open(struct net_device *dev)
return 0;
}
static int ifb_validate(struct nlattr *tb[], struct nlattr *data[])
{
if (tb[IFLA_ADDRESS]) {
if (nla_len(tb[IFLA_ADDRESS]) != ETH_ALEN)
return -EINVAL;
if (!is_valid_ether_addr(nla_data(tb[IFLA_ADDRESS])))
return -EADDRNOTAVAIL;
}
return 0;
}
static struct rtnl_link_ops ifb_link_ops __read_mostly = {
.kind = "ifb",
.priv_size = sizeof(struct ifb_private),
.setup = ifb_setup,
.validate = ifb_validate,
};
/* Number of ifb devices to be set up by this module. */
module_param(numifbs, int, 0);
MODULE_PARM_DESC(numifbs, "Number of ifb devices");
static int __init ifb_init_one(int index)
{
struct net_device *dev_ifb;
@ -237,49 +254,44 @@ static int __init ifb_init_one(int index)
if (!dev_ifb)
return -ENOMEM;
if ((err = register_netdev(dev_ifb))) {
free_netdev(dev_ifb);
dev_ifb = NULL;
} else {
ifbs[index] = dev_ifb;
}
err = dev_alloc_name(dev_ifb, dev_ifb->name);
if (err < 0)
goto err;
dev_ifb->rtnl_link_ops = &ifb_link_ops;
err = register_netdevice(dev_ifb);
if (err < 0)
goto err;
return 0;
err:
free_netdev(dev_ifb);
return err;
}
static void ifb_free_one(int index)
{
unregister_netdev(ifbs[index]);
free_netdev(ifbs[index]);
}
static int __init ifb_init_module(void)
{
int i, err = 0;
ifbs = kmalloc(numifbs * sizeof(void *), GFP_KERNEL);
if (!ifbs)
return -ENOMEM;
int i, err;
rtnl_lock();
err = __rtnl_link_register(&ifb_link_ops);
for (i = 0; i < numifbs && !err; i++)
err = ifb_init_one(i);
if (err) {
i--;
while (--i >= 0)
ifb_free_one(i);
}
if (err)
__rtnl_link_unregister(&ifb_link_ops);
rtnl_unlock();
return err;
}
static void __exit ifb_cleanup_module(void)
{
int i;
for (i = 0; i < numifbs; i++)
ifb_free_one(i);
kfree(ifbs);
rtnl_link_unregister(&ifb_link_ops);
}
module_init(ifb_init_module);
module_exit(ifb_cleanup_module);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Jamal Hadi Salim");
MODULE_ALIAS_RTNL_LINK("ifb");


@ -4,7 +4,7 @@
* Version: 0.1.1
* Description: Irda KingSun/DonShine USB Dongle
* Status: Experimental
* Author: Alex Villac<EFBFBD>s Lasso <a_villacis@palosanto.com>
* Author: Alex Villacís Lasso <a_villacis@palosanto.com>
*
* Based on stir4200 and mcs7780 drivers, with (strange?) differences
*
@ -652,6 +652,6 @@ static void __exit kingsun_cleanup(void)
}
module_exit(kingsun_cleanup);
MODULE_AUTHOR("Alex Villac<EFBFBD>s Lasso <a_villacis@palosanto.com>");
MODULE_AUTHOR("Alex Villacís Lasso <a_villacis@palosanto.com>");
MODULE_DESCRIPTION("IrDA-USB Dongle Driver for KingSun/DonShine");
MODULE_LICENSE("GPL");


@ -44,6 +44,7 @@ MODULE_LICENSE("GPL");
#include <linux/time.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h>
#include <linux/mutex.h>
#include <asm/uaccess.h>
#include <asm/byteorder.h>
@ -1660,8 +1661,8 @@ vlsi_irda_probe(struct pci_dev *pdev, const struct pci_device_id *id)
idev = ndev->priv;
spin_lock_init(&idev->lock);
init_MUTEX(&idev->sem);
down(&idev->sem);
mutex_init(&idev->mtx);
mutex_lock(&idev->mtx);
idev->pdev = pdev;
if (vlsi_irda_init(ndev) < 0)
@ -1689,12 +1690,12 @@ vlsi_irda_probe(struct pci_dev *pdev, const struct pci_device_id *id)
IRDA_MESSAGE("%s: registered device %s\n", drivername, ndev->name);
pci_set_drvdata(pdev, ndev);
up(&idev->sem);
mutex_unlock(&idev->mtx);
return 0;
out_freedev:
up(&idev->sem);
mutex_unlock(&idev->mtx);
free_netdev(ndev);
out_disable:
pci_disable_device(pdev);
@ -1716,12 +1717,12 @@ static void __devexit vlsi_irda_remove(struct pci_dev *pdev)
unregister_netdev(ndev);
idev = ndev->priv;
down(&idev->sem);
mutex_lock(&idev->mtx);
if (idev->proc_entry) {
remove_proc_entry(ndev->name, vlsi_proc_root);
idev->proc_entry = NULL;
}
up(&idev->sem);
mutex_unlock(&idev->mtx);
free_netdev(ndev);
@ -1751,7 +1752,7 @@ static int vlsi_irda_suspend(struct pci_dev *pdev, pm_message_t state)
return 0;
}
idev = ndev->priv;
down(&idev->sem);
mutex_lock(&idev->mtx);
if (pdev->current_state != 0) { /* already suspended */
if (state.event > pdev->current_state) { /* simply go deeper */
pci_set_power_state(pdev, pci_choose_state(pdev, state));
@ -1759,7 +1760,7 @@ static int vlsi_irda_suspend(struct pci_dev *pdev, pm_message_t state)
}
else
IRDA_ERROR("%s - %s: invalid suspend request %u -> %u\n", __FUNCTION__, pci_name(pdev), pdev->current_state, state.event);
up(&idev->sem);
mutex_unlock(&idev->mtx);
return 0;
}
@ -1775,7 +1776,7 @@ static int vlsi_irda_suspend(struct pci_dev *pdev, pm_message_t state)
pci_set_power_state(pdev, pci_choose_state(pdev, state));
pdev->current_state = state.event;
idev->resume_ok = 1;
up(&idev->sem);
mutex_unlock(&idev->mtx);
return 0;
}
@ -1790,9 +1791,9 @@ static int vlsi_irda_resume(struct pci_dev *pdev)
return 0;
}
idev = ndev->priv;
down(&idev->sem);
mutex_lock(&idev->mtx);
if (pdev->current_state == 0) {
up(&idev->sem);
mutex_unlock(&idev->mtx);
IRDA_WARNING("%s - %s: already resumed\n",
__FUNCTION__, pci_name(pdev));
return 0;
@ -1814,7 +1815,7 @@ static int vlsi_irda_resume(struct pci_dev *pdev)
* device and independently resume_ok should catch any garbage config.
*/
IRDA_WARNING("%s - hm, nothing to resume?\n", __FUNCTION__);
up(&idev->sem);
mutex_unlock(&idev->mtx);
return 0;
}
@ -1824,7 +1825,7 @@ static int vlsi_irda_resume(struct pci_dev *pdev)
netif_device_attach(ndev);
}
idev->resume_ok = 0;
up(&idev->sem);
mutex_unlock(&idev->mtx);
return 0;
}


@ -728,7 +728,7 @@ typedef struct vlsi_irda_dev {
struct timeval last_rx;
spinlock_t lock;
struct semaphore sem;
struct mutex mtx;
u8 resume_ok;
struct proc_dir_entry *proc_entry;


@ -111,7 +111,7 @@ static int ixpdev_rx(struct net_device *dev, int *budget)
skb = dev_alloc_skb(desc->pkt_length + 2);
if (likely(skb != NULL)) {
skb_reserve(skb, 2);
eth_copy_and_sum(skb, buf, desc->pkt_length, 0);
skb_copy_to_linear_data(skb, buf, desc->pkt_length);
skb_put(skb, desc->pkt_length);
skb->protocol = eth_type_trans(skb, nds[desc->channel]);


@ -1186,9 +1186,9 @@ lance_rx(struct net_device *dev)
}
skb_reserve(skb,2); /* 16 byte align */
skb_put(skb,pkt_len); /* Make room */
eth_copy_and_sum(skb,
skb_copy_to_linear_data(skb,
(unsigned char *)isa_bus_to_virt((lp->rx_ring[entry].base & 0x00ffffff)),
pkt_len,0);
pkt_len);
skb->protocol=eth_type_trans(skb,dev);
netif_rx(skb);
dev->last_rx = jiffies;


@ -2357,8 +2357,8 @@ static void netdev_rx(struct net_device *dev, int *work_done, int work_to_do)
np->rx_dma[entry],
buflen,
PCI_DMA_FROMDEVICE);
eth_copy_and_sum(skb,
np->rx_skbuff[entry]->data, pkt_len, 0);
skb_copy_to_linear_data(skb,
np->rx_skbuff[entry]->data, pkt_len);
skb_put(skb, pkt_len);
pci_dma_sync_single_for_device(np->pci_dev,
np->rx_dma[entry],


@ -936,7 +936,7 @@ static void ni52_rcv_int(struct net_device *dev)
{
skb_reserve(skb,2);
skb_put(skb,totlen);
eth_copy_and_sum(skb,(char *) p->base+(unsigned long) rbd->buffer,totlen,0);
skb_copy_to_linear_data(skb,(char *) p->base+(unsigned long) rbd->buffer,totlen);
skb->protocol=eth_type_trans(skb,dev);
netif_rx(skb);
dev->last_rx = jiffies;


@ -1096,7 +1096,7 @@ static void ni65_recv_intr(struct net_device *dev,int csr0)
#ifdef RCV_VIA_SKB
if( (unsigned long) (skb->data + R_BUF_SIZE) > 0x1000000) {
skb_put(skb,len);
eth_copy_and_sum(skb, (unsigned char *)(p->recv_skb[p->rmdnum]->data),len,0);
skb_copy_to_linear_data(skb, (unsigned char *)(p->recv_skb[p->rmdnum]->data),len);
}
else {
struct sk_buff *skb1 = p->recv_skb[p->rmdnum];
@ -1108,7 +1108,7 @@ static void ni65_recv_intr(struct net_device *dev,int csr0)
}
#else
skb_put(skb,len);
eth_copy_and_sum(skb, (unsigned char *) p->recvbounce[p->rmdnum],len,0);
skb_copy_to_linear_data(skb, (unsigned char *) p->recvbounce[p->rmdnum],len);
#endif
p->stats.rx_packets++;
p->stats.rx_bytes += len;


@ -1567,7 +1567,7 @@ static void netdrv_rx_interrupt (struct net_device *dev,
if (skb) {
skb_reserve (skb, 2); /* 16 byte align the IP fields. */
eth_copy_and_sum (skb, &rx_ring[ring_offset + 4], pkt_size, 0);
skb_copy_to_linear_data (skb, &rx_ring[ring_offset + 4], pkt_size);
skb_put (skb, pkt_size);
skb->protocol = eth_type_trans (skb, dev);


@ -1235,9 +1235,9 @@ static void pcnet32_rx_entry(struct net_device *dev,
lp->rx_dma_addr[entry],
pkt_len,
PCI_DMA_FROMDEVICE);
eth_copy_and_sum(skb,
skb_copy_to_linear_data(skb,
(unsigned char *)(lp->rx_skbuff[entry]->data),
pkt_len, 0);
pkt_len);
pci_dma_sync_single_for_device(lp->pci_dev,
lp->rx_dma_addr[entry],
pkt_len,

drivers/net/pppol2tp.c: new file, 2486 lines (diff suppressed because it is too large)


@ -690,9 +690,9 @@ static int lan_saa9730_rx(struct net_device *dev)
lp->stats.rx_packets++;
skb_reserve(skb, 2); /* 16 byte align */
skb_put(skb, len); /* make room */
eth_copy_and_sum(skb,
skb_copy_to_linear_data(skb,
(unsigned char *) pData,
len, 0);
len);
skb->protocol = eth_type_trans(skb, dev);
netif_rx(skb);
dev->last_rx = jiffies;


@ -320,7 +320,7 @@ static inline void sgiseeq_rx(struct net_device *dev, struct sgiseeq_private *sp
skb_put(skb, len);
/* Copy out of kseg1 to avoid silly cache flush. */
eth_copy_and_sum(skb, pkt_pointer + 2, len, 0);
skb_copy_to_linear_data(skb, pkt_pointer + 2, len);
skb->protocol = eth_type_trans(skb, dev);
/* We don't want to receive our own packets */


@ -548,7 +548,7 @@ static inline int sis190_try_rx_copy(struct sk_buff **sk_buff, int pkt_size,
skb = dev_alloc_skb(pkt_size + NET_IP_ALIGN);
if (skb) {
skb_reserve(skb, NET_IP_ALIGN);
eth_copy_and_sum(skb, sk_buff[0]->data, pkt_size, 0);
skb_copy_to_linear_data(skb, sk_buff[0]->data, pkt_size);
*sk_buff = skb;
sis190_give_to_asic(desc, rx_buf_sz);
ret = 0;


@ -1456,7 +1456,7 @@ static int __netdev_rx(struct net_device *dev, int *quota)
pci_dma_sync_single_for_cpu(np->pci_dev,
np->rx_info[entry].mapping,
pkt_len, PCI_DMA_FROMDEVICE);
eth_copy_and_sum(skb, np->rx_info[entry].skb->data, pkt_len, 0);
skb_copy_to_linear_data(skb, np->rx_info[entry].skb->data, pkt_len);
pci_dma_sync_single_for_device(np->pci_dev,
np->rx_info[entry].mapping,
pkt_len, PCI_DMA_FROMDEVICE);


@ -777,7 +777,7 @@ static void sun3_82586_rcv_int(struct net_device *dev)
{
skb_reserve(skb,2);
skb_put(skb,totlen);
eth_copy_and_sum(skb,(char *) p->base+swab32((unsigned long) rbd->buffer),totlen,0);
skb_copy_to_linear_data(skb,(char *) p->base+swab32((unsigned long) rbd->buffer),totlen);
skb->protocol=eth_type_trans(skb,dev);
netif_rx(skb);
p->stats.rx_packets++;


@ -853,10 +853,9 @@ static int lance_rx( struct net_device *dev )
skb_reserve( skb, 2 ); /* 16 byte align */
skb_put( skb, pkt_len ); /* Make room */
// skb_copy_to_linear_data(skb, PKTBUF_ADDR(head), pkt_len);
eth_copy_and_sum(skb,
skb_copy_to_linear_data(skb,
PKTBUF_ADDR(head),
pkt_len, 0);
pkt_len);
skb->protocol = eth_type_trans( skb, dev );
netif_rx( skb );


@ -860,7 +860,7 @@ static void bigmac_rx(struct bigmac *bp)
sbus_dma_sync_single_for_cpu(bp->bigmac_sdev,
this->rx_addr, len,
SBUS_DMA_FROMDEVICE);
eth_copy_and_sum(copy_skb, (unsigned char *)skb->data, len, 0);
skb_copy_to_linear_data(copy_skb, (unsigned char *)skb->data, len);
sbus_dma_sync_single_for_device(bp->bigmac_sdev,
this->rx_addr, len,
SBUS_DMA_FROMDEVICE);


@ -1313,7 +1313,7 @@ static void rx_poll(unsigned long data)
np->rx_buf_sz,
PCI_DMA_FROMDEVICE);
eth_copy_and_sum(skb, np->rx_skbuff[entry]->data, pkt_len, 0);
skb_copy_to_linear_data(skb, np->rx_skbuff[entry]->data, pkt_len);
pci_dma_sync_single_for_device(np->pci_dev,
desc->frag[0].addr,
np->rx_buf_sz,


@ -549,9 +549,9 @@ static void lance_rx_dvma(struct net_device *dev)
skb_reserve(skb, 2); /* 16 byte align */
skb_put(skb, len); /* make room */
eth_copy_and_sum(skb,
skb_copy_to_linear_data(skb,
(unsigned char *)&(ib->rx_buf [entry][0]),
len, 0);
len);
skb->protocol = eth_type_trans(skb, dev);
netif_rx(skb);
dev->last_rx = jiffies;


@ -439,8 +439,8 @@ static void qe_rx(struct sunqe *qep)
} else {
skb_reserve(skb, 2);
skb_put(skb, len);
eth_copy_and_sum(skb, (unsigned char *) this_qbuf,
len, 0);
skb_copy_to_linear_data(skb, (unsigned char *) this_qbuf,
len);
skb->protocol = eth_type_trans(skb, qep->dev);
netif_rx(skb);
qep->dev->last_rx = jiffies;


@ -64,8 +64,8 @@
#define DRV_MODULE_NAME "tg3"
#define PFX DRV_MODULE_NAME ": "
#define DRV_MODULE_VERSION "3.77"
#define DRV_MODULE_RELDATE "May 31, 2007"
#define DRV_MODULE_VERSION "3.78"
#define DRV_MODULE_RELDATE "July 11, 2007"
#define TG3_DEF_MAC_MODE 0
#define TG3_DEF_RX_MODE 0
@ -721,6 +721,44 @@ static int tg3_writephy(struct tg3 *tp, int reg, u32 val)
return ret;
}
static void tg3_phy_toggle_automdix(struct tg3 *tp, int enable)
{
u32 phy;
if (!(tp->tg3_flags2 & TG3_FLG2_5705_PLUS) ||
(tp->tg3_flags2 & TG3_FLG2_ANY_SERDES))
return;
if (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5906) {
u32 ephy;
if (!tg3_readphy(tp, MII_TG3_EPHY_TEST, &ephy)) {
tg3_writephy(tp, MII_TG3_EPHY_TEST,
ephy | MII_TG3_EPHY_SHADOW_EN);
if (!tg3_readphy(tp, MII_TG3_EPHYTST_MISCCTRL, &phy)) {
if (enable)
phy |= MII_TG3_EPHYTST_MISCCTRL_MDIX;
else
phy &= ~MII_TG3_EPHYTST_MISCCTRL_MDIX;
tg3_writephy(tp, MII_TG3_EPHYTST_MISCCTRL, phy);
}
tg3_writephy(tp, MII_TG3_EPHY_TEST, ephy);
}
} else {
phy = MII_TG3_AUXCTL_MISC_RDSEL_MISC |
MII_TG3_AUXCTL_SHDWSEL_MISC;
if (!tg3_writephy(tp, MII_TG3_AUX_CTRL, phy) &&
!tg3_readphy(tp, MII_TG3_AUX_CTRL, &phy)) {
if (enable)
phy |= MII_TG3_AUXCTL_MISC_FORCE_AMDIX;
else
phy &= ~MII_TG3_AUXCTL_MISC_FORCE_AMDIX;
phy |= MII_TG3_AUXCTL_MISC_WREN;
tg3_writephy(tp, MII_TG3_AUX_CTRL, phy);
}
}
}
static void tg3_phy_set_wirespeed(struct tg3 *tp)
{
u32 val;
@ -1045,23 +1083,11 @@ out:
}
if (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5906) {
u32 phy_reg;
/* adjust output voltage */
tg3_writephy(tp, MII_TG3_EPHY_PTEST, 0x12);
if (!tg3_readphy(tp, MII_TG3_EPHY_TEST, &phy_reg)) {
u32 phy_reg2;
tg3_writephy(tp, MII_TG3_EPHY_TEST,
phy_reg | MII_TG3_EPHY_SHADOW_EN);
/* Enable auto-MDIX */
if (!tg3_readphy(tp, 0x10, &phy_reg2))
tg3_writephy(tp, 0x10, phy_reg2 | 0x4000);
tg3_writephy(tp, MII_TG3_EPHY_TEST, phy_reg);
}
}
tg3_phy_toggle_automdix(tp, 1);
tg3_phy_set_wirespeed(tp);
return 0;
}
@ -1162,6 +1188,19 @@ static void tg3_frob_aux_power(struct tg3 *tp)
}
}
static int tg3_5700_link_polarity(struct tg3 *tp, u32 speed)
{
if (tp->led_ctrl == LED_CTRL_MODE_PHY_2)
return 1;
else if ((tp->phy_id & PHY_ID_MASK) == PHY_ID_BCM5411) {
if (speed != SPEED_10)
return 1;
} else if (speed == SPEED_10)
return 1;
return 0;
}
static int tg3_setup_phy(struct tg3 *, int);
#define RESET_KIND_SHUTDOWN 0
@ -1320,9 +1359,17 @@ static int tg3_set_power_state(struct tg3 *tp, pci_power_t state)
else
mac_mode = MAC_MODE_PORT_MODE_MII;
if (GET_ASIC_REV(tp->pci_chip_rev_id) != ASIC_REV_5700 ||
!(tp->tg3_flags & TG3_FLAG_WOL_SPEED_100MB))
mac_mode |= MAC_MODE_LINK_POLARITY;
mac_mode |= tp->mac_mode & MAC_MODE_LINK_POLARITY;
if (GET_ASIC_REV(tp->pci_chip_rev_id) ==
ASIC_REV_5700) {
u32 speed = (tp->tg3_flags &
TG3_FLAG_WOL_SPEED_100MB) ?
SPEED_100 : SPEED_10;
if (tg3_5700_link_polarity(tp, speed))
mac_mode |= MAC_MODE_LINK_POLARITY;
else
mac_mode &= ~MAC_MODE_LINK_POLARITY;
}
} else {
mac_mode = MAC_MODE_PORT_MODE_TBI;
}
@ -1990,15 +2037,12 @@ relink:
if (tp->link_config.active_duplex == DUPLEX_HALF)
tp->mac_mode |= MAC_MODE_HALF_DUPLEX;
tp->mac_mode &= ~MAC_MODE_LINK_POLARITY;
if (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5700) {
if ((tp->led_ctrl == LED_CTRL_MODE_PHY_2) ||
(current_link_up == 1 &&
tp->link_config.active_speed == SPEED_10))
tp->mac_mode |= MAC_MODE_LINK_POLARITY;
} else {
if (current_link_up == 1)
if (current_link_up == 1 &&
tg3_5700_link_polarity(tp, tp->link_config.active_speed))
tp->mac_mode |= MAC_MODE_LINK_POLARITY;
else
tp->mac_mode &= ~MAC_MODE_LINK_POLARITY;
}
/* ??? Without this setting Netgear GA302T PHY does not
@ -2639,6 +2683,9 @@ static int tg3_setup_fiber_by_hand(struct tg3 *tp, u32 mac_status)
tw32_f(MAC_MODE, (tp->mac_mode | MAC_MODE_SEND_CONFIGS));
udelay(40);
tw32_f(MAC_MODE, tp->mac_mode);
udelay(40);
}
out:
@ -2698,10 +2745,6 @@ static int tg3_setup_fiber_phy(struct tg3 *tp, int force_reset)
else
current_link_up = tg3_setup_fiber_by_hand(tp, mac_status);
tp->mac_mode &= ~MAC_MODE_LINK_POLARITY;
tw32_f(MAC_MODE, tp->mac_mode);
udelay(40);
tp->hw_status->status =
(SD_STATUS_UPDATED |
(tp->hw_status->status & ~SD_STATUS_LINK_CHG));
@ -3512,9 +3555,9 @@ static inline int tg3_irq_sync(struct tg3 *tp)
*/
static inline void tg3_full_lock(struct tg3 *tp, int irq_sync)
{
spin_lock_bh(&tp->lock);
if (irq_sync)
tg3_irq_quiesce(tp);
spin_lock_bh(&tp->lock);
}
static inline void tg3_full_unlock(struct tg3 *tp)
@ -6444,6 +6487,10 @@ static int tg3_reset_hw(struct tg3 *tp, int reset_phy)
tp->mac_mode = MAC_MODE_TXSTAT_ENABLE | MAC_MODE_RXSTAT_ENABLE |
MAC_MODE_TDE_ENABLE | MAC_MODE_RDE_ENABLE | MAC_MODE_FHDE_ENABLE;
if (!(tp->tg3_flags2 & TG3_FLG2_5705_PLUS) &&
!(tp->tg3_flags2 & TG3_FLG2_PHY_SERDES) &&
GET_ASIC_REV(tp->pci_chip_rev_id) != ASIC_REV_5700)
tp->mac_mode |= MAC_MODE_LINK_POLARITY;
tw32_f(MAC_MODE, tp->mac_mode | MAC_MODE_RXSTAT_CLEAR | MAC_MODE_TXSTAT_CLEAR);
udelay(40);
@ -8805,7 +8852,9 @@ static int tg3_run_loopback(struct tg3 *tp, int loopback_mode)
return 0;
mac_mode = (tp->mac_mode & ~MAC_MODE_PORT_MODE_MASK) |
MAC_MODE_PORT_INT_LPBACK | MAC_MODE_LINK_POLARITY;
MAC_MODE_PORT_INT_LPBACK;
if (!(tp->tg3_flags2 & TG3_FLG2_5705_PLUS))
mac_mode |= MAC_MODE_LINK_POLARITY;
if (tp->tg3_flags & TG3_FLAG_10_100_ONLY)
mac_mode |= MAC_MODE_PORT_MODE_MII;
else
@ -8824,19 +8873,18 @@ static int tg3_run_loopback(struct tg3 *tp, int loopback_mode)
phytest | MII_TG3_EPHY_SHADOW_EN);
if (!tg3_readphy(tp, 0x1b, &phy))
tg3_writephy(tp, 0x1b, phy & ~0x20);
if (!tg3_readphy(tp, 0x10, &phy))
tg3_writephy(tp, 0x10, phy & ~0x4000);
tg3_writephy(tp, MII_TG3_EPHY_TEST, phytest);
}
val = BMCR_LOOPBACK | BMCR_FULLDPLX | BMCR_SPEED100;
} else
val = BMCR_LOOPBACK | BMCR_FULLDPLX | BMCR_SPEED1000;
tg3_phy_toggle_automdix(tp, 0);
tg3_writephy(tp, MII_BMCR, val);
udelay(40);
mac_mode = (tp->mac_mode & ~MAC_MODE_PORT_MODE_MASK) |
MAC_MODE_LINK_POLARITY;
mac_mode = tp->mac_mode & ~MAC_MODE_PORT_MODE_MASK;
if (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5906) {
tg3_writephy(tp, MII_TG3_EPHY_PTEST, 0x1800);
mac_mode |= MAC_MODE_PORT_MODE_MII;
@ -8849,8 +8897,11 @@ static int tg3_run_loopback(struct tg3 *tp, int loopback_mode)
udelay(10);
tw32_f(MAC_RX_MODE, tp->rx_mode);
}
if ((tp->phy_id & PHY_ID_MASK) == PHY_ID_BCM5401) {
mac_mode &= ~MAC_MODE_LINK_POLARITY;
if (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5700) {
if ((tp->phy_id & PHY_ID_MASK) == PHY_ID_BCM5401)
mac_mode &= ~MAC_MODE_LINK_POLARITY;
else if ((tp->phy_id & PHY_ID_MASK) == PHY_ID_BCM5411)
mac_mode |= MAC_MODE_LINK_POLARITY;
tg3_writephy(tp, MII_TG3_EXT_CTRL,
MII_TG3_EXT_CTRL_LNK3_LED_MODE);
}
@ -9116,10 +9167,10 @@ static void tg3_vlan_rx_register(struct net_device *dev, struct vlan_group *grp)
/* Update RX_MODE_KEEP_VLAN_TAG bit in RX_MODE register. */
__tg3_set_rx_mode(dev);
tg3_full_unlock(tp);
if (netif_running(dev))
tg3_netif_start(tp);
tg3_full_unlock(tp);
}
#endif
@ -9410,11 +9461,13 @@ static void __devinit tg3_get_5755_nvram_info(struct tg3 *tp)
case FLASH_5755VENDOR_ATMEL_FLASH_1:
case FLASH_5755VENDOR_ATMEL_FLASH_2:
case FLASH_5755VENDOR_ATMEL_FLASH_3:
case FLASH_5755VENDOR_ATMEL_FLASH_5:
tp->nvram_jedecnum = JEDEC_ATMEL;
tp->tg3_flags |= TG3_FLAG_NVRAM_BUFFERED;
tp->tg3_flags2 |= TG3_FLG2_FLASH;
tp->nvram_pagesize = 264;
if (nvcfg1 == FLASH_5755VENDOR_ATMEL_FLASH_1)
if (nvcfg1 == FLASH_5755VENDOR_ATMEL_FLASH_1 ||
nvcfg1 == FLASH_5755VENDOR_ATMEL_FLASH_5)
tp->nvram_size = (protect ? 0x3e200 : 0x80000);
else if (nvcfg1 == FLASH_5755VENDOR_ATMEL_FLASH_2)
tp->nvram_size = (protect ? 0x1f200 : 0x40000);
@ -11944,12 +11997,11 @@ static int __devinit tg3_init_one(struct pci_dev *pdev,
* checksumming.
*/
if ((tp->tg3_flags & TG3_FLAG_BROKEN_CHECKSUMS) == 0) {
dev->features |= NETIF_F_IP_CSUM | NETIF_F_SG;
if (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5755 ||
GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5787)
dev->features |= NETIF_F_HW_CSUM;
else
dev->features |= NETIF_F_IP_CSUM;
dev->features |= NETIF_F_SG;
dev->features |= NETIF_F_IPV6_CSUM;
tp->tg3_flags |= TG3_FLAG_RX_CHECKSUMS;
} else
tp->tg3_flags &= ~TG3_FLAG_RX_CHECKSUMS;


@ -1467,6 +1467,7 @@
#define FLASH_5755VENDOR_ATMEL_FLASH_2 0x03400002
#define FLASH_5755VENDOR_ATMEL_FLASH_3 0x03400000
#define FLASH_5755VENDOR_ATMEL_FLASH_4 0x00000003
#define FLASH_5755VENDOR_ATMEL_FLASH_5 0x02000003
#define FLASH_5755VENDOR_ATMEL_EEPROM_64KHZ 0x03c00003
#define FLASH_5755VENDOR_ATMEL_EEPROM_376KHZ 0x03c00002
#define FLASH_5787VENDOR_ATMEL_EEPROM_64KHZ 0x03000003
@ -1642,6 +1643,11 @@
#define MII_TG3_AUX_CTRL 0x18 /* auxiliary control register */
#define MII_TG3_AUXCTL_MISC_WREN 0x8000
#define MII_TG3_AUXCTL_MISC_FORCE_AMDIX 0x0200
#define MII_TG3_AUXCTL_MISC_RDSEL_MISC 0x7000
#define MII_TG3_AUXCTL_SHDWSEL_MISC 0x0007
#define MII_TG3_AUX_STAT 0x19 /* auxiliary status register */
#define MII_TG3_AUX_STAT_LPASS 0x0004
#define MII_TG3_AUX_STAT_SPDMASK 0x0700
@ -1667,6 +1673,9 @@
#define MII_TG3_EPHY_TEST 0x1f /* 5906 PHY register */
#define MII_TG3_EPHY_SHADOW_EN 0x80
#define MII_TG3_EPHYTST_MISCCTRL 0x10 /* 5906 EPHY misc ctrl shadow register */
#define MII_TG3_EPHYTST_MISCCTRL_MDIX 0x4000
#define MII_TG3_TEST1 0x1e
#define MII_TG3_TEST1_TRIM_EN 0x0010
#define MII_TG3_TEST1_CRC_EN 0x8000


@ -197,8 +197,8 @@ int tulip_poll(struct net_device *dev, int *budget)
tp->rx_buffers[entry].mapping,
pkt_len, PCI_DMA_FROMDEVICE);
#if ! defined(__alpha__)
eth_copy_and_sum(skb, tp->rx_buffers[entry].skb->data,
pkt_len, 0);
skb_copy_to_linear_data(skb, tp->rx_buffers[entry].skb->data,
pkt_len);
skb_put(skb, pkt_len);
#else
memcpy(skb_put(skb, pkt_len),
@ -420,8 +420,8 @@ static int tulip_rx(struct net_device *dev)
tp->rx_buffers[entry].mapping,
pkt_len, PCI_DMA_FROMDEVICE);
#if ! defined(__alpha__)
eth_copy_and_sum(skb, tp->rx_buffers[entry].skb->data,
pkt_len, 0);
skb_copy_to_linear_data(skb, tp->rx_buffers[entry].skb->data,
pkt_len);
skb_put(skb, pkt_len);
#else
memcpy(skb_put(skb, pkt_len),


@ -1232,7 +1232,7 @@ static int netdev_rx(struct net_device *dev)
pci_dma_sync_single_for_cpu(np->pci_dev,np->rx_addr[entry],
np->rx_skbuff[entry]->len,
PCI_DMA_FROMDEVICE);
eth_copy_and_sum(skb, np->rx_skbuff[entry]->data, pkt_len, 0);
skb_copy_to_linear_data(skb, np->rx_skbuff[entry]->data, pkt_len);
skb_put(skb, pkt_len);
pci_dma_sync_single_for_device(np->pci_dev,np->rx_addr[entry],
np->rx_skbuff[entry]->len,


@ -1208,7 +1208,7 @@ static void investigate_read_descriptor(struct net_device *dev,struct xircom_pri
goto out;
}
skb_reserve(skb, 2);
eth_copy_and_sum(skb, (unsigned char*)&card->rx_buffer[bufferoffset / 4], pkt_len, 0);
skb_copy_to_linear_data(skb, (unsigned char*)&card->rx_buffer[bufferoffset / 4], pkt_len);
skb_put(skb, pkt_len);
skb->protocol = eth_type_trans(skb, dev);
netif_rx(skb);


@ -1242,8 +1242,8 @@ xircom_rx(struct net_device *dev)
&& (skb = dev_alloc_skb(pkt_len + 2)) != NULL) {
skb_reserve(skb, 2); /* 16 byte align the IP header */
#if ! defined(__alpha__)
eth_copy_and_sum(skb, bus_to_virt(tp->rx_ring[entry].buffer1),
pkt_len, 0);
skb_copy_to_linear_data(skb, bus_to_virt(tp->rx_ring[entry].buffer1),
pkt_len);
skb_put(skb, pkt_len);
#else
memcpy(skb_put(skb, pkt_len),


@ -432,6 +432,7 @@ static void tun_setup(struct net_device *dev)
init_waitqueue_head(&tun->read_wait);
tun->owner = -1;
tun->group = -1;
SET_MODULE_OWNER(dev);
dev->open = tun_net_open;
@ -467,8 +468,11 @@ static int tun_set_iff(struct file *file, struct ifreq *ifr)
return -EBUSY;
/* Check permissions */
if (tun->owner != -1 &&
current->euid != tun->owner && !capable(CAP_NET_ADMIN))
if (((tun->owner != -1 &&
current->euid != tun->owner) ||
(tun->group != -1 &&
current->egid != tun->group)) &&
!capable(CAP_NET_ADMIN))
return -EPERM;
}
else if (__dev_get_by_name(ifr->ifr_name))
@ -610,6 +614,13 @@ static int tun_chr_ioctl(struct inode *inode, struct file *file,
DBG(KERN_INFO "%s: owner set to %d\n", tun->dev->name, tun->owner);
break;
case TUNSETGROUP:
/* Set group of the device */
tun->group= (gid_t) arg;
DBG(KERN_INFO "%s: group set to %d\n", tun->dev->name, tun->group);
break;
case TUNSETLINK:
/* Only allow setting the type when the interface is down */
if (tun->dev->flags & IFF_UP) {


@ -1703,7 +1703,7 @@ typhoon_rx(struct typhoon *tp, struct basic_ring *rxRing, volatile u32 * ready,
pci_dma_sync_single_for_cpu(tp->pdev, dma_addr,
PKT_BUF_SZ,
PCI_DMA_FROMDEVICE);
eth_copy_and_sum(new_skb, skb->data, pkt_len, 0);
skb_copy_to_linear_data(new_skb, skb->data, pkt_len);
pci_dma_sync_single_for_device(tp->pdev, dma_addr,
PKT_BUF_SZ,
PCI_DMA_FROMDEVICE);


@ -255,7 +255,7 @@ static void catc_rx_done(struct urb *urb)
if (!(skb = dev_alloc_skb(pkt_len)))
return;
eth_copy_and_sum(skb, pkt_start + pkt_offset, pkt_len, 0);
skb_copy_to_linear_data(skb, pkt_start + pkt_offset, pkt_len);
skb_put(skb, pkt_len);
skb->protocol = eth_type_trans(skb, catc->netdev);


@ -635,7 +635,7 @@ static void kaweth_usb_receive(struct urb *urb)
skb_reserve(skb, 2); /* Align IP on 16 byte boundaries */
eth_copy_and_sum(skb, kaweth->rx_buf + 2, pkt_len, 0);
skb_copy_to_linear_data(skb, kaweth->rx_buf + 2, pkt_len);
skb_put(skb, pkt_len);


@ -1492,9 +1492,9 @@ static int rhine_rx(struct net_device *dev, int limit)
rp->rx_buf_sz,
PCI_DMA_FROMDEVICE);
eth_copy_and_sum(skb,
skb_copy_to_linear_data(skb,
rp->rx_skbuff[entry]->data,
pkt_len, 0);
pkt_len);
skb_put(skb, pkt_len);
pci_dma_sync_single_for_device(rp->pdev,
rp->rx_skbuff_dma[entry],


@ -1011,7 +1011,7 @@ static inline void wl3501_md_ind_interrupt(struct net_device *dev,
} else {
skb->dev = dev;
skb_reserve(skb, 2); /* IP headers on 16 bytes boundaries */
eth_copy_and_sum(skb, (unsigned char *)&sig.daddr, 12, 0);
skb_copy_to_linear_data(skb, (unsigned char *)&sig.daddr, 12);
wl3501_receive(this, skb->data, pkt_len);
skb_put(skb, pkt_len);
skb->protocol = eth_type_trans(skb, dev);


@ -1137,7 +1137,7 @@ static int yellowfin_rx(struct net_device *dev)
if (skb == NULL)
break;
skb_reserve(skb, 2); /* 16 byte align the IP header */
eth_copy_and_sum(skb, rx_skb->data, pkt_len, 0);
skb_copy_to_linear_data(skb, rx_skb->data, pkt_len);
skb_put(skb, pkt_len);
pci_dma_sync_single_for_device(yp->pci_dev, desc->addr,
yp->rx_buf_sz,


@ -91,7 +91,6 @@ header-y += in6.h
header-y += in_route.h
header-y += ioctl.h
header-y += ipmi_msgdefs.h
header-y += ip_mp_alg.h
header-y += ipsec.h
header-y += ipx.h
header-y += irda.h
@ -226,6 +225,7 @@ unifdef-y += if_fddi.h
unifdef-y += if_frad.h
unifdef-y += if_ltalk.h
unifdef-y += if_link.h
unifdef-y += if_pppol2tp.h
unifdef-y += if_pppox.h
unifdef-y += if_shaper.h
unifdef-y += if_tr.h


@ -39,13 +39,8 @@ extern void eth_header_cache_update(struct hh_cache *hh, struct net_device *dev
extern int eth_header_cache(struct neighbour *neigh,
struct hh_cache *hh);
extern struct net_device *alloc_etherdev(int sizeof_priv);
static inline void eth_copy_and_sum (struct sk_buff *dest,
const unsigned char *src,
int len, int base)
{
memcpy (dest->data, src, len);
}
extern struct net_device *alloc_etherdev_mq(int sizeof_priv, unsigned int queue_count);
#define alloc_etherdev(sizeof_priv) alloc_etherdev_mq(sizeof_priv, 1)
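The inline removed above shows that eth_copy_and_sum never actually checksummed anything; it was a plain memcpy, which is why the tree-wide conversion to skb_copy_to_linear_data in the driver hunks earlier is behaviour-preserving. A minimal sketch of the equivalence, assuming the usual skbuff layout:

static inline void skb_copy_to_linear_data_sketch(struct sk_buff *skb,
						  const void *from,
						  const unsigned int len)
{
	/* Same effect as the removed eth_copy_and_sum(skb, from, len, 0). */
	memcpy(skb->data, from, len);
}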
/**
 * is_zero_ether_addr - Determine if given Ethernet address is all zeros.


@ -76,6 +76,8 @@ enum
#define IFLA_WEIGHT IFLA_WEIGHT
IFLA_OPERSTATE,
IFLA_LINKMODE,
IFLA_LINKINFO,
#define IFLA_LINKINFO IFLA_LINKINFO
__IFLA_MAX
};
@ -140,4 +142,49 @@ struct ifla_cacheinfo
__u32 retrans_time;
};
enum
{
IFLA_INFO_UNSPEC,
IFLA_INFO_KIND,
IFLA_INFO_DATA,
IFLA_INFO_XSTATS,
__IFLA_INFO_MAX,
};
#define IFLA_INFO_MAX (__IFLA_INFO_MAX - 1)
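The IFLA_LINKINFO/IFLA_INFO_KIND attributes added here are what allows userspace to instantiate the rtnl_link-converted drivers (dummy, ifb) earlier in this series. A hedged sketch of the RTM_NEWLINK request a tool like ip(8) might build; ACK handling is omitted and the helper names (addattr, create_link) are illustrative:

#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>

static void addattr(struct nlmsghdr *nh, int type, const void *data, int len)
{
	struct rtattr *rta = (struct rtattr *)((char *)nh + NLMSG_ALIGN(nh->nlmsg_len));

	rta->rta_type = type;
	rta->rta_len = RTA_LENGTH(len);
	if (len)
		memcpy(RTA_DATA(rta), data, len);
	nh->nlmsg_len = NLMSG_ALIGN(nh->nlmsg_len) + RTA_ALIGN(rta->rta_len);
}

static int create_link(const char *kind, const char *name)
{
	struct {
		struct nlmsghdr nh;
		struct ifinfomsg ifi;
		char buf[256];
	} req;
	struct rtattr *linkinfo;
	int fd, ret;

	memset(&req, 0, sizeof(req));
	req.nh.nlmsg_len = NLMSG_LENGTH(sizeof(req.ifi));
	req.nh.nlmsg_type = RTM_NEWLINK;
	req.nh.nlmsg_flags = NLM_F_REQUEST | NLM_F_CREATE | NLM_F_EXCL;
	req.ifi.ifi_family = AF_UNSPEC;

	addattr(&req.nh, IFLA_IFNAME, name, strlen(name) + 1);

	/* Open a nested IFLA_LINKINFO and put IFLA_INFO_KIND inside it. */
	linkinfo = (struct rtattr *)((char *)&req.nh + NLMSG_ALIGN(req.nh.nlmsg_len));
	addattr(&req.nh, IFLA_LINKINFO, NULL, 0);
	addattr(&req.nh, IFLA_INFO_KIND, kind, strlen(kind) + 1);
	linkinfo->rta_len = (char *)&req.nh + NLMSG_ALIGN(req.nh.nlmsg_len)
			    - (char *)linkinfo;

	fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);
	if (fd < 0)
		return -1;
	ret = send(fd, &req, req.nh.nlmsg_len, 0);
	close(fd);
	return ret < 0 ? -1 : 0;
}

create_link("dummy", "dummy1") would then reach dummy_link_ops through its "dummy" kind, with MODULE_ALIAS_RTNL_LINK("dummy") providing module autoloading.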
/* VLAN section */
enum
{
IFLA_VLAN_UNSPEC,
IFLA_VLAN_ID,
IFLA_VLAN_FLAGS,
IFLA_VLAN_EGRESS_QOS,
IFLA_VLAN_INGRESS_QOS,
__IFLA_VLAN_MAX,
};
#define IFLA_VLAN_MAX (__IFLA_VLAN_MAX - 1)
struct ifla_vlan_flags {
__u32 flags;
__u32 mask;
};
enum
{
IFLA_VLAN_QOS_UNSPEC,
IFLA_VLAN_QOS_MAPPING,
__IFLA_VLAN_QOS_MAX
};
#define IFLA_VLAN_QOS_MAX (__IFLA_VLAN_QOS_MAX - 1)
struct ifla_vlan_qos_mapping
{
__u32 from;
__u32 to;
};
#endif /* _LINUX_IF_LINK_H */


@ -110,6 +110,21 @@ struct ifpppcstatsreq {
struct ppp_comp_stats stats;
};
/* For PPPIOCGL2TPSTATS */
struct pppol2tp_ioc_stats {
__u16 tunnel_id; /* redundant */
__u16 session_id; /* if zero, get tunnel stats */
__u32 using_ipsec:1; /* valid only for session_id == 0 */
aligned_u64 tx_packets;
aligned_u64 tx_bytes;
aligned_u64 tx_errors;
aligned_u64 rx_packets;
aligned_u64 rx_bytes;
aligned_u64 rx_seq_discards;
aligned_u64 rx_oos_packets;
aligned_u64 rx_errors;
};
#define ifr__name b.ifr_ifrn.ifrn_name
#define stats_ptr b.ifr_ifru.ifru_data
@ -146,6 +161,7 @@ struct ifpppcstatsreq {
#define PPPIOCDISCONN _IO('t', 57) /* disconnect channel */
#define PPPIOCATTCHAN _IOW('t', 56, int) /* attach to ppp channel */
#define PPPIOCGCHAN _IOR('t', 55, int) /* get ppp channel number */
#define PPPIOCGL2TPSTATS _IOR('t', 54, struct pppol2tp_ioc_stats)
#define SIOCGPPPSTATS (SIOCDEVPRIVATE + 0)
#define SIOCGPPPVER (SIOCDEVPRIVATE + 1) /* NEVER change this!! */
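A hedged sketch of reading the new counters from userspace; it assumes tunnel_fd is an open PPPoL2TP socket and, per the struct comment above, that session_id == 0 selects tunnel-level statistics:

#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/if_ppp.h>

static void print_l2tp_tunnel_stats(int tunnel_fd)
{
	struct pppol2tp_ioc_stats st;

	memset(&st, 0, sizeof(st));
	st.session_id = 0;	/* 0 => tunnel-level statistics */
	if (ioctl(tunnel_fd, PPPIOCGL2TPSTATS, &st) == 0)
		printf("tx %llu / rx %llu packets, %llu rx seq discards\n",
		       (unsigned long long)st.tx_packets,
		       (unsigned long long)st.rx_packets,
		       (unsigned long long)st.rx_seq_discards);
}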


@ -0,0 +1,69 @@
/***************************************************************************
* Linux PPP over L2TP (PPPoL2TP) Socket Implementation (RFC 2661)
*
* This file supplies definitions required by the PPP over L2TP driver
* (pppol2tp.c). All version information wrt this file is located in pppol2tp.c
*
* License:
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*
*/
#ifndef __LINUX_IF_PPPOL2TP_H
#define __LINUX_IF_PPPOL2TP_H
#include <asm/types.h>
#ifdef __KERNEL__
#include <linux/in.h>
#endif
/* Structure used to connect() the socket to a particular tunnel UDP
* socket.
*/
struct pppol2tp_addr
{
pid_t pid; /* pid that owns the fd.
* 0 => current */
int fd; /* FD of UDP socket to use */
struct sockaddr_in addr; /* IP address and port to send to */
__be16 s_tunnel, s_session; /* For matching incoming packets */
__be16 d_tunnel, d_session; /* For sending outgoing packets */
};
/* Socket options:
* DEBUG - bitmask of debug message categories
* SENDSEQ - 0 => don't send packets with sequence numbers
* 1 => send packets with sequence numbers
* RECVSEQ - 0 => receive packet sequence numbers are optional
* 1 => drop receive packets without sequence numbers
* LNSMODE - 0 => act as LAC.
* 1 => act as LNS.
* REORDERTO - reorder timeout (in millisecs). If 0, don't try to reorder.
*/
enum {
PPPOL2TP_SO_DEBUG = 1,
PPPOL2TP_SO_RECVSEQ = 2,
PPPOL2TP_SO_SENDSEQ = 3,
PPPOL2TP_SO_LNSMODE = 4,
PPPOL2TP_SO_REORDERTO = 5,
};
/* Debug message categories for the DEBUG socket option */
enum {
PPPOL2TP_MSG_DEBUG = (1 << 0), /* verbose debug (if
* compiled in) */
PPPOL2TP_MSG_CONTROL = (1 << 1), /* userspace - kernel
* interface */
PPPOL2TP_MSG_SEQ = (1 << 2), /* sequence numbers */
PPPOL2TP_MSG_DATA = (1 << 3), /* data packets */
};
#endif
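These options are driven with setsockopt() at the SOL_PPPOL2TP level added to socket.h later in this merge; on a libc that does not yet carry it, the value (273) would have to be defined locally. A minimal sketch, assuming fd is a connected PPPoL2TP session socket (see the connect() sketch after sockaddr_pppol2tp below):

#include <sys/socket.h>
#include <linux/if_pppol2tp.h>

static int enable_l2tp_debug(int fd)
{
	/* log control-channel and sequence-number events */
	int mask = PPPOL2TP_MSG_CONTROL | PPPOL2TP_MSG_SEQ;

	return setsockopt(fd, SOL_PPPOL2TP, PPPOL2TP_SO_DEBUG,
			  &mask, sizeof(mask));
}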


@ -27,6 +27,7 @@
#include <asm/semaphore.h>
#include <linux/ppp_channel.h>
#endif /* __KERNEL__ */
#include <linux/if_pppol2tp.h>
/* For user-space programs to pick up these definitions
* which they wouldn't get otherwise without defining __KERNEL__
@ -50,7 +51,8 @@ struct pppoe_addr{
* Protocols supported by AF_PPPOX
*/
#define PX_PROTO_OE 0 /* Currently just PPPoE */
#define PX_MAX_PROTO 1
#define PX_PROTO_OL2TP 1 /* Now L2TP also */
#define PX_MAX_PROTO 2
struct sockaddr_pppox {
sa_family_t sa_family; /* address family, AF_PPPOX */
@ -60,6 +62,16 @@ struct sockaddr_pppox {
}sa_addr;
}__attribute__ ((packed));
/* The use of the above union isn't viable because the size of this
* struct must stay fixed over time -- applications use sizeof(struct
* sockaddr_pppox) to fill it. We use a protocol specific sockaddr
* type instead.
*/
struct sockaddr_pppol2tp {
sa_family_t sa_family; /* address family, AF_PPPOX */
unsigned int sa_protocol; /* protocol identifier */
struct pppol2tp_addr pppol2tp;
}__attribute__ ((packed));
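Combining pppol2tp_addr with the sockaddr_pppol2tp wrapper above, a hedged sketch of session setup: tunnel_fd is a UDP socket the caller has already set up, and the tunnel/session IDs come from L2TP control-channel negotiation that this fragment does not show:

#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <linux/if_pppox.h>

static int pppol2tp_session_open(int tunnel_fd, const struct sockaddr_in *peer,
				 __be16 s_tunnel, __be16 s_session,
				 __be16 d_tunnel, __be16 d_session)
{
	struct sockaddr_pppol2tp sax;
	int fd;

	fd = socket(AF_PPPOX, SOCK_DGRAM, PX_PROTO_OL2TP);
	if (fd < 0)
		return -1;

	memset(&sax, 0, sizeof(sax));
	sax.sa_family = AF_PPPOX;
	sax.sa_protocol = PX_PROTO_OL2TP;
	sax.pppol2tp.pid = 0;			/* 0 => current process */
	sax.pppol2tp.fd = tunnel_fd;		/* the tunnel UDP socket */
	sax.pppol2tp.addr = *peer;		/* where data packets go */
	sax.pppol2tp.s_tunnel = s_tunnel;	/* match incoming packets */
	sax.pppol2tp.s_session = s_session;
	sax.pppol2tp.d_tunnel = d_tunnel;	/* stamp outgoing packets */
	sax.pppol2tp.d_session = d_session;

	if (connect(fd, (struct sockaddr *)&sax, sizeof(sax)) < 0) {
		close(fd);
		return -1;
	}
	return fd;
}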
/*********************************************************************
*


@ -36,6 +36,7 @@ struct tun_struct {
unsigned long flags;
int attached;
uid_t owner;
gid_t group;
wait_queue_head_t read_wait;
struct sk_buff_head readq;
@ -78,6 +79,7 @@ struct tun_struct {
#define TUNSETPERSIST _IOW('T', 203, int)
#define TUNSETOWNER _IOW('T', 204, int)
#define TUNSETLINK _IOW('T', 205, int)
#define TUNSETGROUP _IOW('T', 206, int)
/* TUNSETIFF ifr flags */
#define IFF_TUN 0x0001


@ -99,7 +99,7 @@ static inline void vlan_group_set_device(struct vlan_group *vg, int vlan_id,
}
struct vlan_priority_tci_mapping {
unsigned long priority;
u32 priority;
unsigned short vlan_qos; /* This should be shifted when first set, so we only do it
* at provisioning time.
* ((skb->priority << 13) & 0xE000)
@ -112,7 +112,10 @@ struct vlan_dev_info {
/** This will be the mapping that correlates skb->priority to
* 3 bits of VLAN QOS tags...
*/
unsigned long ingress_priority_map[8];
unsigned int nr_ingress_mappings;
u32 ingress_priority_map[8];
unsigned int nr_egress_mappings;
struct vlan_priority_tci_mapping *egress_priority_map[16]; /* hash table */
unsigned short vlan_id; /* The VLAN Identifier for this interface. */
@ -132,6 +135,7 @@ struct vlan_dev_info {
int old_allmulti; /* similar to above. */
int old_promiscuity; /* similar to above. */
struct net_device *real_dev; /* the underlying device/interface */
unsigned char real_dev_addr[ETH_ALEN];
struct proc_dir_entry *dent; /* Holds the proc data */
unsigned long cnt_inc_headroom_on_tx; /* How many times did we have to grow the skb on TX. */
unsigned long cnt_encap_on_xmit; /* How many times did we have to encapsulate the skb on TX. */
@ -395,6 +399,10 @@ enum vlan_ioctl_cmds {
GET_VLAN_VID_CMD /* Get the VID of this VLAN (specified by name) */
};
enum vlan_flags {
VLAN_FLAG_REORDER_HDR = 0x1,
};
enum vlan_name_types {
VLAN_NAME_TYPE_PLUS_VID, /* Name will look like: vlan0005 */
VLAN_NAME_TYPE_RAW_PLUS_VID, /* name will look like: eth1.0005 */


@ -1,22 +0,0 @@
/* ip_mp_alg.h: IPV4 multipath algorithm support, user-visible values.
*
* Copyright (C) 2004, 2005 Einar Lueck <elueck@de.ibm.com>
* Copyright (C) 2005 David S. Miller <davem@davemloft.net>
*/
#ifndef _LINUX_IP_MP_ALG_H
#define _LINUX_IP_MP_ALG_H
enum ip_mp_alg {
IP_MP_ALG_NONE,
IP_MP_ALG_RR,
IP_MP_ALG_DRR,
IP_MP_ALG_RANDOM,
IP_MP_ALG_WRANDOM,
__IP_MP_ALG_MAX
};
#define IP_MP_ALG_MAX (__IP_MP_ALG_MAX - 1)
#endif /* _LINUX_IP_MP_ALG_H */


@ -27,8 +27,8 @@ struct in6_ifreq {
int ifr6_ifindex;
};
#define IPV6_SRCRT_STRICT 0x01 /* this hop must be a neighbor */
#define IPV6_SRCRT_TYPE_0 0 /* IPv6 type 0 Routing Header */
#define IPV6_SRCRT_STRICT 0x01 /* Deprecated; will be removed */
#define IPV6_SRCRT_TYPE_0 0 /* Deprecated; will be removed */
#define IPV6_SRCRT_TYPE_2 2 /* IPv6 type 2 Routing Header */
/*
@ -247,7 +247,7 @@ struct inet6_skb_parm {
__u16 lastopt;
__u32 nhoff;
__u16 flags;
#ifdef CONFIG_IPV6_MIP6
#if defined(CONFIG_IPV6_MIP6) || defined(CONFIG_IPV6_MIP6_MODULE)
__u16 dsthao;
#endif
@ -299,8 +299,8 @@ struct ipv6_pinfo {
/* pktoption flags */
union {
struct {
__u16 srcrt:2,
osrcrt:2,
__u16 srcrt:1,
osrcrt:1,
rxinfo:1,
rxoinfo:1,
rxhlim:1,


@ -216,6 +216,34 @@ struct if_irda_req {
#define ifr_dtr ifr_ifru.ifru_line.dtr
#define ifr_rts ifr_ifru.ifru_line.rts
/* IrDA netlink definitions */
#define IRDA_NL_NAME "irda"
#define IRDA_NL_VERSION 1
enum irda_nl_commands {
IRDA_NL_CMD_UNSPEC,
IRDA_NL_CMD_SET_MODE,
IRDA_NL_CMD_GET_MODE,
__IRDA_NL_CMD_AFTER_LAST
};
#define IRDA_NL_CMD_MAX (__IRDA_NL_CMD_AFTER_LAST - 1)
enum nl80211_attrs {
IRDA_NL_ATTR_UNSPEC,
IRDA_NL_ATTR_IFNAME,
IRDA_NL_ATTR_MODE,
__IRDA_NL_ATTR_AFTER_LAST
};
#define IRDA_NL_ATTR_MAX (__IRDA_NL_ATTR_AFTER_LAST - 1)
/* IrDA modes */
#define IRDA_MODE_PRIMARY 0x1
#define IRDA_MODE_SECONDARY 0x2
#define IRDA_MODE_MONITOR 0x4
#endif /* KERNEL_IRDA_H */


@ -279,6 +279,16 @@ static inline s64 ktime_to_us(const ktime_t kt)
return (s64) tv.tv_sec * USEC_PER_SEC + tv.tv_usec;
}
static inline s64 ktime_us_delta(const ktime_t later, const ktime_t earlier)
{
return ktime_to_us(ktime_sub(later, earlier));
}
static inline ktime_t ktime_add_us(const ktime_t kt, const u64 usec)
{
return ktime_add_ns(kt, usec * 1000);
}
/*
* The resolution of the clocks. The resolution value is returned in
* the clock_getres() system call to give application programmers an


@ -108,6 +108,14 @@ struct wireless_dev;
#define MAX_HEADER (LL_MAX_HEADER + 48)
#endif
struct net_device_subqueue
{
/* Give a control state for each queue. This struct may contain
* per-queue locks in the future.
*/
unsigned long state;
};
/*
* Network device statistics. Akin to the 2.0 ether stats but
* with byte counters.
@ -177,19 +185,24 @@ struct netif_rx_stats
DECLARE_PER_CPU(struct netif_rx_stats, netdev_rx_stat);
struct dev_addr_list
{
struct dev_addr_list *next;
u8 da_addr[MAX_ADDR_LEN];
u8 da_addrlen;
int da_users;
int da_gusers;
};
/*
* We tag multicasts with these structures.
*/
struct dev_mc_list
{
struct dev_mc_list *next;
__u8 dmi_addr[MAX_ADDR_LEN];
unsigned char dmi_addrlen;
int dmi_users;
int dmi_gusers;
};
#define dev_mc_list dev_addr_list
#define dmi_addr da_addr
#define dmi_addrlen da_addrlen
#define dmi_users da_users
#define dmi_gusers da_gusers
struct hh_cache
{
@ -248,6 +261,8 @@ enum netdev_state_t
__LINK_STATE_LINKWATCH_PENDING,
__LINK_STATE_DORMANT,
__LINK_STATE_QDISC_RUNNING,
/* Set by the netpoll NAPI code */
__LINK_STATE_POLL_LIST_FROZEN,
};
@ -314,9 +329,10 @@ struct net_device
/* Net device features */
unsigned long features;
#define NETIF_F_SG 1 /* Scatter/gather IO. */
#define NETIF_F_IP_CSUM 2 /* Can checksum only TCP/UDP over IPv4. */
#define NETIF_F_IP_CSUM 2 /* Can checksum TCP/UDP over IPv4. */
#define NETIF_F_NO_CSUM 4 /* Does not require checksum. F.e. loopback. */
#define NETIF_F_HW_CSUM 8 /* Can checksum all the packets. */
#define NETIF_F_IPV6_CSUM 16 /* Can checksum TCP/UDP over IPV6 */
#define NETIF_F_HIGHDMA 32 /* Can DMA to high memory. */
#define NETIF_F_FRAGLIST 64 /* Scatter/gather IO. */
#define NETIF_F_HW_VLAN_TX 128 /* Transmit VLAN hw acceleration */
@ -325,6 +341,7 @@ struct net_device
#define NETIF_F_VLAN_CHALLENGED 1024 /* Device cannot handle VLAN packets */
#define NETIF_F_GSO 2048 /* Enable software GSO. */
#define NETIF_F_LLTX 4096 /* LockLess TX */
#define NETIF_F_MULTI_QUEUE 16384 /* Has multiple TX/RX queues */
/* Segmentation offload features */
#define NETIF_F_GSO_SHIFT 16
@ -338,8 +355,11 @@ struct net_device
/* List of features with software fallbacks. */
#define NETIF_F_GSO_SOFTWARE (NETIF_F_TSO | NETIF_F_TSO_ECN | NETIF_F_TSO6)
#define NETIF_F_GEN_CSUM (NETIF_F_NO_CSUM | NETIF_F_HW_CSUM)
#define NETIF_F_ALL_CSUM (NETIF_F_IP_CSUM | NETIF_F_GEN_CSUM)
#define NETIF_F_V4_CSUM (NETIF_F_GEN_CSUM | NETIF_F_IP_CSUM)
#define NETIF_F_V6_CSUM (NETIF_F_GEN_CSUM | NETIF_F_IPV6_CSUM)
#define NETIF_F_ALL_CSUM (NETIF_F_V4_CSUM | NETIF_F_V6_CSUM)
struct net_device *next_sched;
@ -388,7 +408,10 @@ struct net_device
unsigned char addr_len; /* hardware address length */
unsigned short dev_id; /* for shared network cards */
struct dev_mc_list *mc_list; /* Multicast mac addresses */
struct dev_addr_list *uc_list; /* Secondary unicast mac addresses */
int uc_count; /* Number of installed ucasts */
int uc_promisc;
struct dev_addr_list *mc_list; /* Multicast mac addresses */
int mc_count; /* Number of installed mcasts */
int promiscuity;
int allmulti;
@ -493,6 +516,8 @@ struct net_device
void *saddr,
unsigned len);
int (*rebuild_header)(struct sk_buff *skb);
#define HAVE_SET_RX_MODE
void (*set_rx_mode)(struct net_device *dev);
#define HAVE_MULTICAST
void (*set_multicast_list)(struct net_device *dev);
#define HAVE_SET_MAC_ADDR
@ -540,17 +565,22 @@ struct net_device
struct device dev;
/* space for optional statistics and wireless sysfs groups */
struct attribute_group *sysfs_groups[3];
/* rtnetlink link ops */
const struct rtnl_link_ops *rtnl_link_ops;
/* The TX queue control structures */
unsigned int egress_subqueue_count;
struct net_device_subqueue egress_subqueue[0];
};
#define to_net_dev(d) container_of(d, struct net_device, dev)
#define NETDEV_ALIGN 32
#define NETDEV_ALIGN_CONST (NETDEV_ALIGN - 1)
static inline void *netdev_priv(struct net_device *dev)
static inline void *netdev_priv(const struct net_device *dev)
{
return (char *)dev + ((sizeof(struct net_device)
+ NETDEV_ALIGN_CONST)
& ~NETDEV_ALIGN_CONST);
return dev->priv;
}
#define SET_MODULE_OWNER(dev) do { } while (0)
@ -702,6 +732,62 @@ static inline int netif_running(const struct net_device *dev)
return test_bit(__LINK_STATE_START, &dev->state);
}
/*
 * Routines to manage the subqueues on a device. We only need start,
* stop, and a check if it's stopped. All other device management is
* done at the overall netdevice level.
* Also test the device if we're multiqueue.
*/
static inline void netif_start_subqueue(struct net_device *dev, u16 queue_index)
{
#ifdef CONFIG_NETDEVICES_MULTIQUEUE
clear_bit(__LINK_STATE_XOFF, &dev->egress_subqueue[queue_index].state);
#endif
}
static inline void netif_stop_subqueue(struct net_device *dev, u16 queue_index)
{
#ifdef CONFIG_NETDEVICES_MULTIQUEUE
#ifdef CONFIG_NETPOLL_TRAP
if (netpoll_trap())
return;
#endif
set_bit(__LINK_STATE_XOFF, &dev->egress_subqueue[queue_index].state);
#endif
}
static inline int netif_subqueue_stopped(const struct net_device *dev,
u16 queue_index)
{
#ifdef CONFIG_NETDEVICES_MULTIQUEUE
return test_bit(__LINK_STATE_XOFF,
&dev->egress_subqueue[queue_index].state);
#else
return 0;
#endif
}
static inline void netif_wake_subqueue(struct net_device *dev, u16 queue_index)
{
#ifdef CONFIG_NETDEVICES_MULTIQUEUE
#ifdef CONFIG_NETPOLL_TRAP
if (netpoll_trap())
return;
#endif
if (test_and_clear_bit(__LINK_STATE_XOFF,
&dev->egress_subqueue[queue_index].state))
__netif_schedule(dev);
#endif
}
static inline int netif_is_multiqueue(const struct net_device *dev)
{
#ifdef CONFIG_NETDEVICES_MULTIQUEUE
return (!!(NETIF_F_MULTI_QUEUE & dev->features));
#else
return 0;
#endif
}
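A hedged sketch of how a multiqueue driver's transmit and completion paths might use these helpers; the ring bookkeeping helpers (foo_queue_on_ring, foo_ring_full) are illustrative:

/* queue_mapping is filled in by the scheduler via skb_set_queue_mapping()
 * when CONFIG_NETDEVICES_MULTIQUEUE is set. */
static int foo_hard_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
	u16 q = skb->queue_mapping;

	foo_queue_on_ring(dev, q, skb);		/* illustrative */
	if (foo_ring_full(dev, q))		/* stop only this queue */
		netif_stop_subqueue(dev, q);
	return NETDEV_TX_OK;
}

static void foo_tx_complete(struct net_device *dev, u16 q)
{
	/* descriptors freed; restart the queue if we had stopped it */
	if (netif_subqueue_stopped(dev, q))
		netif_wake_subqueue(dev, q);
}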
/* Use this variant when it is known for sure that it
* is executing from interrupt context.
@ -930,6 +1016,14 @@ static inline void netif_rx_complete(struct net_device *dev)
{
unsigned long flags;
#ifdef CONFIG_NETPOLL
/* Prevent race with netpoll - yes, this is a kludge.
* But at least it doesn't penalize the non-netpoll
* code path. */
if (test_bit(__LINK_STATE_POLL_LIST_FROZEN, &dev->state))
return;
#endif
local_irq_save(flags);
__netif_rx_complete(dev);
local_irq_restore(flags);
@ -992,15 +1086,24 @@ static inline void netif_tx_disable(struct net_device *dev)
extern void ether_setup(struct net_device *dev);
/* Support for loadable net-drivers */
extern struct net_device *alloc_netdev(int sizeof_priv, const char *name,
void (*setup)(struct net_device *));
extern struct net_device *alloc_netdev_mq(int sizeof_priv, const char *name,
void (*setup)(struct net_device *),
unsigned int queue_count);
#define alloc_netdev(sizeof_priv, name, setup) \
alloc_netdev_mq(sizeof_priv, name, setup, 1)
extern int register_netdev(struct net_device *dev);
extern void unregister_netdev(struct net_device *dev);
/* Functions used for multicast support */
extern void dev_mc_upload(struct net_device *dev);
/* Functions used for secondary unicast and multicast support */
extern void dev_set_rx_mode(struct net_device *dev);
extern void __dev_set_rx_mode(struct net_device *dev);
extern int dev_unicast_delete(struct net_device *dev, void *addr, int alen);
extern int dev_unicast_add(struct net_device *dev, void *addr, int alen);
extern int dev_mc_delete(struct net_device *dev, void *addr, int alen, int all);
extern int dev_mc_add(struct net_device *dev, void *addr, int alen, int newonly);
extern void dev_mc_discard(struct net_device *dev);
extern int __dev_addr_delete(struct dev_addr_list **list, int *count, void *addr, int alen, int all);
extern int __dev_addr_add(struct dev_addr_list **list, int *count, void *addr, int alen, int newonly);
extern void __dev_addr_discard(struct dev_addr_list **list);
extern void dev_set_promiscuity(struct net_device *dev, int inc);
extern void dev_set_allmulti(struct net_device *dev, int inc);
extern void netdev_state_change(struct net_device *dev);


@ -275,7 +275,8 @@ struct nf_queue_handler {
};
extern int nf_register_queue_handler(int pf,
struct nf_queue_handler *qh);
extern int nf_unregister_queue_handler(int pf);
extern int nf_unregister_queue_handler(int pf,
struct nf_queue_handler *qh);
extern void nf_unregister_queue_handlers(struct nf_queue_handler *qh);
extern void nf_reinject(struct sk_buff *skb,
struct nf_info *info,


@ -4,6 +4,8 @@
#include <linux/netfilter/nf_conntrack_common.h>
extern const char *pptp_msg_name[];
/* state of the control session */
enum pptp_ctrlsess_state {
PPTP_SESSION_NONE, /* no session present */


@ -141,22 +141,22 @@ struct xt_match
/* Arguments changed since 2.6.9, as this must now handle
non-linear skb, using skb_header_pointer and
skb_ip_make_writable. */
int (*match)(const struct sk_buff *skb,
const struct net_device *in,
const struct net_device *out,
const struct xt_match *match,
const void *matchinfo,
int offset,
unsigned int protoff,
int *hotdrop);
bool (*match)(const struct sk_buff *skb,
const struct net_device *in,
const struct net_device *out,
const struct xt_match *match,
const void *matchinfo,
int offset,
unsigned int protoff,
bool *hotdrop);
/* Called when user tries to insert an entry of this type. */
/* Should return true or false. */
int (*checkentry)(const char *tablename,
const void *ip,
const struct xt_match *match,
void *matchinfo,
unsigned int hook_mask);
bool (*checkentry)(const char *tablename,
const void *ip,
const struct xt_match *match,
void *matchinfo,
unsigned int hook_mask);
/* Called when entry of this type deleted. */
void (*destroy)(const struct xt_match *match, void *matchinfo);
@ -202,11 +202,11 @@ struct xt_target
hook_mask is a bitmask of hooks from which it can be
called. */
/* Should return true or false. */
int (*checkentry)(const char *tablename,
const void *entry,
const struct xt_target *target,
void *targinfo,
unsigned int hook_mask);
bool (*checkentry)(const char *tablename,
const void *entry,
const struct xt_target *target,
void *targinfo,
unsigned int hook_mask);
/* Called when entry of this type deleted. */
void (*destroy)(const struct xt_target *target, void *targinfo);


@ -0,0 +1,40 @@
#ifndef _XT_U32_H
#define _XT_U32_H 1
enum xt_u32_ops {
XT_U32_AND,
XT_U32_LEFTSH,
XT_U32_RIGHTSH,
XT_U32_AT,
};
struct xt_u32_location_element {
u_int32_t number;
u_int8_t nextop;
};
struct xt_u32_value_element {
u_int32_t min;
u_int32_t max;
};
/*
* Any way to allow for an arbitrary number of elements?
* For now, I settle with a limit of 10 each.
*/
#define XT_U32_MAXSIZE 10
struct xt_u32_test {
struct xt_u32_location_element location[XT_U32_MAXSIZE+1];
struct xt_u32_value_element value[XT_U32_MAXSIZE+1];
u_int8_t nnums;
u_int8_t nvalues;
};
struct xt_u32 {
struct xt_u32_test tests[XT_U32_MAXSIZE+1];
u_int8_t ntests;
u_int8_t invert;
};
#endif /* _XT_U32_H */
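As a rough illustration of the layout only (the interpreter semantics live in the xt_u32 match module, so the field usage here is an assumption): one test that fetches the 32-bit word at byte offset 8 and accepts values up to 0x0fffffff:

struct xt_u32 probe = {
	.ntests = 1,
	.invert = 0,
	.tests[0] = {
		.nnums   = 1,	/* one location element */
		.nvalues = 1,	/* one accepted range */
		.location[0] = { .number = 8 },	/* byte offset 8 */
		.value[0]    = { .min = 0x0, .max = 0x0fffffff },
	},
};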


@ -24,7 +24,7 @@ struct ipt_clusterip_tgt_info {
u_int16_t num_total_nodes;
u_int16_t num_local_nodes;
u_int16_t local_nodes[CLUSTERIP_MAX_NODES];
enum clusterip_hashmode hash_mode;
u_int32_t hash_mode;
u_int32_t hash_initval;
struct clusterip_config *config;


@ -44,8 +44,14 @@ struct ip6t_ip6 {
char iniface[IFNAMSIZ], outiface[IFNAMSIZ];
unsigned char iniface_mask[IFNAMSIZ], outiface_mask[IFNAMSIZ];
/* ARGH, HopByHop uses 0, so can't do 0 = ANY,
instead IP6T_F_NOPROTO must be set */
/* Upper protocol number
* - The allowed value is 0 (any) or protocol number of last parsable
* header, which is 50 (ESP), 59 (No Next Header), 135 (MH), or
* the non IPv6 extension headers.
* - The protocol numbers of IPv6 extension headers except of ESP and
* MH do not match any packets.
* - You also need to set IP6T_FLAGS_PROTO to "flags" to check protocol.
*/
u_int16_t proto;
/* TOS to match iff flags & IP6T_F_TOS */
u_int8_t tos;
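Following the new comment, a minimal selector that matches ESP needs both the protocol number and the protocol-check flag set; the flag macro name (IP6T_F_PROTO) is assumed from ip6_tables.h:

struct ip6t_ip6 esp_sel = {
	.proto = 50,		/* ESP: a parsable last header */
	.flags = IP6T_F_PROTO,	/* "check .proto", per the comment */
};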


@ -403,16 +403,13 @@ enum
* 1..32767 Reserved for ematches inside kernel tree
* 32768..65535 Free to use, not reliable
*/
enum
{
TCF_EM_CONTAINER,
TCF_EM_CMP,
TCF_EM_NBYTE,
TCF_EM_U32,
TCF_EM_META,
TCF_EM_TEXT,
__TCF_EM_MAX
};
#define TCF_EM_CONTAINER 0
#define TCF_EM_CMP 1
#define TCF_EM_NBYTE 2
#define TCF_EM_U32 3
#define TCF_EM_META 4
#define TCF_EM_TEXT 5
#define TCF_EM_MAX 5
enum
{


@ -101,6 +101,15 @@ struct tc_prio_qopt
__u8 priomap[TC_PRIO_MAX+1]; /* Map: logical priority -> PRIO band */
};
enum
{
TCA_PRIO_UNSPEC,
TCA_PRIO_MQ,
__TCA_PRIO_MAX
};
#define TCA_PRIO_MAX (__TCA_PRIO_MAX - 1)
/* TBF section */
struct tc_tbf_qopt


@ -261,7 +261,7 @@ enum rtattr_type_t
RTA_FLOW,
RTA_CACHEINFO,
RTA_SESSION,
RTA_MP_ALGO,
RTA_MP_ALGO, /* no longer used */
RTA_TABLE,
__RTA_MAX
};
@ -570,10 +570,16 @@ static __inline__ int rtattr_strcmp(const struct rtattr *rta, const char *str)
}
extern int rtattr_parse(struct rtattr *tb[], int maxattr, struct rtattr *rta, int len);
extern int __rtattr_parse_nested_compat(struct rtattr *tb[], int maxattr,
struct rtattr *rta, int len);
#define rtattr_parse_nested(tb, max, rta) \
rtattr_parse((tb), (max), RTA_DATA((rta)), RTA_PAYLOAD((rta)))
#define rtattr_parse_nested_compat(tb, max, rta, data, len) \
({ data = RTA_PAYLOAD(rta) >= len ? RTA_DATA(rta) : NULL; \
__rtattr_parse_nested_compat(tb, max, rta, len); })
extern int rtnetlink_send(struct sk_buff *skb, u32 pid, u32 group, int echo);
extern int rtnl_unicast(struct sk_buff *skb, u32 pid);
extern int rtnl_notify(struct sk_buff *skb, u32 pid, u32 group,
@ -638,6 +644,18 @@ extern void __rta_fill(struct sk_buff *skb, int attrtype, int attrlen, const voi
({ (start)->rta_len = skb_tail_pointer(skb) - (unsigned char *)(start); \
(skb)->len; })
#define RTA_NEST_COMPAT(skb, type, attrlen, data) \
({ struct rtattr *__start = (struct rtattr *)skb_tail_pointer(skb); \
RTA_PUT(skb, type, attrlen, data); \
RTA_NEST(skb, type); \
__start; })
#define RTA_NEST_COMPAT_END(skb, start) \
({ struct rtattr *__nest = (void *)(start) + NLMSG_ALIGN((start)->rta_len); \
(start)->rta_len = skb_tail_pointer(skb) - (unsigned char *)(start); \
RTA_NEST_END(skb, __nest); \
(skb)->len; })
#define RTA_NEST_CANCEL(skb, start) \
({ if (start) \
skb_trim(skb, (unsigned char *) (start) - (skb)->data); \


@ -65,13 +65,20 @@
* is able to produce some skb->csum, it MUST use COMPLETE,
* not UNNECESSARY.
*
* PARTIAL: identical to the case for output below. This may occur
* on a packet received directly from another Linux OS, e.g.,
* a virtualised Linux kernel on the same host. The packet can
* be treated in the same way as UNNECESSARY except that on
* output (i.e., forwarding) the checksum must be filled in
* by the OS or the hardware.
*
* B. Checksumming on output.
*
* NONE: skb is checksummed by protocol or csum is not required.
*
* PARTIAL: device is required to csum packet as seen by hard_start_xmit
* from skb->transport_header to the end and to record the checksum
* at skb->transport_header + skb->csum.
* from skb->csum_start to the end and to record the checksum
* at skb->csum_start + skb->csum_offset.
*
* Device must show its capabilities in dev->features, set
* at device setup time.
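When a device lacks the matching NETIF_F_*_CSUM feature, the stack finishes a PARTIAL packet in software. A hedged kernel-side sketch of that fallback using the csum_start/csum_offset pair described above (the in-tree helper for this is skb_checksum_help):

static int finish_partial_csum(struct sk_buff *skb)
{
	/* csum_start is relative to skb->head; rebase it onto skb->data */
	int start = skb->csum_start - skb_headroom(skb);
	__wsum csum;

	csum = skb_checksum(skb, start, skb->len - start, 0);
	*(__sum16 *)(skb->data + start + skb->csum_offset) = csum_fold(csum);
	skb->ip_summed = CHECKSUM_NONE;
	return 0;
}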
@ -82,6 +89,7 @@
* TCP/UDP over IPv4. Sigh. Vendors like this
* way by an unknown reason. Though, see comment above
* about CHECKSUM_UNNECESSARY. 8)
* NETIF_F_IPV6_CSUM about as dumb as the last one but does IPv6 instead.
*
* Any questions? No questions, good. --ANK
*/
@ -147,8 +155,8 @@ struct skb_shared_info {
/* We divide dataref into two halves. The higher 16 bits hold references
* to the payload part of skb->data. The lower 16 bits hold references to
* the entire skb->data. It is up to the users of the skb to agree on
* where the payload starts.
* the entire skb->data. A clone of a headerless skb holds the length of
* the header in skb->hdr_len.
*
* All users must obey the rule that the skb->data reference count must be
* greater than or equal to the payload reference count.
@ -196,7 +204,6 @@ typedef unsigned char *sk_buff_data_t;
* @sk: Socket we are owned by
* @tstamp: Time we arrived
* @dev: Device we arrived on/are leaving by
* @iif: ifindex of device we arrived on
* @transport_header: Transport layer header
* @network_header: Network layer header
* @mac_header: Link layer header
@ -206,6 +213,7 @@ typedef unsigned char *sk_buff_data_t;
* @len: Length of actual data
* @data_len: Data length
* @mac_len: Length of link layer header
* @hdr_len: writable header length of cloned skb
* @csum: Checksum (must include start/offset pair)
* @csum_start: Offset from skb->head where checksumming should start
* @csum_offset: Offset from csum_start where checksum should be stored
@ -227,9 +235,12 @@ typedef unsigned char *sk_buff_data_t;
* @mark: Generic packet mark
* @nfct: Associated connection, if any
* @ipvs_property: skbuff is owned by ipvs
* @nf_trace: netfilter packet trace flag
* @nfctinfo: Relationship of this skb to the connection
* @nfct_reasm: netfilter conntrack re-assembly pointer
* @nf_bridge: Saved data about a bridged frame - see br_netfilter.c
* @iif: ifindex of device we arrived on
* @queue_mapping: Queue mapping for multiqueue devices
* @tc_index: Traffic control index
* @tc_verd: traffic control verdict
* @dma_cookie: a cookie to one of several possible DMA operations
@ -245,8 +256,6 @@ struct sk_buff {
struct sock *sk;
ktime_t tstamp;
struct net_device *dev;
int iif;
/* 4 byte hole on 64 bit*/
struct dst_entry *dst;
struct sec_path *sp;
@ -260,8 +269,9 @@ struct sk_buff {
char cb[48];
unsigned int len,
data_len,
mac_len;
data_len;
__u16 mac_len,
hdr_len;
union {
__wsum csum;
struct {
@ -277,7 +287,8 @@ struct sk_buff {
nfctinfo:3;
__u8 pkt_type:3,
fclone:2,
ipvs_property:1;
ipvs_property:1,
nf_trace:1;
__be16 protocol;
void (*destructor)(struct sk_buff *skb);
@ -288,12 +299,18 @@ struct sk_buff {
#ifdef CONFIG_BRIDGE_NETFILTER
struct nf_bridge_info *nf_bridge;
#endif
int iif;
__u16 queue_mapping;
#ifdef CONFIG_NET_SCHED
__u16 tc_index; /* traffic control index */
#ifdef CONFIG_NET_CLS_ACT
__u16 tc_verd; /* traffic control verdict */
#endif
#endif
/* 2 byte hole */
#ifdef CONFIG_NET_DMA
dma_cookie_t dma_cookie;
#endif
@ -1321,6 +1338,20 @@ static inline struct sk_buff *netdev_alloc_skb(struct net_device *dev,
return __netdev_alloc_skb(dev, length, GFP_ATOMIC);
}
/**
* skb_clone_writable - is the header of a clone writable
* @skb: buffer to check
* @len: length up to which to write
*
* Returns true if modifying the header part of the cloned buffer
 * does not require the data to be copied.
*/
static inline int skb_clone_writable(struct sk_buff *skb, int len)
{
return !skb_header_cloned(skb) &&
skb_headroom(skb) + len <= skb->hdr_len;
}
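
Editor's note: a typical caller pattern, sketched here with a
hypothetical helper name: before scribbling on the first len bytes of a
possibly-cloned skb (for example, to rewrite a checksum in place), try
skb_clone_writable() first and unshare the header only when that fails:

#include <linux/skbuff.h>

/* Hypothetical helper: make the first 'len' bytes safely writable.
 * Cheap when any clone shares only the payload; otherwise reallocate
 * the header.  Returns 0 on success, like pskb_expand_head(). */
static int ensure_header_writable(struct sk_buff *skb, int len)
{
	if (skb_clone_writable(skb, len))
		return 0;
	return pskb_expand_head(skb, 0, 0, GFP_ATOMIC);
}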
/**
* skb_cow - copy header of skb when it is required
* @skb: buffer to cow
@ -1709,6 +1740,20 @@ static inline void skb_init_secmark(struct sk_buff *skb)
{ }
#endif
static inline void skb_set_queue_mapping(struct sk_buff *skb, u16 queue_mapping)
{
#ifdef CONFIG_NETDEVICES_MULTIQUEUE
skb->queue_mapping = queue_mapping;
#endif
}
static inline void skb_copy_queue_mapping(struct sk_buff *to, const struct sk_buff *from)
{
#ifdef CONFIG_NETDEVICES_MULTIQUEUE
to->queue_mapping = from->queue_mapping;
#endif
}
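
Editor's note: usage is symmetrical, sketched below with hypothetical
function names: a path that duplicates or forwards a packet carries the
hardware queue class across with skb_copy_queue_mapping(), while a
classifier picks a band explicitly with skb_set_queue_mapping(). Both
compile away unless CONFIG_NETDEVICES_MULTIQUEUE is set, as above.

#include <linux/skbuff.h>

/* Hypothetical: preserve the original queue selection on a copy. */
static void mirror_queue_class(struct sk_buff *nskb,
			       const struct sk_buff *skb)
{
	skb_copy_queue_mapping(nskb, skb);
}

/* Hypothetical: a classifier steering a packet to a chosen band. */
static void steer_to_band(struct sk_buff *skb, u16 band)
{
	skb_set_queue_mapping(skb, band);
}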
static inline int skb_is_gso(const struct sk_buff *skb)
{
return skb_shinfo(skb)->gso_size;

View File

@ -287,6 +287,7 @@ struct ucred {
#define SOL_NETLINK 270
#define SOL_TIPC 271
#define SOL_RXRPC 272
#define SOL_PPPOL2TP 273
/* IPX options */
#define IPX_TYPE 1

View File

@ -42,6 +42,7 @@ static inline struct udphdr *udp_hdr(const struct sk_buff *skb)
/* UDP encapsulation types */
#define UDP_ENCAP_ESPINUDP_NON_IKE 1 /* draft-ietf-ipsec-nat-t-ike-00/01 */
#define UDP_ENCAP_ESPINUDP 2 /* draft-ietf-ipsec-udp-encaps-06 */
#define UDP_ENCAP_L2TPINUDP 3 /* rfc2661 */
#ifdef __KERNEL__
#include <linux/types.h>
@ -70,6 +71,11 @@ struct udp_sock {
#define UDPLITE_SEND_CC 0x2 /* set via udplite setsockopt */
#define UDPLITE_RECV_CC 0x4 /* set via udplite setsockopt */
__u8 pcflag; /* marks socket as UDP-Lite if > 0 */
__u8 unused[3];
/*
* For encapsulation sockets.
*/
int (*encap_rcv)(struct sock *sk, struct sk_buff *skb);
};
static inline struct udp_sock *udp_sk(const struct sock *sk)
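
Editor's note: a sketch of how an in-kernel user (the PPPoL2TP driver is
the intended consumer) might hook decapsulation into a UDP socket.
my_l2tp_recv() and enable_l2tp_encap() are assumed names, the return
convention shown is the usual encap_rcv contract (0 = consumed, >0 = fall
through to normal UDP processing), and encap_type itself predates this
change, having been added for ESP-in-UDP.

#include <linux/udp.h>
#include <net/sock.h>

/* Assumed callback: parse L2TP-in-UDP; return 0 once consumed, or
 * >0 to hand the skb back to normal UDP processing. */
static int my_l2tp_recv(struct sock *sk, struct sk_buff *skb)
{
	/* Real L2TP demultiplexing would happen here; punt for now. */
	return 1;
}

/* Sketch: mark a kernel UDP socket so udp_rcv() diverts its
 * datagrams to the handler above. */
static void enable_l2tp_encap(struct sock *sk)
{
	udp_sk(sk)->encap_type = UDP_ENCAP_L2TPINUDP;
	udp_sk(sk)->encap_rcv  = my_l2tp_recv;
}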

View File

@ -19,7 +19,6 @@ struct tcf_common {
struct gnet_stats_basic tcfc_bstats;
struct gnet_stats_queue tcfc_qstats;
struct gnet_stats_rate_est tcfc_rate_est;
spinlock_t *tcfc_stats_lock;
spinlock_t tcfc_lock;
};
#define tcf_next common.tcfc_next
@ -32,7 +31,6 @@ struct tcf_common {
#define tcf_bstats common.tcfc_bstats
#define tcf_qstats common.tcfc_qstats
#define tcf_rate_est common.tcfc_rate_est
#define tcf_stats_lock common.tcfc_stats_lock
#define tcf_lock common.tcfc_lock
struct tcf_police {

View File

@ -61,7 +61,7 @@ extern int addrconf_set_dstaddr(void __user *arg);
extern int ipv6_chk_addr(struct in6_addr *addr,
struct net_device *dev,
int strict);
#ifdef CONFIG_IPV6_MIP6
#if defined(CONFIG_IPV6_MIP6) || defined(CONFIG_IPV6_MIP6_MODULE)
extern int ipv6_chk_home_addr(struct in6_addr *addr);
#endif
extern struct inet6_ifaddr * ipv6_get_ifaddr(struct in6_addr *addr,

View File

@ -79,9 +79,10 @@ struct unix_sock {
struct mutex readlock;
struct sock *peer;
struct sock *other;
struct sock *gc_tree;
struct list_head link;
atomic_t inflight;
spinlock_t lock;
unsigned int gc_candidate : 1;
wait_queue_head_t peer_wait;
};
#define unix_sk(__sk) ((struct unix_sock *)__sk)

View File

@ -107,14 +107,14 @@ enum {
#define HCI_IDLE_TIMEOUT (6000) /* 6 seconds */
#define HCI_INIT_TIMEOUT (10000) /* 10 seconds */
/* HCI Packet types */
/* HCI data types */
#define HCI_COMMAND_PKT 0x01
#define HCI_ACLDATA_PKT 0x02
#define HCI_SCODATA_PKT 0x03
#define HCI_EVENT_PKT 0x04
#define HCI_VENDOR_PKT 0xff
/* HCI Packet types */
/* HCI packet types */
#define HCI_DM1 0x0008
#define HCI_DM3 0x0400
#define HCI_DM5 0x4000
@ -129,6 +129,14 @@ enum {
#define SCO_PTYPE_MASK (HCI_HV1 | HCI_HV2 | HCI_HV3)
#define ACL_PTYPE_MASK (~SCO_PTYPE_MASK)
/* eSCO packet types */
#define ESCO_HV1 0x0001
#define ESCO_HV2 0x0002
#define ESCO_HV3 0x0004
#define ESCO_EV3 0x0008
#define ESCO_EV4 0x0010
#define ESCO_EV5 0x0020
/* ACL flags */
#define ACL_CONT 0x01
#define ACL_START 0x02
@ -138,6 +146,7 @@ enum {
/* Baseband links */
#define SCO_LINK 0x00
#define ACL_LINK 0x01
#define ESCO_LINK 0x02
/* LMP features */
#define LMP_3SLOT 0x01
@ -162,6 +171,11 @@ enum {
#define LMP_PSCHEME 0x02
#define LMP_PCONTROL 0x04
#define LMP_ESCO 0x80
#define LMP_EV4 0x01
#define LMP_EV5 0x02
#define LMP_SNIFF_SUBR 0x02
/* Connection modes */

View File

@ -78,6 +78,7 @@ struct hci_dev {
__u16 voice_setting;
__u16 pkt_type;
__u16 esco_type;
__u16 link_policy;
__u16 link_mode;
@ -109,6 +110,7 @@ struct hci_dev {
struct sk_buff_head cmd_q;
struct sk_buff *sent_cmd;
struct sk_buff *reassembly[3];
struct semaphore req_lock;
wait_queue_head_t req_wait_q;
@ -437,6 +439,8 @@ static inline int hci_recv_frame(struct sk_buff *skb)
return 0;
}
int hci_recv_fragment(struct hci_dev *hdev, int type, void *data, int count);
int hci_register_sysfs(struct hci_dev *hdev);
void hci_unregister_sysfs(struct hci_dev *hdev);
void hci_conn_add_sysfs(struct hci_conn *conn);
@ -449,6 +453,7 @@ void hci_conn_del_sysfs(struct hci_conn *conn);
#define lmp_encrypt_capable(dev) ((dev)->features[0] & LMP_ENCRYPT)
#define lmp_sniff_capable(dev) ((dev)->features[0] & LMP_SNIFF)
#define lmp_sniffsubr_capable(dev) ((dev)->features[5] & LMP_SNIFF_SUBR)
#define lmp_esco_capable(dev) ((dev)->features[3] & LMP_ESCO)
/* ----- HCI protocols ----- */
struct hci_proto {

View File

@ -323,6 +323,7 @@ int rfcomm_connect_ind(struct rfcomm_session *s, u8 channel, struct rfcomm_dlc
#define RFCOMM_RELEASE_ONHUP 1
#define RFCOMM_HANGUP_NOW 2
#define RFCOMM_TTY_ATTACHED 3
#define RFCOMM_TTY_RELEASED 4
struct rfcomm_dev_req {
s16 dev_id;

View File

@ -3,7 +3,6 @@
#include <linux/dn.h>
#include <net/sock.h>
#include <net/tcp.h>
#include <asm/byteorder.h>
#define dn_ntohs(x) le16_to_cpu(x)

View File

@ -47,7 +47,6 @@ struct dst_entry
#define DST_NOXFRM 2
#define DST_NOPOLICY 4
#define DST_NOHASH 8
#define DST_BALANCED 0x10
unsigned long expires;
unsigned short header_len; /* more space at head required */

View File

@ -67,20 +67,16 @@ struct flowi {
__be32 spi;
#ifdef CONFIG_IPV6_MIP6
struct {
__u8 type;
} mht;
#endif
} uli_u;
#define fl_ip_sport uli_u.ports.sport
#define fl_ip_dport uli_u.ports.dport
#define fl_icmp_type uli_u.icmpt.type
#define fl_icmp_code uli_u.icmpt.code
#define fl_ipsec_spi uli_u.spi
#ifdef CONFIG_IPV6_MIP6
#define fl_mh_type uli_u.mht.type
#endif
__u32 secid; /* used by xfrm; see secid.txt */
} __attribute__((__aligned__(BITS_PER_LONG/8)));

Some files were not shown because too many files have changed in this diff.