Once RTS/CTS is enabled for an aggregation queue, do not disable it
until the last aggregation queue is closed.
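A minimal sketch of the reference-counting idea, using hypothetical
fields (agg_queue_count, rts_cts_enabled) rather than the actual
iwlwifi structures:

	struct agg_state {
		unsigned int agg_queue_count;	/* open aggregation queues */
		bool rts_cts_enabled;
	};

	static void agg_queue_open(struct agg_state *s)
	{
		if (s->agg_queue_count++ == 0)
			s->rts_cts_enabled = true;	/* first queue: enable */
	}

	static void agg_queue_close(struct agg_state *s)
	{
		/* disable only when the last aggregation queue closes */
		if (--s->agg_queue_count == 0)
			s->rts_cts_enabled = false;
	}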
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
When a wrong state is detected while shutting down an aggregation
queue, show more information for debugging.
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Modify comments on how to enable and change debug_level
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
When in the D3 state, use a low retry limit for both data and RTS.
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
When a disable-aggregation request comes in while in the wrong
aggregation state, ignore it.
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
This one can be _very_ noisy.
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
While inspecting the code, I saw that iwl_tx_queue_unmap modifies
the read pointer of the Tx queue without taking any locks. This means
that it can race with the reclaim flow. This can possibly lead to
a DMA warning complaining that we unmap the same buffer twice.
This is more a workaround (W/A) than a fix, since it is really weird
to take sta_lock inside iwl_tx_queue_unmap, but it can help until we
revamp the locking model in the transport layer.
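A self-contained sketch of the workaround's shape; the struct, lock
and field names below are illustrative, not the driver's actual
identifiers:

	struct tx_queue {
		spinlock_t *sta_lock;	/* same lock the reclaim path takes */
		int read_ptr, write_ptr, n_bd;
	};

	static void tx_queue_unmap(struct tx_queue *q)
	{
		spin_lock_bh(q->sta_lock);
		while (q->read_ptr != q->write_ptr) {
			/* dma_unmap the buffer at read_ptr here */
			q->read_ptr = (q->read_ptr + 1) % q->n_bd;	/* wrap */
		}
		spin_unlock_bh(q->sta_lock);
	}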
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Add an additional SKU to the 6005 series of devices.
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Occasionally, the device will send interrupts
while it is resuming, at a point where we are
not set up again to handle them. This causes
the core IRQ handling to completely disable
the IRQ, and then the driver won't work again
until it is reloaded/rebound.
To fix this issue, disable the IRQ on suspend; this causes us to get
interrupts again only after we've set everything up on resume.
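A hedged sketch of the suspend/resume hooks, assuming a standard
dev_pm_ops pair; only the disable_irq()/enable_irq() calls reflect
this change, the rest is illustrative scaffolding:

	static int iwl_pci_suspend(struct device *dev)
	{
		struct pci_dev *pdev = to_pci_dev(dev);

		/* keep the device from raising interrupts while we are
		 * not set up to handle them */
		disable_irq(pdev->irq);
		return 0;
	}

	static int iwl_pci_resume(struct device *dev)
	{
		struct pci_dev *pdev = to_pci_dev(dev);

		/* ... re-initialize the device first ... */
		enable_irq(pdev->irq);
		return 0;
	}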
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
A pointer to the bus structure is still in iwl_priv. Finish
cleanup and remove it.
Signed-off-by: Don Fry <donald.h.fry@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Not needed since driver split.
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
For some reason, WoWLAN doesn't always seem to
be happy with more advanced LQ commands. Since
we don't need them as we're not going to send
a lot of data, simply program the station with
the very simple default LQ command.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
802.11 says:
"Sequence numbers for QoS (+)Null frames may be
set to any value."
However, if we use the normal counters, then peers will get confused
with aggregation since there will be holes in the sequence number
space.
To avoid that, don't assign sequence numbers to
QoS Null frames.
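A minimal sketch of the check, using the real
ieee80211_is_qos_nullfunc() helper from linux/ieee80211.h; where
exactly it sits in the TX path is simplified here:

	static bool needs_seq_number(const struct ieee80211_hdr *hdr)
	{
		/* 802.11 allows any SN on QoS (+)Null frames; skipping
		 * them avoids holes in the aggregation SN space */
		return !ieee80211_is_qos_nullfunc(hdr->frame_control);
	}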
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Updating the beacon every time right after one was
transmitted is pointless, most of the time we might
not even have to update it. We will update it every
time it changes, which includes from set_tim(), a
callback iwlwifi didn't implement so far.
This also reduces latency for clients, previously
we would update the beacon right after the previous
one was transmitted, and then a TIM change would
only take effect after that again -- updating the
beacon right after the TIM changes makes the TIM
change go out to the air faster.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
For command queue testing, add "echo test" to debugfs
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Check the RF-kill flag in the queue-stuck watchdog function.
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
When a stuck command queue is detected, instead of immediately
reloading the firmware, run the "echo" test to make sure it is really
stuck before reloading.
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
In an error condition, STATUS_HCMD_ACTIVE may already have been
cleared before the TX command completion is received; warn when this
happens.
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Add "echo" host command for testing and drebugging to make sure uCode still
responding
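A hedged sketch of such a command: a host command with no payload
whose only purpose is its completion. REPLY_ECHO is the driver's
command ID; the send helper's name and the logging are simplified for
illustration:

	static int iwl_cmd_echo_test(struct iwl_priv *priv)
	{
		struct iwl_host_cmd cmd = {
			.id = REPLY_ECHO,
			.flags = CMD_SYNC,
		};
		int ret;

		ret = iwl_send_cmd(priv, &cmd);	/* blocks for the reply */
		if (ret)
			IWL_ERR(priv, "echo testing fail: 0x%x\n", ret);
		else
			IWL_DEBUG_INFO(priv, "echo testing pass\n");
		return ret;
	}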
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
When a command queue timeout is detected, display the current
read/write pointers.
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
ftmac100 allocates a page per skb fragment. We must account
PAGE_SIZE increments on skb->truesize, not the actual frag length.
If the frame is under 64 bytes, the page is freed, so increase
truesize only for bigger frames.
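A generic sketch of the accounting rule (helper name and placement
are illustrative, not ftmac100's): when a full page backs a fragment,
truesize must grow by PAGE_SIZE regardless of how much of the page
the frame actually used:

	static void rx_add_page_frag(struct sk_buff *skb, struct page *page,
				     unsigned int length)
	{
		skb_fill_page_desc(skb, skb_shinfo(skb)->nr_frags,
				   page, 0, length);
		skb->len += length;
		skb->data_len += length;
		skb->truesize += PAGE_SIZE;	/* the page, not "length" */
	}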
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
CC: Po-Yu Chuang <ratbert@faraday-tech.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add a 'truesize' argument to niu_rx_skb_append(), filled with rcr_size
by the caller to properly account frag sizes in skb->truesize
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
vmxnet3 allocates a page per skb fragment. We must account
PAGE_SIZE increments on skb->truesize, not the actual frag length.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
CC: Shreyas Bhatewara <sbhatewara@vmware.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
ftgmac100 allocates a page per skb fragment. We must account
PAGE_SIZE increments on skb->truesize, not the actual frag length.
If the frame is under 64 bytes, the page is freed and truesize is
adjusted accordingly.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
CC: Po-Yu Chuang <ratbert@faraday-tech.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
sky2 allocates a page per skb fragment. We must account
PAGE_SIZE increments on skb->truesize, not the actual frag length.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
e1000e allocates a page per skb fragment. We must account
PAGE_SIZE increments on skb->truesize, not the actual frag length.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
CC: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
ixgbe allocates half a page per skb fragment. We must account
PAGE_SIZE/2 increments on skb->truesize, not the actual frag length.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
CC: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
e1000 allocates half a page per skb fragment. We must account
PAGE_SIZE/2 increments on skb->truesize, not the actual frag length.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
e1000 allocates a full page per skb fragment. We must account PAGE_SIZE
increments on skb->truesize, not the actual frag length.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
bnx2 allocates a full page per fragment. We must account PAGE_SIZE
increments on skb->truesize, not the actual frag length.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fix skb truesize underestimations in this driver.
Each frag truesize is exactly rx_frag_size bytes (2048 bytes by
default).
A driver should not use "sizeof(struct sk_buff)" at all.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
skb truesize currently accounts for sk_buff struct and part of skb head.
kmalloc() roundings are also ignored.
Considering that skb_shared_info is larger than sk_buff, it's time to
take it into account for better memory accounting.
This patch introduces SKB_TRUESIZE(X) macro to centralize various
assumptions into a single place.
At skb alloc phase, we put skb_shared_info struct at the exact end of
skb head, to allow a better use of memory (lowering number of
reallocations), since kmalloc() gives us power-of-two memory blocks.
Unless SLUB/SLUB debug is active, both skb->head and skb_shared_info are
aligned to cache lines, as before.
Note: this patch might trigger performance regressions because of
misconfigured protocol stacks hitting per-socket or global memory
limits that were previously not reached. But it's a necessary step
toward more accurate memory accounting.
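For reference, a centralizing macro of the kind described would look
like this (reconstructed from the description above; SKB_DATA_ALIGN
is the kernel's existing alignment helper):

	#define SKB_TRUESIZE(X) ((X) +						\
				 SKB_DATA_ALIGN(sizeof(struct sk_buff)) +	\
				 SKB_DATA_ALIGN(sizeof(struct skb_shared_info)))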
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
CC: Andi Kleen <ak@linux.intel.com>
CC: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This change updates the driver version to 3.2.10.
Signed-off-by: Carolyn Wyborny <carolyn.wyborny@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This patch adds VMDq loopback PF support for i350 devices. The patch
is necessary since the register that enabled loopback was moved and
renamed from DTXSWC to TXSWC.
Signed-off-by: "Akeem G. Abodunrin" <akeem.g.abodunrin@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
igb_update/validate_nvm_checksum_with_offset() should be static.
This also removes the now-unneeded prototypes for these functions.
Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
On i350 when traffic is looped back from a VF to the PF the value is byte
swapped from the normal format. In order to address this we need to add a
flag indicating that the ring will need to byte swap the loopback packets
prior to processing them.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Since we mask interrupts in EIMS, not in IMS, there is no need to
re-enable mask bits in that register. As such we can remove the write
to IMS from the end of igb_msix_other.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This change allows support for per-packet timesync and global device
reset on the i350 adapter. These features were supported on both
82580 and i350; however, several checks were not updated, and as such
the i350 support was not enabled.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This change makes certain that one interrupt is always initialized in
igb_request_irq. In addition we drop the use of adapter->pdev and
instead just call pdev since we made a local copy of the pointer earlier in
the function.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This is mostly a drop of unnecessary pointer defines for q_vector when we
don't have issues with line width and don't have multiple references to
the pointer.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Correct a check for change in FCoE priority when IEEE mode DCB is in use.
In IEEE mode a different function has to be used to get the FCoE priority
mask. Also, the check for the mask assumed that only one priority was set.
In case there should be more than one, check just the bit.
These changes help avoid link flapping issues that can come up when IEEE
DCB is in use.
Signed-off-by: Mark Rustad <mark.d.rustad@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Add two new counters to ethtool:
1. Count DDP allocation failures caused by hitting the maximum number
of buffers allowed in one DDP context.
2. Count the same failures when they occur while allocating an extra
buffer.
Signed-off-by: Amir Hanania <amir.hanania@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
It is possible for a VF to set an invalid target DMA address in its
Tx/Rx descriptor buffer pointers. The workarounds in this patch
will guard against such an event and issue a VFLR to the VF in response.
The VFLR will shut down the VF until an administrator can take action
to investigate the event and correct the problem.
Signed-off-by: Greg Rose <gregory.v.rose@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This patch exposes the TOS value for TCP sockets when the TOS flag is
requested in the ext_flags of the inet_diag request. It is meant to
expose TOS values for both TCP and UDP sockets; currently only TCP is
supported, and when netlink support for UDP is added the TOS values
will be exposed there as well. For IPv4 the tos value is exposed, and
for IPv6 the tclass value is exposed.
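A hedged sketch of the fill path, using the rtnetlink put helpers of
that era (the rtattr_failure error label they jump to is elided):

	/* inside inet_diag's fill routine; "sk" is the socket and
	 * "ext" the requested extension bitmap */
	if (ext & (1 << (INET_DIAG_TOS - 1))) {
		if (sk->sk_family == AF_INET6)
			RTA_PUT_U8(skb, INET_DIAG_TOS, inet6_sk(sk)->tclass);
		else
			RTA_PUT_U8(skb, INET_DIAG_TOS, inet_sk(sk)->tos);
	}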
Signed-off-by: Murali Raja <muralira@google.com>
Acked-by: Stephen Hemminger <shemminger@vyatta.com>
Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>