Commit Graph

Johannes Berg cbb346f2fc iwlwifi: mvm: add missing break in debugfs
When writing the disable_power_off value, the LPRX
enable value also gets written unintentionally, so
fix that by adding the missing break statement.
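
A minimal sketch of the bug shape, with made-up field and case names rather
than the actual iwlwifi debugfs handler:

  struct pm_dbgfs { int disable_power_off; int lprx_ena; }; /* illustrative */

  static void pm_dbgfs_set(struct pm_dbgfs *pm, int param, int val)
  {
      switch (param) {
      case 0: /* disable_power_off */
          pm->disable_power_off = val;
          break; /* without this break we fall through and clobber lprx_ena */
      case 1: /* lprx_ena */
          pm->lprx_ena = val;
          break;
      }
  }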

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
2013-10-29 14:52:45 +01:00
Johannes Berg fb8b8ee10e iwlwifi: mvm: capture the FCS in monitor mode
This can be useful when using the device as a sniffer.

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
2013-10-29 14:52:25 +01:00
Johannes Berg bcbb8c9c7d iwlwifi: pcie: move warning message into warning
Having a WARN_ON() followed by a printed message is
less useful than having the message in the warning
so move the message.
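
Roughly the transformation described, using the generic kernel helpers; the
condition and message here are made up for illustration:

  /* before: the backtrace and the message are emitted separately */
  WARN_ON(used > max);
  pr_err("too many entries: %d > %d\n", used, max);

  /* after: the message becomes part of the warning itself */
  WARN(used > max, "too many entries: %d > %d\n", used, max);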

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
2013-10-29 14:51:50 +01:00
Emmanuel Grumbach 22cba0c085 iwlwifi: mvm: BT Coex fix NULL pointer dereference
When we disassociate, mac80211 removes the station and
only then updates the bss, unsetting the assoc bool in bss_info.

Since the firmware wants it the opposite way (first set the
MAC context as unassoc, and only then remove the STA from
the API), we have a small period of time in which the STA
in firmware doesn't have a valid ieee80211_sta pointer.
During that time, iwl_mvm_vif->ap_sta_id is still set
to the STA in firmware that represents the AP.

This avoids:

[ 4481.476246] BUG: unable to handle kernel NULL pointer dereference at 00000045
[ 4481.479521] IP: [<f8416a6a>] iwl_mvm_bt_coex_reduced_txp+0x7a/0x190 [iwlmvm]
[ 4481.482023] *pde = 00000000
[ 4481.484332] Oops: 0000 [#1] SMP DEBUG_PAGEALLOC
[ 4481.486897] Modules linked in: netconsole configfs autofs4 rfcomm(O) bnep(O) nfsd nfs_acl auth_rpcgss exportfs nfs lockd binfmt_misc sunrpc fscache arc4 iwlmvm(O) mac80211(O) btusb(O) iwlwifi(O) bluetooth(O) cfg80211(O) snd_hda_codec_hdmi coretemp dell_wmi snd_hda_codec_idt compat(O) dell_laptop aesni_intel i915 sparse_keymap dcdbas cryptd psmouse serio_raw aes_i586 microcode snd_hda_intel drm_kms_helper snd_hda_codec drm snd_pcm snd_timer i2c_algo_bit video intel_agp intel_gtt snd soundcore snd_page_alloc crc32c_intel ahci sdhci_pci libahci sdhci mmc_core e1000e xhci_hcd [last unloaded: configfs]
[ 4481.502983]
[ 4481.505599] Pid: 6507, comm: kworker/0:1 Tainted: G           O 3.4.43-dev #1 Dell Inc. Latitude E6430/0CMDYV
[ 4481.508575] EIP: 0060:[<f8416a6a>] EFLAGS: 00010246 CPU: 0
[ 4481.511248] EIP is at iwl_mvm_bt_coex_reduced_txp+0x7a/0x190 [iwlmvm]
[ 4481.513947] EAX: ffffffea EBX: 00000002 ECX: 00000001 EDX: 00000001
[ 4481.516710] ESI: ec6f0f28 EDI: 00000000 EBP: e8175dfc ESP: e8175d9c
[ 4481.519445]  DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068
[ 4481.522185] CR0: 8005003b CR2: 00000045 CR3: 01a5e000 CR4: 001407d0
[ 4481.524950] DR0: 00000000 DR1: 00000000 DR2: 00000000 DR3: 00000000
[ 4481.527768] DR6: ffff0ff0 DR7: 00000400
[ 4481.530565] Process kworker/0:1 (pid: 6507, ti=e8174000 task=e8032b20 task.ti=e8174000)
[ 4481.533447] Stack:
[ 4481.536379]  e472439f 00003a12 e8032b20 e8033048 00000001 e8175ddc 00000246 e8033040
[ 4481.540132]  00000002 01814990 ec4d1ddc e8175dcc 00000000 00000000 00000000 00000000
[ 4481.543867]  00000000 00000000 00000001 000001c8 009b0002 ec4d1ddc ec6f0f28 00000000
[ 4481.547633] Call Trace:
[ 4481.550578]  [<f8418027>] iwl_mvm_bt_rssi_event+0x197/0x220 [iwlmvm]
[ 4481.553537]  [<f840919c>] iwl_mvm_stat_iterator+0xdc/0x240 [iwlmvm]
[ 4481.556582]  [<f8d129c2>] __iterate_active_interfaces+0xe2/0x1f0 [mac80211]
[ 4481.559544]  [<f84090c0>] ? iwl_mvm_update_smps+0x90/0x90 [iwlmvm]
[ 4481.562519]  [<f84090c0>] ? iwl_mvm_update_smps+0x90/0x90 [iwlmvm]
[ 4481.565498]  [<f8d12b0c>] ieee80211_iterate_active_interfaces+0x3c/0x50 [mac80211]
[ 4481.568421]  [<f8409b43>] iwl_mvm_rx_statistics+0xb3/0x130 [iwlmvm]
[ 4481.571349]  [<f8405431>] iwl_mvm_async_handlers_wk+0xc1/0xf0 [iwlmvm]
[ 4481.574251]  [<c1052915>] ? process_one_work+0x105/0x5c0
[ 4481.577162]  [<c1052991>] process_one_work+0x181/0x5c0
[ 4481.580025]  [<c1052915>] ? process_one_work+0x105/0x5c0
[ 4481.582861]  [<f8405370>] ? iwl_mvm_rx_fw_logs+0x20/0x20 [iwlmvm]
[ 4481.585722]  [<c10530f1>] worker_thread+0x121/0x2c0
[ 4481.588536]  [<c1052fd0>] ? rescuer_thread+0x1d0/0x1d0
[ 4481.591323]  [<c105af0d>] kthread+0x7d/0x90
[ 4481.594059]  [<c105ae90>] ? flush_kthread_worker+0x120/0x120
[ 4481.596868]  [<c15b7cc2>] kernel_thread_helper+0x6/0x10
[ 4481.599605] Code: 9d de c3 c8 85 c0 74 0d 80 3d f8 ae 42 f8 00 0f 84 dc 00 00 00 8b 45 c8 0f b6 d3 31 ff 89 55 c0 8b 84 90 d8 03 00 00 0f b6 55 c7 <38> 50 5b 89 45 bc 0f 84 a8 00 00 00 a1 e4 d2 04 c2 85 c0 0f 84
[ 4481.611782] EIP: [<f8416a6a>] iwl_mvm_bt_coex_reduced_txp+0x7a/0x190 [iwlmvm] SS:ESP 0068:e8175d9c
[ 4481.614985] CR2: 0000000000000045
[ 4481.687441] ---[ end trace b11bc915fbac4412 ]---

Reviewed-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
2013-10-29 14:51:01 +01:00
Johannes Berg 84cf0e6207 iwlwifi: transport config n_no_reclaim_cmds should be unsigned
The number of commands can never be negative, so it should
be using an unsigned type. This also shuts up an smatch
warning elsewhere in the code.

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
2013-10-29 14:50:11 +01:00
Alexander Bondar e8e626ad0c iwlwifi: mvm: update UAPSD support TLV bits
Change old UAPSD bit to PM_CMD_SUPPORT, and add a new bit to indicate
real UAPSD support.
Don't use UAPSD when the firmware doesn't support it.

Signed-off-by: David Spinadel <david.spinadel@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
2013-10-29 14:45:23 +01:00
Daniel Vetter 1fbc0d789d drm/i915: Fix the PPT fdi lane bifurcate state handling on ivb
Originally I thought that this was leftover hw state dirt from the
BIOS. But after way too much helpless flailing around on my part I
noticed that the actual bug is triggered when we change the state of an
already active pipe.

For example, when we change the fdi lanes from 2 to 3 without switching
off outputs in between, we'll never see the crucial on->off transition
in the ->modeset_global_resources hook that the current logic relies on.

Patch version 2 got this right by instead also checking whether the
pipe is indeed active. But that in turn broke things when pipes have
been turned off through dpms since the bifurcate enabling is done in
the ->crtc_mode_set callback.

To address the issues discussed with Ville in the patch review, move
the setting of the bifurcate bit into the ->crtc_enable hook. That way
we won't wreak havoc with this state when userspace puts all other
outputs into dpms off state. This also moves us forward with our
overall goal to unify the modeset and dpms on paths (which we need to
have to allow runtime pm in the dpms off state).

Unfortunately this requires us to move the bifurcate helpers around a
bit.

Also update the commit message, I've misanalyzed the bug rather badly.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=70507
Tested-by: Jan-Michael Brummer <jan.brummer@tabos.org>
Cc: stable@vger.kernel.org
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2013-10-29 13:52:56 +01:00
Holger Eitzenberger d954777324 netfilter: xt_NFQUEUE: fix --queue-bypass regression
V3 of the NFQUEUE target ignores the --queue-bypass flag,
causing packets to be dropped when the userspace listener
isn't running.

Regression is in since 8746ddcf12 ("netfilter: xt_NFQUEUE:
introduce CPU fanout").

Reported-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Holger Eitzenberger <holger@eitzenberger.org>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2013-10-29 13:05:54 +01:00
Wei Yongjun ed87ac09d8 i40e: fix error return code in i40e_probe()
Fix to return -ENOMEM in the memory alloc error handling
case instead of 0, as done elsewhere in this function.
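
The usual shape of such a fix, with illustrative names rather than the exact
i40e_probe() code:

  buf = kzalloc(len, GFP_KERNEL);
  if (!buf) {
      err = -ENOMEM; /* previously err stayed 0, so the probe "succeeded" */
      goto err_alloc;
  }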

Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Tested-by: Kavindya Deegala <kavindya.s.deegala@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2013-10-29 04:29:25 -07:00
Don Skidmore 44bd741e10 ixgbevf: Add zero_base handler to network statistics
This patch removes the need to keep a zero_base variable in the adapter
structure. Now we just use two different macros to set the non-zero and
zero base. This adds to readability and shortens some of the structure
initialization under 80 columns. The gathering of stats for ethtool was
slightly modified to again better fit into 80 columns and to become a bit
more readable.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2013-10-29 04:22:24 -07:00
Jacob Keller 3b5dca262f ixgbevf: add BP_EXTENDED_STATS for CONFIG_NET_RX_BUSY_POLL
This patch adds the extended statistics similar to the ixgbe driver. These
statistics keep track of how often the busy polling yields, as well as how many
packets are cleaned or missed by the polling routine.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2013-10-29 04:15:11 -07:00
Jacob Keller c777cdfa4e ixgbevf: implement CONFIG_NET_RX_BUSY_POLL
This patch enables CONFIG_NET_RX_BUSY_POLL support in the VF code. This enables
sockets which have enabled the SO_BUSY_POLL socket option to use the
ndo_busy_poll_recv operation which could result in lower latency, at the cost
of higher CPU utilization, and increased power usage. This support is similar
to how the ixgbe driver works.
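
For reference, a small userspace sketch of opting a socket into busy polling
(per-socket and in microseconds; there is also a global net.core.busy_poll
sysctl):

  #include <sys/socket.h>

  #ifndef SO_BUSY_POLL
  #define SO_BUSY_POLL 46 /* may be missing from older userspace headers */
  #endif

  static int enable_busy_poll(int sockfd, int usecs)
  {
      return setsockopt(sockfd, SOL_SOCKET, SO_BUSY_POLL, &usecs, sizeof(usecs));
  }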

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2013-10-29 04:08:12 -07:00
Peter Zijlstra e8a923cc1f perf/x86: Fix NMI measurements
OK, so what I'm actually seeing on my WSM is that sched/clock.c is
'broken' for the purpose we're using it for.

What triggered it is that my WSM-EP is broken :-(

  [    0.001000] tsc: Fast TSC calibration using PIT
  [    0.002000] tsc: Detected 2533.715 MHz processor
  [    0.500180] TSC synchronization [CPU#0 -> CPU#6]:
  [    0.505197] Measured 3 cycles TSC warp between CPUs, turning off TSC clock.
  [    0.004000] tsc: Marking TSC unstable due to check_tsc_sync_source failed

For some reason it consistently detects TSC skew, even though NHM+
should have a single clock domain for 'reasonable' systems.

This marks sched_clock_stable=0, which means that we do fancy stuff to
try and get a 'sane' clock. Part of this fancy stuff relies on the tick;
clearly that's gone when NOHZ=y. So for idle cpus time gets stuck until
the cpu either wakes up or gets kicked by another cpu.

While this is perfectly fine for the scheduler -- it only cares about
actually running stuff, and when we're running stuff we're obviously not
idle -- it does somewhat break down for perf, which can trigger events
just fine on an otherwise idle cpu.

So I've got NMIs that get 'measured' as taking ~1ms, which actually
don't last nearly that long:

          <idle>-0     [013] d.h.   886.311970: rcu_nmi_enter <-do_nmi
  ...
          <idle>-0     [013] d.h.   886.311997: perf_sample_event_took: HERE!!! : 1040990

So ftrace (which uses sched_clock(), not the fancy bits) only sees
~27us, but we measure ~1ms !!

Now since all this measurement stuff lives in x86 code, we can actually
fix it.
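
The message implies switching the measurement over to the raw sched_clock(),
which keeps advancing on idle cpus; a hedged sketch of that idea, not the
literal diff:

  u64 before, delta;

  before = sched_clock();
  /* ... run the NMI handler ... */
  delta = sched_clock() - before; /* previously derived from the fancy per-cpu clock */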

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: mingo@kernel.org
Cc: dave.hansen@linux.intel.com
Cc: eranian@google.com
Cc: Don Zickus <dzickus@redhat.com>
Cc: jmario@redhat.com
Cc: acme@infradead.org
Link: http://lkml.kernel.org/r/20131017133350.GG3364@laptop.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-10-29 12:01:20 +01:00
Peter Zijlstra bf378d341e perf: Fix perf ring buffer memory ordering
The PPC64 people noticed a missing memory barrier and crufty old
comments in the perf ring buffer code. So update all the comments and
add the missing barrier.

When the architecture implements local_t using atomic_long_t there
will be double barriers issued; but short of introducing more
conditional barrier primitives this is the best we can do.
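
Roughly the producer/consumer pairing the updated comments describe, where the
kernel publishes data before data_head and userspace publishes its consumption
via data_tail:

  kernel                             user

  if (LOAD ->data_tail) {            LOAD ->data_head
                     (A)             smp_rmb()       (C)
     STORE $data                     LOAD $data
     smp_wmb()       (B)             smp_mb()        (D)
     STORE ->data_head               STORE ->data_tail
  }

  where A pairs with D, and B pairs with C.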

Reported-by: Victor Kaplansky <victork@il.ibm.com>
Tested-by: Victor Kaplansky <victork@il.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Cc: michael@ellerman.id.au
Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
Cc: Michael Neuling <mikey@neuling.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: anton@samba.org
Cc: benh@kernel.crashing.org
Link: http://lkml.kernel.org/r/20131025173749.GG19466@laptop.lan
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-10-29 12:01:19 +01:00
Jacob Keller 08e50a20ed ixgbevf: have clean_rx_irq return total_rx_packets cleaned
Rather than return true/false indicating whether there was budget left, return
the total packets cleaned. This currently has no use, but will be used in a
following patch which enables CONFIG_NET_RX_BUSY_POLL support in order to track
how many packets were cleaned during the busy poll as part of the extended
statistics.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2013-10-29 04:00:21 -07:00
Jacob Keller 0868161866 ixgbevf: add ixgbevf_rx_skb
This patch adds ixgbevf_rx_skb in line with how ixgbe handles the variations on
how packets can be received. It will be extended in a following patch for
CONFIG_NET_RX_BUSY_POLL support.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2013-10-29 03:53:30 -07:00
Jacob Keller 6a2aae5ae6 ixgbe: remove unnecessary duplication of PCIe bandwidth display
This patch removes the unnecessary display of PCIe bandwidth twice. Since the
ixgbe_check_minimum_link does a better job, and ensures accurate detection on
even complex chains, this older check is no longer necessary.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2013-10-29 03:45:57 -07:00
Jacob Keller 9f0a433ce6 ixgbe: show <2% for encoding loss on PCIe Gen3
This patch updates the ixgbe_check_minimum_link function to correctly show that
there is some minor loss of encoding, even though we don't calculate it in the
max GT/s equation. It is small enough not to bother with, but it is better to
report it than not.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2013-10-29 03:38:26 -07:00
Mel Gorman 0255d49184 mm: Account for a THP NUMA hinting update as one PTE update
A THP PMD update is accounted for as 512 pages updated in vmstat. This is a
large difference when estimating the cost of automatic NUMA balancing and
can be misleading when comparing results that had collapsed versus split
THP. This patch addresses the accounting issue.
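
In other words, a THP hinting update now bumps the counter by one instead of
by HPAGE_PMD_NR; a hedged sketch of the accounting, simplified from the
change_protection() path:

  pages = 0;
  if (pmd_trans_huge(*pmd))
      pages = 1; /* was: pages = HPAGE_PMD_NR (512) */
  count_vm_numa_events(NUMA_PTE_UPDATES, pages);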

Signed-off-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Cc: <stable@kernel.org>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1381141781-10992-10-git-send-email-mgorman@suse.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-10-29 11:38:17 +01:00
Mel Gorman 3f926ab945 mm: Close races between THP migration and PMD numa clearing
THP migration uses the page lock to guard against parallel allocations
but there are cases like this still open

  Task A					Task B
  ---------------------				---------------------
  do_huge_pmd_numa_page				do_huge_pmd_numa_page
  lock_page
  mpol_misplaced == -1
  unlock_page
  goto clear_pmdnuma
						lock_page
						mpol_misplaced == 2
						migrate_misplaced_transhuge
  pmd = pmd_mknonnuma
  set_pmd_at

During hours of testing, one crashed with weird errors and while I have
no direct evidence, I suspect something like the race above happened.
This patch extends the page lock to being held until the pmd_numa is
cleared to prevent migration starting in parallel while the pmd_numa is
being cleared. It also flushes the old pmd entry and orders pagetable
insertion before rmap insertion.
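
A rough sketch of the reordering, heavily simplified (the real
do_huge_pmd_numa_page() also handles refcounts, migration and error paths):

  page = pmd_page(pmd);
  lock_page(page);
  target_nid = mpol_misplaced(page, vma, haddr);
  if (target_nid == -1) {
      /* not misplaced: clear the NUMA bit while still holding the page lock */
      pmd = pmd_mknonnuma(pmd);
      set_pmd_at(mm, haddr, pmdp, pmd);
  }
  unlock_page(page);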

Signed-off-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Cc: <stable@kernel.org>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1381141781-10992-9-git-send-email-mgorman@suse.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-10-29 11:38:05 +01:00
Mel Gorman c61109e34f mm: numa: Sanitize task_numa_fault() callsites
There are three callers of task_numa_fault():

 - do_huge_pmd_numa_page():
     Accounts against the current node, not the node where the
     page resides, unless we migrated, in which case it accounts
     against the node we migrated to.

 - do_numa_page():
     Accounts against the current node, not the node where the
     page resides, unless we migrated, in which case it accounts
     against the node we migrated to.

 - do_pmd_numa_page():
     Accounts not at all when the page isn't migrated, otherwise
     accounts against the node we migrated towards.

This seems wrong to me; all three sites should have the same
semantics. Furthermore, we should account against where the page
really is; we already know where the task is.

So modify all three sites to always account; we did after all receive
the fault; and always account to where the page is after migration,
regardless of success.

They all still differ on when they clear the PTE/PMD; ideally that
would get sorted too.
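
A hedged sketch of the unified accounting, assuming the three-argument
task_numa_fault(node, pages, migrated) form; my_migrate_to_node() is a
hypothetical stand-in for the migration call:

  int page_nid = page_to_nid(page); /* where the page really is */
  bool migrated = false;

  if (target_nid != -1)
      migrated = my_migrate_to_node(page, target_nid);
  if (migrated)
      page_nid = target_nid;

  task_numa_fault(page_nid, 1, migrated); /* always account, migrated or not */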

Signed-off-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Cc: <stable@kernel.org>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1381141781-10992-8-git-send-email-mgorman@suse.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-10-29 11:37:52 +01:00
Mel Gorman 587fe586f4 mm: Prevent parallel splits during THP migration
THP migrations are serialised by the page lock but on its own that does
not prevent THP splits. If the page is split during THP migration then
the pmd_same checks will prevent page table corruption but the unlock page
and other fix-ups potentially will cause corruption. This patch takes the
anon_vma lock to prevent parallel splits during migration.
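
The added serialisation is roughly this (a sketch, not the literal diff):

  /* holding the anon_vma read lock keeps split_huge_page() out while we migrate */
  anon_vma = page_lock_anon_vma_read(page);
  /* ... re-check pmd_same() and attempt the THP migration ... */
  page_unlock_anon_vma_read(anon_vma);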

Signed-off-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Cc: <stable@kernel.org>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1381141781-10992-7-git-send-email-mgorman@suse.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-10-29 11:37:39 +01:00
Mel Gorman 42836f5f8b mm: Wait for THP migrations to complete during NUMA hinting faults
The locking for migrating THP is unusual. While normal page migration
prevents parallel accesses using a migration PTE, THP migration relies on
a combination of the page_table_lock, the page lock and the existence of
the NUMA hinting PTE to guarantee safety, but there is a bug in the scheme.

If a THP page is currently being migrated and another thread traps a
fault on the same page it checks if the page is misplaced. If it is not,
then pmd_numa is cleared. The problem is that it checks if the page is
misplaced without holding the page lock meaning that the racing thread
can be migrating the THP when the second thread clears the NUMA bit
and faults a stale page.

This patch checks if the page is potentially being migrated and, if so,
stalls using lock_page() before checking whether the page is misplaced.
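
A hedged sketch of the stall-before-check idea (simplified; error paths and
the surrounding page-table locking are omitted):

  page = pmd_page(pmd);
  get_page(page);
  lock_page(page); /* blocks here while a migration holds the page lock */
  if (unlikely(!pmd_same(pmd, *pmdp))) {
      /* the pmd changed under us, e.g. the migration completed; back off */
      unlock_page(page);
      put_page(page);
      goto out;
  }
  /* only now is it safe to ask whether the page is misplaced */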

Signed-off-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Cc: <stable@kernel.org>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1381141781-10992-6-git-send-email-mgorman@suse.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-10-29 11:37:19 +01:00
Mel Gorman 1dd49bfa34 mm: numa: Do not account for a hinting fault if we raced
If another task handled a hinting fault in parallel then do not double
account for it.

Signed-off-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Cc: <stable@kernel.org>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1381141781-10992-5-git-send-email-mgorman@suse.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-10-29 11:37:05 +01:00
Jacob Keller 27d9ce4fd0 ixgbe: fix qv_lock_napi call in ixgbe_napi_disable_all
ixgbe_napi_disable_all calls napi_disable on each queue, however the busy
polling code introduced a local_bh_disable()d context around the napi_disable.
The original author did not realize that napi_disable might sleep, which would
cause a sleep while atomic BUG. In addition, on a single processor system, the
ixgbe_qv_lock_napi loop shouldn't have to mdelay. This patch adds an
ixgbe_qv_disable along with a new IXGBE_QV_STATE_DISABLED bit, which it uses to
indicate to the poll and napi routines that the q_vector has been disabled. Now
the ixgbe_napi_disable_all function will wait until all pending work has been
finished and prevent any future work from being started.
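
A sketch of such a disable helper; the struct, flag values and locking are
simplified stand-ins, not the driver's exact code:

  struct qv { /* stand-in for the driver's q_vector */
      spinlock_t lock;
      unsigned int state;
  #define QV_OWNED    0x1 /* napi or busy-poll currently running */
  #define QV_DISABLED 0x8
  };

  /* returns true if the vector was idle; the caller retries until it is */
  static bool qv_disable(struct qv *q)
  {
      bool idle;

      spin_lock_bh(&q->lock);
      idle = !(q->state & QV_OWNED);
      q->state |= QV_DISABLED; /* any later lock attempt now fails */
      spin_unlock_bh(&q->lock);
      return idle;
  }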

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Cc: Eliezer Tamir <eliezer.tamir@linux.intel.com>
Cc: Alexander Duyck <alexander.duyck@intel.com>
Cc: Hyong-Youb Kim <hykim@myri.com>
Cc: Amir Vadai <amirv@mellanox.com>
Cc: Dmitry Kravkov <dmitry@broadcom.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2013-10-29 03:30:08 -07:00
Jacob Keller 80c33ddd31 net: add might_sleep() call to napi_disable
napi_disable uses an msleep() call to wait for outstanding napi work to be
finished after setting the disable bit. It does not sleep at all if there
was no outstanding work. This resulted in a rare bug in ixgbe_down operation
where a napi_disable call took place inside of a local_bh_disable()d context.
In order to enable easier detection of future sleep while atomic BUGs, this
patch adds a might_sleep() call, so that every use of napi_disable during
atomic context will be visible.
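
With the change, napi_disable() looks roughly like this; the msleep() loop is
the pre-existing wait, only the might_sleep() annotation is new:

  void napi_disable(struct napi_struct *n)
  {
      might_sleep();
      set_bit(NAPI_STATE_DISABLE, &n->state);

      while (test_and_set_bit(NAPI_STATE_SCHED, &n->state))
          msleep(1);

      clear_bit(NAPI_STATE_DISABLE, &n->state);
  }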

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Cc: Eliezer Tamir <eliezer.tamir@linux.intel.com>
Cc: Alexander Duyck <alexander.duyck@intel.com>
Cc: Hyong-Youb Kim <hykim@myri.com>
Cc: Amir Vadai <amirv@mellanox.com>
Cc: Dmitry Kravkov <dmitry@broadcom.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2013-10-29 02:40:21 -07:00
Joseph Gasparakis e6cd988c27 vxlan: Have the NIC drivers do less work for offloads
This patch removes the burden from the NIC drivers to check if the
vxlan driver is enabled in the kernel and also makes available
the vxlan headrooms to them.

Signed-off-by: Joseph Gasparakis <joseph.gasparakis@intel.com>
Tested-by: Kavindya Deegala <kavindya.s.deegala@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2013-10-29 02:39:13 -07:00
Ingo Molnar cd65718712 perf/urgent fixes:
. Add color overhead for stdio output buffer, which fixes
   --stdio output being chopped up on the hot (red) entries,
   fix from Jiri Olsa.
 
 . Get 'perf record -g -a sleep 1' working again, removing the
   need for -- separating perf options from the workload, restoring
   ages old behaviour, fix from Jiri Olsa.
   More patches allowing ~/.perfconfig setting up of default
   callchain collecting method ("fp" or "dwarf") left for next
   merge window.
 
 . Fixup mmap event consumption, where we were acking the
   consumption by writing the tail before actually accessing
   the event, which could lead to using overwritten records
   in things like 'perf record --call-graph'.  from Zhouyi Zhou.
 
 Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1.4.14 (GNU/Linux)
 
 iQIcBAABAgAGBQJSbrexAAoJENZQFvNTUqpA+MIP/1dMAat/+jAobBvy9s69rJbT
 OV2QxqS5haA6rjbyogY7AiaCM/bSFO9dpLDF31NUTc3s6hiMsHnXnogQxvGGrBzT
 lzj45V9/a8XXl8Wxbpv0z4mI7ppNR3KjJdkBTsuqso5GivzMuG/8/N7ZxYB8kSlL
 vnWLRBKiBa/sDKIhKy5oEqmMMa+kPmwIXGZiKFxWkNMjmBtK80SB7MtPernhLNGf
 wj6EEulphPR1s8VHXqZIoijEvgw2vEXDTFapKd53EV+UPCII13NW7gicgPXyiWrY
 BIRW4Pwl98y9qHEc5gSq5Eotcl9JhIPQVegTSnvx8jJ7x0bYrxmChb/I9eAjWQcj
 YQAKGqUoQGu0PJv8TeIoGsxNMFxeeeMKzf275KKc5g6Les6gIHYKOmW7RPXHIIA7
 DHGYOJ808jMrNFmAkKNmxQGbE7forMFj6CPDgGpGezqTUbGsszpzga2QQgI0QmrY
 l8aPUsirF5cZ67vPk7dnUt58FayfKMdGME2dkU352hjOL1V5EeVdxx46zZsPQNyQ
 evQvkGBmRm6WFi6Sybjc6i4Gkn9F1O5sNDtEDbSQ6aZH5tNnktzLoufPrWwHKV95
 GzEHpCuAeZbagPF6WSxpNTb2xfScPb6PfveGm0gbypwl/KaAr47w8VCWJiuvpSOv
 yV83qEooKW0LUn4JXrH3
 =U+a8
 -----END PGP SIGNATURE-----

Merge tag 'perf-urgent-for-mingo' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/urgent

Pull perf/urgent fixes from Arnaldo Carvalho de Melo:

 * Add color overhead for stdio output buffer, which fixes
   --stdio output being chopped up on the hot (red) entries,
   fix from Jiri Olsa.

 * Get 'perf record -g -a sleep 1' working again, removing the
   need for -- separating perf options from the workload, restoring
   ages old behaviour, fix from Jiri Olsa.
   More patches allowing ~/.perfconfig setting up of default
   callchain collecting method ("fp" or "dwarf") left for next
   merge window.

 * Fixup mmap event consumption, where we were acking the
   consumption by writing the tail before actually accessing
   the event, which could lead to using overwritten records
   in things like 'perf record --call-graph'. From Zhouyi Zhou.

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2013-10-29 09:06:07 +01:00
Mathias Krause 1c5ad13f7c net: esp{4,6}: get rid of struct esp_data
struct esp_data consists of a single pointer, removing any need for it
to be a structure. Fold the pointer into 'data' directly, removing one
level of pointer indirection.
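
Conceptually the change is just dropping the one-member wrapper; a sketch:

  /* before: an extra allocation whose only job is to hold one pointer */
  struct esp_data {
      struct crypto_aead *aead;
  };
  /* ... x->data = esp; users then dereference esp->aead ... */

  /* after: store the aead itself in x->data and cast it back on use */
  x->data = aead;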

Signed-off-by: Mathias Krause <mathias.krause@secunet.com>
Cc: Steffen Klassert <steffen.klassert@secunet.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "David S. Miller" <davem@davemloft.net>
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
2013-10-29 06:39:42 +01:00
Mathias Krause 123b0d1ba0 net: esp{4,6}: remove padlen from struct esp_data
The padlen member of struct esp_data is always zero. Get rid of it.

Signed-off-by: Mathias Krause <mathias.krause@secunet.com>
Cc: Steffen Klassert <steffen.klassert@secunet.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "David S. Miller" <davem@davemloft.net>
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
2013-10-29 06:39:42 +01:00
Wei Liu 059dfa6a93 xen-netback: use jiffies_64 value to calculate credit timeout
time_after_eq() only works if the delta is < MAX_ULONG/2.

For a 32bit Dom0, if netfront sends packets at a very low rate, the time
between subsequent calls to tx_credit_exceeded() may exceed MAX_ULONG/2
and the test for time_after_eq() will be incorrect. Credit will not be
replenished and the guest may become unable to send packets (e.g., if
prior to the long gap, all credit was exhausted).

Use jiffies_64 variant to mitigate this problem for 32bit Dom0.
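
A sketch of the idea, with next_credit standing in for the driver's 64-bit
credit timestamp:

  u64 now = get_jiffies_64();

  if (time_after_eq64(now, next_credit)) {
      /* replenish the credit and push next_credit forward */
  }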

Suggested-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Reviewed-by: David Vrabel <david.vrabel@citrix.com>
Cc: Ian Campbell <ian.campbell@citrix.com>
Cc: Jason Luan <jianhai.luan@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2013-10-29 00:24:49 -04:00
Zhi Yong Wu cdfb97bc01 net, mc: fix the incorrect comments in two mc-related functions
Signed-off-by: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2013-10-29 00:19:05 -04:00
Zhi Yong Wu ab1a2d7773 net, iovec: fix the incorrect comment in memcpy_fromiovecend()
Signed-off-by: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2013-10-29 00:19:04 -04:00
Zhi Yong Wu c4e819d16c net, datagram: fix the incorrect comment in zerocopy_sg_from_iovec()
Signed-off-by: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2013-10-29 00:19:04 -04:00
Zhi Yong Wu 39deb2c7db vxlan: silence one build warning
drivers/net/vxlan.c: In function ‘vxlan_sock_add’:
drivers/net/vxlan.c:2298:11: warning: ‘sock’ may be used uninitialized in this function [-Wmaybe-uninitialized]
drivers/net/vxlan.c:2275:17: note: ‘sock’ was declared here
  LD      drivers/net/built-in.o

Signed-off-by: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2013-10-29 00:19:04 -04:00
Hannes Frederic Sowa daba287b29 ipv4: fix DO and PROBE pmtu mode regarding local fragmentation with UFO/CORK
UFO as well as UDP_CORK do not respect IP_PMTUDISC_DO and
IP_PMTUDISC_PROBE well enough.

UFO enabled packet delivery just appends all frags to the cork and hands
it over to the network card. So we just deliver non-DF udp fragments
(DF-flag may get overwritten by hardware or virtual UFO enabled
interface).

UDP_CORK does enqueue the data until the cork is disengaged. At this
point it sets the correct IP_DF and local_df flags and hands it over to
ip_fragment which in this case will generate an icmp error which gets
appended to the error socket queue. This is not reflected in the syscall
error (of course, if UFO is enabled this also won't happen).

Improve this by checking the pmtudisc flags before appending data to the
socket: when IP_PMTUDISC_DO or IP_PMTUDISC_PROBE is set, proceed only if
all data still fits into one packet.

We use (mtu-fragheaderlen) to check for the maximum length because we
ensure not to generate a fragment and non-fragmented data does not need
to have its length aligned on 64 bit boundaries. Also the passed in
ip_options are already aligned correctly.
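
A hedged sketch of that check, with simplified field names; the real test sits
in the append path:

  /* refuse to queue more than one MTU's worth when DF behaviour was requested */
  if ((inet->pmtudisc == IP_PMTUDISC_DO ||
       inet->pmtudisc == IP_PMTUDISC_PROBE) &&
      cork_length + length > mtu - fragheaderlen)
      return -EMSGSIZE;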

Maybe, we can relax some other checks around ip_fragment. This needs
more research.

Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2013-10-29 00:15:22 -04:00
Ben Hutchings 262e827fe7 cxgb3: Fix length calculation in write_ofld_wr() on 32-bit architectures
The length calculation here is now invalid on 32-bit architectures,
since sk_buff::tail is a pointer and sk_buff::transport_header is
an integer offset:

drivers/net/ethernet/chelsio/cxgb3/sge.c: In function 'write_ofld_wr':
drivers/net/ethernet/chelsio/cxgb3/sge.c:1603:9: warning: passing argument 4 of 'make_sgl' makes integer from pointer without a cast [enabled by default]
         adap->pdev);
         ^
drivers/net/ethernet/chelsio/cxgb3/sge.c:964:28: note: expected 'unsigned int' but argument is of type 'sk_buff_data_t'
 static inline unsigned int make_sgl(const struct sk_buff *skb,
                            ^

Use the appropriate skb accessor functions.

Compile-tested only.
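
The change amounts to replacing the raw field arithmetic with the accessor
helpers, roughly:

  /* before: breaks once transport_header is a 16-bit offset, not a pointer */
  len = skb->tail - skb->transport_header;

  /* after: the accessors hide the sk_buff_data_t representation */
  len = skb_tail_pointer(skb) - skb_transport_header(skb);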

Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Fixes: 1a37e412a0 ('net: Use 16bits for *_headers fields of struct skbuff')
Signed-off-by: David S. Miller <davem@davemloft.net>
2013-10-29 00:14:03 -04:00
Ariel Elior 826cb7b43b bnx2x: Disable VF access on PF removal
When the bnx2x driver is unloaded (rmmod), if VFs of a given PF are assigned
to a VM then that PF will be unable to call `pci_disable_sriov()'.

If for that same PF there also exist unassigned VFs in the hypervisor,
the result will be that after the removal there will still be virtual PCI
functions on the hypervisor.
If the bnx2x module is then re-inserted, the VFs on the hypervisor will be
re-probed directly following the PF's probe, even though in the regular
loading flow sriov is only enabled once the PF is loaded.
The probed VF will then try to access its BAR, causing a PCI error as the HW
is not in a state that allows such a request.

This patch adds a missing disablement procedure to the PF's removal, one that
sets registers viewable to the VF to indicate that the VFs have no permission
to access the bar, thus resulting in probe errors instead of PCI errors.

Signed-off-by: Ariel Elior <ariele@broadcom.com>
Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2013-10-29 00:12:45 -04:00
Dmitry Kravkov e3ed4eaef4 bnx2x: prevent FW assert on low mem during unload
Buffers for FW statistics were allocated at an inappropriate time; in a machine
where the driver encounters problems allocating all of its queues, the driver
would still create FW requests for the statistics of the non-existent queues.
The wrong order of memory allocation could lead to zeroed statistics messages
being sent, leading to a FW assert in case function 0 was down.

This changes the order of allocations, guaranteeing that statistic requests will
only be generated for actual queues.

Signed-off-by: Dmitry Kravkov <dmitry@broadcom.com>
Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
Signed-off-by: Ariel Elior <ariele@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2013-10-29 00:12:45 -04:00
Eric Dumazet 0d08c42cf9 tcp: gso: fix truesize tracking
commit 6ff50cd555 ("tcp: gso: do not generate out of order packets")
had a heuristic that can trigger a warning in skb_try_coalesce(),
because skb->truesize of the gso segments was set exactly to mss.

This breaks the requirement that

skb->truesize >= skb->len + sizeof(struct sk_buff);

It can trivially be reproduced by :

ifconfig lo mtu 1500
ethtool -K lo tso off
netperf

As the skbs are looped into the TCP networking stack, skb_try_coalesce()
warns us about these skbs under-estimating their truesize.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Alexei Starovoitov <ast@plumgrid.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2013-10-29 00:04:47 -04:00
Michael Dalton 2613af0ed1 virtio_net: migrate mergeable rx buffers to page frag allocators
The virtio_net driver's mergeable receive buffer allocator
uses 4KB packet buffers. For MTU-sized traffic, SKB truesize
is > 4KB but only ~1500 bytes of the buffer is used to store
packet data, reducing the effective TCP window size
substantially. This patch addresses the performance concerns
with mergeable receive buffers by allocating MTU-sized packet
buffers using page frag allocators. If more than MAX_SKB_FRAGS
buffers are needed, the SKB frag_list is used.

Signed-off-by: Michael Dalton <mwdalton@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2013-10-28 23:56:46 -04:00
David S. Miller 5d9efa7ee9 ipv6: Remove privacy config option.
The code for privacy extensions is very mature, and making it
configurable only gives marginal memory/code savings in exchange
for obfuscation and hard-to-read code via CPP ifdef'ery.

Signed-off-by: David S. Miller <davem@davemloft.net>
2013-10-28 20:07:50 -04:00
Linus Torvalds c9ca72fc56 Xtensa patchset for v3.12-rc6
-----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1.4.12 (GNU/Linux)
 
 iQIcBAABAgAGBQJSXuCxAAoJEI9vqH3mFV2sdWwP/11xVQFtzoT01k9P4fZeFCMT
 +dGztMBY6bODMEyeC9raX6sJhuOYNivS2IBFJ4qv9p+kcCplYE0EsB3MfOQ60NdV
 eV2DdnHrPKgIPUutGf7mcZ5bXB8q8HEuEFn0ONsCdNTAxzrcLxaPZU3MeGhSfL0H
 d/RPPmSW8+vLEMWUe58EB2J8Z1DJye4O0pbxskjGA64KrLssKBG3LWBQBdTsihkE
 r2olYYT0j7osJ7loLbyll/CcacQTfJbQT8Xd0Y7MbLYR/G6EVMwb6PkzkFnUPMIQ
 zLwMAOgL8/E250p4ab0Wy8vf9rW+938qR93oZEyO8TRDdF/mYNQb+GK+ZKUrMrxg
 eZ/pOhKQ+xUKtt23xrvbXR3nTnI9AfiJ1nhuO4UmX5WIhyJXDcYYk+rujlEduxGm
 IJ7p5cRWBdqC+iIiSzQJjyW4drIXU6QJ+ureOGo0vBcBJsLRbUFYLt+08Rb60/d1
 zgS8L7DpKRUZlIrbD6/HxyUimJCTGERsvTSDZYCYs1+ri/YKV4a8Ukfsl8bWLbhd
 Le5PyMeusWaM6MHCIOJPys2SpsKGl9UCaVr7Nz1Lr6HIqZJUN61rNVSIcUpufiqt
 hPSYtljmz6746Z6QvNW0OD8+qE5foplCnv94Oqj9tb24gHfINPXkjiaEKhPDICi0
 6cCQj4pCsjjlzXXmn5Fp
 =sopZ
 -----END PGP SIGNATURE-----

Merge tag 'xtensa-next-20131015' of git://github.com/czankel/xtensa-linux

Pull Xtensa patchset from Chris Zankel:
 "The main patch fixes a bug that can cause a kernel panic, and was
  introduced in rc1.  The other two have been discovered by a uclibc
  test and 'coccinelle'"

* tag 'xtensa-next-20131015' of git://github.com/czankel/xtensa-linux:
  xtensa: Cocci spatch "noderef"
  xtensa: don't use alternate signal stack on threads
  xtensa: fix fast_syscall_spill_registers_fixup
2013-10-28 16:58:05 -07:00
Linus Torvalds 5d914a959d SCSI fixes on 20131028
This is a set of four patches that revert functionality introduced in the
 merge window to sg.  The locking changes turned out to introduce this bug:
 
     [  205.372901] [ BUG: lock held when returning to user space! ]
 [...]
     [  205.373285]  #0:  (&sdp->o_sem){.+.+..}, at: [<ffffffff8161e650>] sg_open+0x3a0/0x4d0
 
 The fix is large, so at this late stage we'd like to revert the functionality
 and start again in the next merge window.
 
 Signed-off-by: James Bottomley <JBottomley@Parallels.com>
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v2.0.19 (GNU/Linux)
 
 iQEcBAABAgAGBQJSbneXAAoJEDeqqVYsXL0Mr+AIAJRkPeIzyX59Y16nAlxY8tZj
 MCYFZ9v9TiVEXm2+AM/Z/Y6jVYG3wgkRzZARyOxeNeRsyWFMsm74WQG5AIopwL5B
 liN9tMlcxl3Ofdh9Ww1Os7AaIhm3xmiRTNsNirNxgh5Y3+XH8MMTv5QR97XIt5Ys
 arDJUAJKtYJIcgMbYpDdQ7vb8deNsrwC0mQ2vbdox6tMKn8wIigsyZKI/+55imJP
 uPZjruhFQxhzMzi5+kR8q/Y2hj+2Lo0pSIEWjO2K/xqLzMI/8IkCwx5gYnNdxONE
 Rrj/6VevFd0w3G9EWb8d4HhptyTMFkq4D8XoSxepSFdA1Ti4kVixwXw+PRdEU/I=
 =pSy2
 -----END PGP SIGNATURE-----

Merge tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi

Pull SCSI fixes from James Bottomley:
 "This is a set of four patches that revert functionality introduced in
  the merge window to sg.  The locking changes turned out to introduce
  this bug:

      [  205.372901] [ BUG: lock held when returning to user space! ]
   [...]
      [  205.373285]  #0:  (&sdp->o_sem){.+.+..}, at: [<ffffffff8161e650>] sg_open+0x3a0/0x4d0

  The fix is large, so at this late stage we'd like to revert the
  functionality and start again in the next merge window"

* tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi:
  [SCSI] Revert "sg: use rwsem to solve race during exclusive open"
  [SCSI] Revert "sg: no need sg_open_exclusive_lock"
  [SCSI] Revert "sg: checking sdp->detached isn't protected when open"
  [SCSI] Revert "sg: push file descriptor list locking down to per-device locking"
2013-10-28 16:57:13 -07:00
David S. Miller d5d45d4294 Merge branch '6lowpan'
Alexander Aring says:

====================
6lowpan: trivial changes

This patch series includes some trivial changes to prepare the 6lowpan stack
for upcoming patch-series which mainly fix fragmentation according to rfc4944
and udp handling (which is currently broken).

Changes since v3:
  - really fix indentation in patch 3/5

Changes since v2:
  - change indentation in patch 3/5
  - fix typo in 5/5 unecessary -> unnecessary
  - add missing 6lowpan tag in cover-letter
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2013-10-28 19:48:29 -04:00
Alexander Aring 8ef007fd1d 6lowpan: remove unnecessary break
Signed-off-by: Alexander Aring <alex.aring@gmail.com>
Reviewed-by: Werner Almesberger <werner@almesberger.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2013-10-28 19:47:52 -04:00
Alexander Aring b236b954de 6lowpan: remove skb->dev assignment
This patch removes the assignment of skb->dev. We don't need it here because
we use the netdev_alloc_skb_ip_align function which already sets the
skb->dev.

Signed-off-by: Alexander Aring <alex.aring@gmail.com>
Reviewed-by: Werner Almesberger <werner@almesberger.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2013-10-28 19:47:52 -04:00
Alexander Aring b614442f34 6lowpan: use netdev_alloc_skb instead dev_alloc_skb
This patch uses the netdev_alloc_skb function instead of dev_alloc_skb and
drops the separate assignment to skb->dev.

Signed-off-by: Alexander Aring <alex.aring@gmail.com>
Reviewed-by: Werner Almesberger <werner@almesberger.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2013-10-28 19:47:51 -04:00
Alexander Aring 53cb5717b4 6lowpan: remove unnecessary check on err >= 0
The err variable can only be zero in this case.

Signed-off-by: Alexander Aring <alex.aring@gmail.com>
Reviewed-by: Werner Almesberger <werner@almesberger.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2013-10-28 19:47:51 -04:00
Alexander Aring 545f3613a8 6lowpan: remove unnecessary ret variable
Signed-off-by: Alexander Aring <alex.aring@gmail.com>
Reviewed-by: Werner Almesberger <werner@almesberger.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2013-10-28 19:47:51 -04:00