// SPDX-License-Identifier: GPL-2.0-only
/*
 * Scanning implementation
 *
 * Copyright 2003, Jouni Malinen <jkmaline@cc.hut.fi>
 * Copyright 2004, Instant802 Networks, Inc.
 * Copyright 2005, Devicescape Software, Inc.
 * Copyright 2006-2007	Jiri Benc <jbenc@suse.cz>
 * Copyright 2007, Michael Wu <flamingice@sourmilk.net>
 * Copyright 2013-2015  Intel Mobile Communications GmbH
 * Copyright 2016-2017  Intel Deutschland GmbH
 * Copyright (C) 2018-2021 Intel Corporation
 */
#include <linux/if_arp.h>
#include <linux/etherdevice.h>
#include <linux/rtnetlink.h>
#include <net/sch_generic.h>
#include <linux/slab.h>
#include <linux/export.h>
#include <linux/random.h>
#include <net/mac80211.h>

#include "ieee80211_i.h"
#include "driver-ops.h"
#include "mesh.h"

#define IEEE80211_PROBE_DELAY (HZ / 33)
#define IEEE80211_CHANNEL_TIME (HZ / 33)
#define IEEE80211_PASSIVE_CHANNEL_TIME (HZ / 9)

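/*
 * Release a scan-result BSS entry: drops the cfg80211 reference on the
 * cfg80211_bss that embeds the given ieee80211_bss as its private data.
 */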
void ieee80211_rx_bss_put(struct ieee80211_local *local,
			  struct ieee80211_bss *bss)
{
	if (!bss)
		return;
	cfg80211_put_bss(local->hw.wiphy,
			 container_of((void *)bss, struct cfg80211_bss, priv));
}

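/*
 * Check the parsed WMM information/parameter element and report whether
 * the AP advertises U-APSD in its QoS info field.
 */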
static bool is_uapsd_supported(struct ieee802_11_elems *elems)
{
	u8 qos_info;

	if (elems->wmm_info && elems->wmm_info_len == 7
	    && elems->wmm_info[5] == 1)
		qos_info = elems->wmm_info[6];
	else if (elems->wmm_param && elems->wmm_param_len == 24
		 && elems->wmm_param[5] == 1)
		qos_info = elems->wmm_param[6];
	else
		/* no valid wmm information or parameter element found */
		return false;

	return qos_info & IEEE80211_WMM_IE_AP_QOSINFO_UAPSD;
}

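/*
 * Copy the parsed elements of a received beacon/probe response into the
 * BSS entry: device timestamps, corrupt/valid-data flags, ERP info,
 * (extended) supported rates, WMM/U-APSD support, beacon rate and VHT
 * capability info.
 */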
static void
ieee80211_update_bss_from_elems(struct ieee80211_local *local,
				struct ieee80211_bss *bss,
				struct ieee802_11_elems *elems,
				struct ieee80211_rx_status *rx_status,
				bool beacon)
{
	int clen, srlen;

	if (beacon)
		bss->device_ts_beacon = rx_status->device_timestamp;
	else
		bss->device_ts_presp = rx_status->device_timestamp;

	if (elems->parse_error) {
		if (beacon)
			bss->corrupt_data |= IEEE80211_BSS_CORRUPT_BEACON;
		else
			bss->corrupt_data |= IEEE80211_BSS_CORRUPT_PROBE_RESP;
	} else {
		if (beacon)
			bss->corrupt_data &= ~IEEE80211_BSS_CORRUPT_BEACON;
		else
			bss->corrupt_data &= ~IEEE80211_BSS_CORRUPT_PROBE_RESP;
	}

	/* save the ERP value so that it is available at association time */
	if (elems->erp_info && (!elems->parse_error ||
				!(bss->valid_data & IEEE80211_BSS_VALID_ERP))) {
		bss->erp_value = elems->erp_info[0];
		bss->has_erp_value = true;
		if (!elems->parse_error)
			bss->valid_data |= IEEE80211_BSS_VALID_ERP;
	}

	/* replace old supported rates if we get new values */
	if (!elems->parse_error ||
	    !(bss->valid_data & IEEE80211_BSS_VALID_RATES)) {
		srlen = 0;
		if (elems->supp_rates) {
			clen = IEEE80211_MAX_SUPP_RATES;
			if (clen > elems->supp_rates_len)
				clen = elems->supp_rates_len;
			memcpy(bss->supp_rates, elems->supp_rates, clen);
			srlen += clen;
		}
		if (elems->ext_supp_rates) {
			clen = IEEE80211_MAX_SUPP_RATES - srlen;
			if (clen > elems->ext_supp_rates_len)
				clen = elems->ext_supp_rates_len;
			memcpy(bss->supp_rates + srlen, elems->ext_supp_rates,
			       clen);
			srlen += clen;
		}
		if (srlen) {
			bss->supp_rates_len = srlen;
			if (!elems->parse_error)
				bss->valid_data |= IEEE80211_BSS_VALID_RATES;
		}
	}

	if (!elems->parse_error ||
	    !(bss->valid_data & IEEE80211_BSS_VALID_WMM)) {
		bss->wmm_used = elems->wmm_param || elems->wmm_info;
		bss->uapsd_supported = is_uapsd_supported(elems);
		if (!elems->parse_error)
			bss->valid_data |= IEEE80211_BSS_VALID_WMM;
	}

	if (beacon) {
		struct ieee80211_supported_band *sband =
			local->hw.wiphy->bands[rx_status->band];
		if (!(rx_status->encoding == RX_ENC_HT) &&
		    !(rx_status->encoding == RX_ENC_VHT))
			bss->beacon_rate =
				&sband->bitrates[rx_status->rate_idx];
	}

	if (elems->vht_cap_elem)
		bss->vht_cap_info =
			le32_to_cpu(elems->vht_cap_elem->vht_cap_info);
	else
		bss->vht_cap_info = 0;
}

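/*
 * Report a received beacon/probe response to cfg80211 and update the
 * mac80211-private BSS data from its parsed elements, including any
 * non-transmitting BSS entries attached to it.
 */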
struct ieee80211_bss *
ieee80211_bss_info_update(struct ieee80211_local *local,
			  struct ieee80211_rx_status *rx_status,
			  struct ieee80211_mgmt *mgmt, size_t len,
			  struct ieee80211_channel *channel)
{
	bool beacon = ieee80211_is_beacon(mgmt->frame_control) ||
		      ieee80211_is_s1g_beacon(mgmt->frame_control);
	struct cfg80211_bss *cbss, *non_tx_cbss;
	struct ieee80211_bss *bss, *non_tx_bss;
	struct cfg80211_inform_bss bss_meta = {
		.boottime_ns = rx_status->boottime_ns,
	};
	bool signal_valid;
	struct ieee80211_sub_if_data *scan_sdata;
	struct ieee802_11_elems *elems;
	size_t baselen;
	u8 *elements;

	if (rx_status->flag & RX_FLAG_NO_SIGNAL_VAL)
		bss_meta.signal = 0; /* invalid signal indication */
	else if (ieee80211_hw_check(&local->hw, SIGNAL_DBM))
		bss_meta.signal = rx_status->signal * 100;
	else if (ieee80211_hw_check(&local->hw, SIGNAL_UNSPEC))
		bss_meta.signal = (rx_status->signal * 100) / local->hw.max_signal;

	bss_meta.scan_width = NL80211_BSS_CHAN_WIDTH_20;
	if (rx_status->bw == RATE_INFO_BW_5)
		bss_meta.scan_width = NL80211_BSS_CHAN_WIDTH_5;
	else if (rx_status->bw == RATE_INFO_BW_10)
		bss_meta.scan_width = NL80211_BSS_CHAN_WIDTH_10;

	bss_meta.chan = channel;

	rcu_read_lock();
	scan_sdata = rcu_dereference(local->scan_sdata);
	if (scan_sdata && scan_sdata->vif.type == NL80211_IFTYPE_STATION &&
	    scan_sdata->vif.cfg.assoc &&
	    ieee80211_have_rx_timestamp(rx_status)) {
		bss_meta.parent_tsf =
			ieee80211_calculate_rx_timestamp(local, rx_status,
							 len + FCS_LEN, 24);
		ether_addr_copy(bss_meta.parent_bssid,
				scan_sdata->vif.bss_conf.bssid);
	}
	rcu_read_unlock();

	cbss = cfg80211_inform_bss_frame_data(local->hw.wiphy, &bss_meta,
					      mgmt, len, GFP_ATOMIC);
	if (!cbss)
		return NULL;

	if (ieee80211_is_probe_resp(mgmt->frame_control)) {
		elements = mgmt->u.probe_resp.variable;
		baselen = offsetof(struct ieee80211_mgmt,
				   u.probe_resp.variable);
	} else if (ieee80211_is_s1g_beacon(mgmt->frame_control)) {
		struct ieee80211_ext *ext = (void *) mgmt;

		baselen = offsetof(struct ieee80211_ext, u.s1g_beacon.variable);
		elements = ext->u.s1g_beacon.variable;
	} else {
		baselen = offsetof(struct ieee80211_mgmt, u.beacon.variable);
		elements = mgmt->u.beacon.variable;
	}

	if (baselen > len)
		return NULL;

	elems = ieee802_11_parse_elems(elements, len - baselen, false, cbss);
	if (!elems)
		return NULL;

	/* In case the signal is invalid update the status */
	signal_valid = channel == cbss->channel;
	if (!signal_valid)
		rx_status->flag |= RX_FLAG_NO_SIGNAL_VAL;

	bss = (void *)cbss->priv;
	ieee80211_update_bss_from_elems(local, bss, elems, rx_status, beacon);
	kfree(elems);

	list_for_each_entry(non_tx_cbss, &cbss->nontrans_list, nontrans_list) {
		non_tx_bss = (void *)non_tx_cbss->priv;

		elems = ieee802_11_parse_elems(elements, len - baselen, false,
					       non_tx_cbss);
		if (!elems)
			continue;

		ieee80211_update_bss_from_elems(local, non_tx_bss, elems,
						rx_status, beacon);
		kfree(elems);
	}

	return bss;
}

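/*
 * Decide whether a probe response destined to @da should be accepted for
 * this interface, taking OCE broadcast probe responses and randomised
 * scan addresses into account.
 */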
static bool ieee80211_scan_accept_presp(struct ieee80211_sub_if_data *sdata,
					u32 scan_flags, const u8 *da)
{
	if (!sdata)
		return false;
	/* accept broadcast for OCE */
	if (scan_flags & NL80211_SCAN_FLAG_ACCEPT_BCAST_PROBE_RESP &&
	    is_broadcast_ether_addr(da))
		return true;
	if (scan_flags & NL80211_SCAN_FLAG_RANDOM_ADDR)
		return true;
	return ether_addr_equal(da, sdata->vif.addr);
}

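/*
 * RX handler for beacons and probe responses received while a (scheduled)
 * scan is active; validates the frame and feeds it into
 * ieee80211_bss_info_update().
 */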
void ieee80211_scan_rx(struct ieee80211_local *local, struct sk_buff *skb)
{
	struct ieee80211_rx_status *rx_status = IEEE80211_SKB_RXCB(skb);
	struct ieee80211_sub_if_data *sdata1, *sdata2;
	struct ieee80211_mgmt *mgmt = (void *)skb->data;
	struct ieee80211_bss *bss;
	struct ieee80211_channel *channel;
	size_t min_hdr_len = offsetof(struct ieee80211_mgmt,
				      u.probe_resp.variable);

	if (!ieee80211_is_probe_resp(mgmt->frame_control) &&
	    !ieee80211_is_beacon(mgmt->frame_control) &&
	    !ieee80211_is_s1g_beacon(mgmt->frame_control))
		return;

	if (ieee80211_is_s1g_beacon(mgmt->frame_control)) {
		if (ieee80211_is_s1g_short_beacon(mgmt->frame_control))
			min_hdr_len = offsetof(struct ieee80211_ext,
					       u.s1g_short_beacon.variable);
		else
			min_hdr_len = offsetof(struct ieee80211_ext,
					       u.s1g_beacon);
	}

	if (skb->len < min_hdr_len)
		return;

	sdata1 = rcu_dereference(local->scan_sdata);
	sdata2 = rcu_dereference(local->sched_scan_sdata);

	if (likely(!sdata1 && !sdata2))
		return;

	if (test_and_clear_bit(SCAN_BEACON_WAIT, &local->scanning)) {
		/*
		 * we were passive scanning because of radar/no-IR, but
		 * the beacon/proberesp rx gives us an opportunity to upgrade
		 * to active scan
		 */
		set_bit(SCAN_BEACON_DONE, &local->scanning);
		ieee80211_queue_delayed_work(&local->hw, &local->scan_work, 0);
	}

	if (ieee80211_is_probe_resp(mgmt->frame_control)) {
		struct cfg80211_scan_request *scan_req;
		struct cfg80211_sched_scan_request *sched_scan_req;
		u32 scan_req_flags = 0, sched_scan_req_flags = 0;

		scan_req = rcu_dereference(local->scan_req);
		sched_scan_req = rcu_dereference(local->sched_scan_req);

		if (scan_req)
			scan_req_flags = scan_req->flags;

		if (sched_scan_req)
			sched_scan_req_flags = sched_scan_req->flags;

		/* ignore ProbeResp to foreign address or non-bcast (OCE)
		 * unless scanning with randomised address
		 */
		if (!ieee80211_scan_accept_presp(sdata1, scan_req_flags,
						 mgmt->da) &&
		    !ieee80211_scan_accept_presp(sdata2, sched_scan_req_flags,
						 mgmt->da))
			return;
	}

	channel = ieee80211_get_channel_khz(local->hw.wiphy,
					ieee80211_rx_status_to_khz(rx_status));

	if (!channel || channel->flags & IEEE80211_CHAN_DISABLED)
		return;

	bss = ieee80211_bss_info_update(local, rx_status,
					mgmt, skb->len,
					channel);
	if (bss)
		ieee80211_rx_bss_put(local, bss);
}

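/* Translate the requested scan width into a channel definition. */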
static void
ieee80211_prepare_scan_chandef(struct cfg80211_chan_def *chandef,
			       enum nl80211_bss_scan_width scan_width)
{
	memset(chandef, 0, sizeof(*chandef));
	switch (scan_width) {
	case NL80211_BSS_CHAN_WIDTH_5:
		chandef->width = NL80211_CHAN_WIDTH_5;
		break;
	case NL80211_BSS_CHAN_WIDTH_10:
		chandef->width = NL80211_CHAN_WIDTH_10;
		break;
	default:
		chandef->width = NL80211_CHAN_WIDTH_20_NOHT;
		break;
	}
}

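/*
 * Fill local->hw_scan_req from the cfg80211 scan request: copy the channel
 * list (all bands at once, or one band per call when the hardware lacks
 * SINGLE_SCAN_ON_ALL_BANDS), build the probe request IEs and copy the
 * address/BSSID parameters.
 */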
/* return false if no more work */
static bool ieee80211_prep_hw_scan(struct ieee80211_sub_if_data *sdata)
{
	struct ieee80211_local *local = sdata->local;
	struct cfg80211_scan_request *req;
	struct cfg80211_chan_def chandef;
	u8 bands_used = 0;
	int i, ielen, n_chans;
	u32 flags = 0;

	req = rcu_dereference_protected(local->scan_req,
					lockdep_is_held(&local->mtx));

	if (test_bit(SCAN_HW_CANCELLED, &local->scanning))
		return false;

	if (ieee80211_hw_check(&local->hw, SINGLE_SCAN_ON_ALL_BANDS)) {
		for (i = 0; i < req->n_channels; i++) {
			local->hw_scan_req->req.channels[i] = req->channels[i];
			bands_used |= BIT(req->channels[i]->band);
		}

		n_chans = req->n_channels;
	} else {
		do {
			if (local->hw_scan_band == NUM_NL80211_BANDS)
				return false;

			n_chans = 0;

			for (i = 0; i < req->n_channels; i++) {
				if (req->channels[i]->band !=
				    local->hw_scan_band)
					continue;
				local->hw_scan_req->req.channels[n_chans] =
							req->channels[i];
				n_chans++;
				bands_used |= BIT(req->channels[i]->band);
			}

			local->hw_scan_band++;
		} while (!n_chans);
	}

	local->hw_scan_req->req.n_channels = n_chans;
	ieee80211_prepare_scan_chandef(&chandef, req->scan_width);

	if (req->flags & NL80211_SCAN_FLAG_MIN_PREQ_CONTENT)
		flags |= IEEE80211_PROBE_FLAG_MIN_CONTENT;

	ielen = ieee80211_build_preq_ies(sdata,
					 (u8 *)local->hw_scan_req->req.ie,
					 local->hw_scan_ies_bufsize,
					 &local->hw_scan_req->ies,
					 req->ie, req->ie_len,
					 bands_used, req->rates, &chandef,
					 flags);
	local->hw_scan_req->req.ie_len = ielen;
	local->hw_scan_req->req.no_cck = req->no_cck;
	ether_addr_copy(local->hw_scan_req->req.mac_addr, req->mac_addr);
	ether_addr_copy(local->hw_scan_req->req.mac_addr_mask,
			req->mac_addr_mask);
	ether_addr_copy(local->hw_scan_req->req.bssid, req->bssid);

	return true;
}

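/*
 * Finish a scan: hand the result to cfg80211, free the HW scan request,
 * restore filters/power settings for software scan and requeue any work
 * that was deferred while scanning. For multi-band HW scans this may
 * instead start the next band and return early.
 */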
static void __ieee80211_scan_completed(struct ieee80211_hw *hw, bool aborted)
{
	struct ieee80211_local *local = hw_to_local(hw);
	bool hw_scan = test_bit(SCAN_HW_SCANNING, &local->scanning);
	bool was_scanning = local->scanning;
	struct cfg80211_scan_request *scan_req;
	struct ieee80211_sub_if_data *scan_sdata;
	struct ieee80211_sub_if_data *sdata;

	lockdep_assert_held(&local->mtx);

	/*
	 * It's ok to abort a not-yet-running scan (that
	 * we have one at all will be verified by checking
	 * local->scan_req next), but not to complete it
	 * successfully.
	 */
	if (WARN_ON(!local->scanning && !aborted))
		aborted = true;

	if (WARN_ON(!local->scan_req))
		return;

	scan_sdata = rcu_dereference_protected(local->scan_sdata,
					       lockdep_is_held(&local->mtx));

	if (hw_scan && !aborted &&
	    !ieee80211_hw_check(&local->hw, SINGLE_SCAN_ON_ALL_BANDS) &&
	    ieee80211_prep_hw_scan(scan_sdata)) {
		int rc;

		rc = drv_hw_scan(local,
			rcu_dereference_protected(local->scan_sdata,
						  lockdep_is_held(&local->mtx)),
			local->hw_scan_req);

		if (rc == 0)
			return;

		/* HW scan failed and is going to be reported as aborted,
		 * so clear old scan info.
		 */
		memset(&local->scan_info, 0, sizeof(local->scan_info));
		aborted = true;
	}

	kfree(local->hw_scan_req);
	local->hw_scan_req = NULL;

	scan_req = rcu_dereference_protected(local->scan_req,
					     lockdep_is_held(&local->mtx));

	if (scan_req != local->int_scan_req) {
		local->scan_info.aborted = aborted;
		cfg80211_scan_done(scan_req, &local->scan_info);
	}
	RCU_INIT_POINTER(local->scan_req, NULL);
	RCU_INIT_POINTER(local->scan_sdata, NULL);

	local->scanning = 0;
	local->scan_chandef.chan = NULL;

	/* Set power back to normal operating levels. */
	ieee80211_hw_config(local, 0);

	if (!hw_scan) {
		ieee80211_configure_filter(local);
		drv_sw_scan_complete(local, scan_sdata);
		ieee80211_offchannel_return(local);
	}

	ieee80211_recalc_idle(local);

	ieee80211_mlme_notify_scan_completed(local);
	ieee80211_ibss_notify_scan_completed(local);

	/* Requeue all the work that might have been ignored while
	 * the scan was in progress; if there was none this will
	 * just be a no-op for the particular interface.
	 */
	list_for_each_entry_rcu(sdata, &local->interfaces, list) {
		if (ieee80211_sdata_running(sdata))
			ieee80211_queue_work(&sdata->local->hw, &sdata->work);
	}

	if (was_scanning)
		ieee80211_start_next_roc(local);
}

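/*
 * Driver notification that a HW scan finished; records the scan info and
 * defers the actual completion to the scan work.
 */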
void ieee80211_scan_completed(struct ieee80211_hw *hw,
			      struct cfg80211_scan_info *info)
{
	struct ieee80211_local *local = hw_to_local(hw);

	trace_api_scan_completed(local, info->aborted);

	set_bit(SCAN_COMPLETED, &local->scanning);
	if (info->aborted)
		set_bit(SCAN_ABORTED, &local->scanning);

	memcpy(&local->scan_info, info, sizeof(*info));

	ieee80211_queue_delayed_work(&local->hw, &local->scan_work, 0);
}
EXPORT_SYMBOL(ieee80211_scan_completed);

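/*
 * Begin a software scan on the given interface; not available when
 * channel contexts are in use.
 */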
static int ieee80211_start_sw_scan(struct ieee80211_local *local,
				   struct ieee80211_sub_if_data *sdata)
{
	/* Software scan is not supported in multi-channel cases */
	if (local->use_chanctx)
		return -EOPNOTSUPP;

	/*
	 * Hardware/driver doesn't support hw_scan, so use software
	 * scanning instead. First send a nullfunc frame with power save
	 * bit on so that AP will buffer the frames for us while we are not
	 * listening, then send probe requests to each channel and wait for
	 * the responses. After all channels are scanned, tune back to the
	 * original channel and send a nullfunc frame with power save bit
	 * off to trigger the AP to send us all the buffered frames.
	 *
	 * Note that while local->sw_scanning is true everything else but
	 * nullfunc frames and probe requests will be dropped in
	 * ieee80211_tx_h_check_assoc().
	 */
	drv_sw_scan_start(local, sdata, local->scan_addr);

	local->leave_oper_channel_time = jiffies;
	local->next_scan_state = SCAN_DECISION;
	local->scan_channel_idx = 0;

	ieee80211_offchannel_stop_vifs(local);

	/* ensure nullfunc is transmitted before leaving operating channel */
	ieee80211_flush_queues(local, NULL, false);

	ieee80211_configure_filter(local);

	/* We need to set power level at maximum rate for scanning. */
	ieee80211_hw_config(local, 0);

	ieee80211_queue_delayed_work(&local->hw,
				     &local->scan_work, 0);

	return 0;
}

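/*
 * Check whether it is safe to go off-channel: refuse if radar detection
 * is required and either regulatory pre-CAC is not allowed or a CAC is
 * currently running on some interface.
 */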
static bool __ieee80211_can_leave_ch(struct ieee80211_sub_if_data *sdata)
{
	struct ieee80211_local *local = sdata->local;
	struct ieee80211_sub_if_data *sdata_iter;

	if (!ieee80211_is_radar_required(local))
		return true;

	if (!regulatory_pre_cac_allowed(local->hw.wiphy))
		return false;

	mutex_lock(&local->iflist_mtx);
	list_for_each_entry(sdata_iter, &local->interfaces, list) {
		if (sdata_iter->wdev.cac_started) {
			mutex_unlock(&local->iflist_mtx);
			return false;
		}
	}
	mutex_unlock(&local->iflist_mtx);

	return true;
}

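/*
 * Check whether a scan may start now: we must be able to leave the
 * channel, have no pending remain-on-channel items and no ongoing
 * connection poll on a station interface.
 */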
static bool ieee80211_can_scan(struct ieee80211_local *local,
			       struct ieee80211_sub_if_data *sdata)
{
	if (!__ieee80211_can_leave_ch(sdata))
		return false;

	if (!list_empty(&local->roc_list))
		return false;

	if (sdata->vif.type == NL80211_IFTYPE_STATION &&
	    sdata->u.mgd.flags & IEEE80211_STA_CONNECTION_POLL)
		return false;

	return true;
}

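/*
 * Start a previously deferred scan request once scanning becomes
 * possible again.
 */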
void ieee80211_run_deferred_scan(struct ieee80211_local *local)
{
	lockdep_assert_held(&local->mtx);

	if (!local->scan_req || local->scanning)
		return;

	if (!ieee80211_can_scan(local,
				rcu_dereference_protected(
					local->scan_sdata,
					lockdep_is_held(&local->mtx))))
		return;

	ieee80211_queue_delayed_work(&local->hw, &local->scan_work,
				     round_jiffies_relative(0));
}

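/*
 * Build and transmit a single probe request for the scan, optionally
 * randomising the sequence number when requested by the scan flags.
 */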
static void ieee80211_send_scan_probe_req(struct ieee80211_sub_if_data *sdata,
					  const u8 *src, const u8 *dst,
					  const u8 *ssid, size_t ssid_len,
					  const u8 *ie, size_t ie_len,
					  u32 ratemask, u32 flags, u32 tx_flags,
					  struct ieee80211_channel *channel)
{
	struct sk_buff *skb;

	skb = ieee80211_build_probe_req(sdata, src, dst, ratemask, channel,
					ssid, ssid_len,
					ie, ie_len, flags);

	if (skb) {
		if (flags & IEEE80211_PROBE_FLAG_RANDOM_SN) {
			struct ieee80211_hdr *hdr = (void *)skb->data;
			struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
			u16 sn = get_random_u32();

			info->control.flags |= IEEE80211_TX_CTRL_NO_SEQNO;
			hdr->seq_ctrl =
				cpu_to_le16(IEEE80211_SN_TO_SEQ(sn));
		}
		IEEE80211_SKB_CB(skb)->flags |= tx_flags;
		ieee80211_tx_skb_tid_band(sdata, skb, 7, channel->band);
	}
}

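/*
 * Send probe requests for all requested SSIDs on the current channel and
 * schedule the next state decision after the active-scan dwell time.
 */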
static void ieee80211_scan_state_send_probe(struct ieee80211_local *local,
					    unsigned long *next_delay)
{
	int i;
	struct ieee80211_sub_if_data *sdata;
	struct cfg80211_scan_request *scan_req;
	enum nl80211_band band = local->hw.conf.chandef.chan->band;
	u32 flags = 0, tx_flags;

	scan_req = rcu_dereference_protected(local->scan_req,
					     lockdep_is_held(&local->mtx));

	tx_flags = IEEE80211_TX_INTFL_OFFCHAN_TX_OK;
	if (scan_req->no_cck)
		tx_flags |= IEEE80211_TX_CTL_NO_CCK_RATE;
	if (scan_req->flags & NL80211_SCAN_FLAG_MIN_PREQ_CONTENT)
		flags |= IEEE80211_PROBE_FLAG_MIN_CONTENT;
	if (scan_req->flags & NL80211_SCAN_FLAG_RANDOM_SN)
		flags |= IEEE80211_PROBE_FLAG_RANDOM_SN;

	sdata = rcu_dereference_protected(local->scan_sdata,
					  lockdep_is_held(&local->mtx));

	for (i = 0; i < scan_req->n_ssids; i++)
		ieee80211_send_scan_probe_req(
			sdata, local->scan_addr, scan_req->bssid,
			scan_req->ssids[i].ssid, scan_req->ssids[i].ssid_len,
			scan_req->ie, scan_req->ie_len,
			scan_req->rates[band], flags,
			tx_flags, local->hw.conf.chandef.chan);

	/*
	 * After sending probe requests, wait for probe responses
	 * on the channel.
	 */
	*next_delay = IEEE80211_CHANNEL_TIME;
	local->next_scan_state = SCAN_DECISION;
}

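/*
 * Start a scan request: either defer it, hand it to the driver as a HW
 * scan, run an on-channel-only scan, or fall back to the software scan
 * state machine.
 */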
static int __ieee80211_start_scan(struct ieee80211_sub_if_data *sdata,
				  struct cfg80211_scan_request *req)
{
	struct ieee80211_local *local = sdata->local;
	bool hw_scan = local->ops->hw_scan;
	int rc;

	lockdep_assert_held(&local->mtx);

	if (local->scan_req)
		return -EBUSY;

	if (!__ieee80211_can_leave_ch(sdata))
		return -EBUSY;

	if (!ieee80211_can_scan(local, sdata)) {
		/* wait for the work to finish/time out */
		rcu_assign_pointer(local->scan_req, req);
		rcu_assign_pointer(local->scan_sdata, sdata);
		return 0;
	}

 again:
	if (hw_scan) {
		u8 *ies;

		local->hw_scan_ies_bufsize = local->scan_ies_len + req->ie_len;

		if (ieee80211_hw_check(&local->hw, SINGLE_SCAN_ON_ALL_BANDS)) {
			int i, n_bands = 0;
			u8 bands_counted = 0;

			for (i = 0; i < req->n_channels; i++) {
				if (bands_counted & BIT(req->channels[i]->band))
					continue;
				bands_counted |= BIT(req->channels[i]->band);
				n_bands++;
			}

			local->hw_scan_ies_bufsize *= n_bands;
		}

		local->hw_scan_req = kmalloc(
				sizeof(*local->hw_scan_req) +
				req->n_channels * sizeof(req->channels[0]) +
				local->hw_scan_ies_bufsize, GFP_KERNEL);
		if (!local->hw_scan_req)
			return -ENOMEM;

		local->hw_scan_req->req.ssids = req->ssids;
		local->hw_scan_req->req.n_ssids = req->n_ssids;
		ies = (u8 *)local->hw_scan_req +
			sizeof(*local->hw_scan_req) +
			req->n_channels * sizeof(req->channels[0]);
		local->hw_scan_req->req.ie = ies;
		local->hw_scan_req->req.flags = req->flags;
		eth_broadcast_addr(local->hw_scan_req->req.bssid);
		local->hw_scan_req->req.duration = req->duration;
		local->hw_scan_req->req.duration_mandatory =
			req->duration_mandatory;

		local->hw_scan_band = 0;
		local->hw_scan_req->req.n_6ghz_params = req->n_6ghz_params;
		local->hw_scan_req->req.scan_6ghz_params =
			req->scan_6ghz_params;
		local->hw_scan_req->req.scan_6ghz = req->scan_6ghz;

		/*
		 * After allocating local->hw_scan_req, we must
		 * go through until ieee80211_prep_hw_scan(), so
		 * anything that might be changed here and leave
		 * this function early must not go after this
		 * allocation.
		 */
	}

	rcu_assign_pointer(local->scan_req, req);
	rcu_assign_pointer(local->scan_sdata, sdata);

	if (req->flags & NL80211_SCAN_FLAG_RANDOM_ADDR)
		get_random_mask_addr(local->scan_addr,
				     req->mac_addr,
				     req->mac_addr_mask);
	else
		memcpy(local->scan_addr, sdata->vif.addr, ETH_ALEN);

	if (hw_scan) {
		__set_bit(SCAN_HW_SCANNING, &local->scanning);
	} else if ((req->n_channels == 1) &&
		   (req->channels[0] == local->_oper_chandef.chan)) {
		/*
		 * If we are scanning only on the operating channel
		 * then we do not need to stop normal activities
		 */
		unsigned long next_delay;

		__set_bit(SCAN_ONCHANNEL_SCANNING, &local->scanning);

		ieee80211_recalc_idle(local);

		/* Notify driver scan is starting, keep order of operations
		 * same as normal software scan, in case that matters. */
		drv_sw_scan_start(local, sdata, local->scan_addr);

		ieee80211_configure_filter(local); /* accept probe-responses */

		/* We need to ensure power level is at max for scanning. */
		ieee80211_hw_config(local, 0);

		if ((req->channels[0]->flags & (IEEE80211_CHAN_NO_IR |
						IEEE80211_CHAN_RADAR)) ||
		    !req->n_ssids) {
			next_delay = IEEE80211_PASSIVE_CHANNEL_TIME;
			if (req->n_ssids)
				set_bit(SCAN_BEACON_WAIT, &local->scanning);
		} else {
			ieee80211_scan_state_send_probe(local, &next_delay);
			next_delay = IEEE80211_CHANNEL_TIME;
		}

		/* Now, just wait a bit and we are all done! */
		ieee80211_queue_delayed_work(&local->hw, &local->scan_work,
					     next_delay);
		return 0;
	} else {
		/* Do normal software scan */
		__set_bit(SCAN_SW_SCANNING, &local->scanning);
	}

	ieee80211_recalc_idle(local);

	if (hw_scan) {
		WARN_ON(!ieee80211_prep_hw_scan(sdata));
		rc = drv_hw_scan(local, sdata, local->hw_scan_req);
	} else {
		rc = ieee80211_start_sw_scan(local, sdata);
	}

	if (rc) {
		kfree(local->hw_scan_req);
		local->hw_scan_req = NULL;
		local->scanning = 0;

		ieee80211_recalc_idle(local);

		local->scan_req = NULL;
		RCU_INIT_POINTER(local->scan_sdata, NULL);
	}

	if (hw_scan && rc == 1) {
		/*
		 * we can't fall back to software for P2P-GO
		 * as it must update NoA etc.
		 */
		if (ieee80211_vif_type_p2p(&sdata->vif) ==
				NL80211_IFTYPE_P2P_GO)
			return -EOPNOTSUPP;
		hw_scan = false;
		goto again;
	}

	return rc;
}

static unsigned long
ieee80211_scan_get_channel_time(struct ieee80211_channel *chan)
{
	/*
	 * TODO: channel switching also consumes quite some time,
	 * add that delay as well to get a better estimation
	 */
	if (chan->flags & (IEEE80211_CHAN_NO_IR | IEEE80211_CHAN_RADAR))
		return IEEE80211_PASSIVE_CHANNEL_TIME;
	return IEEE80211_PROBE_DELAY + IEEE80211_CHANNEL_TIME;
}

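/*
 * Decide what the software scan state machine should do next, balancing
 * scan progress against latency and pending traffic on associated
 * interfaces.
 */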
static void ieee80211_scan_state_decision(struct ieee80211_local *local,
					  unsigned long *next_delay)
{
	bool associated = false;
	bool tx_empty = true;
	bool bad_latency;
	struct ieee80211_sub_if_data *sdata;
	struct ieee80211_channel *next_chan;
	enum mac80211_scan_state next_scan_state;
	struct cfg80211_scan_request *scan_req;

	/*
	 * check if at least one STA interface is associated,
	 * check if at least one STA interface has pending tx frames
	 * and grab the lowest used beacon interval
	 */
	mutex_lock(&local->iflist_mtx);
	list_for_each_entry(sdata, &local->interfaces, list) {
		if (!ieee80211_sdata_running(sdata))
			continue;

		if (sdata->vif.type == NL80211_IFTYPE_STATION) {
			if (sdata->u.mgd.associated) {
				associated = true;

				if (!qdisc_all_tx_empty(sdata->dev)) {
					tx_empty = false;
					break;
				}
			}
		}
	}
	mutex_unlock(&local->iflist_mtx);

	scan_req = rcu_dereference_protected(local->scan_req,
					     lockdep_is_held(&local->mtx));

	next_chan = scan_req->channels[local->scan_channel_idx];

	/*
	 * we're currently scanning a different channel, let's
	 * see if we can scan another channel without interfering
	 * with the current traffic situation.
	 *
	 * Keep good latency, do not stay off-channel more than 125 ms.
	 */

	bad_latency = time_after(jiffies +
|
mac80211: improve latency and throughput while software scanning
Patch vastly improve latency while scanning. Slight throughput
improvements were observed as well. Is intended for improve performance
of voice and video applications, when scan is periodically requested by
user space (i.e. default NetworkManager behaviour).
Patch remove latency requirement based on PM_QOS_NETWORK_LATENCY,
this value is 2000 seconds by default (i.e. approximately 0.5 hour !?!).
Also remove listen interval requirement, which based on beaconing and
depending on BSS parameters. It can make we stay off-channel for a
second or more.
Instead try to offer the best latency that we could, i.e. be off-channel
no longer than PASSIVE channel scan time: 125 ms. That mean we will
scan two ACTIVE channels and go back to on-channel, and one PASSIVE
channel, and go back to on-channel.
Patch also decrease PASSIVE channel scan time to about 110 ms.
As drawback patch increase overall scan time. On my tests, when scanning
both 2GHz and 5GHz bands, scanning time increase from 5 seconds up to 10
seconds. Since that increase happen only when we are associated, I think
it can be acceptable. If eventually better scan time is needed for
situations when we lose signal and quickly need to decide to which AP
roam, additional scan flag or parameter can be introduced.
I tested patch by doing:
while true; do iw dev wlan0 scan; sleep 3; done > /dev/null
and
ping -i0.2 -c 1000 HOST
on remote and local machine, results are as below:
* Ping from local periodically scanning machine to AP:
Unpatched: rtt min/avg/max/mdev = 0.928/24.946/182.135/36.873 ms
Patched: rtt min/avg/max/mdev = 0.928/19.678/150.845/33.130 ms
* Ping from remote machine to periodically scanning machine:
Unpatched: rtt min/avg/max/mdev = 1.637/120.683/709.139/164.337 ms
Patched: rtt min/avg/max/mdev = 1.807/26.893/201.435/40.284 ms
Throughput measured by scp show following results.
* Upload to periodically scanning machine:
Unpatched: 3.9MB/s 03:15
Patched: 4.3MB/s 02:58
* Download from periodically scanning machine:
Unpatched: 5.5MB/s 02:17
Patched: 6.2MB/s 02:02
Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
2013-01-23 19:32:45 +08:00
|
|
|
ieee80211_scan_get_channel_time(next_chan),
|
|
|
|
local->leave_oper_channel_time + HZ / 8);
|
2009-07-23 19:18:01 +08:00
|
|
|
|
2012-10-12 12:03:35 +08:00
|
|
|
if (associated && !tx_empty) {
|
2014-11-19 18:55:49 +08:00
|
|
|
if (scan_req->flags & NL80211_SCAN_FLAG_LOW_PRIORITY)
|
2012-10-12 12:03:35 +08:00
|
|
|
next_scan_state = SCAN_ABORT;
|
|
|
|
else
|
|
|
|
next_scan_state = SCAN_SUSPEND;
|
mac80211: improve latency and throughput while software scanning
Patch vastly improve latency while scanning. Slight throughput
improvements were observed as well. Is intended for improve performance
of voice and video applications, when scan is periodically requested by
user space (i.e. default NetworkManager behaviour).
Patch remove latency requirement based on PM_QOS_NETWORK_LATENCY,
this value is 2000 seconds by default (i.e. approximately 0.5 hour !?!).
Also remove listen interval requirement, which based on beaconing and
depending on BSS parameters. It can make we stay off-channel for a
second or more.
Instead try to offer the best latency that we could, i.e. be off-channel
no longer than PASSIVE channel scan time: 125 ms. That mean we will
scan two ACTIVE channels and go back to on-channel, and one PASSIVE
channel, and go back to on-channel.
Patch also decrease PASSIVE channel scan time to about 110 ms.
As drawback patch increase overall scan time. On my tests, when scanning
both 2GHz and 5GHz bands, scanning time increase from 5 seconds up to 10
seconds. Since that increase happen only when we are associated, I think
it can be acceptable. If eventually better scan time is needed for
situations when we lose signal and quickly need to decide to which AP
roam, additional scan flag or parameter can be introduced.
I tested patch by doing:
while true; do iw dev wlan0 scan; sleep 3; done > /dev/null
and
ping -i0.2 -c 1000 HOST
on remote and local machine, results are as below:
* Ping from local periodically scanning machine to AP:
Unpatched: rtt min/avg/max/mdev = 0.928/24.946/182.135/36.873 ms
Patched: rtt min/avg/max/mdev = 0.928/19.678/150.845/33.130 ms
* Ping from remote machine to periodically scanning machine:
Unpatched: rtt min/avg/max/mdev = 1.637/120.683/709.139/164.337 ms
Patched: rtt min/avg/max/mdev = 1.807/26.893/201.435/40.284 ms
Throughput measured by scp show following results.
* Upload to periodically scanning machine:
Unpatched: 3.9MB/s 03:15
Patched: 4.3MB/s 02:58
* Download from periodically scanning machine:
Unpatched: 5.5MB/s 02:17
Patched: 6.2MB/s 02:02
Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
2013-01-23 19:32:45 +08:00
|
|
|
} else if (associated && bad_latency) {
|
2012-10-12 12:03:35 +08:00
|
|
|
next_scan_state = SCAN_SUSPEND;
|
|
|
|
} else {
|
|
|
|
next_scan_state = SCAN_SET_CHANNEL;
|
|
|
|
}
|
|
|
|
|
|
|
|
local->next_scan_state = next_scan_state;
|
2009-07-23 19:18:01 +08:00
|
|
|
|
2011-11-08 23:21:21 +08:00
|
|
|
*next_delay = 0;
|
2009-07-23 19:18:01 +08:00
|
|
|
}
|
|
|
|
|
2009-07-23 18:13:56 +08:00
|
|
|
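/*
 * Tune to the next channel from the scan request and decide how long to
 * dwell there: channels where transmitting is not allowed (NO_IR/RADAR)
 * and requests without SSIDs are scanned passively for
 * IEEE80211_PASSIVE_CHANNEL_TIME, otherwise wait IEEE80211_PROBE_DELAY
 * before sending probe requests.
 */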
static void ieee80211_scan_state_set_channel(struct ieee80211_local *local,
                                             unsigned long *next_delay)
{
        int skip;
        struct ieee80211_channel *chan;
        enum nl80211_bss_scan_width oper_scan_width;
        struct cfg80211_scan_request *scan_req;

        scan_req = rcu_dereference_protected(local->scan_req,
                                             lockdep_is_held(&local->mtx));

        skip = 0;
        chan = scan_req->channels[local->scan_channel_idx];

        local->scan_chandef.chan = chan;
        local->scan_chandef.center_freq1 = chan->center_freq;
        local->scan_chandef.freq1_offset = chan->freq_offset;
        local->scan_chandef.center_freq2 = 0;

        /* For scanning on the S1G band, ignore scan_width (which is constant
         * across all channels) for now since channel width is specific to each
         * channel. Detect the required channel width here and likely revisit
         * later. Maybe scan_width could be used to build the channel scan list?
         */
        if (chan->band == NL80211_BAND_S1GHZ) {
                local->scan_chandef.width = ieee80211_s1g_channel_width(chan);
                goto set_channel;
        }

        switch (scan_req->scan_width) {
        case NL80211_BSS_CHAN_WIDTH_5:
                local->scan_chandef.width = NL80211_CHAN_WIDTH_5;
                break;
        case NL80211_BSS_CHAN_WIDTH_10:
                local->scan_chandef.width = NL80211_CHAN_WIDTH_10;
                break;
        default:
        case NL80211_BSS_CHAN_WIDTH_20:
                /* If scanning on oper channel, use whatever channel-type
                 * is currently in use.
                 */
                oper_scan_width = cfg80211_chandef_to_scan_width(
                                        &local->_oper_chandef);
                if (chan == local->_oper_chandef.chan &&
                    oper_scan_width == scan_req->scan_width)
                        local->scan_chandef = local->_oper_chandef;
                else
                        local->scan_chandef.width = NL80211_CHAN_WIDTH_20_NOHT;
                break;
        case NL80211_BSS_CHAN_WIDTH_1:
        case NL80211_BSS_CHAN_WIDTH_2:
                /* shouldn't get here, S1G handled above */
                WARN_ON(1);
                break;
        }

set_channel:
        if (ieee80211_hw_config(local, IEEE80211_CONF_CHANGE_CHANNEL))
                skip = 1;

        /* advance state machine to next channel/band */
        local->scan_channel_idx++;

        if (skip) {
                /* if we skip this channel return to the decision state */
                local->next_scan_state = SCAN_DECISION;
                return;
        }

        /*
         * Probe delay is used to update the NAV, cf. 11.1.3.2.2
         * (which unfortunately doesn't say _why_ step a) is done,
         * but it waits for the probe delay or until a frame is
         * received - and the received frame would update the NAV).
         * For now, we do not support waiting until a frame is
         * received.
         *
         * In any case, it is not necessary for a passive scan.
         */
        if ((chan->flags & (IEEE80211_CHAN_NO_IR | IEEE80211_CHAN_RADAR)) ||
            !scan_req->n_ssids) {
                *next_delay = IEEE80211_PASSIVE_CHANNEL_TIME;
                local->next_scan_state = SCAN_DECISION;
                if (scan_req->n_ssids)
                        set_bit(SCAN_BEACON_WAIT, &local->scanning);
                return;
        }

        /* active scan, send probes */
        *next_delay = IEEE80211_PROBE_DELAY;
        local->next_scan_state = SCAN_SEND_PROBE;
}

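/*
 * Temporarily suspend the scan in favour of traffic on the operating
 * channel: tune back, leave the off-channel/PS state and give pending
 * frames roughly 200 ms (HZ / 5) before the scan resumes.
 */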
static void ieee80211_scan_state_suspend(struct ieee80211_local *local,
                                         unsigned long *next_delay)
{
        /* switch back to the operating channel */
        local->scan_chandef.chan = NULL;
        ieee80211_hw_config(local, IEEE80211_CONF_CHANGE_CHANNEL);

        /* disable PS */
        ieee80211_offchannel_return(local);

        *next_delay = HZ / 5;
        /* afterwards, resume scan & go to next channel */
        local->next_scan_state = SCAN_RESUME;
}

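/*
 * Resume scanning after a suspend: move the interfaces off-channel again,
 * flush pending frames if the driver supports it, note when we left the
 * operating channel and continue with the next channel.
 */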
static void ieee80211_scan_state_resume(struct ieee80211_local *local,
                                        unsigned long *next_delay)
{
        ieee80211_offchannel_stop_vifs(local);

        if (local->ops->flush) {
                ieee80211_flush_queues(local, NULL, false);
                *next_delay = 0;
        } else
                *next_delay = HZ / 10;

        /* remember when we left the operating channel */
        local->leave_oper_channel_time = jiffies;

        /* advance to the next channel to be scanned */
        local->next_scan_state = SCAN_SET_CHANNEL;
}

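/*
 * Software scan worker: runs the SCAN_* state machine until one of the
 * states requests a delay, then re-arms itself as delayed work for that
 * interval (or completes/aborts the scan).
 */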
void ieee80211_scan_work(struct work_struct *work)
{
        struct ieee80211_local *local =
                container_of(work, struct ieee80211_local, scan_work.work);
        struct ieee80211_sub_if_data *sdata;
        struct cfg80211_scan_request *scan_req;
        unsigned long next_delay = 0;
        bool aborted;

        mutex_lock(&local->mtx);

        if (!ieee80211_can_run_worker(local)) {
                aborted = true;
                goto out_complete;
        }

        sdata = rcu_dereference_protected(local->scan_sdata,
                                          lockdep_is_held(&local->mtx));
        scan_req = rcu_dereference_protected(local->scan_req,
                                             lockdep_is_held(&local->mtx));

        /* When scanning on-channel, the first-callback means completed. */
        if (test_bit(SCAN_ONCHANNEL_SCANNING, &local->scanning)) {
                aborted = test_and_clear_bit(SCAN_ABORTED, &local->scanning);
                goto out_complete;
        }

        if (test_and_clear_bit(SCAN_COMPLETED, &local->scanning)) {
                aborted = test_and_clear_bit(SCAN_ABORTED, &local->scanning);
                goto out_complete;
        }

        if (!sdata || !scan_req)
                goto out;

        if (!local->scanning) {
                int rc;

                RCU_INIT_POINTER(local->scan_req, NULL);
                RCU_INIT_POINTER(local->scan_sdata, NULL);

                rc = __ieee80211_start_scan(sdata, scan_req);
                if (rc) {
                        /* need to complete scan in cfg80211 */
                        rcu_assign_pointer(local->scan_req, scan_req);
                        aborted = true;
                        goto out_complete;
                } else
                        goto out;
        }

        clear_bit(SCAN_BEACON_WAIT, &local->scanning);

        /*
         * as long as no delay is required advance immediately
         * without scheduling a new work
         */
        do {
                if (!ieee80211_sdata_running(sdata)) {
                        aborted = true;
                        goto out_complete;
                }

                if (test_and_clear_bit(SCAN_BEACON_DONE, &local->scanning) &&
                    local->next_scan_state == SCAN_DECISION)
                        local->next_scan_state = SCAN_SEND_PROBE;

                switch (local->next_scan_state) {
                case SCAN_DECISION:
                        /* if no more bands/channels left, complete scan */
                        if (local->scan_channel_idx >= scan_req->n_channels) {
                                aborted = false;
                                goto out_complete;
                        }
                        ieee80211_scan_state_decision(local, &next_delay);
                        break;
                case SCAN_SET_CHANNEL:
                        ieee80211_scan_state_set_channel(local, &next_delay);
                        break;
                case SCAN_SEND_PROBE:
                        ieee80211_scan_state_send_probe(local, &next_delay);
                        break;
                case SCAN_SUSPEND:
                        ieee80211_scan_state_suspend(local, &next_delay);
                        break;
                case SCAN_RESUME:
                        ieee80211_scan_state_resume(local, &next_delay);
                        break;
                case SCAN_ABORT:
                        aborted = true;
                        goto out_complete;
                }
        } while (next_delay == 0);

        ieee80211_queue_delayed_work(&local->hw, &local->scan_work, next_delay);
        goto out;

out_complete:
        __ieee80211_scan_completed(&local->hw, aborted);
out:
        mutex_unlock(&local->mtx);
}

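/* cfg80211 entry point for a regular scan request, serialized by local->mtx */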
int ieee80211_request_scan(struct ieee80211_sub_if_data *sdata,
                           struct cfg80211_scan_request *req)
{
        int res;

        mutex_lock(&sdata->local->mtx);
        res = __ieee80211_start_scan(sdata, req);
        mutex_unlock(&sdata->local->mtx);

        return res;
}

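/*
 * Build local->int_scan_req for an IBSS join: use the given channel list
 * or, if none was given, every usable channel outside the 6 GHz band,
 * skipping NO_IR and disabled channels, with a single SSID.
 */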
int ieee80211_request_ibss_scan(struct ieee80211_sub_if_data *sdata,
                                const u8 *ssid, u8 ssid_len,
                                struct ieee80211_channel **channels,
                                unsigned int n_channels,
                                enum nl80211_bss_scan_width scan_width)
{
        struct ieee80211_local *local = sdata->local;
        int ret = -EBUSY, i, n_ch = 0;
        enum nl80211_band band;

        mutex_lock(&local->mtx);

        /* busy scanning */
        if (local->scan_req)
                goto unlock;

        /* fill internal scan request */
        if (!channels) {
                int max_n;

                for (band = 0; band < NUM_NL80211_BANDS; band++) {
                        if (!local->hw.wiphy->bands[band] ||
                            band == NL80211_BAND_6GHZ)
                                continue;

                        max_n = local->hw.wiphy->bands[band]->n_channels;
                        for (i = 0; i < max_n; i++) {
                                struct ieee80211_channel *tmp_ch =
                                    &local->hw.wiphy->bands[band]->channels[i];

                                if (tmp_ch->flags & (IEEE80211_CHAN_NO_IR |
                                                     IEEE80211_CHAN_DISABLED))
                                        continue;

                                local->int_scan_req->channels[n_ch] = tmp_ch;
                                n_ch++;
                        }
                }

                if (WARN_ON_ONCE(n_ch == 0))
                        goto unlock;

                local->int_scan_req->n_channels = n_ch;
        } else {
                for (i = 0; i < n_channels; i++) {
                        if (channels[i]->flags & (IEEE80211_CHAN_NO_IR |
                                                  IEEE80211_CHAN_DISABLED))
                                continue;

                        local->int_scan_req->channels[n_ch] = channels[i];
                        n_ch++;
                }

                if (WARN_ON_ONCE(n_ch == 0))
                        goto unlock;

                local->int_scan_req->n_channels = n_ch;
        }

        local->int_scan_req->ssids = &local->scan_ssid;
        local->int_scan_req->n_ssids = 1;
        local->int_scan_req->scan_width = scan_width;
        memcpy(local->int_scan_req->ssids[0].ssid, ssid, IEEE80211_MAX_SSID_LEN);
        local->int_scan_req->ssids[0].ssid_len = ssid_len;

        ret = __ieee80211_start_scan(sdata, sdata->local->int_scan_req);
 unlock:
        mutex_unlock(&local->mtx);
        return ret;
}

/*
 * Only call this function when a scan can't be queued -- under RTNL.
 */
void ieee80211_scan_cancel(struct ieee80211_local *local)
{
        /*
         * We are cancelling a software scan, or a deferred scan that was
         * not yet really started (see __ieee80211_start_scan).
         *
         * Regarding hardware scan:
         * - we can not call __ieee80211_scan_completed() here, since when
         *   the SCAN_HW_SCANNING bit is set that function changes
         *   local->hw_scan_req to operate on the 5 GHz band, which races
         *   with a driver that may still be using local->hw_scan_req
         *
         * - we can not cancel scan_work, since the driver can schedule it
         *   via ieee80211_scan_completed(..., true) to finish the scan
         *
         * Hence we only call the cancel_hw_scan() callback, but the low-level
         * driver is still responsible for calling ieee80211_scan_completed()
         * after the scan was completed/aborted.
         */

        mutex_lock(&local->mtx);
        if (!local->scan_req)
                goto out;

        /*
         * We have a scan running and the driver already reported completion,
         * but the worker hasn't run yet or is stuck on the mutex - mark it as
         * cancelled.
         */
        if (test_bit(SCAN_HW_SCANNING, &local->scanning) &&
            test_bit(SCAN_COMPLETED, &local->scanning)) {
                set_bit(SCAN_HW_CANCELLED, &local->scanning);
                goto out;
        }

        if (test_bit(SCAN_HW_SCANNING, &local->scanning)) {
                /*
                 * Make sure that __ieee80211_scan_completed doesn't trigger a
                 * scan on another band.
                 */
                set_bit(SCAN_HW_CANCELLED, &local->scanning);
                if (local->ops->cancel_hw_scan)
                        drv_cancel_hw_scan(local,
                                rcu_dereference_protected(local->scan_sdata,
                                        lockdep_is_held(&local->mtx)));
                goto out;
        }

        /*
         * If the work is currently running, it must be blocked on
         * the mutex, but we'll set scan_sdata = NULL and it'll
         * simply exit once it acquires the mutex.
         */
        cancel_delayed_work(&local->scan_work);
        /* and clean up */
        memset(&local->scan_info, 0, sizeof(local->scan_info));
        __ieee80211_scan_completed(&local->hw, true);
out:
        mutex_unlock(&local->mtx);
}

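/*
 * Start a scheduled scan in the driver: build the per-band probe request
 * IEs, hand the request to the driver and, on success, publish the
 * sched_scan pointers. Caller must hold local->mtx.
 */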
int __ieee80211_request_sched_scan_start(struct ieee80211_sub_if_data *sdata,
                                         struct cfg80211_sched_scan_request *req)
{
        struct ieee80211_local *local = sdata->local;
        struct ieee80211_scan_ies sched_scan_ies = {};
        struct cfg80211_chan_def chandef;
        int ret, i, iebufsz, num_bands = 0;
        u32 rate_masks[NUM_NL80211_BANDS] = {};
        u8 bands_used = 0;
        u8 *ie;
        u32 flags = 0;

        iebufsz = local->scan_ies_len + req->ie_len;

        lockdep_assert_held(&local->mtx);

        if (!local->ops->sched_scan_start)
                return -ENOTSUPP;

        for (i = 0; i < NUM_NL80211_BANDS; i++) {
                if (local->hw.wiphy->bands[i]) {
                        bands_used |= BIT(i);
                        rate_masks[i] = (u32) -1;
                        num_bands++;
                }
        }

        if (req->flags & NL80211_SCAN_FLAG_MIN_PREQ_CONTENT)
                flags |= IEEE80211_PROBE_FLAG_MIN_CONTENT;

        ie = kcalloc(iebufsz, num_bands, GFP_KERNEL);
        if (!ie) {
                ret = -ENOMEM;
                goto out;
        }

        ieee80211_prepare_scan_chandef(&chandef, req->scan_width);

        ieee80211_build_preq_ies(sdata, ie, num_bands * iebufsz,
                                 &sched_scan_ies, req->ie,
                                 req->ie_len, bands_used, rate_masks, &chandef,
                                 flags);

        ret = drv_sched_scan_start(local, sdata, req, &sched_scan_ies);
        if (ret == 0) {
                rcu_assign_pointer(local->sched_scan_sdata, sdata);
                rcu_assign_pointer(local->sched_scan_req, req);
        }

        kfree(ie);

out:
        if (ret) {
                /* Clean in case of failure after HW restart or upon resume. */
                RCU_INIT_POINTER(local->sched_scan_sdata, NULL);
                RCU_INIT_POINTER(local->sched_scan_req, NULL);
        }

        return ret;
}

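/* locked wrapper: refuse to start while another scheduled scan is active */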
int ieee80211_request_sched_scan_start(struct ieee80211_sub_if_data *sdata,
                                       struct cfg80211_sched_scan_request *req)
{
        struct ieee80211_local *local = sdata->local;
        int ret;

        mutex_lock(&local->mtx);

        if (rcu_access_pointer(local->sched_scan_sdata)) {
                mutex_unlock(&local->mtx);
                return -EBUSY;
        }

        ret = __ieee80211_request_sched_scan_start(sdata, req);

        mutex_unlock(&local->mtx);
        return ret;
}

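/*
 * Stop a scheduled scan: drop the stored request first so it will not be
 * restarted later, then ask the driver to stop.
 */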
int ieee80211_request_sched_scan_stop(struct ieee80211_local *local)
{
        struct ieee80211_sub_if_data *sched_scan_sdata;
        int ret = -ENOENT;

        mutex_lock(&local->mtx);

        if (!local->ops->sched_scan_stop) {
                ret = -ENOTSUPP;
                goto out;
        }

        /* We don't want to restart sched scan anymore. */
        RCU_INIT_POINTER(local->sched_scan_req, NULL);

        sched_scan_sdata = rcu_dereference_protected(local->sched_scan_sdata,
                                        lockdep_is_held(&local->mtx));
        if (sched_scan_sdata) {
                ret = drv_sched_scan_stop(local, sched_scan_sdata);
                if (!ret)
                        RCU_INIT_POINTER(local->sched_scan_sdata, NULL);
        }
out:
        mutex_unlock(&local->mtx);

        return ret;
}

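/* driver notification: scheduled scan results are available */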
void ieee80211_sched_scan_results(struct ieee80211_hw *hw)
{
        struct ieee80211_local *local = hw_to_local(hw);

        trace_api_sched_scan_results(local);

        cfg80211_sched_scan_results(hw->wiphy, 0);
}
EXPORT_SYMBOL(ieee80211_sched_scan_results);

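/*
 * Tear down the local scheduled-scan state and tell cfg80211 that the
 * scheduled scan has stopped.
 */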
void ieee80211_sched_scan_end(struct ieee80211_local *local)
{
        mutex_lock(&local->mtx);

        if (!rcu_access_pointer(local->sched_scan_sdata)) {
                mutex_unlock(&local->mtx);
                return;
        }

        RCU_INIT_POINTER(local->sched_scan_sdata, NULL);

        /* If sched scan was aborted by the driver. */
        RCU_INIT_POINTER(local->sched_scan_req, NULL);

        mutex_unlock(&local->mtx);

        cfg80211_sched_scan_stopped(local->hw.wiphy, 0);
}

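/* work callback that completes a driver-stopped scheduled scan in process context */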
void ieee80211_sched_scan_stopped_work(struct work_struct *work)
{
        struct ieee80211_local *local =
                container_of(work, struct ieee80211_local,
                             sched_scan_stopped_work);

        ieee80211_sched_scan_end(local);
}

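/*
 * Driver notification that the scheduled scan was stopped; the actual
 * teardown is deferred to sched_scan_stopped_work, presumably so this
 * can be called from contexts where local->mtx cannot be taken directly.
 */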
void ieee80211_sched_scan_stopped(struct ieee80211_hw *hw)
{
        struct ieee80211_local *local = hw_to_local(hw);

        trace_api_sched_scan_stopped(local);

        /*
         * this shouldn't really happen, so for simplicity
         * simply ignore it, and let mac80211 reconfigure
         * the sched scan later on.
         */
        if (local->in_reconfig)
                return;

        schedule_work(&local->sched_scan_stopped_work);
}
EXPORT_SYMBOL(ieee80211_sched_scan_stopped);