Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/linville/wireless-next-2.6 into for-davem
commit 204d1641d2
@@ -0,0 +1,128 @@
Linux NFC subsystem
===================

The Near Field Communication (NFC) subsystem is required to standardize the
NFC device drivers development and to create a unified userspace interface.

This document covers the architecture overview, the device driver interface
description and the userspace interface description.

Architecture overview
---------------------

The NFC subsystem is responsible for:
      - NFC adapters management;
      - Polling for targets;
      - Low-level data exchange;

The subsystem is divided into several parts. The 'core' is responsible for
providing the device driver interface. On the other side, it is also
responsible for providing an interface to control operations and low-level
data exchange.

The control operations are available to userspace via generic netlink.

The low-level data exchange interface is provided by the new socket family
PF_NFC. The NFC_SOCKPROTO_RAW performs raw communication with NFC targets.


        +--------------------------------------+
        |              USER SPACE              |
        +--------------------------------------+
            ^                              ^
            | low-level                    | control
            | data exchange                | operations
            |                              |
            |                              v
            |                        +-----------+
            | AF_NFC                 |  netlink  |
            | socket                 +-----------+
            | raw                          ^
            |                              |
            v                              v
       +---------+                   +-----------+
       | rawsock | <---------------> |   core    |
       +---------+                   +-----------+
                                           ^
                                           |
                                           v
                                     +-----------+
                                     |  driver   |
                                     +-----------+

Device Driver Interface
-----------------------

When registering on the NFC subsystem, the device driver must inform the core
of the set of supported NFC protocols and the set of ops callbacks. The ops
callbacks that must be implemented are the following:

* start_poll - set up the device to poll for targets
* stop_poll - stop an in-progress polling operation
* activate_target - select and initialize one of the targets found
* deactivate_target - deselect and deinitialize the selected target
* data_exchange - send data and receive the response (transceive operation)

Userspace interface
-------------------

The userspace interface is divided into control operations and a low-level
data exchange operation.

CONTROL OPERATIONS:

Generic netlink is used to implement the interface to the control operations.
The operations are composed of commands and events, all listed below:

* NFC_CMD_GET_DEVICE - get specific device info or dump the device list
* NFC_CMD_START_POLL - set up a specific device to poll for targets
* NFC_CMD_STOP_POLL - stop the polling operation on a specific device
* NFC_CMD_GET_TARGET - dump the list of targets found by a specific device

* NFC_EVENT_DEVICE_ADDED - reports an NFC device addition
* NFC_EVENT_DEVICE_REMOVED - reports an NFC device removal
* NFC_EVENT_TARGETS_FOUND - reports START_POLL results when 1 or more targets
  are found

The user must call START_POLL to poll for NFC targets, passing the desired NFC
protocols through the NFC_ATTR_PROTOCOLS attribute. The device remains in
polling state until it finds any target. However, the user can stop the polling
operation by calling the STOP_POLL command. In this case, it is checked that
the requester of STOP_POLL is the same as the requester of START_POLL.

If the polling operation finds one or more targets, the TARGETS_FOUND event is
sent (including the device id). The user must call GET_TARGET to get the list
of all targets found by that device. Each reply message has target attributes
with relevant information such as the supported NFC protocols.

All polling operations requested through one netlink socket are stopped when
it's closed.
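
[Editor's illustration, not part of this patch] A userspace program could issue
START_POLL through generic netlink roughly as sketched below. The sketch assumes
libnl-3 is used, that the NFC generic netlink family is registered under the
name "nfc", that the adapter is selected with an NFC_ATTR_DEVICE_INDEX attribute,
and that the command/attribute constants are exported through <linux/nfc.h>;
only NFC_CMD_START_POLL and NFC_ATTR_PROTOCOLS are taken from the text above,
the other names are assumptions.

/* Hypothetical sketch: ask adapter 0 to start polling (libnl-3 assumed). */
#include <netlink/netlink.h>
#include <netlink/genl/genl.h>
#include <netlink/genl/ctrl.h>
#include <linux/nfc.h>	/* assumed header for the NFC_CMD_ and NFC_ATTR_ constants */

int main(void)
{
	struct nl_sock *sk = nl_socket_alloc();
	struct nl_msg *msg;
	int family;

	if (!sk || genl_connect(sk))
		return 1;

	family = genl_ctrl_resolve(sk, "nfc");	/* assumed family name */
	if (family < 0)
		return 1;

	msg = nlmsg_alloc();
	genlmsg_put(msg, NL_AUTO_PORT, NL_AUTO_SEQ, family, 0, 0,
		    NFC_CMD_START_POLL, 1);
	nla_put_u32(msg, NFC_ATTR_DEVICE_INDEX, 0);	/* assumed attribute */
	nla_put_u32(msg, NFC_ATTR_PROTOCOLS, ~0u);	/* poll for any protocol */

	/* the TARGETS_FOUND event arrives later as a netlink notification */
	nl_send_auto(sk, msg);

	nlmsg_free(msg);
	nl_socket_free(sk);
	return 0;
}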

LOW-LEVEL DATA EXCHANGE:

Userspace must use PF_NFC sockets to perform any data communication with
targets. All NFC sockets use AF_NFC:

        struct sockaddr_nfc {
                sa_family_t sa_family;
                __u32 dev_idx;
                __u32 target_idx;
                __u32 nfc_protocol;
        };

To establish a connection with one target, the user must create an
NFC_SOCKPROTO_RAW socket and call the 'connect' syscall with the sockaddr_nfc
struct correctly filled. All information comes from the
NFC_EVENT_TARGETS_FOUND netlink event. As a target can support more than one
NFC protocol, the user must inform which protocol it wants to use.

Internally, 'connect' will result in an activate_target call to the driver.
When the socket is closed, the target is deactivated.

The data format exchanged through the sockets is NFC protocol dependent. For
instance, when communicating with MIFARE tags, the data exchanged are MIFARE
commands and their responses.

The first received packet is the response to the first sent packet and so
on. In order to allow valid "empty" responses, all received data carries a
NULL header of 1 byte.
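
[Editor's illustration, not part of this patch] A minimal userspace sketch of
the raw data path described above follows. It assumes the raw NFC socket is of
type SOCK_SEQPACKET, that PF_NFC/AF_NFC, struct sockaddr_nfc and
NFC_SOCKPROTO_RAW are visible to userspace through <sys/socket.h> and
<linux/nfc.h>, and that the device index, target index and protocol value are
placeholders that would normally come from the GET_TARGET netlink reply.

/* Hypothetical sketch: transceive one frame with an already discovered target. */
#include <sys/socket.h>
#include <linux/nfc.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	struct sockaddr_nfc addr;
	unsigned char cmd[] = { 0x30, 0x00 };	/* protocol-dependent payload */
	unsigned char rsp[64];
	ssize_t n;
	int fd;

	fd = socket(PF_NFC, SOCK_SEQPACKET, NFC_SOCKPROTO_RAW);
	if (fd < 0) {
		perror("socket");
		return 1;
	}

	memset(&addr, 0, sizeof(addr));
	addr.sa_family = AF_NFC;
	addr.dev_idx = 0;		/* placeholder, from TARGETS_FOUND */
	addr.target_idx = 1;		/* placeholder, from GET_TARGET */
	addr.nfc_protocol = 1;		/* placeholder, one supported protocol */

	/* connect() triggers the driver's activate_target callback */
	if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		perror("connect");
		close(fd);
		return 1;
	}

	if (send(fd, cmd, sizeof(cmd), 0) < 0)
		perror("send");

	/* every reply carries a 1-byte NULL header before the payload */
	n = recv(fd, rsp, sizeof(rsp), 0);
	if (n > 0)
		printf("got %zd payload bytes\n", n - 1);

	close(fd);	/* closing the socket deactivates the target */
	return 0;
}
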
@@ -94,8 +94,6 @@ source "drivers/memstick/Kconfig"

source "drivers/leds/Kconfig"

source "drivers/nfc/Kconfig"

source "drivers/accessibility/Kconfig"

source "drivers/infiniband/Kconfig"

@@ -122,3 +122,4 @@ obj-y += ieee802154/
obj-y += clk/

obj-$(CONFIG_HWSPINLOCK) += hwspinlock/
obj-$(CONFIG_NFC) += nfc/

@@ -27,6 +27,12 @@ config BCMA_HOST_PCI
	bool "Support for BCMA on PCI-host bus"
	depends on BCMA_HOST_PCI_POSSIBLE

config BCMA_DRIVER_PCI_HOSTMODE
	bool "Driver for PCI core working in hostmode"
	depends on BCMA && MIPS
	help
	  PCI core hostmode operation (external PCI bus).

config BCMA_DEBUG
	bool "BCMA debugging"
	depends on BCMA

@@ -1,6 +1,7 @@
bcma-y += main.o scan.o core.o sprom.o
bcma-y += driver_chipcommon.o driver_chipcommon_pmu.o
bcma-y += driver_pci.o
bcma-$(CONFIG_BCMA_DRIVER_PCI_HOSTMODE) += driver_pci_host.o
bcma-$(CONFIG_BCMA_HOST_PCI) += host_pci.o
obj-$(CONFIG_BCMA) += bcma.o

@@ -28,4 +28,8 @@ extern int __init bcma_host_pci_init(void);
extern void __exit bcma_host_pci_exit(void);
#endif /* CONFIG_BCMA_HOST_PCI */

#ifdef CONFIG_BCMA_DRIVER_PCI_HOSTMODE
void bcma_core_pci_hostmode_init(struct bcma_drv_pci *pc);
#endif /* CONFIG_BCMA_DRIVER_PCI_HOSTMODE */

#endif

@@ -157,11 +157,47 @@ static void bcma_pcicore_serdes_workaround(struct bcma_drv_pci *pc)
 * Init.
 **************************************************/

void bcma_core_pci_init(struct bcma_drv_pci *pc)
static void bcma_core_pci_clientmode_init(struct bcma_drv_pci *pc)
{
	bcma_pcicore_serdes_workaround(pc);
}

static bool bcma_core_pci_is_in_hostmode(struct bcma_drv_pci *pc)
{
	struct bcma_bus *bus = pc->core->bus;
	u16 chipid_top;

	chipid_top = (bus->chipinfo.id & 0xFF00);
	if (chipid_top != 0x4700 &&
	    chipid_top != 0x5300)
		return false;

	if (bus->sprom.boardflags_lo & SSB_PCICORE_BFL_NOPCI)
		return false;

#if 0
	/* TODO: on BCMA we use address from EROM instead of magic formula */
	u32 tmp;
	return !mips_busprobe32(tmp, (bus->mmio +
			(pc->core->core_index * BCMA_CORE_SIZE)));
#endif

	return true;
}

void bcma_core_pci_init(struct bcma_drv_pci *pc)
{
	if (bcma_core_pci_is_in_hostmode(pc)) {
#ifdef CONFIG_BCMA_DRIVER_PCI_HOSTMODE
		bcma_core_pci_hostmode_init(pc);
#else
		pr_err("Driver compiled without support for hostmode PCI\n");
#endif /* CONFIG_BCMA_DRIVER_PCI_HOSTMODE */
	} else {
		bcma_core_pci_clientmode_init(pc);
	}
}

int bcma_core_pci_irq_ctl(struct bcma_drv_pci *pc, struct bcma_device *core,
			  bool enable)
{

@@ -0,0 +1,14 @@
/*
 * Broadcom specific AMBA
 * PCI Core in hostmode
 *
 * Licensed under the GNU/GPL. See COPYING for details.
 */

#include "bcma_private.h"
#include <linux/bcma/bcma.h>

void bcma_core_pci_hostmode_init(struct bcma_drv_pci *pc)
{
	pr_err("No support for PCI core in hostmode yet\n");
}
@@ -67,6 +67,8 @@

#define PAYLOAD_MAX (CARL9170_MAX_CMD_LEN / 4 - 1)

static const u8 ar9170_qmap[__AR9170_NUM_TXQ] = { 3, 2, 1, 0 };

enum carl9170_rf_init_mode {
	CARL9170_RFI_NONE,
	CARL9170_RFI_WARM,

@@ -440,7 +442,6 @@ struct ar9170 {
enum carl9170_ps_off_override_reasons {
	PS_OFF_VIF = BIT(0),
	PS_OFF_BCN = BIT(1),
	PS_OFF_5GHZ = BIT(2),
};

struct carl9170_ba_stats {

@@ -237,7 +237,7 @@ static int carl9170_fw(struct ar9170 *ar, const __u8 *data, size_t len)
		ar->disable_offload = true;
	}

	if (SUPP(CARL9170FW_PSM))
	if (SUPP(CARL9170FW_PSM) && SUPP(CARL9170FW_FIXED_5GHZ_PSM))
		ar->hw->flags |= IEEE80211_HW_SUPPORTS_PS;

	if (!SUPP(CARL9170FW_USB_INIT_FIRMWARE)) {

@ -4,7 +4,7 @@
|
|||
* Firmware command interface definitions
|
||||
*
|
||||
* Copyright 2008, Johannes Berg <johannes@sipsolutions.net>
|
||||
* Copyright 2009, 2010, Christian Lamparter <chunkeey@googlemail.com>
|
||||
* Copyright 2009-2011 Christian Lamparter <chunkeey@googlemail.com>
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License as published by
|
||||
|
@ -54,6 +54,7 @@ enum carl9170_cmd_oids {
|
|||
CARL9170_CMD_BCN_CTRL = 0x05,
|
||||
CARL9170_CMD_READ_TSF = 0x06,
|
||||
CARL9170_CMD_RX_FILTER = 0x07,
|
||||
CARL9170_CMD_WOL = 0x08,
|
||||
|
||||
/* CAM */
|
||||
CARL9170_CMD_EKEY = 0x10,
|
||||
|
@ -180,6 +181,21 @@ struct carl9170_bcn_ctrl_cmd {
|
|||
#define CARL9170_BCN_CTRL_DRAIN 0
|
||||
#define CARL9170_BCN_CTRL_CAB_TRIGGER 1
|
||||
|
||||
struct carl9170_wol_cmd {
|
||||
__le32 flags;
|
||||
u8 mac[6];
|
||||
u8 bssid[6];
|
||||
__le32 null_interval;
|
||||
__le32 free_for_use2;
|
||||
__le32 mask;
|
||||
u8 pattern[32];
|
||||
} __packed;
|
||||
|
||||
#define CARL9170_WOL_CMD_SIZE 60
|
||||
|
||||
#define CARL9170_WOL_DISCONNECT 1
|
||||
#define CARL9170_WOL_MAGIC_PKT 2
|
||||
|
||||
struct carl9170_cmd_head {
|
||||
union {
|
||||
struct {
|
||||
|
@ -203,6 +219,7 @@ struct carl9170_cmd {
|
|||
struct carl9170_write_reg wreg;
|
||||
struct carl9170_rf_init rf_init;
|
||||
struct carl9170_psm psm;
|
||||
struct carl9170_wol_cmd wol;
|
||||
struct carl9170_bcn_ctrl_cmd bcn_ctrl;
|
||||
struct carl9170_rx_filter_cmd rx_filter;
|
||||
u8 data[CARL9170_MAX_CMD_PAYLOAD_LEN];
|
||||
|
|
|
@ -3,7 +3,7 @@
|
|||
*
|
||||
* Firmware descriptor format
|
||||
*
|
||||
* Copyright 2009, 2010, Christian Lamparter <chunkeey@googlemail.com>
|
||||
* Copyright 2009-2011 Christian Lamparter <chunkeey@googlemail.com>
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License as published by
|
||||
|
@ -72,6 +72,9 @@ enum carl9170fw_feature_list {
|
|||
/* Wake up on WLAN */
|
||||
CARL9170FW_WOL,
|
||||
|
||||
/* Firmware supports PSM in the 5GHZ Band */
|
||||
CARL9170FW_FIXED_5GHZ_PSM,
|
||||
|
||||
/* KEEP LAST */
|
||||
__CARL9170FW_FEATURE_NUM
|
||||
};
|
||||
|
@ -82,6 +85,7 @@ enum carl9170fw_feature_list {
|
|||
#define DBG_MAGIC "DBG\0"
|
||||
#define CHK_MAGIC "CHK\0"
|
||||
#define TXSQ_MAGIC "TXSQ"
|
||||
#define WOL_MAGIC "WOL\0"
|
||||
#define LAST_MAGIC "LAST"
|
||||
|
||||
#define CARL9170FW_SET_DAY(d) (((d) - 1) % 31)
|
||||
|
@ -104,7 +108,7 @@ struct carl9170fw_desc_head {
|
|||
(sizeof(struct carl9170fw_desc_head))
|
||||
|
||||
#define CARL9170FW_OTUS_DESC_MIN_VER 6
|
||||
#define CARL9170FW_OTUS_DESC_CUR_VER 6
|
||||
#define CARL9170FW_OTUS_DESC_CUR_VER 7
|
||||
struct carl9170fw_otus_desc {
|
||||
struct carl9170fw_desc_head head;
|
||||
__le32 feature_set;
|
||||
|
@ -186,6 +190,16 @@ struct carl9170fw_txsq_desc {
|
|||
#define CARL9170FW_TXSQ_DESC_SIZE \
|
||||
(sizeof(struct carl9170fw_txsq_desc))
|
||||
|
||||
#define CARL9170FW_WOL_DESC_MIN_VER 1
|
||||
#define CARL9170FW_WOL_DESC_CUR_VER 1
|
||||
struct carl9170fw_wol_desc {
|
||||
struct carl9170fw_desc_head head;
|
||||
|
||||
__le32 supported_triggers; /* CARL9170_WOL_ */
|
||||
} __packed;
|
||||
#define CARL9170FW_WOL_DESC_SIZE \
|
||||
(sizeof(struct carl9170fw_wol_desc))
|
||||
|
||||
#define CARL9170FW_LAST_DESC_MIN_VER 1
|
||||
#define CARL9170FW_LAST_DESC_CUR_VER 2
|
||||
struct carl9170fw_last_desc {
|
||||
|
|
|
@ -4,7 +4,7 @@
|
|||
* Register map, hardware-specific definitions
|
||||
*
|
||||
* Copyright 2008, Johannes Berg <johannes@sipsolutions.net>
|
||||
* Copyright 2009, 2010, Christian Lamparter <chunkeey@googlemail.com>
|
||||
* Copyright 2009-2011 Christian Lamparter <chunkeey@googlemail.com>
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License as published by
|
||||
|
@ -357,7 +357,18 @@
|
|||
|
||||
#define AR9170_MAC_REG_DMA_WLAN_STATUS (AR9170_MAC_REG_BASE + 0xd38)
|
||||
#define AR9170_MAC_REG_DMA_STATUS (AR9170_MAC_REG_BASE + 0xd3c)
|
||||
#define AR9170_MAC_REG_DMA_TXQ_LAST_ADDR (AR9170_MAC_REG_BASE + 0xd40)
|
||||
#define AR9170_MAC_REG_DMA_TXQ0_LAST_ADDR (AR9170_MAC_REG_BASE + 0xd40)
|
||||
#define AR9170_MAC_REG_DMA_TXQ1_LAST_ADDR (AR9170_MAC_REG_BASE + 0xd44)
|
||||
#define AR9170_MAC_REG_DMA_TXQ2_LAST_ADDR (AR9170_MAC_REG_BASE + 0xd48)
|
||||
#define AR9170_MAC_REG_DMA_TXQ3_LAST_ADDR (AR9170_MAC_REG_BASE + 0xd4c)
|
||||
#define AR9170_MAC_REG_DMA_TXQ4_LAST_ADDR (AR9170_MAC_REG_BASE + 0xd50)
|
||||
#define AR9170_MAC_REG_DMA_TXQ0Q1_LEN (AR9170_MAC_REG_BASE + 0xd54)
|
||||
#define AR9170_MAC_REG_DMA_TXQ2Q3_LEN (AR9170_MAC_REG_BASE + 0xd58)
|
||||
#define AR9170_MAC_REG_DMA_TXQ4_LEN (AR9170_MAC_REG_BASE + 0xd5c)
|
||||
|
||||
#define AR9170_MAC_REG_DMA_TXQX_LAST_ADDR (AR9170_MAC_REG_BASE + 0xd74)
|
||||
#define AR9170_MAC_REG_DMA_TXQX_FAIL_ADDR (AR9170_MAC_REG_BASE + 0xd78)
|
||||
#define AR9170_MAC_REG_TXRX_MPI (AR9170_MAC_REG_BASE + 0xd7c)
|
||||
#define AR9170_MAC_TXRX_MPI_TX_MPI_MASK 0x0000000f
|
||||
#define AR9170_MAC_TXRX_MPI_TX_TO_MASK 0x0000fff0
|
||||
|
|
|
@ -345,11 +345,11 @@ static int carl9170_op_start(struct ieee80211_hw *hw)
|
|||
carl9170_zap_queues(ar);
|
||||
|
||||
/* reset QoS defaults */
|
||||
CARL9170_FILL_QUEUE(ar->edcf[0], 3, 15, 1023, 0); /* BEST EFFORT */
|
||||
CARL9170_FILL_QUEUE(ar->edcf[1], 2, 7, 15, 94); /* VIDEO */
|
||||
CARL9170_FILL_QUEUE(ar->edcf[2], 2, 3, 7, 47); /* VOICE */
|
||||
CARL9170_FILL_QUEUE(ar->edcf[3], 7, 15, 1023, 0); /* BACKGROUND */
|
||||
CARL9170_FILL_QUEUE(ar->edcf[4], 2, 3, 7, 0); /* SPECIAL */
|
||||
CARL9170_FILL_QUEUE(ar->edcf[AR9170_TXQ_VO], 2, 3, 7, 47);
|
||||
CARL9170_FILL_QUEUE(ar->edcf[AR9170_TXQ_VI], 2, 7, 15, 94);
|
||||
CARL9170_FILL_QUEUE(ar->edcf[AR9170_TXQ_BE], 3, 15, 1023, 0);
|
||||
CARL9170_FILL_QUEUE(ar->edcf[AR9170_TXQ_BK], 7, 15, 1023, 0);
|
||||
CARL9170_FILL_QUEUE(ar->edcf[AR9170_TXQ_SPECIAL], 2, 3, 7, 0);
|
||||
|
||||
ar->current_factor = ar->current_density = -1;
|
||||
/* "The first key is unique." */
|
||||
|
@ -1577,6 +1577,7 @@ void *carl9170_alloc(size_t priv_size)
|
|||
IEEE80211_HW_REPORTS_TX_ACK_STATUS |
|
||||
IEEE80211_HW_SUPPORTS_PS |
|
||||
IEEE80211_HW_PS_NULLFUNC_STACK |
|
||||
IEEE80211_HW_NEED_DTIM_PERIOD |
|
||||
IEEE80211_HW_SIGNAL_DBM;
|
||||
|
||||
if (!modparam_noht) {
|
||||
|
|
|
@ -1783,12 +1783,6 @@ int carl9170_set_channel(struct ar9170 *ar, struct ieee80211_channel *channel,
|
|||
}
|
||||
}
|
||||
|
||||
/* FIXME: PSM does not work in 5GHz Band */
|
||||
if (channel->band == IEEE80211_BAND_5GHZ)
|
||||
ar->ps.off_override |= PS_OFF_5GHZ;
|
||||
else
|
||||
ar->ps.off_override &= ~PS_OFF_5GHZ;
|
||||
|
||||
ar->channel = channel;
|
||||
ar->ht_settings = new_ht;
|
||||
return 0;
|
||||
|
|
|
@ -1,7 +1,7 @@
|
|||
#ifndef __CARL9170_SHARED_VERSION_H
|
||||
#define __CARL9170_SHARED_VERSION_H
|
||||
#define CARL9170FW_VERSION_YEAR 11
|
||||
#define CARL9170FW_VERSION_MONTH 1
|
||||
#define CARL9170FW_VERSION_DAY 22
|
||||
#define CARL9170FW_VERSION_GIT "1.9.2"
|
||||
#define CARL9170FW_VERSION_MONTH 6
|
||||
#define CARL9170FW_VERSION_DAY 30
|
||||
#define CARL9170FW_VERSION_GIT "1.9.4"
|
||||
#endif /* __CARL9170_SHARED_VERSION_H */
|
||||
|
|
|
@ -4,7 +4,7 @@
|
|||
* RX/TX meta descriptor format
|
||||
*
|
||||
* Copyright 2008, Johannes Berg <johannes@sipsolutions.net>
|
||||
* Copyright 2009, 2010, Christian Lamparter <chunkeey@googlemail.com>
|
||||
* Copyright 2009-2011 Christian Lamparter <chunkeey@googlemail.com>
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License as published by
|
||||
|
@ -278,7 +278,7 @@ struct ar9170_tx_frame {
|
|||
struct carl9170_tx_superframe {
|
||||
struct carl9170_tx_superdesc s;
|
||||
struct ar9170_tx_frame f;
|
||||
} __packed;
|
||||
} __packed __aligned(4);
|
||||
|
||||
#endif /* __CARL9170FW__ */
|
||||
|
||||
|
@ -328,7 +328,7 @@ struct _carl9170_tx_superframe {
|
|||
struct _carl9170_tx_superdesc s;
|
||||
struct _ar9170_tx_hwdesc f;
|
||||
u8 frame_data[0];
|
||||
} __packed;
|
||||
} __packed __aligned(4);
|
||||
|
||||
#define CARL9170_TX_SUPERDESC_LEN 24
|
||||
#define AR9170_TX_HWDESC_LEN 8
|
||||
|
@ -404,16 +404,6 @@ static inline u8 ar9170_get_decrypt_type(struct ar9170_rx_macstatus *t)
|
|||
(t->DAidx & 0xc0) >> 6;
|
||||
}
|
||||
|
||||
enum ar9170_txq {
|
||||
AR9170_TXQ_BE,
|
||||
|
||||
AR9170_TXQ_VI,
|
||||
AR9170_TXQ_VO,
|
||||
AR9170_TXQ_BK,
|
||||
|
||||
__AR9170_NUM_TXQ,
|
||||
};
|
||||
|
||||
/*
|
||||
* This is an workaround for several undocumented bugs.
|
||||
* Don't mess with the QoS/AC <-> HW Queue map, if you don't
|
||||
|
@ -431,7 +421,14 @@ enum ar9170_txq {
|
|||
* result, this makes the device pretty much useless
|
||||
* for any serious 802.11n setup.
|
||||
*/
|
||||
static const u8 ar9170_qmap[__AR9170_NUM_TXQ] = { 2, 1, 0, 3 };
|
||||
enum ar9170_txq {
|
||||
AR9170_TXQ_BK = 0, /* TXQ0 */
|
||||
AR9170_TXQ_BE, /* TXQ1 */
|
||||
AR9170_TXQ_VI, /* TXQ2 */
|
||||
AR9170_TXQ_VO, /* TXQ3 */
|
||||
|
||||
__AR9170_NUM_TXQ,
|
||||
};
|
||||
|
||||
#define AR9170_TXQ_DEPTH 32
|
||||
|
||||
|
|
|
@ -1600,6 +1600,7 @@ void b43_dma_rx(struct b43_dmaring *ring)
|
|||
dma_rx(ring, &slot);
|
||||
update_max_used_slots(ring, ++used_slots);
|
||||
}
|
||||
wmb();
|
||||
ops->set_current_rxslot(ring, slot);
|
||||
ring->current_slot = slot;
|
||||
}
|
||||
|
|
|
@ -287,7 +287,7 @@ static const char *command_types[] = {
|
|||
"unused", /* HOST_INTERRUPT_COALESCING */
|
||||
"undefined",
|
||||
"CARD_DISABLE_PHY_OFF",
|
||||
"MSDU_TX_RATES" "undefined",
|
||||
"MSDU_TX_RATES",
|
||||
"undefined",
|
||||
"SET_STATION_STAT_BITS",
|
||||
"CLEAR_STATIONS_STAT_BITS",
|
||||
|
|
|
@ -14,6 +14,7 @@ iwlagn-objs += iwl-6000.o
|
|||
iwlagn-objs += iwl-1000.o
|
||||
iwlagn-objs += iwl-2000.o
|
||||
iwlagn-objs += iwl-pci.o
|
||||
iwlagn-objs += iwl-trans.o
|
||||
|
||||
iwlagn-$(CONFIG_IWLWIFI_DEBUGFS) += iwl-debugfs.o
|
||||
iwlagn-$(CONFIG_IWLWIFI_DEVICE_TRACING) += iwl-devtrace.o
|
||||
|
|
|
@ -138,7 +138,6 @@ static int iwl1000_hw_set_hw_params(struct iwl_priv *priv)
|
|||
|
||||
priv->hw_params.ht40_channel = BIT(IEEE80211_BAND_2GHZ) |
|
||||
BIT(IEEE80211_BAND_5GHZ);
|
||||
priv->hw_params.rx_wrt_ptr_reg = FH_RSCSR_CHNL0_WPTR;
|
||||
|
||||
priv->hw_params.tx_chains_num = num_of_ant(priv->cfg->valid_tx_ant);
|
||||
if (priv->cfg->rx_with_siso_diversity)
|
||||
|
@ -197,7 +196,6 @@ static struct iwl_lib_ops iwl1000_lib = {
|
|||
|
||||
static const struct iwl_ops iwl1000_ops = {
|
||||
.lib = &iwl1000_lib,
|
||||
.hcmd = &iwlagn_hcmd,
|
||||
.utils = &iwlagn_hcmd_utils,
|
||||
};
|
||||
|
||||
|
|
|
@ -50,11 +50,13 @@
|
|||
#define IWL2030_UCODE_API_MAX 5
|
||||
#define IWL2000_UCODE_API_MAX 5
|
||||
#define IWL105_UCODE_API_MAX 5
|
||||
#define IWL135_UCODE_API_MAX 5
|
||||
|
||||
/* Lowest firmware API version supported */
|
||||
#define IWL2030_UCODE_API_MIN 5
|
||||
#define IWL2000_UCODE_API_MIN 5
|
||||
#define IWL105_UCODE_API_MIN 5
|
||||
#define IWL135_UCODE_API_MIN 5
|
||||
|
||||
#define IWL2030_FW_PRE "iwlwifi-2030-"
|
||||
#define IWL2030_MODULE_FIRMWARE(api) IWL2030_FW_PRE __stringify(api) ".ucode"
|
||||
|
@ -65,6 +67,9 @@
|
|||
#define IWL105_FW_PRE "iwlwifi-105-"
|
||||
#define IWL105_MODULE_FIRMWARE(api) IWL105_FW_PRE __stringify(api) ".ucode"
|
||||
|
||||
#define IWL135_FW_PRE "iwlwifi-135-"
|
||||
#define IWL135_MODULE_FIRMWARE(api) IWL135_FW_PRE #api ".ucode"
|
||||
|
||||
static void iwl2000_set_ct_threshold(struct iwl_priv *priv)
|
||||
{
|
||||
/* want Celsius */
|
||||
|
@ -131,7 +136,6 @@ static int iwl2000_hw_set_hw_params(struct iwl_priv *priv)
|
|||
|
||||
priv->hw_params.ht40_channel = BIT(IEEE80211_BAND_2GHZ) |
|
||||
BIT(IEEE80211_BAND_5GHZ);
|
||||
priv->hw_params.rx_wrt_ptr_reg = FH_RSCSR_CHNL0_WPTR;
|
||||
|
||||
priv->hw_params.tx_chains_num = num_of_ant(priv->cfg->valid_tx_ant);
|
||||
if (priv->cfg->rx_with_siso_diversity)
|
||||
|
@ -193,25 +197,21 @@ static struct iwl_lib_ops iwl2000_lib = {
|
|||
|
||||
static const struct iwl_ops iwl2000_ops = {
|
||||
.lib = &iwl2000_lib,
|
||||
.hcmd = &iwlagn_hcmd,
|
||||
.utils = &iwlagn_hcmd_utils,
|
||||
};
|
||||
|
||||
static const struct iwl_ops iwl2030_ops = {
|
||||
.lib = &iwl2000_lib,
|
||||
.hcmd = &iwlagn_bt_hcmd,
|
||||
.utils = &iwlagn_hcmd_utils,
|
||||
};
|
||||
|
||||
static const struct iwl_ops iwl105_ops = {
|
||||
.lib = &iwl2000_lib,
|
||||
.hcmd = &iwlagn_hcmd,
|
||||
.utils = &iwlagn_hcmd_utils,
|
||||
};
|
||||
|
||||
static const struct iwl_ops iwl135_ops = {
|
||||
.lib = &iwl2000_lib,
|
||||
.hcmd = &iwlagn_bt_hcmd,
|
||||
.utils = &iwlagn_hcmd_utils,
|
||||
};
|
||||
|
||||
|
@ -344,9 +344,9 @@ struct iwl_cfg iwl105_bgn_cfg = {
|
|||
};
|
||||
|
||||
#define IWL_DEVICE_135 \
|
||||
.fw_name_pre = IWL105_FW_PRE, \
|
||||
.ucode_api_max = IWL105_UCODE_API_MAX, \
|
||||
.ucode_api_min = IWL105_UCODE_API_MIN, \
|
||||
.fw_name_pre = IWL135_FW_PRE, \
|
||||
.ucode_api_max = IWL135_UCODE_API_MAX, \
|
||||
.ucode_api_min = IWL135_UCODE_API_MIN, \
|
||||
.eeprom_ver = EEPROM_2000_EEPROM_VERSION, \
|
||||
.eeprom_calib_ver = EEPROM_2000_TX_POWER_VERSION, \
|
||||
.ops = &iwl135_ops, \
|
||||
|
@ -359,12 +359,12 @@ struct iwl_cfg iwl105_bgn_cfg = {
|
|||
.rx_with_siso_diversity = true \
|
||||
|
||||
struct iwl_cfg iwl135_bg_cfg = {
|
||||
.name = "105 Series 1x1 BG/BT",
|
||||
.name = "135 Series 1x1 BG/BT",
|
||||
IWL_DEVICE_135,
|
||||
};
|
||||
|
||||
struct iwl_cfg iwl135_bgn_cfg = {
|
||||
.name = "105 Series 1x1 BGN/BT",
|
||||
.name = "135 Series 1x1 BGN/BT",
|
||||
IWL_DEVICE_135,
|
||||
.ht_params = &iwl2000_ht_params,
|
||||
};
|
||||
|
@ -372,3 +372,4 @@ struct iwl_cfg iwl135_bgn_cfg = {
|
|||
MODULE_FIRMWARE(IWL2000_MODULE_FIRMWARE(IWL2000_UCODE_API_MAX));
|
||||
MODULE_FIRMWARE(IWL2030_MODULE_FIRMWARE(IWL2030_UCODE_API_MAX));
|
||||
MODULE_FIRMWARE(IWL105_MODULE_FIRMWARE(IWL105_UCODE_API_MAX));
|
||||
MODULE_FIRMWARE(IWL135_MODULE_FIRMWARE(IWL135_UCODE_API_MAX));
|
||||
|
|
|
@ -169,7 +169,6 @@ static int iwl5000_hw_set_hw_params(struct iwl_priv *priv)
|
|||
|
||||
priv->hw_params.ht40_channel = BIT(IEEE80211_BAND_2GHZ) |
|
||||
BIT(IEEE80211_BAND_5GHZ);
|
||||
priv->hw_params.rx_wrt_ptr_reg = FH_RSCSR_CHNL0_WPTR;
|
||||
|
||||
priv->hw_params.tx_chains_num = num_of_ant(priv->cfg->valid_tx_ant);
|
||||
priv->hw_params.rx_chains_num = num_of_ant(priv->cfg->valid_rx_ant);
|
||||
|
@ -214,7 +213,6 @@ static int iwl5150_hw_set_hw_params(struct iwl_priv *priv)
|
|||
|
||||
priv->hw_params.ht40_channel = BIT(IEEE80211_BAND_2GHZ) |
|
||||
BIT(IEEE80211_BAND_5GHZ);
|
||||
priv->hw_params.rx_wrt_ptr_reg = FH_RSCSR_CHNL0_WPTR;
|
||||
|
||||
priv->hw_params.tx_chains_num = num_of_ant(priv->cfg->valid_tx_ant);
|
||||
priv->hw_params.rx_chains_num = num_of_ant(priv->cfg->valid_rx_ant);
|
||||
|
@ -379,13 +377,11 @@ static struct iwl_lib_ops iwl5150_lib = {
|
|||
|
||||
static const struct iwl_ops iwl5000_ops = {
|
||||
.lib = &iwl5000_lib,
|
||||
.hcmd = &iwlagn_hcmd,
|
||||
.utils = &iwlagn_hcmd_utils,
|
||||
};
|
||||
|
||||
static const struct iwl_ops iwl5150_ops = {
|
||||
.lib = &iwl5150_lib,
|
||||
.hcmd = &iwlagn_hcmd,
|
||||
.utils = &iwlagn_hcmd_utils,
|
||||
};
|
||||
|
||||
|
|
|
@ -157,7 +157,6 @@ static int iwl6000_hw_set_hw_params(struct iwl_priv *priv)
|
|||
|
||||
priv->hw_params.ht40_channel = BIT(IEEE80211_BAND_2GHZ) |
|
||||
BIT(IEEE80211_BAND_5GHZ);
|
||||
priv->hw_params.rx_wrt_ptr_reg = FH_RSCSR_CHNL0_WPTR;
|
||||
|
||||
priv->hw_params.tx_chains_num = num_of_ant(priv->cfg->valid_tx_ant);
|
||||
if (priv->cfg->rx_with_siso_diversity)
|
||||
|
@ -328,27 +327,23 @@ static struct iwl_nic_ops iwl6150_nic_ops = {
|
|||
|
||||
static const struct iwl_ops iwl6000_ops = {
|
||||
.lib = &iwl6000_lib,
|
||||
.hcmd = &iwlagn_hcmd,
|
||||
.utils = &iwlagn_hcmd_utils,
|
||||
};
|
||||
|
||||
static const struct iwl_ops iwl6050_ops = {
|
||||
.lib = &iwl6000_lib,
|
||||
.hcmd = &iwlagn_hcmd,
|
||||
.utils = &iwlagn_hcmd_utils,
|
||||
.nic = &iwl6050_nic_ops,
|
||||
};
|
||||
|
||||
static const struct iwl_ops iwl6150_ops = {
|
||||
.lib = &iwl6000_lib,
|
||||
.hcmd = &iwlagn_hcmd,
|
||||
.utils = &iwlagn_hcmd_utils,
|
||||
.nic = &iwl6150_nic_ops,
|
||||
};
|
||||
|
||||
static const struct iwl_ops iwl6030_ops = {
|
||||
.lib = &iwl6030_lib,
|
||||
.hcmd = &iwlagn_bt_hcmd,
|
||||
.utils = &iwlagn_hcmd_utils,
|
||||
};
|
||||
|
||||
|
|
|
@ -205,7 +205,7 @@ static int iwlagn_calc_rssi(struct iwl_priv *priv,
|
|||
return max_rssi - agc - IWLAGN_RSSI_OFFSET;
|
||||
}
|
||||
|
||||
static int iwlagn_set_pan_params(struct iwl_priv *priv)
|
||||
int iwlagn_set_pan_params(struct iwl_priv *priv)
|
||||
{
|
||||
struct iwl_wipan_params_cmd cmd;
|
||||
struct iwl_rxon_context *ctx_bss, *ctx_pan;
|
||||
|
@ -297,20 +297,6 @@ static int iwlagn_set_pan_params(struct iwl_priv *priv)
|
|||
return ret;
|
||||
}
|
||||
|
||||
struct iwl_hcmd_ops iwlagn_hcmd = {
|
||||
.set_rxon_chain = iwlagn_set_rxon_chain,
|
||||
.set_tx_ant = iwlagn_send_tx_ant_config,
|
||||
.send_bt_config = iwl_send_bt_config,
|
||||
.set_pan_params = iwlagn_set_pan_params,
|
||||
};
|
||||
|
||||
struct iwl_hcmd_ops iwlagn_bt_hcmd = {
|
||||
.set_rxon_chain = iwlagn_set_rxon_chain,
|
||||
.set_tx_ant = iwlagn_send_tx_ant_config,
|
||||
.send_bt_config = iwlagn_send_advance_bt_config,
|
||||
.set_pan_params = iwlagn_set_pan_params,
|
||||
};
|
||||
|
||||
struct iwl_hcmd_utils_ops iwlagn_hcmd_utils = {
|
||||
.build_addsta_hcmd = iwlagn_build_addsta_hcmd,
|
||||
.gain_computation = iwlagn_gain_computation,
|
||||
|
|
|
@ -628,38 +628,6 @@ struct iwl_mod_params iwlagn_mod_params = {
|
|||
/* the rest are 0 by default */
|
||||
};
|
||||
|
||||
void iwlagn_rx_queue_reset(struct iwl_priv *priv, struct iwl_rx_queue *rxq)
|
||||
{
|
||||
unsigned long flags;
|
||||
int i;
|
||||
spin_lock_irqsave(&rxq->lock, flags);
|
||||
INIT_LIST_HEAD(&rxq->rx_free);
|
||||
INIT_LIST_HEAD(&rxq->rx_used);
|
||||
/* Fill the rx_used queue with _all_ of the Rx buffers */
|
||||
for (i = 0; i < RX_FREE_BUFFERS + RX_QUEUE_SIZE; i++) {
|
||||
/* In the reset function, these buffers may have been allocated
|
||||
* to an SKB, so we need to unmap and free potential storage */
|
||||
if (rxq->pool[i].page != NULL) {
|
||||
dma_unmap_page(priv->bus.dev, rxq->pool[i].page_dma,
|
||||
PAGE_SIZE << priv->hw_params.rx_page_order,
|
||||
DMA_FROM_DEVICE);
|
||||
__iwl_free_pages(priv, rxq->pool[i].page);
|
||||
rxq->pool[i].page = NULL;
|
||||
}
|
||||
list_add_tail(&rxq->pool[i].list, &rxq->rx_used);
|
||||
}
|
||||
|
||||
for (i = 0; i < RX_QUEUE_SIZE; i++)
|
||||
rxq->queue[i] = NULL;
|
||||
|
||||
/* Set us so that we have processed and used all buffers, but have
|
||||
* not restocked the Rx queue with fresh buffers */
|
||||
rxq->read = rxq->write = 0;
|
||||
rxq->write_actual = 0;
|
||||
rxq->free_count = 0;
|
||||
spin_unlock_irqrestore(&rxq->lock, flags);
|
||||
}
|
||||
|
||||
int iwlagn_rx_init(struct iwl_priv *priv, struct iwl_rx_queue *rxq)
|
||||
{
|
||||
u32 rb_size;
|
||||
|
@ -731,7 +699,6 @@ int iwlagn_hw_nic_init(struct iwl_priv *priv)
|
|||
{
|
||||
unsigned long flags;
|
||||
struct iwl_rx_queue *rxq = &priv->rxq;
|
||||
int ret;
|
||||
|
||||
/* nic_init */
|
||||
spin_lock_irqsave(&priv->lock, flags);
|
||||
|
@ -747,14 +714,7 @@ int iwlagn_hw_nic_init(struct iwl_priv *priv)
|
|||
priv->cfg->ops->lib->apm_ops.config(priv);
|
||||
|
||||
/* Allocate the RX queue, or reset if it is already allocated */
|
||||
if (!rxq->bd) {
|
||||
ret = iwl_rx_queue_alloc(priv);
|
||||
if (ret) {
|
||||
IWL_ERR(priv, "Unable to initialize Rx queue\n");
|
||||
return -ENOMEM;
|
||||
}
|
||||
} else
|
||||
iwlagn_rx_queue_reset(priv, rxq);
|
||||
priv->trans.ops->rx_init(priv);
|
||||
|
||||
iwlagn_rx_replenish(priv);
|
||||
|
||||
|
@ -768,12 +728,8 @@ int iwlagn_hw_nic_init(struct iwl_priv *priv)
|
|||
spin_unlock_irqrestore(&priv->lock, flags);
|
||||
|
||||
/* Allocate or reset and init all Tx and Command queues */
|
||||
if (!priv->txq) {
|
||||
ret = iwlagn_txq_ctx_alloc(priv);
|
||||
if (ret)
|
||||
return ret;
|
||||
} else
|
||||
iwlagn_txq_ctx_reset(priv);
|
||||
if (priv->trans.ops->tx_init(priv))
|
||||
return -ENOMEM;
|
||||
|
||||
if (priv->cfg->base_params->shadow_reg_enable) {
|
||||
/* enable shadow regs in HW */
|
||||
|
@ -949,33 +905,6 @@ void iwlagn_rx_replenish_now(struct iwl_priv *priv)
|
|||
iwlagn_rx_queue_restock(priv);
|
||||
}
|
||||
|
||||
/* Assumes that the skb field of the buffers in 'pool' is kept accurate.
|
||||
* If an SKB has been detached, the POOL needs to have its SKB set to NULL
|
||||
* This free routine walks the list of POOL entries and if SKB is set to
|
||||
* non NULL it is unmapped and freed
|
||||
*/
|
||||
void iwlagn_rx_queue_free(struct iwl_priv *priv, struct iwl_rx_queue *rxq)
|
||||
{
|
||||
int i;
|
||||
for (i = 0; i < RX_QUEUE_SIZE + RX_FREE_BUFFERS; i++) {
|
||||
if (rxq->pool[i].page != NULL) {
|
||||
dma_unmap_page(priv->bus.dev, rxq->pool[i].page_dma,
|
||||
PAGE_SIZE << priv->hw_params.rx_page_order,
|
||||
DMA_FROM_DEVICE);
|
||||
__iwl_free_pages(priv, rxq->pool[i].page);
|
||||
rxq->pool[i].page = NULL;
|
||||
}
|
||||
}
|
||||
|
||||
dma_free_coherent(priv->bus.dev, 4 * RX_QUEUE_SIZE,
|
||||
rxq->bd, rxq->bd_dma);
|
||||
dma_free_coherent(priv->bus.dev,
|
||||
sizeof(struct iwl_rb_status),
|
||||
rxq->rb_stts, rxq->rb_stts_dma);
|
||||
rxq->bd = NULL;
|
||||
rxq->rb_stts = NULL;
|
||||
}
|
||||
|
||||
int iwlagn_rxq_stop(struct iwl_priv *priv)
|
||||
{
|
||||
|
||||
|
@ -1437,17 +1366,14 @@ int iwlagn_request_scan(struct iwl_priv *priv, struct ieee80211_vif *vif)
|
|||
/* set scan bit here for PAN params */
|
||||
set_bit(STATUS_SCAN_HW, &priv->status);
|
||||
|
||||
if (priv->cfg->ops->hcmd->set_pan_params) {
|
||||
ret = priv->cfg->ops->hcmd->set_pan_params(priv);
|
||||
if (ret)
|
||||
return ret;
|
||||
}
|
||||
ret = iwlagn_set_pan_params(priv);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
ret = iwl_send_cmd_sync(priv, &cmd);
|
||||
if (ret) {
|
||||
clear_bit(STATUS_SCAN_HW, &priv->status);
|
||||
if (priv->cfg->ops->hcmd->set_pan_params)
|
||||
priv->cfg->ops->hcmd->set_pan_params(priv);
|
||||
iwlagn_set_pan_params(priv);
|
||||
}
|
||||
|
||||
return ret;
|
||||
|
|
|
@ -436,11 +436,9 @@ int iwlagn_commit_rxon(struct iwl_priv *priv, struct iwl_rxon_context *ctx)
|
|||
if (ret)
|
||||
return ret;
|
||||
|
||||
if (priv->cfg->ops->hcmd->set_pan_params) {
|
||||
ret = priv->cfg->ops->hcmd->set_pan_params(priv);
|
||||
if (ret)
|
||||
return ret;
|
||||
}
|
||||
ret = iwlagn_set_pan_params(priv);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
if (new_assoc)
|
||||
return iwlagn_rxon_connect(priv, ctx);
|
||||
|
@ -483,9 +481,8 @@ int iwlagn_mac_config(struct ieee80211_hw *hw, u32 changed)
|
|||
* set up the SM PS mode to OFF if an HT channel is
|
||||
* configured.
|
||||
*/
|
||||
if (priv->cfg->ops->hcmd->set_rxon_chain)
|
||||
for_each_context(priv, ctx)
|
||||
priv->cfg->ops->hcmd->set_rxon_chain(priv, ctx);
|
||||
for_each_context(priv, ctx)
|
||||
iwlagn_set_rxon_chain(priv, ctx);
|
||||
}
|
||||
|
||||
if (changed & IEEE80211_CONF_CHANGE_CHANNEL) {
|
||||
|
@ -741,8 +738,7 @@ void iwlagn_bss_info_changed(struct ieee80211_hw *hw,
|
|||
iwl_set_rxon_ht(priv, &priv->current_ht_config);
|
||||
}
|
||||
|
||||
if (priv->cfg->ops->hcmd->set_rxon_chain)
|
||||
priv->cfg->ops->hcmd->set_rxon_chain(priv, ctx);
|
||||
iwlagn_set_rxon_chain(priv, ctx);
|
||||
|
||||
if (bss_conf->use_cts_prot && (priv->band != IEEE80211_BAND_5GHZ))
|
||||
ctx->staging.flags |= RXON_FLG_TGG_PROTECT_MSK;
|
||||
|
@ -821,6 +817,5 @@ void iwlagn_post_scan(struct iwl_priv *priv)
|
|||
if (memcmp(&ctx->staging, &ctx->active, sizeof(ctx->staging)))
|
||||
iwlagn_commit_rxon(priv, ctx);
|
||||
|
||||
if (priv->cfg->ops->hcmd->set_pan_params)
|
||||
priv->cfg->ops->hcmd->set_pan_params(priv);
|
||||
iwlagn_set_pan_params(priv);
|
||||
}
|
||||
|
|
|
@ -877,96 +877,6 @@ void iwlagn_hw_txq_ctx_free(struct iwl_priv *priv)
|
|||
iwl_free_txq_mem(priv);
|
||||
}
|
||||
|
||||
/**
|
||||
* iwlagn_txq_ctx_alloc - allocate TX queue context
|
||||
* Allocate all Tx DMA structures and initialize them
|
||||
*
|
||||
* @param priv
|
||||
* @return error code
|
||||
*/
|
||||
int iwlagn_txq_ctx_alloc(struct iwl_priv *priv)
|
||||
{
|
||||
int ret;
|
||||
int txq_id, slots_num;
|
||||
unsigned long flags;
|
||||
|
||||
/* Free all tx/cmd queues and keep-warm buffer */
|
||||
iwlagn_hw_txq_ctx_free(priv);
|
||||
|
||||
ret = iwlagn_alloc_dma_ptr(priv, &priv->scd_bc_tbls,
|
||||
priv->hw_params.scd_bc_tbls_size);
|
||||
if (ret) {
|
||||
IWL_ERR(priv, "Scheduler BC Table allocation failed\n");
|
||||
goto error_bc_tbls;
|
||||
}
|
||||
/* Alloc keep-warm buffer */
|
||||
ret = iwlagn_alloc_dma_ptr(priv, &priv->kw, IWL_KW_SIZE);
|
||||
if (ret) {
|
||||
IWL_ERR(priv, "Keep Warm allocation failed\n");
|
||||
goto error_kw;
|
||||
}
|
||||
|
||||
/* allocate tx queue structure */
|
||||
ret = iwl_alloc_txq_mem(priv);
|
||||
if (ret)
|
||||
goto error;
|
||||
|
||||
spin_lock_irqsave(&priv->lock, flags);
|
||||
|
||||
/* Turn off all Tx DMA fifos */
|
||||
iwlagn_txq_set_sched(priv, 0);
|
||||
|
||||
/* Tell NIC where to find the "keep warm" buffer */
|
||||
iwl_write_direct32(priv, FH_KW_MEM_ADDR_REG, priv->kw.dma >> 4);
|
||||
|
||||
spin_unlock_irqrestore(&priv->lock, flags);
|
||||
|
||||
/* Alloc and init all Tx queues, including the command queue (#4/#9) */
|
||||
for (txq_id = 0; txq_id < priv->hw_params.max_txq_num; txq_id++) {
|
||||
slots_num = (txq_id == priv->cmd_queue) ?
|
||||
TFD_CMD_SLOTS : TFD_TX_CMD_SLOTS;
|
||||
ret = iwl_tx_queue_init(priv, &priv->txq[txq_id], slots_num,
|
||||
txq_id);
|
||||
if (ret) {
|
||||
IWL_ERR(priv, "Tx %d queue init failed\n", txq_id);
|
||||
goto error;
|
||||
}
|
||||
}
|
||||
|
||||
return ret;
|
||||
|
||||
error:
|
||||
iwlagn_hw_txq_ctx_free(priv);
|
||||
iwlagn_free_dma_ptr(priv, &priv->kw);
|
||||
error_kw:
|
||||
iwlagn_free_dma_ptr(priv, &priv->scd_bc_tbls);
|
||||
error_bc_tbls:
|
||||
return ret;
|
||||
}
|
||||
|
||||
void iwlagn_txq_ctx_reset(struct iwl_priv *priv)
|
||||
{
|
||||
int txq_id, slots_num;
|
||||
unsigned long flags;
|
||||
|
||||
spin_lock_irqsave(&priv->lock, flags);
|
||||
|
||||
/* Turn off all Tx DMA fifos */
|
||||
iwlagn_txq_set_sched(priv, 0);
|
||||
|
||||
/* Tell NIC where to find the "keep warm" buffer */
|
||||
iwl_write_direct32(priv, FH_KW_MEM_ADDR_REG, priv->kw.dma >> 4);
|
||||
|
||||
spin_unlock_irqrestore(&priv->lock, flags);
|
||||
|
||||
/* Alloc and init all Tx queues, including the command queue (#4) */
|
||||
for (txq_id = 0; txq_id < priv->hw_params.max_txq_num; txq_id++) {
|
||||
slots_num = txq_id == priv->cmd_queue ?
|
||||
TFD_CMD_SLOTS : TFD_TX_CMD_SLOTS;
|
||||
iwl_tx_queue_reset(priv, &priv->txq[txq_id], slots_num, txq_id);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* iwlagn_txq_ctx_stop - Stop all Tx DMA channels
|
||||
*/
|
||||
|
|
|
@ -386,11 +386,13 @@ static int iwlagn_alive_notify(struct iwl_priv *priv)
|
|||
spin_lock_irqsave(&priv->lock, flags);
|
||||
|
||||
priv->scd_base_addr = iwl_read_prph(priv, IWLAGN_SCD_SRAM_BASE_ADDR);
|
||||
a = priv->scd_base_addr + IWLAGN_SCD_CONTEXT_DATA_OFFSET;
|
||||
for (; a < priv->scd_base_addr + IWLAGN_SCD_TX_STTS_BITMAP_OFFSET;
|
||||
a = priv->scd_base_addr + IWLAGN_SCD_CONTEXT_MEM_LOWER_BOUND;
|
||||
/* reset conext data memory */
|
||||
for (; a < priv->scd_base_addr + IWLAGN_SCD_CONTEXT_MEM_UPPER_BOUND;
|
||||
a += 4)
|
||||
iwl_write_targ_mem(priv, a, 0);
|
||||
for (; a < priv->scd_base_addr + IWLAGN_SCD_TRANSLATE_TBL_OFFSET;
|
||||
/* reset tx status memory */
|
||||
for (; a < priv->scd_base_addr + IWLAGN_SCD_TX_STTS_MEM_UPPER_BOUND;
|
||||
a += 4)
|
||||
iwl_write_targ_mem(priv, a, 0);
|
||||
for (; a < priv->scd_base_addr +
|
||||
|
|
|
@ -56,7 +56,7 @@
|
|||
#include "iwl-agn-calib.h"
|
||||
#include "iwl-agn.h"
|
||||
#include "iwl-pci.h"
|
||||
|
||||
#include "iwl-trans.h"
|
||||
|
||||
/******************************************************************************
|
||||
*
|
||||
|
@ -90,12 +90,10 @@ void iwl_update_chain_flags(struct iwl_priv *priv)
|
|||
{
|
||||
struct iwl_rxon_context *ctx;
|
||||
|
||||
if (priv->cfg->ops->hcmd->set_rxon_chain) {
|
||||
for_each_context(priv, ctx) {
|
||||
priv->cfg->ops->hcmd->set_rxon_chain(priv, ctx);
|
||||
if (ctx->active.rx_chain != ctx->staging.rx_chain)
|
||||
iwlagn_commit_rxon(priv, ctx);
|
||||
}
|
||||
for_each_context(priv, ctx) {
|
||||
iwlagn_set_rxon_chain(priv, ctx);
|
||||
if (ctx->active.rx_chain != ctx->staging.rx_chain)
|
||||
iwlagn_commit_rxon(priv, ctx);
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -260,7 +258,7 @@ static void iwl_bg_bt_runtime_config(struct work_struct *work)
|
|||
/* dont send host command if rf-kill is on */
|
||||
if (!iwl_is_ready_rf(priv))
|
||||
return;
|
||||
priv->cfg->ops->hcmd->send_bt_config(priv);
|
||||
iwlagn_send_advance_bt_config(priv);
|
||||
}
|
||||
|
||||
static void iwl_bg_bt_full_concurrency(struct work_struct *work)
|
||||
|
@ -287,12 +285,11 @@ static void iwl_bg_bt_full_concurrency(struct work_struct *work)
|
|||
* to avoid 3-wire collisions
|
||||
*/
|
||||
for_each_context(priv, ctx) {
|
||||
if (priv->cfg->ops->hcmd->set_rxon_chain)
|
||||
priv->cfg->ops->hcmd->set_rxon_chain(priv, ctx);
|
||||
iwlagn_set_rxon_chain(priv, ctx);
|
||||
iwlagn_commit_rxon(priv, ctx);
|
||||
}
|
||||
|
||||
priv->cfg->ops->hcmd->send_bt_config(priv);
|
||||
iwlagn_send_advance_bt_config(priv);
|
||||
out:
|
||||
mutex_unlock(&priv->mutex);
|
||||
}
|
||||
|
@ -2017,7 +2014,7 @@ int iwl_alive_start(struct iwl_priv *priv)
|
|||
priv->bt_valid = IWLAGN_BT_ALL_VALID_MSK;
|
||||
priv->kill_ack_mask = IWLAGN_BT_KILL_ACK_MASK_DEFAULT;
|
||||
priv->kill_cts_mask = IWLAGN_BT_KILL_CTS_MASK_DEFAULT;
|
||||
priv->cfg->ops->hcmd->send_bt_config(priv);
|
||||
iwlagn_send_advance_bt_config(priv);
|
||||
priv->bt_valid = IWLAGN_BT_VALID_ENABLE_FLAGS;
|
||||
iwlagn_send_prio_tbl(priv);
|
||||
|
||||
|
@ -2030,7 +2027,13 @@ int iwl_alive_start(struct iwl_priv *priv)
|
|||
BT_COEX_PRIO_TBL_EVT_INIT_CALIB2);
|
||||
if (ret)
|
||||
return ret;
|
||||
} else {
|
||||
/*
|
||||
* default is 2-wire BT coexexistence support
|
||||
*/
|
||||
iwl_send_bt_config(priv);
|
||||
}
|
||||
|
||||
if (priv->hw_params.calib_rt_cfg)
|
||||
iwlagn_send_calib_cfg_rt(priv, priv->hw_params.calib_rt_cfg);
|
||||
|
||||
|
@ -2039,8 +2042,7 @@ int iwl_alive_start(struct iwl_priv *priv)
|
|||
priv->active_rate = IWL_RATES_MASK;
|
||||
|
||||
/* Configure Tx antenna selection based on H/W config */
|
||||
if (priv->cfg->ops->hcmd->set_tx_ant)
|
||||
priv->cfg->ops->hcmd->set_tx_ant(priv, priv->cfg->valid_tx_ant);
|
||||
iwlagn_send_tx_ant_config(priv, priv->cfg->valid_tx_ant);
|
||||
|
||||
if (iwl_is_associated_ctx(ctx)) {
|
||||
struct iwl_rxon_cmd *active_rxon =
|
||||
|
@ -2054,16 +2056,7 @@ int iwl_alive_start(struct iwl_priv *priv)
|
|||
for_each_context(priv, tmp)
|
||||
iwl_connection_init_rx_config(priv, tmp);
|
||||
|
||||
if (priv->cfg->ops->hcmd->set_rxon_chain)
|
||||
priv->cfg->ops->hcmd->set_rxon_chain(priv, ctx);
|
||||
}
|
||||
|
||||
if (!priv->cfg->bt_params || (priv->cfg->bt_params &&
|
||||
!priv->cfg->bt_params->advanced_bt_coexist)) {
|
||||
/*
|
||||
* default is 2-wire BT coexexistence support
|
||||
*/
|
||||
priv->cfg->ops->hcmd->send_bt_config(priv);
|
||||
iwlagn_set_rxon_chain(priv, ctx);
|
||||
}
|
||||
|
||||
iwl_reset_run_time_calib(priv);
|
||||
|
@ -3288,9 +3281,7 @@ static int iwl_init_drv(struct iwl_priv *priv)
|
|||
priv->rx_statistics_jiffies = jiffies;
|
||||
|
||||
/* Choose which receivers/antennas to use */
|
||||
if (priv->cfg->ops->hcmd->set_rxon_chain)
|
||||
priv->cfg->ops->hcmd->set_rxon_chain(priv,
|
||||
&priv->contexts[IWL_RXON_CTX_BSS]);
|
||||
iwlagn_set_rxon_chain(priv, &priv->contexts[IWL_RXON_CTX_BSS]);
|
||||
|
||||
iwl_init_scan_params(priv);
|
||||
|
||||
|
@ -3517,6 +3508,8 @@ int iwl_probe(void *bus_specific, struct iwl_bus_ops *bus_ops,
|
|||
priv->bus.ops->set_drv_data(&priv->bus, priv);
|
||||
priv->bus.dev = priv->bus.ops->get_dev(&priv->bus);
|
||||
|
||||
iwl_trans_register(&priv->trans);
|
||||
|
||||
/* At this point both hw and priv are allocated. */
|
||||
|
||||
SET_IEEE80211_DEV(hw, priv->bus.dev);
|
||||
|
@ -3716,8 +3709,7 @@ void __devexit iwl_remove(struct iwl_priv * priv)
|
|||
|
||||
iwl_dealloc_ucode(priv);
|
||||
|
||||
if (priv->rxq.bd)
|
||||
iwlagn_rx_queue_free(priv, &priv->rxq);
|
||||
priv->trans.ops->rx_free(priv);
|
||||
iwlagn_hw_txq_ctx_free(priv);
|
||||
|
||||
iwl_eeprom_free(priv);
|
||||
|
@ -3820,6 +3812,10 @@ MODULE_PARM_DESC(plcp_check, "Check plcp health (default: 1 [enabled])");
|
|||
module_param_named(ack_check, iwlagn_mod_params.ack_check, bool, S_IRUGO);
|
||||
MODULE_PARM_DESC(ack_check, "Check ack health (default: 0 [disabled])");
|
||||
|
||||
module_param_named(wd_disable, iwlagn_mod_params.wd_disable, bool, S_IRUGO);
|
||||
MODULE_PARM_DESC(wd_disable,
|
||||
"Disable stuck queue watchdog timer (default: 0 [enabled])");
|
||||
|
||||
/*
|
||||
* set bt_coex_active to true, uCode will do kill/defer
|
||||
* every time the priority line is asserted (BT is sending signals on the
|
||||
|
|
|
@ -182,7 +182,6 @@ void iwlagn_temperature(struct iwl_priv *priv);
|
|||
u16 iwlagn_eeprom_calib_version(struct iwl_priv *priv);
|
||||
const u8 *iwlagn_eeprom_query_addr(const struct iwl_priv *priv,
|
||||
size_t offset);
|
||||
void iwlagn_rx_queue_reset(struct iwl_priv *priv, struct iwl_rx_queue *rxq);
|
||||
int iwlagn_rx_init(struct iwl_priv *priv, struct iwl_rx_queue *rxq);
|
||||
int iwlagn_hw_nic_init(struct iwl_priv *priv);
|
||||
int iwlagn_wait_tx_queue_empty(struct iwl_priv *priv);
|
||||
|
@ -194,7 +193,6 @@ void iwlagn_rx_queue_restock(struct iwl_priv *priv);
|
|||
void iwlagn_rx_allocate(struct iwl_priv *priv, gfp_t priority);
|
||||
void iwlagn_rx_replenish(struct iwl_priv *priv);
|
||||
void iwlagn_rx_replenish_now(struct iwl_priv *priv);
|
||||
void iwlagn_rx_queue_free(struct iwl_priv *priv, struct iwl_rx_queue *rxq);
|
||||
int iwlagn_rxq_stop(struct iwl_priv *priv);
|
||||
int iwlagn_hwrate_to_mac80211_idx(u32 rate_n_flags, enum ieee80211_band band);
|
||||
void iwl_setup_rx_handlers(struct iwl_priv *priv);
|
||||
|
@ -220,8 +218,6 @@ void iwlagn_rx_reply_compressed_ba(struct iwl_priv *priv,
|
|||
struct iwl_rx_mem_buffer *rxb);
|
||||
int iwlagn_tx_queue_reclaim(struct iwl_priv *priv, int txq_id, int index);
|
||||
void iwlagn_hw_txq_ctx_free(struct iwl_priv *priv);
|
||||
int iwlagn_txq_ctx_alloc(struct iwl_priv *priv);
|
||||
void iwlagn_txq_ctx_reset(struct iwl_priv *priv);
|
||||
void iwlagn_txq_ctx_stop(struct iwl_priv *priv);
|
||||
|
||||
static inline u32 iwl_tx_status_to_mac80211(u32 status)
|
||||
|
@ -260,6 +256,7 @@ int iwlagn_manage_ibss_station(struct iwl_priv *priv,
|
|||
/* hcmd */
|
||||
int iwlagn_send_tx_ant_config(struct iwl_priv *priv, u8 valid_tx_ant);
|
||||
int iwlagn_send_beacon_cmd(struct iwl_priv *priv);
|
||||
int iwlagn_set_pan_params(struct iwl_priv *priv);
|
||||
|
||||
/* bt coex */
|
||||
void iwlagn_send_advance_bt_config(struct iwl_priv *priv);
|
||||
|
|
|
@ -585,8 +585,7 @@ static void _iwl_set_rxon_ht(struct iwl_priv *priv,
|
|||
rxon->flags |= RXON_FLG_CHANNEL_MODE_LEGACY;
|
||||
}
|
||||
|
||||
if (priv->cfg->ops->hcmd->set_rxon_chain)
|
||||
priv->cfg->ops->hcmd->set_rxon_chain(priv, ctx);
|
||||
iwlagn_set_rxon_chain(priv, ctx);
|
||||
|
||||
IWL_DEBUG_ASSOC(priv, "rxon flags 0x%X operation mode :0x%X "
|
||||
"extension channel offset 0x%x\n",
|
||||
|
@ -1216,8 +1215,7 @@ static int iwl_set_mode(struct iwl_priv *priv, struct iwl_rxon_context *ctx)
|
|||
{
|
||||
iwl_connection_init_rx_config(priv, ctx);
|
||||
|
||||
if (priv->cfg->ops->hcmd->set_rxon_chain)
|
||||
priv->cfg->ops->hcmd->set_rxon_chain(priv, ctx);
|
||||
iwlagn_set_rxon_chain(priv, ctx);
|
||||
|
||||
return iwlagn_commit_rxon(priv, ctx);
|
||||
}
|
||||
|
@ -1372,20 +1370,6 @@ void iwl_mac_remove_interface(struct ieee80211_hw *hw,
|
|||
|
||||
}
|
||||
|
||||
int iwl_alloc_txq_mem(struct iwl_priv *priv)
|
||||
{
|
||||
if (!priv->txq)
|
||||
priv->txq = kzalloc(
|
||||
sizeof(struct iwl_tx_queue) *
|
||||
priv->cfg->base_params->num_of_queues,
|
||||
GFP_KERNEL);
|
||||
if (!priv->txq) {
|
||||
IWL_ERR(priv, "Not enough memory for txq\n");
|
||||
return -ENOMEM;
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
void iwl_free_txq_mem(struct iwl_priv *priv)
|
||||
{
|
||||
kfree(priv->txq);
|
||||
|
@ -1853,7 +1837,7 @@ void iwl_setup_watchdog(struct iwl_priv *priv)
|
|||
{
|
||||
unsigned int timeout = priv->cfg->base_params->wd_timeout;
|
||||
|
||||
if (timeout)
|
||||
if (timeout && !iwlagn_mod_params.wd_disable)
|
||||
mod_timer(&priv->watchdog,
|
||||
jiffies + msecs_to_jiffies(IWL_WD_TICK(timeout)));
|
||||
else
|
||||
|
|
|
@ -80,14 +80,6 @@ struct iwl_cmd;
|
|||
|
||||
#define IWL_CMD(x) case x: return #x
|
||||
|
||||
struct iwl_hcmd_ops {
|
||||
void (*set_rxon_chain)(struct iwl_priv *priv,
|
||||
struct iwl_rxon_context *ctx);
|
||||
int (*set_tx_ant)(struct iwl_priv *priv, u8 valid_tx_ant);
|
||||
void (*send_bt_config)(struct iwl_priv *priv);
|
||||
int (*set_pan_params)(struct iwl_priv *priv);
|
||||
};
|
||||
|
||||
struct iwl_hcmd_utils_ops {
|
||||
u16 (*build_addsta_hcmd)(const struct iwl_addsta_cmd *cmd, u8 *data);
|
||||
void (*gain_computation)(struct iwl_priv *priv,
|
||||
|
@ -146,7 +138,6 @@ struct iwl_nic_ops {
|
|||
|
||||
struct iwl_ops {
|
||||
const struct iwl_lib_ops *lib;
|
||||
const struct iwl_hcmd_ops *hcmd;
|
||||
const struct iwl_hcmd_utils_ops *utils;
|
||||
const struct iwl_nic_ops *nic;
|
||||
};
|
||||
|
@ -160,6 +151,7 @@ struct iwl_mod_params {
|
|||
int restart_fw; /* def: 1 = restart firmware */
|
||||
bool plcp_check; /* def: true = enable plcp health check */
|
||||
bool ack_check; /* def: false = disable ack health check */
|
||||
bool wd_disable; /* def: false = enable stuck queue check */
|
||||
bool bt_coex_active; /* def: true = enable bt coex */
|
||||
int led_mode; /* def: 0 = system default */
|
||||
bool no_sleep_autoadjust; /* def: true = disable autoadjust */
|
||||
|
@ -336,7 +328,6 @@ void iwl_mac_remove_interface(struct ieee80211_hw *hw,
|
|||
int iwl_mac_change_interface(struct ieee80211_hw *hw,
|
||||
struct ieee80211_vif *vif,
|
||||
enum nl80211_iftype newtype, bool newp2p);
|
||||
int iwl_alloc_txq_mem(struct iwl_priv *priv);
|
||||
void iwl_free_txq_mem(struct iwl_priv *priv);
|
||||
|
||||
#ifdef CONFIG_IWLWIFI_DEBUGFS
|
||||
|
@ -382,7 +373,6 @@ static inline void iwl_update_stats(struct iwl_priv *priv, bool is_tx,
|
|||
******************************************************/
|
||||
void iwl_cmd_queue_free(struct iwl_priv *priv);
|
||||
void iwl_cmd_queue_unmap(struct iwl_priv *priv);
|
||||
int iwl_rx_queue_alloc(struct iwl_priv *priv);
|
||||
void iwl_rx_queue_update_write_ptr(struct iwl_priv *priv,
|
||||
struct iwl_rx_queue *q);
|
||||
int iwl_rx_queue_space(const struct iwl_rx_queue *q);
|
||||
|
@ -396,11 +386,9 @@ void iwl_chswitch_done(struct iwl_priv *priv, bool is_success);
|
|||
* TX
|
||||
******************************************************/
|
||||
void iwl_txq_update_write_ptr(struct iwl_priv *priv, struct iwl_tx_queue *txq);
|
||||
int iwl_tx_queue_init(struct iwl_priv *priv, struct iwl_tx_queue *txq,
|
||||
int slots_num, u32 txq_id);
|
||||
void iwl_tx_queue_reset(struct iwl_priv *priv, struct iwl_tx_queue *txq,
|
||||
int slots_num, u32 txq_id);
|
||||
void iwl_tx_queue_free(struct iwl_priv *priv, int txq_id);
|
||||
int iwl_queue_init(struct iwl_priv *priv, struct iwl_queue *q,
|
||||
int count, int slots_num, u32 id);
|
||||
void iwl_tx_queue_unmap(struct iwl_priv *priv, int txq_id);
|
||||
void iwl_setup_watchdog(struct iwl_priv *priv);
|
||||
/*****************************************************
|
||||
|
|
|
@ -666,7 +666,6 @@ struct iwl_hw_params {
|
|||
u16 max_rxq_size;
|
||||
u16 max_rxq_log;
|
||||
u32 rx_page_order;
|
||||
u32 rx_wrt_ptr_reg;
|
||||
u8 max_stations;
|
||||
u8 ht40_channel;
|
||||
u8 max_beacon_itrvl; /* in 1024 ms */
|
||||
|
@ -1228,6 +1227,25 @@ struct iwl_bus {
|
|||
unsigned int irq;
|
||||
};
|
||||
|
||||
struct iwl_trans;
|
||||
|
||||
/**
|
||||
* struct iwl_trans_ops - transport specific operations
|
||||
|
||||
* @rx_init: inits the rx memory, allocate it if needed
|
||||
* @rx_free: frees the rx memory
|
||||
* @tx_init:inits the tx memory, allocate if needed
|
||||
*/
|
||||
struct iwl_trans_ops {
|
||||
int (*rx_init)(struct iwl_priv *priv);
|
||||
void (*rx_free)(struct iwl_priv *priv);
|
||||
int (*tx_init)(struct iwl_priv *priv);
|
||||
};
|
||||
|
||||
struct iwl_trans {
|
||||
const struct iwl_trans_ops *ops;
|
||||
};
|
||||
|
||||
struct iwl_priv {
|
||||
|
||||
/* ieee device used by generic ieee processing code */
|
||||
|
@ -1296,13 +1314,13 @@ struct iwl_priv {
|
|||
struct mutex mutex;
|
||||
|
||||
struct iwl_bus bus; /* bus specific data */
|
||||
struct iwl_trans trans;
|
||||
|
||||
/* microcode/device supports multiple contexts */
|
||||
u8 valid_contexts;
|
||||
|
||||
/* command queue number */
|
||||
u8 cmd_queue;
|
||||
u8 last_sync_cmd_id;
|
||||
|
||||
/* max number of station keys */
|
||||
u8 sta_key_max_num;
|
||||
|
|
|
@ -171,6 +171,8 @@ int iwl_send_cmd_sync(struct iwl_priv *priv, struct iwl_host_cmd *cmd)
|
|||
int cmd_idx;
|
||||
int ret;
|
||||
|
||||
lockdep_assert_held(&priv->mutex);
|
||||
|
||||
if (WARN_ON(cmd->flags & CMD_ASYNC))
|
||||
return -EINVAL;
|
||||
|
||||
|
@ -181,16 +183,7 @@ int iwl_send_cmd_sync(struct iwl_priv *priv, struct iwl_host_cmd *cmd)
|
|||
IWL_DEBUG_INFO(priv, "Attempting to send sync command %s\n",
|
||||
get_cmd_string(cmd->id));
|
||||
|
||||
if (test_and_set_bit(STATUS_HCMD_ACTIVE, &priv->status)) {
|
||||
IWL_ERR(priv, "STATUS_HCMD_ACTIVE already set while sending %s"
|
||||
". Previous SYNC cmdn is %s\n",
|
||||
get_cmd_string(cmd->id),
|
||||
get_cmd_string(priv->last_sync_cmd_id));
|
||||
WARN_ON(1);
|
||||
} else {
|
||||
priv->last_sync_cmd_id = cmd->id;
|
||||
}
|
||||
|
||||
set_bit(STATUS_HCMD_ACTIVE, &priv->status);
|
||||
IWL_DEBUG_INFO(priv, "Setting HCMD_ACTIVE for command %s\n",
|
||||
get_cmd_string(cmd->id));
|
||||
|
||||
|
|
|
@ -67,6 +67,7 @@
|
|||
#include "iwl-agn.h"
|
||||
#include "iwl-core.h"
|
||||
#include "iwl-io.h"
|
||||
#include "iwl-trans.h"
|
||||
|
||||
/* PCI registers */
|
||||
#define PCI_CFG_RETRY_TIMEOUT 0x041
|
||||
|
@ -93,7 +94,7 @@ static u16 iwl_pciexp_link_ctrl(struct iwl_bus *bus)
|
|||
u16 pci_lnk_ctl;
|
||||
struct pci_dev *pci_dev = IWL_BUS_GET_PCI_DEV(bus);
|
||||
|
||||
pos = pci_find_capability(pci_dev, PCI_CAP_ID_EXP);
|
||||
pos = pci_pcie_cap(pci_dev);
|
||||
pci_read_config_word(pci_dev, pos + PCI_EXP_LNKCTL, &pci_lnk_ctl);
|
||||
return pci_lnk_ctl;
|
||||
}
|
||||
|
|
|
@ -168,6 +168,7 @@
|
|||
* the scheduler (especially for queue #4/#9, the command queue, otherwise
|
||||
* the driver can't issue commands!):
|
||||
*/
|
||||
#define SCD_MEM_LOWER_BOUND (0x0000)
|
||||
|
||||
/**
|
||||
* Max Tx window size is the max number of contiguous TFDs that the scheduler
|
||||
|
@ -197,15 +198,23 @@
|
|||
#define IWLAGN_SCD_QUEUE_CTX_REG2_FRAME_LIMIT_POS (16)
|
||||
#define IWLAGN_SCD_QUEUE_CTX_REG2_FRAME_LIMIT_MSK (0x007F0000)
|
||||
|
||||
#define IWLAGN_SCD_CONTEXT_DATA_OFFSET (0x600)
|
||||
#define IWLAGN_SCD_TX_STTS_BITMAP_OFFSET (0x7B1)
|
||||
#define IWLAGN_SCD_TRANSLATE_TBL_OFFSET (0x7E0)
|
||||
/* Context Data */
|
||||
#define IWLAGN_SCD_CONTEXT_MEM_LOWER_BOUND (SCD_MEM_LOWER_BOUND + 0x600)
|
||||
#define IWLAGN_SCD_CONTEXT_MEM_UPPER_BOUND (SCD_MEM_LOWER_BOUND + 0x6A0)
|
||||
|
||||
/* Tx status */
|
||||
#define IWLAGN_SCD_TX_STTS_MEM_LOWER_BOUND (SCD_MEM_LOWER_BOUND + 0x6A0)
|
||||
#define IWLAGN_SCD_TX_STTS_MEM_UPPER_BOUND (SCD_MEM_LOWER_BOUND + 0x7E0)
|
||||
|
||||
/* Translation Data */
|
||||
#define IWLAGN_SCD_TRANS_TBL_MEM_LOWER_BOUND (SCD_MEM_LOWER_BOUND + 0x7E0)
|
||||
#define IWLAGN_SCD_TRANS_TBL_MEM_UPPER_BOUND (SCD_MEM_LOWER_BOUND + 0x808)
|
||||
|
||||
#define IWLAGN_SCD_CONTEXT_QUEUE_OFFSET(x)\
|
||||
(IWLAGN_SCD_CONTEXT_DATA_OFFSET + ((x) * 8))
|
||||
(IWLAGN_SCD_CONTEXT_MEM_LOWER_BOUND + ((x) * 8))
|
||||
|
||||
#define IWLAGN_SCD_TRANSLATE_TBL_OFFSET_QUEUE(x) \
|
||||
((IWLAGN_SCD_TRANSLATE_TBL_OFFSET + ((x) * 2)) & 0xfffc)
|
||||
((IWLAGN_SCD_TRANS_TBL_MEM_LOWER_BOUND + ((x) * 2)) & 0xfffc)
|
||||
|
||||
#define IWLAGN_SCD_QUEUECHAIN_SEL_ALL(priv) \
|
||||
(((1<<(priv)->hw_params.max_txq_num) - 1) &\
|
||||
|
|
|
@@ -134,7 +134,6 @@ int iwl_rx_queue_space(const struct iwl_rx_queue *q)
void iwl_rx_queue_update_write_ptr(struct iwl_priv *priv, struct iwl_rx_queue *q)
{
	unsigned long flags;
	u32 rx_wrt_ptr_reg = priv->hw_params.rx_wrt_ptr_reg;
	u32 reg;

	spin_lock_irqsave(&q->lock, flags);

@@ -146,7 +145,7 @@ void iwl_rx_queue_update_write_ptr(struct iwl_priv *priv, struct iwl_rx_queue *q
		/* shadow register enabled */
		/* Device expects a multiple of 8 */
		q->write_actual = (q->write & ~0x7);
		iwl_write32(priv, rx_wrt_ptr_reg, q->write_actual);
		iwl_write32(priv, FH_RSCSR_CHNL0_WPTR, q->write_actual);
	} else {
		/* If power-saving is in use, make sure device is awake */
		if (test_bit(STATUS_POWER_PMI, &priv->status)) {

@@ -162,14 +161,14 @@ void iwl_rx_queue_update_write_ptr(struct iwl_priv *priv, struct iwl_rx_queue *q
			}

			q->write_actual = (q->write & ~0x7);
			iwl_write_direct32(priv, rx_wrt_ptr_reg,
			iwl_write_direct32(priv, FH_RSCSR_CHNL0_WPTR,
					q->write_actual);

		/* Else device is assumed to be awake */
		} else {
			/* Device expects a multiple of 8 */
			q->write_actual = (q->write & ~0x7);
			iwl_write_direct32(priv, rx_wrt_ptr_reg,
			iwl_write_direct32(priv, FH_RSCSR_CHNL0_WPTR,
					q->write_actual);
		}
	}

@@ -179,46 +178,6 @@ void iwl_rx_queue_update_write_ptr(struct iwl_priv *priv, struct iwl_rx_queue *q
	spin_unlock_irqrestore(&q->lock, flags);
}

int iwl_rx_queue_alloc(struct iwl_priv *priv)
{
	struct iwl_rx_queue *rxq = &priv->rxq;
	struct device *dev = priv->bus.dev;
	int i;

	spin_lock_init(&rxq->lock);
	INIT_LIST_HEAD(&rxq->rx_free);
	INIT_LIST_HEAD(&rxq->rx_used);

	/* Alloc the circular buffer of Read Buffer Descriptors (RBDs) */
	rxq->bd = dma_alloc_coherent(dev, 4 * RX_QUEUE_SIZE, &rxq->bd_dma,
				     GFP_KERNEL);
	if (!rxq->bd)
		goto err_bd;

	rxq->rb_stts = dma_alloc_coherent(dev, sizeof(struct iwl_rb_status),
					  &rxq->rb_stts_dma, GFP_KERNEL);
	if (!rxq->rb_stts)
		goto err_rb;

	/* Fill the rx_used queue with _all_ of the Rx buffers */
	for (i = 0; i < RX_FREE_BUFFERS + RX_QUEUE_SIZE; i++)
		list_add_tail(&rxq->pool[i].list, &rxq->rx_used);

	/* Set us so that we have processed and used all buffers, but have
	 * not restocked the Rx queue with fresh buffers */
	rxq->read = rxq->write = 0;
	rxq->write_actual = 0;
	rxq->free_count = 0;
	rxq->need_update = 0;
	return 0;

err_rb:
	dma_free_coherent(dev, 4 * RX_QUEUE_SIZE, rxq->bd,
			  rxq->bd_dma);
err_bd:
	return -ENOMEM;
}

/******************************************************************************
 *
 * Generic RX handler implementations
@@ -66,116 +66,144 @@
#include <linux/types.h>

/* Commands from user space to kernel space(IWL_TM_CMD_ID_APP2DEV_XX) and
/*
 * Commands from user space to kernel space(IWL_TM_CMD_ID_APP2DEV_XX) and
 * and from kernel space to user space(IWL_TM_CMD_ID_DEV2APP_XX).
 * The command ID is carried with IWL_TM_ATTR_COMMAND. There are three types
 * of command from user space and two types of command from kernel space.
 * See below.
 * The command ID is carried with IWL_TM_ATTR_COMMAND.
 *
 * @IWL_TM_CMD_APP2DEV_UCODE:
 *	commands from user application to the uCode,
 *	the actual uCode host command ID is carried with
 *	IWL_TM_ATTR_UCODE_CMD_ID
 *
 * @IWL_TM_CMD_APP2DEV_REG_READ32:
 * @IWL_TM_CMD_APP2DEV_REG_WRITE32:
 * @IWL_TM_CMD_APP2DEV_REG_WRITE8:
 *	commands from user application to access register
 *
 * @IWL_TM_CMD_APP2DEV_GET_DEVICENAME: retrieve device name
 * @IWL_TM_CMD_APP2DEV_LOAD_INIT_FW: load initial uCode image
 * @IWL_TM_CMD_APP2DEV_CFG_INIT_CALIB: perform calibration
 * @IWL_TM_CMD_APP2DEV_LOAD_RUNTIME_FW: load runtime uCode image
 * @IWL_TM_CMD_APP2DEV_GET_EEPROM: request EEPROM data
 * @IWL_TM_CMD_APP2DEV_FIXRATE_REQ: set fix MCS
 *	commands from user space for pure driver level operations
 *
 * @IWL_TM_CMD_APP2DEV_BEGIN_TRACE:
 * @IWL_TM_CMD_APP2DEV_END_TRACE:
 * @IWL_TM_CMD_APP2DEV_READ_TRACE:
 *	commands from user space for uCode trace operations
 *
 * @IWL_TM_CMD_DEV2APP_SYNC_RSP:
 *	commands from kernel space to carry the synchronous response
 *	to user application
 * @IWL_TM_CMD_DEV2APP_UCODE_RX_PKT:
 *	commands from kernel space to multicast the spontaneous messages
 *	to user application
 * @IWL_TM_CMD_DEV2APP_EEPROM_RSP:
 *	commands from kernel space to carry the eeprom response
 *	to user application
 */
enum iwl_tm_cmd_t {
	/* commands from user application to the uCode,
	 * the actual uCode host command ID is carried with
	 * IWL_TM_ATTR_UCODE_CMD_ID */
	IWL_TM_CMD_APP2DEV_UCODE = 1,

	/* commands from user application to access register */
	IWL_TM_CMD_APP2DEV_REG_READ32,
	IWL_TM_CMD_APP2DEV_REG_WRITE32,
	IWL_TM_CMD_APP2DEV_REG_WRITE8,

	/* commands from user space for pure driver level operations */
	IWL_TM_CMD_APP2DEV_GET_DEVICENAME,
	IWL_TM_CMD_APP2DEV_LOAD_INIT_FW,
	IWL_TM_CMD_APP2DEV_CFG_INIT_CALIB,
	IWL_TM_CMD_APP2DEV_LOAD_RUNTIME_FW,
	IWL_TM_CMD_APP2DEV_GET_EEPROM,
	IWL_TM_CMD_APP2DEV_FIXRATE_REQ,
	/* if there is other new command for the driver layer operation,
	 * append them here */

	/* commands from user space for uCode trace operations */
	IWL_TM_CMD_APP2DEV_BEGIN_TRACE,
	IWL_TM_CMD_APP2DEV_END_TRACE,
	IWL_TM_CMD_APP2DEV_READ_TRACE,

	/* commands from kernel space to carry the synchronous response
	 * to user application */
	IWL_TM_CMD_DEV2APP_SYNC_RSP,

	/* commands from kernel space to multicast the spontaneous messages
	 * to user application */
	IWL_TM_CMD_DEV2APP_UCODE_RX_PKT,

	/* commands from kernel space to carry the eeprom response
	 * to user application */
	IWL_TM_CMD_DEV2APP_EEPROM_RSP,

	IWL_TM_CMD_MAX,
	IWL_TM_CMD_APP2DEV_UCODE = 1,
	IWL_TM_CMD_APP2DEV_REG_READ32 = 2,
	IWL_TM_CMD_APP2DEV_REG_WRITE32 = 3,
	IWL_TM_CMD_APP2DEV_REG_WRITE8 = 4,
	IWL_TM_CMD_APP2DEV_GET_DEVICENAME = 5,
	IWL_TM_CMD_APP2DEV_LOAD_INIT_FW = 6,
	IWL_TM_CMD_APP2DEV_CFG_INIT_CALIB = 7,
	IWL_TM_CMD_APP2DEV_LOAD_RUNTIME_FW = 8,
	IWL_TM_CMD_APP2DEV_GET_EEPROM = 9,
	IWL_TM_CMD_APP2DEV_FIXRATE_REQ = 10,
	IWL_TM_CMD_APP2DEV_BEGIN_TRACE = 11,
	IWL_TM_CMD_APP2DEV_END_TRACE = 12,
	IWL_TM_CMD_APP2DEV_READ_TRACE = 13,
	IWL_TM_CMD_DEV2APP_SYNC_RSP = 14,
	IWL_TM_CMD_DEV2APP_UCODE_RX_PKT = 15,
	IWL_TM_CMD_DEV2APP_EEPROM_RSP = 16,
	IWL_TM_CMD_MAX = 17,
};

/*
 * Attribute field in testmode command
 * See enum iwl_tm_cmd_t.
 *
 * @IWL_TM_ATTR_NOT_APPLICABLE:
 *	The attribute is not applicable or invalid
 * @IWL_TM_ATTR_COMMAND:
 *	From user space to kernel space:
 *	the command either destines to ucode, driver, or register;
 *	From kernel space to user space:
 *	the command either carries synchronous response,
 *	or the spontaneous message multicast from the device;
 *
 * @IWL_TM_ATTR_UCODE_CMD_ID:
 * @IWL_TM_ATTR_UCODE_CMD_DATA:
 *	When IWL_TM_ATTR_COMMAND is IWL_TM_CMD_APP2DEV_UCODE,
 *	The mandatory fields are :
 *	IWL_TM_ATTR_UCODE_CMD_ID for recognizable command ID;
 *	IWL_TM_ATTR_COMMAND_FLAG for the flags of the commands;
 *	The optional fields are:
 *	IWL_TM_ATTR_UCODE_CMD_DATA for the actual command payload
 *	to the ucode
 *
 * @IWL_TM_ATTR_REG_OFFSET:
 * @IWL_TM_ATTR_REG_VALUE8:
 * @IWL_TM_ATTR_REG_VALUE32:
 *	When IWL_TM_ATTR_COMMAND is IWL_TM_CMD_APP2DEV_REG_XXX,
 *	The mandatory fields are:
 *	IWL_TM_ATTR_REG_OFFSET for the offset of the target register;
 *	IWL_TM_ATTR_REG_VALUE8 or IWL_TM_ATTR_REG_VALUE32 for value
 *
 * @IWL_TM_ATTR_SYNC_RSP:
 *	When IWL_TM_ATTR_COMMAND is IWL_TM_CMD_DEV2APP_SYNC_RSP,
 *	The mandatory fields are:
 *	IWL_TM_ATTR_SYNC_RSP for the data content responding to the user
 *	application command
 *
 * @IWL_TM_ATTR_UCODE_RX_PKT:
 *	When IWL_TM_ATTR_COMMAND is IWL_TM_CMD_DEV2APP_UCODE_RX_PKT,
 *	The mandatory fields are:
 *	IWL_TM_ATTR_UCODE_RX_PKT for the data content multicast to the user
 *	application
 *
 * @IWL_TM_ATTR_EEPROM:
 *	When IWL_TM_ATTR_COMMAND is IWL_TM_CMD_DEV2APP_EEPROM,
 *	The mandatory fields are:
 *	IWL_TM_ATTR_EEPROM for the data content responding to the user
 *	application
 *
 * @IWL_TM_ATTR_TRACE_ADDR:
 * @IWL_TM_ATTR_TRACE_SIZE:
 * @IWL_TM_ATTR_TRACE_DUMP:
 *	When IWL_TM_ATTR_COMMAND is IWL_TM_CMD_APP2DEV_XXX_TRACE,
 *	The mandatory fields are:
 *	IWL_TM_ATTR_MEM_TRACE_ADDR for the trace address
 *	IWL_TM_ATTR_MEM_TRACE_SIZE for the trace buffer size
 *	IWL_TM_ATTR_MEM_TRACE_DUMP for the trace dump
 *
 * @IWL_TM_ATTR_FIXRATE:
 *	When IWL_TM_ATTR_COMMAND is IWL_TM_CMD_APP2DEV_FIXRATE_REQ,
 *	The mandatory fields are:
 *	IWL_TM_ATTR_FIXRATE for the fixed rate
 *
 */
enum iwl_tm_attr_t {
	IWL_TM_ATTR_NOT_APPLICABLE = 0,

	/* From user space to kernel space:
	 * the command either destines to ucode, driver, or register;
	 * See enum iwl_tm_cmd_t.
	 *
	 * From kernel space to user space:
	 * the command either carries synchronous response,
	 * or the spontaneous message multicast from the device;
	 * See enum iwl_tm_cmd_t. */
	IWL_TM_ATTR_COMMAND,

	/* When IWL_TM_ATTR_COMMAND is IWL_TM_CMD_APP2DEV_UCODE,
	 * The mandatory fields are :
	 * IWL_TM_ATTR_UCODE_CMD_ID for recognizable command ID;
	 * IWL_TM_ATTR_COMMAND_FLAG for the flags of the commands;
	 * The optional fields are:
	 * IWL_TM_ATTR_UCODE_CMD_DATA for the actual command payload
	 * to the ucode */
	IWL_TM_ATTR_UCODE_CMD_ID,
	IWL_TM_ATTR_UCODE_CMD_DATA,

	/* When IWL_TM_ATTR_COMMAND is IWL_TM_CMD_APP2DEV_REG_XXX,
	 * The mandatory fields are:
	 * IWL_TM_ATTR_REG_OFFSET for the offset of the target register;
	 * IWL_TM_ATTR_REG_VALUE8 or IWL_TM_ATTR_REG_VALUE32 for value */
	IWL_TM_ATTR_REG_OFFSET,
	IWL_TM_ATTR_REG_VALUE8,
	IWL_TM_ATTR_REG_VALUE32,

	/* When IWL_TM_ATTR_COMMAND is IWL_TM_CMD_DEV2APP_SYNC_RSP,
	 * The mandatory fields are:
	 * IWL_TM_ATTR_SYNC_RSP for the data content responding to the user
	 * application command */
	IWL_TM_ATTR_SYNC_RSP,
	/* When IWL_TM_ATTR_COMMAND is IWL_TM_CMD_DEV2APP_UCODE_RX_PKT,
	 * The mandatory fields are:
	 * IWL_TM_ATTR_UCODE_RX_PKT for the data content multicast to the user
	 * application */
	IWL_TM_ATTR_UCODE_RX_PKT,

	/* When IWL_TM_ATTR_COMMAND is IWL_TM_CMD_DEV2APP_EEPROM,
	 * The mandatory fields are:
	 * IWL_TM_ATTR_EEPROM for the data content responding to the user
	 * application */
	IWL_TM_ATTR_EEPROM,

	/* When IWL_TM_ATTR_COMMAND is IWL_TM_CMD_APP2DEV_XXX_TRACE,
	 * The mandatory fields are:
	 * IWL_TM_ATTR_MEM_TRACE_ADDR for the trace address
	 */
	IWL_TM_ATTR_TRACE_ADDR,
	IWL_TM_ATTR_TRACE_SIZE,
	IWL_TM_ATTR_TRACE_DUMP,

	/* When IWL_TM_ATTR_COMMAND is IWL_TM_CMD_APP2DEV_FIXRATE_REQ,
	 * The mandatory fields are:
	 * IWL_TM_ATTR_FIXRATE for the fixed rate
	 */
	IWL_TM_ATTR_FIXRATE,

	IWL_TM_ATTR_MAX,
	IWL_TM_ATTR_NOT_APPLICABLE = 0,
	IWL_TM_ATTR_COMMAND = 1,
	IWL_TM_ATTR_UCODE_CMD_ID = 2,
	IWL_TM_ATTR_UCODE_CMD_DATA = 3,
	IWL_TM_ATTR_REG_OFFSET = 4,
	IWL_TM_ATTR_REG_VALUE8 = 5,
	IWL_TM_ATTR_REG_VALUE32 = 6,
	IWL_TM_ATTR_SYNC_RSP = 7,
	IWL_TM_ATTR_UCODE_RX_PKT = 8,
	IWL_TM_ATTR_EEPROM = 9,
	IWL_TM_ATTR_TRACE_ADDR = 10,
	IWL_TM_ATTR_TRACE_SIZE = 11,
	IWL_TM_ATTR_TRACE_DUMP = 12,
	IWL_TM_ATTR_FIXRATE = 13,
	IWL_TM_ATTR_MAX = 14,
};

/* uCode trace buffer */
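The testmode IDs above are now fixed numeric values, so a userspace tool can rely on them across kernel versions. As a rough illustration only, the following minimal libnl sketch issues an IWL_TM_CMD_APP2DEV_REG_READ32; the interface name, the choice of register offset, the locally redefined IWL_TM_* constants and the assumption that these attributes travel nested inside nl80211's NL80211_ATTR_TESTDATA are all illustrative and not taken from this patch.

/* Hedged sketch: read one device register through nl80211 testmode.
 * Assumes libnl-3/libnl-genl-3; attribute nesting under
 * NL80211_ATTR_TESTDATA is an assumption, not shown in this diff. */
#include <net/if.h>
#include <netlink/netlink.h>
#include <netlink/genl/genl.h>
#include <netlink/genl/ctrl.h>
#include <linux/nl80211.h>

/* Local copies of the IDs defined in the enums above. */
#define IWL_TM_ATTR_COMMAND		1
#define IWL_TM_ATTR_REG_OFFSET		4
#define IWL_TM_CMD_APP2DEV_REG_READ32	2

int main(void)
{
	struct nl_sock *sk = nl_socket_alloc();
	struct nl_msg *msg = nlmsg_alloc();
	struct nlattr *td;
	int family;

	if (!sk || !msg || genl_connect(sk))
		return 1;
	family = genl_ctrl_resolve(sk, "nl80211");

	genlmsg_put(msg, NL_AUTO_PORT, NL_AUTO_SEQ, family, 0, 0,
		    NL80211_CMD_TESTMODE, 0);
	nla_put_u32(msg, NL80211_ATTR_IFINDEX, if_nametoindex("wlan0"));

	td = nla_nest_start(msg, NL80211_ATTR_TESTDATA);	/* driver-private blob */
	nla_put_u32(msg, IWL_TM_ATTR_COMMAND, IWL_TM_CMD_APP2DEV_REG_READ32);
	nla_put_u32(msg, IWL_TM_ATTR_REG_OFFSET, 0x0);		/* illustrative offset */
	nla_nest_end(msg, td);

	nl_send_auto(sk, msg);	/* per the comments above, the answer comes back
				 * as an IWL_TM_CMD_DEV2APP_SYNC_RSP message */

	nlmsg_free(msg);
	nl_socket_free(sk);
	return 0;
}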
@@ -0,0 +1,423 @@
/******************************************************************************
 *
 * This file is provided under a dual BSD/GPLv2 license. When using or
 * redistributing this file, you may do so under either license.
 *
 * GPL LICENSE SUMMARY
 *
 * Copyright(c) 2007 - 2011 Intel Corporation. All rights reserved.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of version 2 of the GNU General Public License as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful, but
 * WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
 * General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110,
 * USA
 *
 * The full GNU General Public License is included in this distribution
 * in the file called LICENSE.GPL.
 *
 * Contact Information:
 * Intel Linux Wireless <ilw@linux.intel.com>
 * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
 *
 * BSD LICENSE
 *
 * Copyright(c) 2005 - 2011 Intel Corporation. All rights reserved.
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 *
 * * Redistributions of source code must retain the above copyright
 *   notice, this list of conditions and the following disclaimer.
 * * Redistributions in binary form must reproduce the above copyright
 *   notice, this list of conditions and the following disclaimer in
 *   the documentation and/or other materials provided with the
 *   distribution.
 * * Neither the name Intel Corporation nor the names of its
 *   contributors may be used to endorse or promote products derived
 *   from this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
 * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
 * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 *****************************************************************************/
#include "iwl-dev.h"
#include "iwl-trans.h"
#include "iwl-core.h"
#include "iwl-helpers.h"
/*TODO remove unneeded includes when the transport layer tx_free will be here */
#include "iwl-agn.h"

static int iwl_trans_rx_alloc(struct iwl_priv *priv)
{
	struct iwl_rx_queue *rxq = &priv->rxq;
	struct device *dev = priv->bus.dev;

	memset(&priv->rxq, 0, sizeof(priv->rxq));

	spin_lock_init(&rxq->lock);
	INIT_LIST_HEAD(&rxq->rx_free);
	INIT_LIST_HEAD(&rxq->rx_used);

	if (WARN_ON(rxq->bd || rxq->rb_stts))
		return -EINVAL;

	/* Allocate the circular buffer of Read Buffer Descriptors (RBDs) */
	rxq->bd = dma_alloc_coherent(dev, sizeof(__le32) * RX_QUEUE_SIZE,
				     &rxq->bd_dma, GFP_KERNEL);
	if (!rxq->bd)
		goto err_bd;
	memset(rxq->bd, 0, sizeof(__le32) * RX_QUEUE_SIZE);

	/*Allocate the driver's pointer to receive buffer status */
	rxq->rb_stts = dma_alloc_coherent(dev, sizeof(*rxq->rb_stts),
					  &rxq->rb_stts_dma, GFP_KERNEL);
	if (!rxq->rb_stts)
		goto err_rb_stts;
	memset(rxq->rb_stts, 0, sizeof(*rxq->rb_stts));

	return 0;

err_rb_stts:
	dma_free_coherent(dev, sizeof(__le32) * RX_QUEUE_SIZE,
			  rxq->bd, rxq->bd_dma);
	memset(&rxq->bd_dma, 0, sizeof(rxq->bd_dma));
	rxq->bd = NULL;
err_bd:
	return -ENOMEM;
}

static void iwl_trans_rxq_free_rx_bufs(struct iwl_priv *priv)
{
	struct iwl_rx_queue *rxq = &priv->rxq;
	int i;

	/* Fill the rx_used queue with _all_ of the Rx buffers */
	for (i = 0; i < RX_FREE_BUFFERS + RX_QUEUE_SIZE; i++) {
		/* In the reset function, these buffers may have been allocated
		 * to an SKB, so we need to unmap and free potential storage */
		if (rxq->pool[i].page != NULL) {
			dma_unmap_page(priv->bus.dev, rxq->pool[i].page_dma,
				PAGE_SIZE << priv->hw_params.rx_page_order,
				DMA_FROM_DEVICE);
			__iwl_free_pages(priv, rxq->pool[i].page);
			rxq->pool[i].page = NULL;
		}
		list_add_tail(&rxq->pool[i].list, &rxq->rx_used);
	}
}

static int iwl_trans_rx_init(struct iwl_priv *priv)
{
	struct iwl_rx_queue *rxq = &priv->rxq;
	int i, err;
	unsigned long flags;

	if (!rxq->bd) {
		err = iwl_trans_rx_alloc(priv);
		if (err)
			return err;
	}

	spin_lock_irqsave(&rxq->lock, flags);
	INIT_LIST_HEAD(&rxq->rx_free);
	INIT_LIST_HEAD(&rxq->rx_used);

	iwl_trans_rxq_free_rx_bufs(priv);

	for (i = 0; i < RX_QUEUE_SIZE; i++)
		rxq->queue[i] = NULL;

	/* Set us so that we have processed and used all buffers, but have
	 * not restocked the Rx queue with fresh buffers */
	rxq->read = rxq->write = 0;
	rxq->write_actual = 0;
	rxq->free_count = 0;
	spin_unlock_irqrestore(&rxq->lock, flags);

	return 0;
}

static void iwl_trans_rx_free(struct iwl_priv *priv)
{
	struct iwl_rx_queue *rxq = &priv->rxq;
	unsigned long flags;

	/*if rxq->bd is NULL, it means that nothing has been allocated,
	 * exit now */
	if (!rxq->bd) {
		IWL_DEBUG_INFO(priv, "Free NULL rx context\n");
		return;
	}

	spin_lock_irqsave(&rxq->lock, flags);
	iwl_trans_rxq_free_rx_bufs(priv);
	spin_unlock_irqrestore(&rxq->lock, flags);

	dma_free_coherent(priv->bus.dev, sizeof(__le32) * RX_QUEUE_SIZE,
			  rxq->bd, rxq->bd_dma);
	memset(&rxq->bd_dma, 0, sizeof(rxq->bd_dma));
	rxq->bd = NULL;

	if (rxq->rb_stts)
		dma_free_coherent(priv->bus.dev,
				  sizeof(struct iwl_rb_status),
				  rxq->rb_stts, rxq->rb_stts_dma);
	else
		IWL_DEBUG_INFO(priv, "Free rxq->rb_stts which is NULL\n");
	memset(&rxq->rb_stts_dma, 0, sizeof(rxq->rb_stts_dma));
	rxq->rb_stts = NULL;
}

/* TODO:remove this code duplication */
static inline int iwlagn_alloc_dma_ptr(struct iwl_priv *priv,
				       struct iwl_dma_ptr *ptr, size_t size)
{
	if (WARN_ON(ptr->addr))
		return -EINVAL;

	ptr->addr = dma_alloc_coherent(priv->bus.dev, size,
				       &ptr->dma, GFP_KERNEL);
	if (!ptr->addr)
		return -ENOMEM;
	ptr->size = size;
	return 0;
}

static int iwl_trans_txq_alloc(struct iwl_priv *priv, struct iwl_tx_queue *txq,
			       int slots_num, u32 txq_id)
{
	size_t tfd_sz = priv->hw_params.tfd_size * TFD_QUEUE_SIZE_MAX;
	int i;

	if (WARN_ON(txq->meta || txq->cmd || txq->txb || txq->tfds))
		return -EINVAL;

	txq->meta = kzalloc(sizeof(txq->meta[0]) * slots_num,
			    GFP_KERNEL);
	txq->cmd = kzalloc(sizeof(txq->cmd[0]) * slots_num,
			   GFP_KERNEL);

	if (!txq->meta || !txq->cmd)
		goto error;

	for (i = 0; i < slots_num; i++) {
		txq->cmd[i] = kmalloc(sizeof(struct iwl_device_cmd),
				      GFP_KERNEL);
		if (!txq->cmd[i])
			goto error;
	}

	/* Alloc driver data array and TFD circular buffer */
	/* Driver private data, only for Tx (not command) queues,
	 * not shared with device. */
	if (txq_id != priv->cmd_queue) {
		txq->txb = kzalloc(sizeof(txq->txb[0]) *
				   TFD_QUEUE_SIZE_MAX, GFP_KERNEL);
		if (!txq->txb) {
			IWL_ERR(priv, "kmalloc for auxiliary BD "
				"structures failed\n");
			goto error;
		}
	} else {
		txq->txb = NULL;
	}

	/* Circular buffer of transmit frame descriptors (TFDs),
	 * shared with device */
	txq->tfds = dma_alloc_coherent(priv->bus.dev, tfd_sz, &txq->q.dma_addr,
				       GFP_KERNEL);
	if (!txq->tfds) {
		IWL_ERR(priv, "dma_alloc_coherent(%zd) failed\n", tfd_sz);
		goto error;
	}
	txq->q.id = txq_id;

	return 0;
error:
	kfree(txq->txb);
	txq->txb = NULL;
	/* since txq->cmd has been zeroed,
	 * all non allocated cmd[i] will be NULL */
	if (txq->cmd)
		for (i = 0; i < slots_num; i++)
			kfree(txq->cmd[i]);
	kfree(txq->meta);
	kfree(txq->cmd);
	txq->meta = NULL;
	txq->cmd = NULL;

	return -ENOMEM;

}

static int iwl_trans_txq_init(struct iwl_priv *priv, struct iwl_tx_queue *txq,
			      int slots_num, u32 txq_id)
{
	int ret;

	txq->need_update = 0;
	memset(txq->meta, 0, sizeof(txq->meta[0]) * slots_num);

	/*
	 * For the default queues 0-3, set up the swq_id
	 * already -- all others need to get one later
	 * (if they need one at all).
	 */
	if (txq_id < 4)
		iwl_set_swq_id(txq, txq_id, txq_id);

	/* TFD_QUEUE_SIZE_MAX must be power-of-two size, otherwise
	 * iwl_queue_inc_wrap and iwl_queue_dec_wrap are broken. */
	BUILD_BUG_ON(TFD_QUEUE_SIZE_MAX & (TFD_QUEUE_SIZE_MAX - 1));

	/* Initialize queue's high/low-water marks, and head/tail indexes */
	ret = iwl_queue_init(priv, &txq->q, TFD_QUEUE_SIZE_MAX, slots_num,
			     txq_id);
	if (ret)
		return ret;

	/*
	 * Tell nic where to find circular buffer of Tx Frame Descriptors for
	 * given Tx queue, and enable the DMA channel used for that queue.
	 * Circular buffer (TFD queue in DRAM) physical base address */
	iwl_write_direct32(priv, FH_MEM_CBBC_QUEUE(txq_id),
			   txq->q.dma_addr >> 8);

	return 0;
}

/**
 * iwl_trans_tx_alloc - allocate TX context
 * Allocate all Tx DMA structures and initialize them
 *
 * @param priv
 * @return error code
 */
static int iwl_trans_tx_alloc(struct iwl_priv *priv)
{
	int ret;
	int txq_id, slots_num;

	/*It is not allowed to alloc twice, so warn when this happens.
	 * We cannot rely on the previous allocation, so free and fail */
	if (WARN_ON(priv->txq)) {
		ret = -EINVAL;
		goto error;
	}

	ret = iwlagn_alloc_dma_ptr(priv, &priv->scd_bc_tbls,
				   priv->hw_params.scd_bc_tbls_size);
	if (ret) {
		IWL_ERR(priv, "Scheduler BC Table allocation failed\n");
		goto error;
	}

	/* Alloc keep-warm buffer */
	ret = iwlagn_alloc_dma_ptr(priv, &priv->kw, IWL_KW_SIZE);
	if (ret) {
		IWL_ERR(priv, "Keep Warm allocation failed\n");
		goto error;
	}

	priv->txq = kzalloc(sizeof(struct iwl_tx_queue) *
			    priv->cfg->base_params->num_of_queues, GFP_KERNEL);
	if (!priv->txq) {
		IWL_ERR(priv, "Not enough memory for txq\n");
		ret = -ENOMEM;
		goto error;
	}

	/* Alloc and init all Tx queues, including the command queue (#4/#9) */
	for (txq_id = 0; txq_id < priv->hw_params.max_txq_num; txq_id++) {
		slots_num = (txq_id == priv->cmd_queue) ?
			    TFD_CMD_SLOTS : TFD_TX_CMD_SLOTS;
		ret = iwl_trans_txq_alloc(priv, &priv->txq[txq_id], slots_num,
					  txq_id);
		if (ret) {
			IWL_ERR(priv, "Tx %d queue alloc failed\n", txq_id);
			goto error;
		}
	}

	return 0;

error:
	iwlagn_hw_txq_ctx_free(priv);

	return ret;
}
static int iwl_trans_tx_init(struct iwl_priv *priv)
{
	int ret;
	int txq_id, slots_num;
	unsigned long flags;
	bool alloc = false;

	if (!priv->txq) {
		ret = iwl_trans_tx_alloc(priv);
		if (ret)
			goto error;
		alloc = true;
	}

	spin_lock_irqsave(&priv->lock, flags);

	/* Turn off all Tx DMA fifos */
	iwl_write_prph(priv, IWLAGN_SCD_TXFACT, 0);

	/* Tell NIC where to find the "keep warm" buffer */
	iwl_write_direct32(priv, FH_KW_MEM_ADDR_REG, priv->kw.dma >> 4);

	spin_unlock_irqrestore(&priv->lock, flags);

	/* Alloc and init all Tx queues, including the command queue (#4/#9) */
	for (txq_id = 0; txq_id < priv->hw_params.max_txq_num; txq_id++) {
		slots_num = (txq_id == priv->cmd_queue) ?
			    TFD_CMD_SLOTS : TFD_TX_CMD_SLOTS;
		ret = iwl_trans_txq_init(priv, &priv->txq[txq_id], slots_num,
					 txq_id);
		if (ret) {
			IWL_ERR(priv, "Tx %d queue init failed\n", txq_id);
			goto error;
		}
	}

	return 0;
error:
	/*Upon error, free only if we allocated something */
	if (alloc)
		iwlagn_hw_txq_ctx_free(priv);
	return ret;
}

static const struct iwl_trans_ops trans_ops = {
	.rx_init = iwl_trans_rx_init,
	.rx_free = iwl_trans_rx_free,

	.tx_init = iwl_trans_tx_init,
};

void iwl_trans_register(struct iwl_trans *trans)
{
	trans->ops = &trans_ops;
}
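The new transport ops table above only wires up rx_init, rx_free and tx_init so far. As a rough sketch of how a caller might drive them during device bring-up (the call site, the function name and the way the iwl_trans object is reached are assumptions for illustration, not part of this patch):

/* Hypothetical call site; only the ops and iwl_trans_register() come from
 * the code above, everything else here is assumed. */
static int example_transport_start(struct iwl_priv *priv, struct iwl_trans *trans)
{
	int ret;

	iwl_trans_register(trans);		/* installs the trans_ops shown above */

	ret = trans->ops->rx_init(priv);	/* allocate + reset the RX queue */
	if (ret)
		return ret;

	ret = trans->ops->tx_init(priv);	/* allocate + init all TX queues */
	if (ret)
		trans->ops->rx_free(priv);	/* unwind RX on failure */

	return ret;
}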
@@ -0,0 +1,64 @@
/******************************************************************************
 *
 * This file is provided under a dual BSD/GPLv2 license. When using or
 * redistributing this file, you may do so under either license.
 *
 * GPL LICENSE SUMMARY
 *
 * Copyright(c) 2007 - 2011 Intel Corporation. All rights reserved.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of version 2 of the GNU General Public License as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful, but
 * WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
 * General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110,
 * USA
 *
 * The full GNU General Public License is included in this distribution
 * in the file called LICENSE.GPL.
 *
 * Contact Information:
 * Intel Linux Wireless <ilw@linux.intel.com>
 * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
 *
 * BSD LICENSE
 *
 * Copyright(c) 2005 - 2011 Intel Corporation. All rights reserved.
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 *
 * * Redistributions of source code must retain the above copyright
 *   notice, this list of conditions and the following disclaimer.
 * * Redistributions in binary form must reproduce the above copyright
 *   notice, this list of conditions and the following disclaimer in
 *   the documentation and/or other materials provided with the
 *   distribution.
 * * Neither the name Intel Corporation nor the names of its
 *   contributors may be used to endorse or promote products derived
 *   from this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
 * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
 * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 *****************************************************************************/

void iwl_trans_register(struct iwl_trans *trans);
@ -220,24 +220,6 @@ int iwlagn_txq_attach_buf_to_tfd(struct iwl_priv *priv,
|
|||
return 0;
|
||||
}
|
||||
|
||||
/*
|
||||
* Tell nic where to find circular buffer of Tx Frame Descriptors for
|
||||
* given Tx queue, and enable the DMA channel used for that queue.
|
||||
*
|
||||
* supports up to 16 Tx queues in DRAM, mapped to up to 8 Tx DMA
|
||||
* channels supported in hardware.
|
||||
*/
|
||||
static int iwlagn_tx_queue_init(struct iwl_priv *priv, struct iwl_tx_queue *txq)
|
||||
{
|
||||
int txq_id = txq->q.id;
|
||||
|
||||
/* Circular buffer (TFD queue in DRAM) physical base address */
|
||||
iwl_write_direct32(priv, FH_MEM_CBBC_QUEUE(txq_id),
|
||||
txq->q.dma_addr >> 8);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
/**
|
||||
* iwl_tx_queue_unmap - Unmap any remaining DMA mappings and free skb's
|
||||
*/
|
||||
|
@ -392,11 +374,10 @@ int iwl_queue_space(const struct iwl_queue *q)
|
|||
return s;
|
||||
}
|
||||
|
||||
|
||||
/**
|
||||
* iwl_queue_init - Initialize queue's high/low-water and read/write indexes
|
||||
*/
|
||||
static int iwl_queue_init(struct iwl_priv *priv, struct iwl_queue *q,
|
||||
int iwl_queue_init(struct iwl_priv *priv, struct iwl_queue *q,
|
||||
int count, int slots_num, u32 id)
|
||||
{
|
||||
q->n_bd = count;
|
||||
|
@ -426,124 +407,6 @@ static int iwl_queue_init(struct iwl_priv *priv, struct iwl_queue *q,
|
|||
return 0;
|
||||
}
|
||||
|
||||
/**
|
||||
* iwl_tx_queue_alloc - Alloc driver data and TFD CB for one Tx/cmd queue
|
||||
*/
|
||||
static int iwl_tx_queue_alloc(struct iwl_priv *priv,
|
||||
struct iwl_tx_queue *txq, u32 id)
|
||||
{
|
||||
struct device *dev = priv->bus.dev;
|
||||
size_t tfd_sz = priv->hw_params.tfd_size * TFD_QUEUE_SIZE_MAX;
|
||||
|
||||
/* Driver private data, only for Tx (not command) queues,
|
||||
* not shared with device. */
|
||||
if (id != priv->cmd_queue) {
|
||||
txq->txb = kzalloc(sizeof(txq->txb[0]) *
|
||||
TFD_QUEUE_SIZE_MAX, GFP_KERNEL);
|
||||
if (!txq->txb) {
|
||||
IWL_ERR(priv, "kmalloc for auxiliary BD "
|
||||
"structures failed\n");
|
||||
goto error;
|
||||
}
|
||||
} else {
|
||||
txq->txb = NULL;
|
||||
}
|
||||
|
||||
/* Circular buffer of transmit frame descriptors (TFDs),
|
||||
* shared with device */
|
||||
txq->tfds = dma_alloc_coherent(dev, tfd_sz, &txq->q.dma_addr,
|
||||
GFP_KERNEL);
|
||||
if (!txq->tfds) {
|
||||
IWL_ERR(priv, "dma_alloc_coherent(%zd) failed\n", tfd_sz);
|
||||
goto error;
|
||||
}
|
||||
txq->q.id = id;
|
||||
|
||||
return 0;
|
||||
|
||||
error:
|
||||
kfree(txq->txb);
|
||||
txq->txb = NULL;
|
||||
|
||||
return -ENOMEM;
|
||||
}
|
||||
|
||||
/**
|
||||
* iwl_tx_queue_init - Allocate and initialize one tx/cmd queue
|
||||
*/
|
||||
int iwl_tx_queue_init(struct iwl_priv *priv, struct iwl_tx_queue *txq,
|
||||
int slots_num, u32 txq_id)
|
||||
{
|
||||
int i, len;
|
||||
int ret;
|
||||
|
||||
txq->meta = kzalloc(sizeof(struct iwl_cmd_meta) * slots_num,
|
||||
GFP_KERNEL);
|
||||
txq->cmd = kzalloc(sizeof(struct iwl_device_cmd *) * slots_num,
|
||||
GFP_KERNEL);
|
||||
|
||||
if (!txq->meta || !txq->cmd)
|
||||
goto out_free_arrays;
|
||||
|
||||
len = sizeof(struct iwl_device_cmd);
|
||||
for (i = 0; i < slots_num; i++) {
|
||||
txq->cmd[i] = kmalloc(len, GFP_KERNEL);
|
||||
if (!txq->cmd[i])
|
||||
goto err;
|
||||
}
|
||||
|
||||
/* Alloc driver data array and TFD circular buffer */
|
||||
ret = iwl_tx_queue_alloc(priv, txq, txq_id);
|
||||
if (ret)
|
||||
goto err;
|
||||
|
||||
txq->need_update = 0;
|
||||
|
||||
/*
|
||||
* For the default queues 0-3, set up the swq_id
|
||||
* already -- all others need to get one later
|
||||
* (if they need one at all).
|
||||
*/
|
||||
if (txq_id < 4)
|
||||
iwl_set_swq_id(txq, txq_id, txq_id);
|
||||
|
||||
/* TFD_QUEUE_SIZE_MAX must be power-of-two size, otherwise
|
||||
* iwl_queue_inc_wrap and iwl_queue_dec_wrap are broken. */
|
||||
BUILD_BUG_ON(TFD_QUEUE_SIZE_MAX & (TFD_QUEUE_SIZE_MAX - 1));
|
||||
|
||||
/* Initialize queue's high/low-water marks, and head/tail indexes */
|
||||
ret = iwl_queue_init(priv, &txq->q, TFD_QUEUE_SIZE_MAX, slots_num, txq_id);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
/* Tell device where to find queue */
|
||||
iwlagn_tx_queue_init(priv, txq);
|
||||
|
||||
return 0;
|
||||
err:
|
||||
for (i = 0; i < slots_num; i++)
|
||||
kfree(txq->cmd[i]);
|
||||
out_free_arrays:
|
||||
kfree(txq->meta);
|
||||
kfree(txq->cmd);
|
||||
|
||||
return -ENOMEM;
|
||||
}
|
||||
|
||||
void iwl_tx_queue_reset(struct iwl_priv *priv, struct iwl_tx_queue *txq,
|
||||
int slots_num, u32 txq_id)
|
||||
{
|
||||
memset(txq->meta, 0, sizeof(struct iwl_cmd_meta) * slots_num);
|
||||
|
||||
txq->need_update = 0;
|
||||
|
||||
/* Initialize queue's high/low-water marks, and head/tail indexes */
|
||||
iwl_queue_init(priv, &txq->q, TFD_QUEUE_SIZE_MAX, slots_num, txq_id);
|
||||
|
||||
/* Tell device where to find queue */
|
||||
iwlagn_tx_queue_init(priv, txq);
|
||||
}
|
||||
|
||||
/*************** HOST COMMAND QUEUE FUNCTIONS *****/
|
||||
|
||||
/**
|
||||
|
|
|
@ -54,10 +54,10 @@
|
|||
|
||||
#define SDIO_MP_AGGR_DEF_PKT_LIMIT 8
|
||||
|
||||
#define SDIO_MP_TX_AGGR_DEF_BUF_SIZE (4096) /* 4K */
|
||||
#define SDIO_MP_TX_AGGR_DEF_BUF_SIZE (8192) /* 8K */
|
||||
|
||||
/* Multi port RX aggregation buffer size */
|
||||
#define SDIO_MP_RX_AGGR_DEF_BUF_SIZE (4096) /* 4K */
|
||||
#define SDIO_MP_RX_AGGR_DEF_BUF_SIZE (16384) /* 16K */
|
||||
|
||||
/* Misc. Config Register : Auto Re-enable interrupts */
|
||||
#define AUTO_RE_ENABLE_INT BIT(4)
|
||||
|
|
|
@ -1723,6 +1723,7 @@ static const struct ieee80211_ops rt2400pci_mac80211_ops = {
|
|||
.set_antenna = rt2x00mac_set_antenna,
|
||||
.get_antenna = rt2x00mac_get_antenna,
|
||||
.get_ringparam = rt2x00mac_get_ringparam,
|
||||
.tx_frames_pending = rt2x00mac_tx_frames_pending,
|
||||
};
|
||||
|
||||
static const struct rt2x00lib_ops rt2400pci_rt2x00_ops = {
|
||||
|
|
|
@ -2016,6 +2016,7 @@ static const struct ieee80211_ops rt2500pci_mac80211_ops = {
|
|||
.set_antenna = rt2x00mac_set_antenna,
|
||||
.get_antenna = rt2x00mac_get_antenna,
|
||||
.get_ringparam = rt2x00mac_get_ringparam,
|
||||
.tx_frames_pending = rt2x00mac_tx_frames_pending,
|
||||
};
|
||||
|
||||
static const struct rt2x00lib_ops rt2500pci_rt2x00_ops = {
|
||||
|
|
|
@ -1827,6 +1827,7 @@ static const struct ieee80211_ops rt2500usb_mac80211_ops = {
|
|||
.set_antenna = rt2x00mac_set_antenna,
|
||||
.get_antenna = rt2x00mac_get_antenna,
|
||||
.get_ringparam = rt2x00mac_get_ringparam,
|
||||
.tx_frames_pending = rt2x00mac_tx_frames_pending,
|
||||
};
|
||||
|
||||
static const struct rt2x00lib_ops rt2500usb_rt2x00_ops = {
|
||||
|
|
|
@ -1031,6 +1031,7 @@ static const struct ieee80211_ops rt2800pci_mac80211_ops = {
|
|||
.flush = rt2x00mac_flush,
|
||||
.get_survey = rt2800_get_survey,
|
||||
.get_ringparam = rt2x00mac_get_ringparam,
|
||||
.tx_frames_pending = rt2x00mac_tx_frames_pending,
|
||||
};
|
||||
|
||||
static const struct rt2800_ops rt2800pci_rt2800_ops = {
|
||||
|
@ -1160,6 +1161,7 @@ static DEFINE_PCI_DEVICE_TABLE(rt2800pci_device_table) = {
|
|||
#endif
|
||||
#ifdef CONFIG_RT2800PCI_RT53XX
|
||||
{ PCI_DEVICE(0x1814, 0x5390) },
|
||||
{ PCI_DEVICE(0x1814, 0x539f) },
|
||||
#endif
|
||||
{ 0, }
|
||||
};
|
||||
|
|
|
@ -757,6 +757,7 @@ static const struct ieee80211_ops rt2800usb_mac80211_ops = {
|
|||
.flush = rt2x00mac_flush,
|
||||
.get_survey = rt2800_get_survey,
|
||||
.get_ringparam = rt2x00mac_get_ringparam,
|
||||
.tx_frames_pending = rt2x00mac_tx_frames_pending,
|
||||
};
|
||||
|
||||
static const struct rt2800_ops rt2800usb_rt2800_ops = {
|
||||
|
@ -1020,6 +1021,7 @@ static struct usb_device_id rt2800usb_device_table[] = {
|
|||
{ USB_DEVICE(0x0df6, 0x0048) },
|
||||
{ USB_DEVICE(0x0df6, 0x0051) },
|
||||
{ USB_DEVICE(0x0df6, 0x005f) },
|
||||
{ USB_DEVICE(0x0df6, 0x0060) },
|
||||
/* SMC */
|
||||
{ USB_DEVICE(0x083a, 0x6618) },
|
||||
{ USB_DEVICE(0x083a, 0x7511) },
|
||||
|
@ -1076,6 +1078,7 @@ static struct usb_device_id rt2800usb_device_table[] = {
|
|||
{ USB_DEVICE(0x148f, 0x3572) },
|
||||
/* Sitecom */
|
||||
{ USB_DEVICE(0x0df6, 0x0041) },
|
||||
{ USB_DEVICE(0x0df6, 0x0062) },
|
||||
/* Toshiba */
|
||||
{ USB_DEVICE(0x0930, 0x0a07) },
|
||||
/* Zinwell */
|
||||
|
@ -1174,8 +1177,6 @@ static struct usb_device_id rt2800usb_device_table[] = {
|
|||
{ USB_DEVICE(0x0df6, 0x004a) },
|
||||
{ USB_DEVICE(0x0df6, 0x004d) },
|
||||
{ USB_DEVICE(0x0df6, 0x0053) },
|
||||
{ USB_DEVICE(0x0df6, 0x0060) },
|
||||
{ USB_DEVICE(0x0df6, 0x0062) },
|
||||
/* SMC */
|
||||
{ USB_DEVICE(0x083a, 0xa512) },
|
||||
{ USB_DEVICE(0x083a, 0xc522) },
|
||||
|
|
|
@ -1277,6 +1277,7 @@ int rt2x00mac_set_antenna(struct ieee80211_hw *hw, u32 tx_ant, u32 rx_ant);
|
|||
int rt2x00mac_get_antenna(struct ieee80211_hw *hw, u32 *tx_ant, u32 *rx_ant);
|
||||
void rt2x00mac_get_ringparam(struct ieee80211_hw *hw,
|
||||
u32 *tx, u32 *tx_max, u32 *rx, u32 *rx_max);
|
||||
bool rt2x00mac_tx_frames_pending(struct ieee80211_hw *hw);
|
||||
|
||||
/*
|
||||
* Driver allocation handlers.
|
||||
|
|
|
@ -45,11 +45,11 @@ enum cipher rt2x00crypto_key_to_cipher(struct ieee80211_key_conf *key)
|
|||
}
|
||||
}
|
||||
|
||||
void rt2x00crypto_create_tx_descriptor(struct queue_entry *entry,
|
||||
void rt2x00crypto_create_tx_descriptor(struct rt2x00_dev *rt2x00dev,
|
||||
struct sk_buff *skb,
|
||||
struct txentry_desc *txdesc)
|
||||
{
|
||||
struct rt2x00_dev *rt2x00dev = entry->queue->rt2x00dev;
|
||||
struct ieee80211_tx_info *tx_info = IEEE80211_SKB_CB(entry->skb);
|
||||
struct ieee80211_tx_info *tx_info = IEEE80211_SKB_CB(skb);
|
||||
struct ieee80211_key_conf *hw_key = tx_info->control.hw_key;
|
||||
|
||||
if (!test_bit(CAPABILITY_HW_CRYPTO, &rt2x00dev->cap_flags) || !hw_key)
|
||||
|
|
|
@ -336,7 +336,8 @@ static inline void rt2x00debug_update_crypto(struct rt2x00_dev *rt2x00dev,
|
|||
*/
|
||||
#ifdef CONFIG_RT2X00_LIB_CRYPTO
|
||||
enum cipher rt2x00crypto_key_to_cipher(struct ieee80211_key_conf *key);
|
||||
void rt2x00crypto_create_tx_descriptor(struct queue_entry *entry,
|
||||
void rt2x00crypto_create_tx_descriptor(struct rt2x00_dev *rt2x00dev,
|
||||
struct sk_buff *skb,
|
||||
struct txentry_desc *txdesc);
|
||||
unsigned int rt2x00crypto_tx_overhead(struct rt2x00_dev *rt2x00dev,
|
||||
struct sk_buff *skb);
|
||||
|
|
|
@ -818,3 +818,17 @@ void rt2x00mac_get_ringparam(struct ieee80211_hw *hw,
|
|||
*rx_max = rt2x00dev->rx->limit;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(rt2x00mac_get_ringparam);
|
||||
|
||||
bool rt2x00mac_tx_frames_pending(struct ieee80211_hw *hw)
|
||||
{
|
||||
struct rt2x00_dev *rt2x00dev = hw->priv;
|
||||
struct data_queue *queue;
|
||||
|
||||
tx_queue_for_each(rt2x00dev, queue) {
|
||||
if (!rt2x00queue_empty(queue))
|
||||
return true;
|
||||
}
|
||||
|
||||
return false;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(rt2x00mac_tx_frames_pending);
|
||||
|
|
|
@ -200,11 +200,12 @@ void rt2x00queue_remove_l2pad(struct sk_buff *skb, unsigned int header_length)
|
|||
skb_pull(skb, l2pad);
|
||||
}
|
||||
|
||||
static void rt2x00queue_create_tx_descriptor_seq(struct queue_entry *entry,
|
||||
static void rt2x00queue_create_tx_descriptor_seq(struct rt2x00_dev *rt2x00dev,
|
||||
struct sk_buff *skb,
|
||||
struct txentry_desc *txdesc)
|
||||
{
|
||||
struct ieee80211_tx_info *tx_info = IEEE80211_SKB_CB(entry->skb);
|
||||
struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)entry->skb->data;
|
||||
struct ieee80211_tx_info *tx_info = IEEE80211_SKB_CB(skb);
|
||||
struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
|
||||
struct rt2x00_intf *intf = vif_to_intf(tx_info->control.vif);
|
||||
|
||||
if (!(tx_info->flags & IEEE80211_TX_CTL_ASSIGN_SEQ))
|
||||
|
@ -212,7 +213,7 @@ static void rt2x00queue_create_tx_descriptor_seq(struct queue_entry *entry,
|
|||
|
||||
__set_bit(ENTRY_TXD_GENERATE_SEQ, &txdesc->flags);
|
||||
|
||||
if (!test_bit(REQUIRE_SW_SEQNO, &entry->queue->rt2x00dev->cap_flags))
|
||||
if (!test_bit(REQUIRE_SW_SEQNO, &rt2x00dev->cap_flags))
|
||||
return;
|
||||
|
||||
/*
|
||||
|
@ -237,12 +238,12 @@ static void rt2x00queue_create_tx_descriptor_seq(struct queue_entry *entry,
|
|||
|
||||
}
|
||||
|
||||
static void rt2x00queue_create_tx_descriptor_plcp(struct queue_entry *entry,
|
||||
static void rt2x00queue_create_tx_descriptor_plcp(struct rt2x00_dev *rt2x00dev,
|
||||
struct sk_buff *skb,
|
||||
struct txentry_desc *txdesc,
|
||||
const struct rt2x00_rate *hwrate)
|
||||
{
|
||||
struct rt2x00_dev *rt2x00dev = entry->queue->rt2x00dev;
|
||||
struct ieee80211_tx_info *tx_info = IEEE80211_SKB_CB(entry->skb);
|
||||
struct ieee80211_tx_info *tx_info = IEEE80211_SKB_CB(skb);
|
||||
struct ieee80211_tx_rate *txrate = &tx_info->control.rates[0];
|
||||
unsigned int data_length;
|
||||
unsigned int duration;
|
||||
|
@ -259,8 +260,8 @@ static void rt2x00queue_create_tx_descriptor_plcp(struct queue_entry *entry,
|
|||
txdesc->u.plcp.ifs = IFS_SIFS;
|
||||
|
||||
/* Data length + CRC + Crypto overhead (IV/EIV/ICV/MIC) */
|
||||
data_length = entry->skb->len + 4;
|
||||
data_length += rt2x00crypto_tx_overhead(rt2x00dev, entry->skb);
|
||||
data_length = skb->len + 4;
|
||||
data_length += rt2x00crypto_tx_overhead(rt2x00dev, skb);
|
||||
|
||||
/*
|
||||
* PLCP setup
|
||||
|
@ -301,13 +302,14 @@ static void rt2x00queue_create_tx_descriptor_plcp(struct queue_entry *entry,
|
|||
}
|
||||
}
|
||||
|
||||
static void rt2x00queue_create_tx_descriptor_ht(struct queue_entry *entry,
|
||||
static void rt2x00queue_create_tx_descriptor_ht(struct rt2x00_dev *rt2x00dev,
|
||||
struct sk_buff *skb,
|
||||
struct txentry_desc *txdesc,
|
||||
const struct rt2x00_rate *hwrate)
|
||||
{
|
||||
struct ieee80211_tx_info *tx_info = IEEE80211_SKB_CB(entry->skb);
|
||||
struct ieee80211_tx_info *tx_info = IEEE80211_SKB_CB(skb);
|
||||
struct ieee80211_tx_rate *txrate = &tx_info->control.rates[0];
|
||||
struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)entry->skb->data;
|
||||
struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
|
||||
|
||||
if (tx_info->control.sta)
|
||||
txdesc->u.ht.mpdu_density =
|
||||
|
@ -380,12 +382,12 @@ static void rt2x00queue_create_tx_descriptor_ht(struct queue_entry *entry,
|
|||
txdesc->u.ht.txop = TXOP_HTTXOP;
|
||||
}
|
||||
|
||||
static void rt2x00queue_create_tx_descriptor(struct queue_entry *entry,
|
||||
static void rt2x00queue_create_tx_descriptor(struct rt2x00_dev *rt2x00dev,
|
||||
struct sk_buff *skb,
|
||||
struct txentry_desc *txdesc)
|
||||
{
|
||||
struct rt2x00_dev *rt2x00dev = entry->queue->rt2x00dev;
|
||||
struct ieee80211_tx_info *tx_info = IEEE80211_SKB_CB(entry->skb);
|
||||
struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)entry->skb->data;
|
||||
struct ieee80211_tx_info *tx_info = IEEE80211_SKB_CB(skb);
|
||||
struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
|
||||
struct ieee80211_tx_rate *txrate = &tx_info->control.rates[0];
|
||||
struct ieee80211_rate *rate;
|
||||
const struct rt2x00_rate *hwrate = NULL;
|
||||
|
@ -395,8 +397,8 @@ static void rt2x00queue_create_tx_descriptor(struct queue_entry *entry,
|
|||
/*
|
||||
* Header and frame information.
|
||||
*/
|
||||
txdesc->length = entry->skb->len;
|
||||
txdesc->header_length = ieee80211_get_hdrlen_from_skb(entry->skb);
|
||||
txdesc->length = skb->len;
|
||||
txdesc->header_length = ieee80211_get_hdrlen_from_skb(skb);
|
||||
|
||||
/*
|
||||
* Check whether this frame is to be acked.
|
||||
|
@ -471,13 +473,15 @@ static void rt2x00queue_create_tx_descriptor(struct queue_entry *entry,
|
|||
/*
|
||||
* Apply TX descriptor handling by components
|
||||
*/
|
||||
rt2x00crypto_create_tx_descriptor(entry, txdesc);
|
||||
rt2x00queue_create_tx_descriptor_seq(entry, txdesc);
|
||||
rt2x00crypto_create_tx_descriptor(rt2x00dev, skb, txdesc);
|
||||
rt2x00queue_create_tx_descriptor_seq(rt2x00dev, skb, txdesc);
|
||||
|
||||
if (test_bit(REQUIRE_HT_TX_DESC, &rt2x00dev->cap_flags))
|
||||
rt2x00queue_create_tx_descriptor_ht(entry, txdesc, hwrate);
|
||||
rt2x00queue_create_tx_descriptor_ht(rt2x00dev, skb, txdesc,
|
||||
hwrate);
|
||||
else
|
||||
rt2x00queue_create_tx_descriptor_plcp(entry, txdesc, hwrate);
|
||||
rt2x00queue_create_tx_descriptor_plcp(rt2x00dev, skb, txdesc,
|
||||
hwrate);
|
||||
}
|
||||
|
||||
static int rt2x00queue_write_tx_data(struct queue_entry *entry,
|
||||
|
@ -555,33 +559,18 @@ int rt2x00queue_write_tx_frame(struct data_queue *queue, struct sk_buff *skb,
|
|||
bool local)
|
||||
{
|
||||
struct ieee80211_tx_info *tx_info;
|
||||
struct queue_entry *entry = rt2x00queue_get_entry(queue, Q_INDEX);
|
||||
struct queue_entry *entry;
|
||||
struct txentry_desc txdesc;
|
||||
struct skb_frame_desc *skbdesc;
|
||||
u8 rate_idx, rate_flags;
|
||||
|
||||
if (unlikely(rt2x00queue_full(queue))) {
|
||||
ERROR(queue->rt2x00dev,
|
||||
"Dropping frame due to full tx queue %d.\n", queue->qid);
|
||||
return -ENOBUFS;
|
||||
}
|
||||
|
||||
if (unlikely(test_and_set_bit(ENTRY_OWNER_DEVICE_DATA,
|
||||
&entry->flags))) {
|
||||
ERROR(queue->rt2x00dev,
|
||||
"Arrived at non-free entry in the non-full queue %d.\n"
|
||||
"Please file bug report to %s.\n",
|
||||
queue->qid, DRV_PROJECT);
|
||||
return -EINVAL;
|
||||
}
|
||||
int ret = 0;
|
||||
|
||||
/*
|
||||
* Copy all TX descriptor information into txdesc,
|
||||
* after that we are free to use the skb->cb array
|
||||
* for our information.
|
||||
*/
|
||||
entry->skb = skb;
|
||||
rt2x00queue_create_tx_descriptor(entry, &txdesc);
|
||||
rt2x00queue_create_tx_descriptor(queue->rt2x00dev, skb, &txdesc);
|
||||
|
||||
/*
|
||||
* All information is retrieved from the skb->cb array,
|
||||
|
@ -593,7 +582,6 @@ int rt2x00queue_write_tx_frame(struct data_queue *queue, struct sk_buff *skb,
|
|||
rate_flags = tx_info->control.rates[0].flags;
|
||||
skbdesc = get_skb_frame_desc(skb);
|
||||
memset(skbdesc, 0, sizeof(*skbdesc));
|
||||
skbdesc->entry = entry;
|
||||
skbdesc->tx_rate_idx = rate_idx;
|
||||
skbdesc->tx_rate_flags = rate_flags;
|
||||
|
||||
|
@ -622,9 +610,33 @@ int rt2x00queue_write_tx_frame(struct data_queue *queue, struct sk_buff *skb,
|
|||
* for PCI devices.
|
||||
*/
|
||||
if (test_bit(REQUIRE_L2PAD, &queue->rt2x00dev->cap_flags))
|
||||
rt2x00queue_insert_l2pad(entry->skb, txdesc.header_length);
|
||||
rt2x00queue_insert_l2pad(skb, txdesc.header_length);
|
||||
else if (test_bit(REQUIRE_DMA, &queue->rt2x00dev->cap_flags))
|
||||
rt2x00queue_align_frame(entry->skb);
|
||||
rt2x00queue_align_frame(skb);
|
||||
|
||||
spin_lock(&queue->tx_lock);
|
||||
|
||||
if (unlikely(rt2x00queue_full(queue))) {
|
||||
ERROR(queue->rt2x00dev,
|
||||
"Dropping frame due to full tx queue %d.\n", queue->qid);
|
||||
ret = -ENOBUFS;
|
||||
goto out;
|
||||
}
|
||||
|
||||
entry = rt2x00queue_get_entry(queue, Q_INDEX);
|
||||
|
||||
if (unlikely(test_and_set_bit(ENTRY_OWNER_DEVICE_DATA,
|
||||
&entry->flags))) {
|
||||
ERROR(queue->rt2x00dev,
|
||||
"Arrived at non-free entry in the non-full queue %d.\n"
|
||||
"Please file bug report to %s.\n",
|
||||
queue->qid, DRV_PROJECT);
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
|
||||
skbdesc->entry = entry;
|
||||
entry->skb = skb;
|
||||
|
||||
/*
|
||||
* It could be possible that the queue was corrupted and this
|
||||
|
@ -634,7 +646,8 @@ int rt2x00queue_write_tx_frame(struct data_queue *queue, struct sk_buff *skb,
|
|||
if (unlikely(rt2x00queue_write_tx_data(entry, &txdesc))) {
|
||||
clear_bit(ENTRY_OWNER_DEVICE_DATA, &entry->flags);
|
||||
entry->skb = NULL;
|
||||
return -EIO;
|
||||
ret = -EIO;
|
||||
goto out;
|
||||
}
|
||||
|
||||
set_bit(ENTRY_DATA_PENDING, &entry->flags);
|
||||
|
@ -643,7 +656,9 @@ int rt2x00queue_write_tx_frame(struct data_queue *queue, struct sk_buff *skb,
|
|||
rt2x00queue_write_tx_descriptor(entry, &txdesc);
|
||||
rt2x00queue_kick_tx_queue(queue, &txdesc);
|
||||
|
||||
return 0;
|
||||
out:
|
||||
spin_unlock(&queue->tx_lock);
|
||||
return ret;
|
||||
}
|
||||
|
||||
int rt2x00queue_clear_beacon(struct rt2x00_dev *rt2x00dev,
|
||||
|
@ -697,7 +712,7 @@ int rt2x00queue_update_beacon_locked(struct rt2x00_dev *rt2x00dev,
|
|||
* after that we are free to use the skb->cb array
|
||||
* for our information.
|
||||
*/
|
||||
rt2x00queue_create_tx_descriptor(intf->beacon, &txdesc);
|
||||
rt2x00queue_create_tx_descriptor(rt2x00dev, intf->beacon->skb, &txdesc);
|
||||
|
||||
/*
|
||||
* Fill in skb descriptor
|
||||
|
@ -1184,6 +1199,7 @@ static void rt2x00queue_init(struct rt2x00_dev *rt2x00dev,
|
|||
struct data_queue *queue, enum data_queue_qid qid)
|
||||
{
|
||||
mutex_init(&queue->status_lock);
|
||||
spin_lock_init(&queue->tx_lock);
|
||||
spin_lock_init(&queue->index_lock);
|
||||
|
||||
queue->rt2x00dev = rt2x00dev;
|
||||
|
|
|
@ -432,6 +432,7 @@ enum data_queue_flags {
|
|||
* @flags: Entry flags, see &enum queue_entry_flags.
|
||||
* @status_lock: The mutex for protecting the start/stop/flush
|
||||
* handling on this queue.
|
||||
* @tx_lock: Spinlock to serialize tx operations on this queue.
|
||||
* @index_lock: Spinlock to protect index handling. Whenever @index, @index_done or
|
||||
* @index_crypt needs to be changed this lock should be grabbed to prevent
|
||||
* index corruption due to concurrency.
|
||||
|
@ -458,6 +459,7 @@ struct data_queue {
|
|||
unsigned long flags;
|
||||
|
||||
struct mutex status_lock;
|
||||
spinlock_t tx_lock;
|
||||
spinlock_t index_lock;
|
||||
|
||||
unsigned int count;
|
||||
|
|
|
@ -2982,6 +2982,7 @@ static const struct ieee80211_ops rt61pci_mac80211_ops = {
|
|||
.set_antenna = rt2x00mac_set_antenna,
|
||||
.get_antenna = rt2x00mac_get_antenna,
|
||||
.get_ringparam = rt2x00mac_get_ringparam,
|
||||
.tx_frames_pending = rt2x00mac_tx_frames_pending,
|
||||
};
|
||||
|
||||
static const struct rt2x00lib_ops rt61pci_rt2x00_ops = {
|
||||
|
|
|
@ -2314,6 +2314,7 @@ static const struct ieee80211_ops rt73usb_mac80211_ops = {
|
|||
.set_antenna = rt2x00mac_set_antenna,
|
||||
.get_antenna = rt2x00mac_get_antenna,
|
||||
.get_ringparam = rt2x00mac_get_ringparam,
|
||||
.tx_frames_pending = rt2x00mac_tx_frames_pending,
|
||||
};
|
||||
|
||||
static const struct rt2x00lib_ops rt73usb_rt2x00_ops = {
|
||||
|
|
|
@ -788,15 +788,11 @@ static irqreturn_t _rtl_pci_interrupt(int irq, void *dev_id)
|
|||
{
|
||||
struct ieee80211_hw *hw = dev_id;
|
||||
struct rtl_priv *rtlpriv = rtl_priv(hw);
|
||||
struct rtl_pci *rtlpci = rtl_pcidev(rtl_pcipriv(hw));
|
||||
struct rtl_hal *rtlhal = rtl_hal(rtl_priv(hw));
|
||||
unsigned long flags;
|
||||
u32 inta = 0;
|
||||
u32 intb = 0;
|
||||
|
||||
if (rtlpci->irq_enabled == 0)
|
||||
return IRQ_HANDLED;
|
||||
|
||||
spin_lock_irqsave(&rtlpriv->locks.irq_th_lock, flags);
|
||||
|
||||
/*read ISR: 4/8bytes */
|
||||
|
|
|
@ -158,7 +158,6 @@ struct rtl_pci {
|
|||
bool first_init;
|
||||
bool being_init_adapter;
|
||||
bool init_ready;
|
||||
bool irq_enabled;
|
||||
|
||||
/*Tx */
|
||||
struct rtl8192_tx_ring tx_ring[RTL_PCI_MAX_TX_QUEUE_COUNT];
|
||||
|
|
|
@ -1183,7 +1183,6 @@ void rtl92ce_enable_interrupt(struct ieee80211_hw *hw)
|
|||
|
||||
rtl_write_dword(rtlpriv, REG_HIMR, rtlpci->irq_mask[0] & 0xFFFFFFFF);
|
||||
rtl_write_dword(rtlpriv, REG_HIMRE, rtlpci->irq_mask[1] & 0xFFFFFFFF);
|
||||
rtlpci->irq_enabled = true;
|
||||
}
|
||||
|
||||
void rtl92ce_disable_interrupt(struct ieee80211_hw *hw)
|
||||
|
@ -1193,7 +1192,6 @@ void rtl92ce_disable_interrupt(struct ieee80211_hw *hw)
|
|||
|
||||
rtl_write_dword(rtlpriv, REG_HIMR, IMR8190_DISABLED);
|
||||
rtl_write_dword(rtlpriv, REG_HIMRE, IMR8190_DISABLED);
|
||||
rtlpci->irq_enabled = false;
|
||||
synchronize_irq(rtlpci->pdev->irq);
|
||||
}
|
||||
|
||||
|
|
|
@ -380,13 +380,11 @@ void rtl92c_enable_interrupt(struct ieee80211_hw *hw)
|
|||
0xFFFFFFFF);
|
||||
rtl_write_dword(rtlpriv, REG_HIMRE, rtlpci->irq_mask[1] &
|
||||
0xFFFFFFFF);
|
||||
rtlpci->irq_enabled = true;
|
||||
} else {
|
||||
rtl_write_dword(rtlpriv, REG_HIMR, rtlusb->irq_mask[0] &
|
||||
0xFFFFFFFF);
|
||||
rtl_write_dword(rtlpriv, REG_HIMRE, rtlusb->irq_mask[1] &
|
||||
0xFFFFFFFF);
|
||||
rtlusb->irq_enabled = true;
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -398,16 +396,9 @@ void rtl92c_init_interrupt(struct ieee80211_hw *hw)
|
|||
void rtl92c_disable_interrupt(struct ieee80211_hw *hw)
|
||||
{
|
||||
struct rtl_priv *rtlpriv = rtl_priv(hw);
|
||||
struct rtl_hal *rtlhal = rtl_hal(rtl_priv(hw));
|
||||
struct rtl_pci *rtlpci = rtl_pcidev(rtl_pcipriv(hw));
|
||||
struct rtl_usb *rtlusb = rtl_usbdev(rtl_usbpriv(hw));
|
||||
|
||||
rtl_write_dword(rtlpriv, REG_HIMR, IMR8190_DISABLED);
|
||||
rtl_write_dword(rtlpriv, REG_HIMRE, IMR8190_DISABLED);
|
||||
if (IS_HARDWARE_TYPE_8192CE(rtlhal))
|
||||
rtlpci->irq_enabled = false;
|
||||
else if (IS_HARDWARE_TYPE_8192CU(rtlhal))
|
||||
rtlusb->irq_enabled = false;
|
||||
}
|
||||
|
||||
void rtl92c_set_qos(struct ieee80211_hw *hw, int aci)
|
||||
|
|
|
@ -449,7 +449,7 @@ void rtl92de_set_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val)
|
|||
case HW_VAR_CORRECT_TSF: {
|
||||
u8 btype_ibss = ((u8 *) (val))[0];
|
||||
|
||||
if (btype_ibss == true)
|
||||
if (btype_ibss)
|
||||
_rtl92de_stop_tx_beacon(hw);
|
||||
_rtl92de_set_bcn_ctrl_reg(hw, 0, BIT(3));
|
||||
rtl_write_dword(rtlpriv, REG_TSFTR,
|
||||
|
@ -457,7 +457,7 @@ void rtl92de_set_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val)
|
|||
rtl_write_dword(rtlpriv, REG_TSFTR + 4,
|
||||
(u32) ((mac->tsf >> 32) & 0xffffffff));
|
||||
_rtl92de_set_bcn_ctrl_reg(hw, BIT(3), 0);
|
||||
if (btype_ibss == true)
|
||||
if (btype_ibss)
|
||||
_rtl92de_resume_tx_beacon(hw);
|
||||
|
||||
break;
|
||||
|
@ -932,8 +932,8 @@ int rtl92de_hw_init(struct ieee80211_hw *hw)
|
|||
RT_TRACE(rtlpriv, COMP_ERR, DBG_WARNING,
|
||||
("Failed to download FW. Init HW "
|
||||
"without FW..\n"));
|
||||
err = 1;
|
||||
rtlhal->fw_ready = false;
|
||||
return 1;
|
||||
} else {
|
||||
rtlhal->fw_ready = true;
|
||||
}
|
||||
|
@ -1044,6 +1044,11 @@ int rtl92de_hw_init(struct ieee80211_hw *hw)
|
|||
if (((tmp_rega & BIT(11)) == BIT(11)))
|
||||
break;
|
||||
}
|
||||
/* check that loop was successful. If not, exit now */
|
||||
if (i == 10000) {
|
||||
rtlpci->init_ready = false;
|
||||
return 1;
|
||||
}
|
||||
}
|
||||
}
|
||||
rtlpci->init_ready = true;
|
||||
|
@ -1142,7 +1147,7 @@ void rtl92de_set_check_bssid(struct ieee80211_hw *hw, bool check_bssid)
|
|||
|
||||
if (rtlpriv->psc.rfpwr_state != ERFON)
|
||||
return;
|
||||
if (check_bssid == true) {
|
||||
if (check_bssid) {
|
||||
reg_rcr |= (RCR_CBSSID_DATA | RCR_CBSSID_BCN);
|
||||
rtlpriv->cfg->ops->set_hw_reg(hw, HW_VAR_RCR, (u8 *)(®_rcr));
|
||||
_rtl92de_set_bcn_ctrl_reg(hw, 0, BIT(4));
|
||||
|
@ -1221,7 +1226,6 @@ void rtl92de_enable_interrupt(struct ieee80211_hw *hw)
|
|||
|
||||
rtl_write_dword(rtlpriv, REG_HIMR, rtlpci->irq_mask[0] & 0xFFFFFFFF);
|
||||
rtl_write_dword(rtlpriv, REG_HIMRE, rtlpci->irq_mask[1] & 0xFFFFFFFF);
|
||||
rtlpci->irq_enabled = true;
|
||||
}
|
||||
|
||||
void rtl92de_disable_interrupt(struct ieee80211_hw *hw)
|
||||
|
@ -1231,7 +1235,6 @@ void rtl92de_disable_interrupt(struct ieee80211_hw *hw)
|
|||
|
||||
rtl_write_dword(rtlpriv, REG_HIMR, IMR8190_DISABLED);
|
||||
rtl_write_dword(rtlpriv, REG_HIMRE, IMR8190_DISABLED);
|
||||
rtlpci->irq_enabled = false;
|
||||
synchronize_irq(rtlpci->pdev->irq);
|
||||
}
|
||||
|
||||
|
@ -1787,7 +1790,7 @@ static void _rtl92de_read_adapter_info(struct ieee80211_hw *hw)
|
|||
RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD, ("Autoload OK\n"));
|
||||
rtlefuse->autoload_failflag = false;
|
||||
}
|
||||
if (rtlefuse->autoload_failflag == true) {
|
||||
if (rtlefuse->autoload_failflag) {
|
||||
RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG,
|
||||
("RTL819X Not boot from eeprom, check it !!"));
|
||||
return;
|
||||
|
@ -2149,7 +2152,7 @@ bool rtl92de_gpio_radio_on_off_checking(struct ieee80211_hw *hw, u8 *valid)
|
|||
REG_MAC_PINMUX_CFG) & ~(BIT(3)));
|
||||
u1tmp = rtl_read_byte(rtlpriv, REG_GPIO_IO_SEL);
|
||||
e_rfpowerstate_toset = (u1tmp & BIT(3)) ? ERFON : ERFOFF;
|
||||
if ((ppsc->hwradiooff == true) && (e_rfpowerstate_toset == ERFON)) {
|
||||
if (ppsc->hwradiooff && (e_rfpowerstate_toset == ERFON)) {
|
||||
RT_TRACE(rtlpriv, COMP_RF, DBG_DMESG,
|
||||
("GPIOChangeRF - HW Radio ON, RF ON\n"));
|
||||
e_rfpowerstate_toset = ERFON;
|
||||
|
|
|
@ -93,7 +93,7 @@ void rtl92de_sw_led_off(struct ieee80211_hw *hw, struct rtl_led *pled)
|
|||
break;
|
||||
case LED_PIN_LED0:
|
||||
ledcfg &= 0xf0;
|
||||
if (pcipriv->ledctl.led_opendrain == true)
|
||||
if (pcipriv->ledctl.led_opendrain)
|
||||
rtl_write_byte(rtlpriv, REG_LEDCFG2,
|
||||
(ledcfg | BIT(1) | BIT(5) | BIT(6)));
|
||||
else
|
||||
|
|
|
@ -932,7 +932,7 @@ bool rtl92d_phy_config_rf_with_headerfile(struct ieee80211_hw *hw,
|
|||
enum rf_content content,
|
||||
enum radio_path rfpath)
|
||||
{
|
||||
int i, j;
|
||||
int i;
|
||||
u32 *radioa_array_table;
|
||||
u32 *radiob_array_table;
|
||||
u16 radioa_arraylen, radiob_arraylen;
|
||||
|
@ -974,13 +974,10 @@ bool rtl92d_phy_config_rf_with_headerfile(struct ieee80211_hw *hw,
|
|||
mdelay(50);
|
||||
} else if (radioa_array_table[i] == 0xfd) {
|
||||
/* delay_ms(5); */
|
||||
for (j = 0; j < 100; j++)
|
||||
udelay(MAX_STALL_TIME);
|
||||
mdelay(5);
|
||||
} else if (radioa_array_table[i] == 0xfc) {
|
||||
/* delay_ms(1); */
|
||||
for (j = 0; j < 20; j++)
|
||||
udelay(MAX_STALL_TIME);
|
||||
|
||||
mdelay(1);
|
||||
} else if (radioa_array_table[i] == 0xfb) {
|
||||
udelay(50);
|
||||
} else if (radioa_array_table[i] == 0xfa) {
|
||||
|
@ -1004,12 +1001,10 @@ bool rtl92d_phy_config_rf_with_headerfile(struct ieee80211_hw *hw,
|
|||
mdelay(50);
|
||||
} else if (radiob_array_table[i] == 0xfd) {
|
||||
/* delay_ms(5); */
|
||||
for (j = 0; j < 100; j++)
|
||||
udelay(MAX_STALL_TIME);
|
||||
mdelay(5);
|
||||
} else if (radiob_array_table[i] == 0xfc) {
|
||||
/* delay_ms(1); */
|
||||
for (j = 0; j < 20; j++)
|
||||
udelay(MAX_STALL_TIME);
|
||||
mdelay(1);
|
||||
} else if (radiob_array_table[i] == 0xfb) {
|
||||
udelay(50);
|
||||
} else if (radiob_array_table[i] == 0xfa) {
|
||||
|
@ -1276,7 +1271,7 @@ static void rtl92d_phy_switch_wirelessband(struct ieee80211_hw *hw, u8 band)
|
|||
{
|
||||
struct rtl_priv *rtlpriv = rtl_priv(hw);
|
||||
struct rtl_hal *rtlhal = rtl_hal(rtl_priv(hw));
|
||||
u8 i, value8;
|
||||
u8 value8;
|
||||
|
||||
RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD, ("==>\n"));
|
||||
rtlhal->bandset = band;
|
||||
|
@ -1321,8 +1316,7 @@ static void rtl92d_phy_switch_wirelessband(struct ieee80211_hw *hw, u8 band)
|
|||
rtl_write_byte(rtlpriv, (rtlhal->interfaceindex ==
|
||||
0 ? REG_MAC0 : REG_MAC1), value8);
|
||||
}
|
||||
for (i = 0; i < 20; i++)
|
||||
udelay(MAX_STALL_TIME);
|
||||
mdelay(1);
|
||||
RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD, ("<==Switch Band OK.\n"));
|
||||
}
|
||||
|
||||
|
@ -1684,7 +1678,7 @@ static u8 _rtl92d_phy_patha_iqk(struct ieee80211_hw *hw, bool configpathb)
|
|||
RTPRINT(rtlpriv, FINIT, INIT_IQK,
|
||||
("Delay %d ms for One shot, path A LOK & IQK.\n",
|
||||
IQK_DELAY_TIME));
|
||||
udelay(IQK_DELAY_TIME * 1000);
|
||||
mdelay(IQK_DELAY_TIME);
|
||||
/* Check failed */
|
||||
regeac = rtl_get_bbreg(hw, 0xeac, BMASKDWORD);
|
||||
RTPRINT(rtlpriv, FINIT, INIT_IQK, ("0xeac = 0x%x\n", regeac));
|
||||
|
@ -1755,7 +1749,7 @@ static u8 _rtl92d_phy_patha_iqk_5g_normal(struct ieee80211_hw *hw,
|
|||
RTPRINT(rtlpriv, FINIT, INIT_IQK,
|
||||
("Delay %d ms for One shot, path A LOK & IQK.\n",
|
||||
IQK_DELAY_TIME));
|
||||
udelay(IQK_DELAY_TIME * 1000 * 10);
|
||||
mdelay(IQK_DELAY_TIME * 10);
|
||||
/* Check failed */
|
||||
regeac = rtl_get_bbreg(hw, 0xeac, BMASKDWORD);
|
||||
RTPRINT(rtlpriv, FINIT, INIT_IQK, ("0xeac = 0x%x\n", regeac));
|
||||
|
@ -1808,7 +1802,7 @@ static u8 _rtl92d_phy_pathb_iqk(struct ieee80211_hw *hw)
|
|||
RTPRINT(rtlpriv, FINIT, INIT_IQK,
|
||||
("Delay %d ms for One shot, path B LOK & IQK.\n",
|
||||
IQK_DELAY_TIME));
|
||||
udelay(IQK_DELAY_TIME * 1000);
|
||||
mdelay(IQK_DELAY_TIME);
|
||||
/* Check failed */
|
||||
regeac = rtl_get_bbreg(hw, 0xeac, BMASKDWORD);
|
||||
RTPRINT(rtlpriv, FINIT, INIT_IQK, ("0xeac = 0x%x\n", regeac));
|
||||
|
@ -1875,7 +1869,7 @@ static u8 _rtl92d_phy_pathb_iqk_5g_normal(struct ieee80211_hw *hw)
|
|||
/* delay x ms */
|
||||
RTPRINT(rtlpriv, FINIT, INIT_IQK,
|
||||
("Delay %d ms for One shot, path B LOK & IQK.\n", 10));
|
||||
udelay(IQK_DELAY_TIME * 1000 * 10);
|
||||
mdelay(IQK_DELAY_TIME * 10);
|
||||
|
||||
/* Check failed */
|
||||
regeac = rtl_get_bbreg(hw, 0xeac, BMASKDWORD);
|
||||
|
@ -2206,7 +2200,7 @@ static void _rtl92d_phy_iq_calibrate_5g_normal(struct ieee80211_hw *hw,
|
|||
* PHY_REG.txt , and radio_a, radio_b.txt */
|
||||
|
||||
RTPRINT(rtlpriv, FINIT, INIT_IQK, ("IQK for 5G NORMAL:Start!!!\n"));
|
||||
udelay(IQK_DELAY_TIME * 1000 * 20);
|
||||
mdelay(IQK_DELAY_TIME * 20);
|
||||
if (t == 0) {
|
||||
bbvalue = rtl_get_bbreg(hw, RFPGA0_RFMOD, BMASKDWORD);
|
||||
RTPRINT(rtlpriv, FINIT, INIT_IQK, ("==>0x%08x\n", bbvalue));
|
||||
|
|
|
@ -87,7 +87,7 @@ void rtl92d_phy_rf6052_set_cck_txpower(struct ieee80211_hw *hw,
|
|||
|
||||
if (rtlefuse->eeprom_regulatory != 0)
|
||||
turbo_scanoff = true;
|
||||
if (mac->act_scanning == true) {
|
||||
if (mac->act_scanning) {
|
||||
tx_agc[RF90_PATH_A] = 0x3f3f3f3f;
|
||||
tx_agc[RF90_PATH_B] = 0x3f3f3f3f;
|
||||
if (turbo_scanoff) {
|
||||
|
@ -416,9 +416,9 @@ bool rtl92d_phy_enable_anotherphy(struct ieee80211_hw *hw, bool bmac0)
|
|||
struct rtl_priv *rtlpriv = rtl_priv(hw);
|
||||
struct rtl_hal *rtlhal = &(rtlpriv->rtlhal);
|
||||
u8 u1btmp;
|
||||
u8 direct = bmac0 == true ? BIT(3) | BIT(2) : BIT(3);
|
||||
u8 mac_reg = bmac0 == true ? REG_MAC1 : REG_MAC0;
|
||||
u8 mac_on_bit = bmac0 == true ? MAC1_ON : MAC0_ON;
|
||||
u8 direct = bmac0 ? BIT(3) | BIT(2) : BIT(3);
|
||||
u8 mac_reg = bmac0 ? REG_MAC1 : REG_MAC0;
|
||||
u8 mac_on_bit = bmac0 ? MAC1_ON : MAC0_ON;
|
||||
bool bresult = true; /* true: need to enable BB/RF power */
|
||||
|
||||
rtlhal->during_mac0init_radiob = false;
|
||||
|
@ -447,9 +447,9 @@ void rtl92d_phy_powerdown_anotherphy(struct ieee80211_hw *hw, bool bmac0)
|
|||
struct rtl_priv *rtlpriv = rtl_priv(hw);
|
||||
struct rtl_hal *rtlhal = &(rtlpriv->rtlhal);
|
||||
u8 u1btmp;
|
||||
u8 direct = bmac0 == true ? BIT(3) | BIT(2) : BIT(3);
|
||||
u8 mac_reg = bmac0 == true ? REG_MAC1 : REG_MAC0;
|
||||
u8 mac_on_bit = bmac0 == true ? MAC1_ON : MAC0_ON;
|
||||
u8 direct = bmac0 ? BIT(3) | BIT(2) : BIT(3);
|
||||
u8 mac_reg = bmac0 ? REG_MAC1 : REG_MAC0;
|
||||
u8 mac_on_bit = bmac0 ? MAC1_ON : MAC0_ON;
|
||||
|
||||
rtlhal->during_mac0init_radiob = false;
|
||||
rtlhal->during_mac1init_radioa = false;
|
||||
|
@ -573,7 +573,7 @@ bool rtl92d_phy_rf6052_config(struct ieee80211_hw *hw)
|
|||
udelay(1);
|
||||
switch (rfpath) {
|
||||
case RF90_PATH_A:
|
||||
if (true_bpath == true)
|
||||
if (true_bpath)
|
||||
rtstatus = rtl92d_phy_config_rf_with_headerfile(
|
||||
hw, radiob_txt,
|
||||
(enum radio_path)rfpath);
|
||||
|
|
|
@ -614,7 +614,7 @@ bool rtl92de_rx_query_desc(struct ieee80211_hw *hw, struct rtl_stats *stats,
|
|||
(u8)
|
||||
GET_RX_DESC_RXMCS(pdesc));
|
||||
rx_status->mactime = GET_RX_DESC_TSFL(pdesc);
|
||||
if (phystatus == true) {
|
||||
if (phystatus) {
|
||||
p_drvinfo = (struct rx_fwinfo_92d *)(skb->data +
|
||||
stats->rx_bufshift);
|
||||
_rtl92de_translate_rx_signal_stuff(hw,
|
||||
|
@ -876,7 +876,7 @@ void rtl92de_tx_fill_cmddesc(struct ieee80211_hw *hw,
|
|||
|
||||
void rtl92de_set_desc(u8 *pdesc, bool istx, u8 desc_name, u8 *val)
|
||||
{
|
||||
if (istx == true) {
|
||||
if (istx) {
|
||||
switch (desc_name) {
|
||||
case HW_DESC_OWN:
|
||||
wmb();
|
||||
|
@ -917,7 +917,7 @@ u32 rtl92de_get_desc(u8 *p_desc, bool istx, u8 desc_name)
|
|||
{
|
||||
u32 ret = 0;
|
||||
|
||||
if (istx == true) {
|
||||
if (istx) {
|
||||
switch (desc_name) {
|
||||
case HW_DESC_OWN:
|
||||
ret = GET_TX_DESC_OWN(p_desc);
|
||||
|
|
|
@ -1214,8 +1214,6 @@ void rtl92se_enable_interrupt(struct ieee80211_hw *hw)
|
|||
rtl_write_dword(rtlpriv, INTA_MASK, rtlpci->irq_mask[0]);
|
||||
/* Support Bit 32-37(Assign as Bit 0-5) interrupt setting now */
|
||||
rtl_write_dword(rtlpriv, INTA_MASK + 4, rtlpci->irq_mask[1] & 0x3F);
|
||||
|
||||
rtlpci->irq_enabled = true;
|
||||
}
|
||||
|
||||
void rtl92se_disable_interrupt(struct ieee80211_hw *hw)
|
||||
|
@ -1226,7 +1224,6 @@ void rtl92se_disable_interrupt(struct ieee80211_hw *hw)
|
|||
rtl_write_dword(rtlpriv, INTA_MASK, 0);
|
||||
rtl_write_dword(rtlpriv, INTA_MASK + 4, 0);
|
||||
|
||||
rtlpci->irq_enabled = false;
|
||||
synchronize_irq(rtlpci->pdev->irq);
|
||||
}
|
||||
|
||||
|
|
|
@ -11,7 +11,6 @@ config WL12XX
|
|||
depends on WL12XX_MENU && GENERIC_HARDIRQS
|
||||
depends on INET
|
||||
select FW_LOADER
|
||||
select CRC7
|
||||
---help---
|
||||
This module adds support for wireless adapters based on TI wl1271 and
|
||||
TI wl1273 chipsets. This module does *not* include support for wl1251.
|
||||
|
@ -33,6 +32,7 @@ config WL12XX_HT
|
|||
config WL12XX_SPI
|
||||
tristate "TI wl12xx SPI support"
|
||||
depends on WL12XX && SPI_MASTER
|
||||
select CRC7
|
||||
---help---
|
||||
This module adds support for the SPI interface of adapters using
|
||||
TI wl12xx chipsets. Select this if your platform is using
|
||||
|
|
|
@ -25,7 +25,6 @@
|
|||
|
||||
#include <linux/module.h>
|
||||
#include <linux/platform_device.h>
|
||||
#include <linux/crc7.h>
|
||||
#include <linux/spi/spi.h>
|
||||
#include <linux/slab.h>
|
||||
|
||||
|
@ -1068,6 +1067,7 @@ int wl1271_acx_sta_mem_cfg(struct wl1271 *wl)
|
|||
mem_conf->tx_free_req = mem->min_req_tx_blocks;
|
||||
mem_conf->rx_free_req = mem->min_req_rx_blocks;
|
||||
mem_conf->tx_min = mem->tx_min;
|
||||
mem_conf->fwlog_blocks = wl->conf.fwlog.mem_blocks;
|
||||
|
||||
ret = wl1271_cmd_configure(wl, ACX_MEM_CFG, mem_conf,
|
||||
sizeof(*mem_conf));
|
||||
|
@ -1577,6 +1577,53 @@ out:
|
|||
return ret;
|
||||
}
|
||||
|
||||
int wl1271_acx_ps_rx_streaming(struct wl1271 *wl, bool enable)
|
||||
{
|
||||
struct wl1271_acx_ps_rx_streaming *rx_streaming;
|
||||
u32 conf_queues, enable_queues;
|
||||
int i, ret = 0;
|
||||
|
||||
wl1271_debug(DEBUG_ACX, "acx ps rx streaming");
|
||||
|
||||
rx_streaming = kzalloc(sizeof(*rx_streaming), GFP_KERNEL);
|
||||
if (!rx_streaming) {
|
||||
ret = -ENOMEM;
|
||||
goto out;
|
||||
}
|
||||
|
||||
conf_queues = wl->conf.rx_streaming.queues;
|
||||
if (enable)
|
||||
enable_queues = conf_queues;
|
||||
else
|
||||
enable_queues = 0;
|
||||
|
||||
for (i = 0; i < 8; i++) {
|
||||
/*
|
||||
* Skip non-changed queues, to avoid redundant acxs.
|
||||
* this check assumes conf.rx_streaming.queues can't
|
||||
* be changed while rx_streaming is enabled.
|
||||
*/
|
||||
if (!(conf_queues & BIT(i)))
|
||||
continue;
|
||||
|
||||
rx_streaming->tid = i;
|
||||
rx_streaming->enable = enable_queues & BIT(i);
|
||||
rx_streaming->period = wl->conf.rx_streaming.interval;
|
||||
rx_streaming->timeout = wl->conf.rx_streaming.interval;
|
||||
|
||||
ret = wl1271_cmd_configure(wl, ACX_PS_RX_STREAMING,
|
||||
rx_streaming,
|
||||
sizeof(*rx_streaming));
|
||||
if (ret < 0) {
|
||||
wl1271_warning("acx ps rx streaming failed: %d", ret);
|
||||
goto out;
|
||||
}
|
||||
}
|
||||
out:
|
||||
kfree(rx_streaming);
|
||||
return ret;
|
||||
}
|
||||
|
||||
int wl1271_acx_max_tx_retry(struct wl1271 *wl)
|
||||
{
|
||||
struct wl1271_acx_max_tx_retry *acx = NULL;
|
||||
|
|
|
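Illustration (not part of the patch): wl1271_acx_ps_rx_streaming() above walks TIDs 0-7, skips queues that are not in the configured bitmap, and sends one ACX per remaining queue, with the enable bit derived from the requested state and period/timeout both set to the configured interval. The standalone sketch below models only that selection logic in plain C; the names are made up for the example and are not driver API.

#include <stdint.h>
#include <stdio.h>

/* Model of the per-TID selection done by wl1271_acx_ps_rx_streaming(). */
static void model_rx_streaming(uint8_t conf_queues, int enable)
{
	uint8_t enable_queues = enable ? conf_queues : 0;
	int i;

	for (i = 0; i < 8; i++) {
		if (!(conf_queues & (1u << i)))
			continue;	/* queue not configured: no ACX sent */
		printf("tid %d -> enable=%d (period = timeout = interval)\n",
		       i, !!(enable_queues & (1u << i)));
	}
}

int main(void)
{
	model_rx_streaming(0x1, 1);	/* default conf: only TID 0 is polled */
	model_rx_streaming(0x1, 0);	/* disabling clears the enable bit */
	return 0;
}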
@ -828,6 +828,8 @@ struct wl1271_acx_sta_config_memory {
|
|||
u8 tx_free_req;
|
||||
u8 rx_free_req;
|
||||
u8 tx_min;
|
||||
u8 fwlog_blocks;
|
||||
u8 padding[3];
|
||||
} __packed;
|
||||
|
||||
struct wl1271_acx_mem_map {
|
||||
|
@ -1153,6 +1155,19 @@ struct wl1271_acx_fw_tsf_information {
|
|||
u8 padding[3];
|
||||
} __packed;
|
||||
|
||||
struct wl1271_acx_ps_rx_streaming {
|
||||
struct acx_header header;
|
||||
|
||||
u8 tid;
|
||||
u8 enable;
|
||||
|
||||
/* interval between triggers (10-100 msec) */
|
||||
u8 period;
|
||||
|
||||
/* timeout before first trigger (0-200 msec) */
|
||||
u8 timeout;
|
||||
} __packed;
|
||||
|
||||
struct wl1271_acx_max_tx_retry {
|
||||
struct acx_header header;
|
||||
|
||||
|
@ -1384,6 +1399,7 @@ int wl1271_acx_set_ba_session(struct wl1271 *wl,
|
|||
int wl1271_acx_set_ba_receiver_session(struct wl1271 *wl, u8 tid_index, u16 ssn,
|
||||
bool enable);
|
||||
int wl1271_acx_tsf_info(struct wl1271 *wl, u64 *mactime);
|
||||
int wl1271_acx_ps_rx_streaming(struct wl1271 *wl, bool enable);
|
||||
int wl1271_acx_max_tx_retry(struct wl1271 *wl);
|
||||
int wl1271_acx_config_ps(struct wl1271 *wl);
|
||||
int wl1271_acx_set_inconnection_sta(struct wl1271 *wl, u8 *addr);
|
||||
|
|
|
@ -102,6 +102,33 @@ static void wl1271_boot_set_ecpu_ctrl(struct wl1271 *wl, u32 flag)
|
|||
wl1271_write32(wl, ACX_REG_ECPU_CONTROL, cpu_ctrl);
|
||||
}
|
||||
|
||||
static unsigned int wl12xx_get_fw_ver_quirks(struct wl1271 *wl)
|
||||
{
|
||||
unsigned int quirks = 0;
|
||||
unsigned int *fw_ver = wl->chip.fw_ver;
|
||||
|
||||
/* Only for wl127x */
|
||||
if ((fw_ver[FW_VER_CHIP] == FW_VER_CHIP_WL127X) &&
|
||||
/* Check STA version */
|
||||
(((fw_ver[FW_VER_IF_TYPE] == FW_VER_IF_TYPE_STA) &&
|
||||
(fw_ver[FW_VER_MINOR] < FW_VER_MINOR_1_SPARE_STA_MIN)) ||
|
||||
/* Check AP version */
|
||||
((fw_ver[FW_VER_IF_TYPE] == FW_VER_IF_TYPE_AP) &&
|
||||
(fw_ver[FW_VER_MINOR] < FW_VER_MINOR_1_SPARE_AP_MIN))))
|
||||
quirks |= WL12XX_QUIRK_USE_2_SPARE_BLOCKS;
|
||||
|
||||
/* Only new station firmwares support routing fw logs to the host */
|
||||
if ((fw_ver[FW_VER_IF_TYPE] == FW_VER_IF_TYPE_STA) &&
|
||||
(fw_ver[FW_VER_MINOR] < FW_VER_MINOR_FWLOG_STA_MIN))
|
||||
quirks |= WL12XX_QUIRK_FWLOG_NOT_IMPLEMENTED;
|
||||
|
||||
/* This feature is not yet supported for AP mode */
|
||||
if (fw_ver[FW_VER_IF_TYPE] == FW_VER_IF_TYPE_AP)
|
||||
quirks |= WL12XX_QUIRK_FWLOG_NOT_IMPLEMENTED;
|
||||
|
||||
return quirks;
|
||||
}
|
||||
|
||||
static void wl1271_parse_fw_ver(struct wl1271 *wl)
|
||||
{
|
||||
int ret;
|
||||
|
@ -116,6 +143,9 @@ static void wl1271_parse_fw_ver(struct wl1271 *wl)
|
|||
memset(wl->chip.fw_ver, 0, sizeof(wl->chip.fw_ver));
|
||||
return;
|
||||
}
|
||||
|
||||
/* Check if any quirks are needed with older fw versions */
|
||||
wl->quirks |= wl12xx_get_fw_ver_quirks(wl);
|
||||
}
|
||||
|
||||
static void wl1271_boot_fw_version(struct wl1271 *wl)
|
||||
|
@ -749,6 +779,9 @@ int wl1271_load_firmware(struct wl1271 *wl)
|
|||
clk |= (wl->ref_clock << 1) << 4;
|
||||
}
|
||||
|
||||
if (wl->quirks & WL12XX_QUIRK_LPD_MODE)
|
||||
clk |= SCRATCH_ENABLE_LPD;
|
||||
|
||||
wl1271_write32(wl, DRPW_SCRATCH_START, clk);
|
||||
|
||||
wl1271_set_partition(wl, &part_table[PART_WORK]);
|
||||
|
|
|
@ -23,7 +23,6 @@
|
|||
|
||||
#include <linux/module.h>
|
||||
#include <linux/platform_device.h>
|
||||
#include <linux/crc7.h>
|
||||
#include <linux/spi/spi.h>
|
||||
#include <linux/etherdevice.h>
|
||||
#include <linux/ieee80211.h>
|
||||
|
@ -106,7 +105,7 @@ int wl1271_cmd_send(struct wl1271 *wl, u16 id, void *buf, size_t len,
|
|||
|
||||
fail:
|
||||
WARN_ON(1);
|
||||
ieee80211_queue_work(wl->hw, &wl->recovery_work);
|
||||
wl12xx_queue_recovery_work(wl);
|
||||
return ret;
|
||||
}
|
||||
|
||||
|
@ -135,6 +134,11 @@ int wl1271_cmd_general_parms(struct wl1271 *wl)
|
|||
/* Override the REF CLK from the NVS with the one from platform data */
|
||||
gen_parms->general_params.ref_clock = wl->ref_clock;
|
||||
|
||||
/* LPD mode enable (bits 6-7) in WL1271 AP mode only */
|
||||
if (wl->quirks & WL12XX_QUIRK_LPD_MODE)
|
||||
gen_parms->general_params.general_settings |=
|
||||
GENERAL_SETTINGS_DRPW_LPD;
|
||||
|
||||
ret = wl1271_cmd_test(wl, gen_parms, sizeof(*gen_parms), answer);
|
||||
if (ret < 0) {
|
||||
wl1271_warning("CMD_INI_FILE_GENERAL_PARAM failed");
|
||||
|
@ -352,7 +356,7 @@ static int wl1271_cmd_wait_for_event(struct wl1271 *wl, u32 mask)
|
|||
|
||||
ret = wl1271_cmd_wait_for_event_or_timeout(wl, mask);
|
||||
if (ret != 0) {
|
||||
ieee80211_queue_work(wl->hw, &wl->recovery_work);
|
||||
wl12xx_queue_recovery_work(wl);
|
||||
return ret;
|
||||
}
|
||||
|
||||
|
@ -1223,3 +1227,87 @@ out_free:
|
|||
out:
|
||||
return ret;
|
||||
}
|
||||
|
||||
int wl12xx_cmd_config_fwlog(struct wl1271 *wl)
|
||||
{
|
||||
struct wl12xx_cmd_config_fwlog *cmd;
|
||||
int ret = 0;
|
||||
|
||||
wl1271_debug(DEBUG_CMD, "cmd config firmware logger");
|
||||
|
||||
cmd = kzalloc(sizeof(*cmd), GFP_KERNEL);
|
||||
if (!cmd) {
|
||||
ret = -ENOMEM;
|
||||
goto out;
|
||||
}
|
||||
|
||||
cmd->logger_mode = wl->conf.fwlog.mode;
|
||||
cmd->log_severity = wl->conf.fwlog.severity;
|
||||
cmd->timestamp = wl->conf.fwlog.timestamp;
|
||||
cmd->output = wl->conf.fwlog.output;
|
||||
cmd->threshold = wl->conf.fwlog.threshold;
|
||||
|
||||
ret = wl1271_cmd_send(wl, CMD_CONFIG_FWLOGGER, cmd, sizeof(*cmd), 0);
|
||||
if (ret < 0) {
|
||||
wl1271_error("failed to send config firmware logger command");
|
||||
goto out_free;
|
||||
}
|
||||
|
||||
out_free:
|
||||
kfree(cmd);
|
||||
|
||||
out:
|
||||
return ret;
|
||||
}
|
||||
|
||||
int wl12xx_cmd_start_fwlog(struct wl1271 *wl)
|
||||
{
|
||||
struct wl12xx_cmd_start_fwlog *cmd;
|
||||
int ret = 0;
|
||||
|
||||
wl1271_debug(DEBUG_CMD, "cmd start firmware logger");
|
||||
|
||||
cmd = kzalloc(sizeof(*cmd), GFP_KERNEL);
|
||||
if (!cmd) {
|
||||
ret = -ENOMEM;
|
||||
goto out;
|
||||
}
|
||||
|
||||
ret = wl1271_cmd_send(wl, CMD_START_FWLOGGER, cmd, sizeof(*cmd), 0);
|
||||
if (ret < 0) {
|
||||
wl1271_error("failed to send start firmware logger command");
|
||||
goto out_free;
|
||||
}
|
||||
|
||||
out_free:
|
||||
kfree(cmd);
|
||||
|
||||
out:
|
||||
return ret;
|
||||
}
|
||||
|
||||
int wl12xx_cmd_stop_fwlog(struct wl1271 *wl)
|
||||
{
|
||||
struct wl12xx_cmd_stop_fwlog *cmd;
|
||||
int ret = 0;
|
||||
|
||||
wl1271_debug(DEBUG_CMD, "cmd stop firmware logger");
|
||||
|
||||
cmd = kzalloc(sizeof(*cmd), GFP_KERNEL);
|
||||
if (!cmd) {
|
||||
ret = -ENOMEM;
|
||||
goto out;
|
||||
}
|
||||
|
||||
ret = wl1271_cmd_send(wl, CMD_STOP_FWLOGGER, cmd, sizeof(*cmd), 0);
|
||||
if (ret < 0) {
|
||||
wl1271_error("failed to send stop firmware logger command");
|
||||
goto out_free;
|
||||
}
|
||||
|
||||
out_free:
|
||||
kfree(cmd);
|
||||
|
||||
out:
|
||||
return ret;
|
||||
}
|
||||
|
|
|
@ -70,6 +70,9 @@ int wl1271_cmd_start_bss(struct wl1271 *wl);
|
|||
int wl1271_cmd_stop_bss(struct wl1271 *wl);
|
||||
int wl1271_cmd_add_sta(struct wl1271 *wl, struct ieee80211_sta *sta, u8 hlid);
|
||||
int wl1271_cmd_remove_sta(struct wl1271 *wl, u8 hlid);
|
||||
int wl12xx_cmd_config_fwlog(struct wl1271 *wl);
|
||||
int wl12xx_cmd_start_fwlog(struct wl1271 *wl);
|
||||
int wl12xx_cmd_stop_fwlog(struct wl1271 *wl);
|
||||
|
||||
enum wl1271_commands {
|
||||
CMD_INTERROGATE = 1, /*use this to read information elements*/
|
||||
|
@ -107,6 +110,9 @@ enum wl1271_commands {
|
|||
CMD_START_PERIODIC_SCAN = 50,
|
||||
CMD_STOP_PERIODIC_SCAN = 51,
|
||||
CMD_SET_STA_STATE = 52,
|
||||
CMD_CONFIG_FWLOGGER = 53,
|
||||
CMD_START_FWLOGGER = 54,
|
||||
CMD_STOP_FWLOGGER = 55,
|
||||
|
||||
/* AP mode commands */
|
||||
CMD_BSS_START = 60,
|
||||
|
@ -575,4 +581,60 @@ struct wl1271_cmd_remove_sta {
|
|||
u8 padding1;
|
||||
} __packed;
|
||||
|
||||
/*
|
||||
* Continuous mode - packets are transferred to the host periodically
|
||||
* via the data path.
|
||||
* On demand - Log messages are stored in a cyclic buffer in the
|
||||
* firmware, and only transferred to the host when explicitly requested
|
||||
*/
|
||||
enum wl12xx_fwlogger_log_mode {
|
||||
WL12XX_FWLOG_CONTINUOUS,
|
||||
WL12XX_FWLOG_ON_DEMAND
|
||||
};
|
||||
|
||||
/* Include/exclude timestamps from the log messages */
|
||||
enum wl12xx_fwlogger_timestamp {
|
||||
WL12XX_FWLOG_TIMESTAMP_DISABLED,
|
||||
WL12XX_FWLOG_TIMESTAMP_ENABLED
|
||||
};
|
||||
|
||||
/*
|
||||
* Logs can be routed to the debug pinouts (where available), to the host bus
|
||||
* (SDIO/SPI), or dropped
|
||||
*/
|
||||
enum wl12xx_fwlogger_output {
|
||||
WL12XX_FWLOG_OUTPUT_NONE,
|
||||
WL12XX_FWLOG_OUTPUT_DBG_PINS,
|
||||
WL12XX_FWLOG_OUTPUT_HOST,
|
||||
};
|
||||
|
||||
struct wl12xx_cmd_config_fwlog {
|
||||
struct wl1271_cmd_header header;
|
||||
|
||||
/* See enum wl12xx_fwlogger_log_mode */
|
||||
u8 logger_mode;
|
||||
|
||||
/* Minimum log level threshold */
|
||||
u8 log_severity;
|
||||
|
||||
/* Include/exclude timestamps from the log messages */
|
||||
u8 timestamp;
|
||||
|
||||
/* See enum wl12xx_fwlogger_output */
|
||||
u8 output;
|
||||
|
||||
/* Regulates the frequency of log messages */
|
||||
u8 threshold;
|
||||
|
||||
u8 padding[3];
|
||||
} __packed;
|
||||
|
||||
struct wl12xx_cmd_start_fwlog {
|
||||
struct wl1271_cmd_header header;
|
||||
} __packed;
|
||||
|
||||
struct wl12xx_cmd_stop_fwlog {
|
||||
struct wl1271_cmd_header header;
|
||||
} __packed;
|
||||
|
||||
#endif /* __WL1271_CMD_H__ */
|
||||
|
|
|
@ -1248,6 +1248,59 @@ struct conf_fm_coex {
|
|||
u8 swallow_clk_diff;
|
||||
};
|
||||
|
||||
struct conf_rx_streaming_settings {
|
||||
/*
|
||||
* RX Streaming duration (in msec) from last tx/rx
|
||||
*
|
||||
* Range: u32
|
||||
*/
|
||||
u32 duration;
|
||||
|
||||
/*
|
||||
* Bitmap of tids to be polled during RX streaming.
|
||||
* (Note: it doesn't look like it really matters)
|
||||
*
|
||||
* Range: 0x1-0xff
|
||||
*/
|
||||
u8 queues;
|
||||
|
||||
/*
|
||||
* RX Streaming interval.
|
||||
* (Note: this value is also used as the rx streaming timeout)
|
||||
* Range: 0 (disabled), 10 - 100
|
||||
*/
|
||||
u8 interval;
|
||||
|
||||
/*
|
||||
* enable rx streaming also when there is no coex activity
|
||||
*/
|
||||
u8 always;
|
||||
};
|
||||
|
||||
struct conf_fwlog {
|
||||
/* Continuous or on-demand */
|
||||
u8 mode;
|
||||
|
||||
/*
|
||||
* Number of memory blocks dedicated for the FW logger
|
||||
*
|
||||
* Range: 1-3, or 0 to disable the FW logger
|
||||
*/
|
||||
u8 mem_blocks;
|
||||
|
||||
/* Minimum log level threshold */
|
||||
u8 severity;
|
||||
|
||||
/* Include/exclude timestamps from the log messages */
|
||||
u8 timestamp;
|
||||
|
||||
/* See enum wl12xx_fwlogger_output */
|
||||
u8 output;
|
||||
|
||||
/* Regulates the frequency of log messages */
|
||||
u8 threshold;
|
||||
};
|
||||
|
||||
struct conf_drv_settings {
|
||||
struct conf_sg_settings sg;
|
||||
struct conf_rx_settings rx;
|
||||
|
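Illustration (not part of the patch): a designated initializer for the two configuration blocks defined above, using the same values the patch installs in default_conf later in main.c. The struct definitions here are stand-ins with fixed-width types so the snippet compiles on its own.

#include <stdint.h>
#include <stdio.h>

/* Stand-in definitions mirroring the fields added above. */
struct conf_rx_streaming_settings {
	uint32_t duration;
	uint8_t queues;
	uint8_t interval;
	uint8_t always;
};

struct conf_fwlog {
	uint8_t mode;
	uint8_t mem_blocks;
	uint8_t severity;
	uint8_t timestamp;
	uint8_t output;
	uint8_t threshold;
};

/* Values chosen to match the patch's default_conf entries. */
static const struct conf_rx_streaming_settings rx_streaming_defaults = {
	.duration = 150,
	.queues   = 0x1,
	.interval = 20,
	.always   = 0,
};

static const struct conf_fwlog fwlog_defaults = {
	.mode       = 1,	/* WL12XX_FWLOG_ON_DEMAND */
	.mem_blocks = 2,
	.severity   = 0,
	.timestamp  = 0,	/* WL12XX_FWLOG_TIMESTAMP_DISABLED */
	.output     = 2,	/* WL12XX_FWLOG_OUTPUT_HOST */
	.threshold  = 0,
};

int main(void)
{
	printf("rx_streaming: duration=%u interval=%u\n",
	       (unsigned)rx_streaming_defaults.duration,
	       (unsigned)rx_streaming_defaults.interval);
	printf("fwlog: mode=%u mem_blocks=%u output=%u\n",
	       (unsigned)fwlog_defaults.mode,
	       (unsigned)fwlog_defaults.mem_blocks,
	       (unsigned)fwlog_defaults.output);
	return 0;
}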
@ -1263,6 +1316,8 @@ struct conf_drv_settings {
|
|||
struct conf_memory_settings mem_wl127x;
|
||||
struct conf_memory_settings mem_wl128x;
|
||||
struct conf_fm_coex fm_coex;
|
||||
struct conf_rx_streaming_settings rx_streaming;
|
||||
struct conf_fwlog fwlog;
|
||||
u8 hci_io_ds;
|
||||
};
|
||||
|
||||
|
|
|
@ -71,6 +71,14 @@ static const struct file_operations name## _ops = { \
|
|||
if (!entry || IS_ERR(entry)) \
|
||||
goto err; \
|
||||
|
||||
#define DEBUGFS_ADD_PREFIX(prefix, name, parent) \
|
||||
do { \
|
||||
entry = debugfs_create_file(#name, 0400, parent, \
|
||||
wl, &prefix## _## name## _ops); \
|
||||
if (!entry || IS_ERR(entry)) \
|
||||
goto err; \
|
||||
} while (0)
|
||||
|
||||
#define DEBUGFS_FWSTATS_FILE(sub, name, fmt) \
|
||||
static ssize_t sub## _ ##name## _read(struct file *file, \
|
||||
char __user *userbuf, \
|
||||
|
@ -298,7 +306,7 @@ static ssize_t start_recovery_write(struct file *file,
|
|||
struct wl1271 *wl = file->private_data;
|
||||
|
||||
mutex_lock(&wl->mutex);
|
||||
ieee80211_queue_work(wl->hw, &wl->recovery_work);
|
||||
wl12xx_queue_recovery_work(wl);
|
||||
mutex_unlock(&wl->mutex);
|
||||
|
||||
return count;
|
||||
|
@ -527,11 +535,129 @@ static const struct file_operations beacon_interval_ops = {
|
|||
.llseek = default_llseek,
|
||||
};
|
||||
|
||||
static ssize_t rx_streaming_interval_write(struct file *file,
|
||||
const char __user *user_buf,
|
||||
size_t count, loff_t *ppos)
|
||||
{
|
||||
struct wl1271 *wl = file->private_data;
|
||||
char buf[10];
|
||||
size_t len;
|
||||
unsigned long value;
|
||||
int ret;
|
||||
|
||||
len = min(count, sizeof(buf) - 1);
|
||||
if (copy_from_user(buf, user_buf, len))
|
||||
return -EFAULT;
|
||||
buf[len] = '\0';
|
||||
|
||||
ret = kstrtoul(buf, 0, &value);
|
||||
if (ret < 0) {
|
||||
wl1271_warning("illegal value in rx_streaming_interval!");
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
/* valid values: 0, 10-100 */
|
||||
if (value && (value < 10 || value > 100)) {
|
||||
wl1271_warning("value is not in range!");
|
||||
return -ERANGE;
|
||||
}
|
||||
|
||||
mutex_lock(&wl->mutex);
|
||||
|
||||
wl->conf.rx_streaming.interval = value;
|
||||
|
||||
ret = wl1271_ps_elp_wakeup(wl);
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
|
||||
wl1271_recalc_rx_streaming(wl);
|
||||
|
||||
wl1271_ps_elp_sleep(wl);
|
||||
out:
|
||||
mutex_unlock(&wl->mutex);
|
||||
return count;
|
||||
}
|
||||
|
||||
static ssize_t rx_streaming_interval_read(struct file *file,
|
||||
char __user *userbuf,
|
||||
size_t count, loff_t *ppos)
|
||||
{
|
||||
struct wl1271 *wl = file->private_data;
|
||||
return wl1271_format_buffer(userbuf, count, ppos,
|
||||
"%d\n", wl->conf.rx_streaming.interval);
|
||||
}
|
||||
|
||||
static const struct file_operations rx_streaming_interval_ops = {
|
||||
.read = rx_streaming_interval_read,
|
||||
.write = rx_streaming_interval_write,
|
||||
.open = wl1271_open_file_generic,
|
||||
.llseek = default_llseek,
|
||||
};
|
||||
|
||||
static ssize_t rx_streaming_always_write(struct file *file,
|
||||
const char __user *user_buf,
|
||||
size_t count, loff_t *ppos)
|
||||
{
|
||||
struct wl1271 *wl = file->private_data;
|
||||
char buf[10];
|
||||
size_t len;
|
||||
unsigned long value;
|
||||
int ret;
|
||||
|
||||
len = min(count, sizeof(buf) - 1);
|
||||
if (copy_from_user(buf, user_buf, len))
|
||||
return -EFAULT;
|
||||
buf[len] = '\0';
|
||||
|
||||
ret = kstrtoul(buf, 0, &value);
|
||||
if (ret < 0) {
|
||||
wl1271_warning("illegal value in rx_streaming_write!");
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
/* valid values: 0 or 1 */
|
||||
if (!(value == 0 || value == 1)) {
|
||||
wl1271_warning("value is not in valid!");
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
mutex_lock(&wl->mutex);
|
||||
|
||||
wl->conf.rx_streaming.always = value;
|
||||
|
||||
ret = wl1271_ps_elp_wakeup(wl);
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
|
||||
wl1271_recalc_rx_streaming(wl);
|
||||
|
||||
wl1271_ps_elp_sleep(wl);
|
||||
out:
|
||||
mutex_unlock(&wl->mutex);
|
||||
return count;
|
||||
}
|
||||
|
||||
static ssize_t rx_streaming_always_read(struct file *file,
|
||||
char __user *userbuf,
|
||||
size_t count, loff_t *ppos)
|
||||
{
|
||||
struct wl1271 *wl = file->private_data;
|
||||
return wl1271_format_buffer(userbuf, count, ppos,
|
||||
"%d\n", wl->conf.rx_streaming.always);
|
||||
}
|
||||
|
||||
static const struct file_operations rx_streaming_always_ops = {
|
||||
.read = rx_streaming_always_read,
|
||||
.write = rx_streaming_always_write,
|
||||
.open = wl1271_open_file_generic,
|
||||
.llseek = default_llseek,
|
||||
};
|
||||
|
||||
static int wl1271_debugfs_add_files(struct wl1271 *wl,
|
||||
struct dentry *rootdir)
|
||||
{
|
||||
int ret = 0;
|
||||
struct dentry *entry, *stats;
|
||||
struct dentry *entry, *stats, *streaming;
|
||||
|
||||
stats = debugfs_create_dir("fw-statistics", rootdir);
|
||||
if (!stats || IS_ERR(stats)) {
|
||||
|
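Illustration (not part of the patch): the two debugfs write handlers above share one parse-and-validate shape (bounded copy_from_user, NUL-terminate, kstrtoul, range check). The standalone userspace model below reproduces that shape for the interval handler, with the kernel helpers replaced by their C-library counterparts; names and the main() harness are invented for the example.

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/*
 * Model of the rx_streaming_interval write path: parse a decimal string
 * and accept 0 (disable) or 10..100, as the debugfs handler above does.
 */
static int parse_rx_streaming_interval(const char *user_buf, size_t count,
				       unsigned long *out)
{
	char buf[10];
	size_t len = count < sizeof(buf) - 1 ? count : sizeof(buf) - 1;
	char *end;
	unsigned long value;

	memcpy(buf, user_buf, len);	/* stands in for copy_from_user() */
	buf[len] = '\0';

	value = strtoul(buf, &end, 0);	/* stands in for kstrtoul() */
	if (end == buf)
		return -EINVAL;

	if (value && (value < 10 || value > 100))
		return -ERANGE;

	*out = value;
	return 0;
}

int main(void)
{
	unsigned long v;

	printf("\"20\" -> %d (v=%lu)\n", parse_rx_streaming_interval("20", 2, &v), v);
	printf("\"5\"  -> %d\n", parse_rx_streaming_interval("5", 1, &v));
	printf("\"0\"  -> %d (v=%lu)\n", parse_rx_streaming_interval("0", 1, &v), v);
	return 0;
}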
@ -640,6 +766,14 @@ static int wl1271_debugfs_add_files(struct wl1271 *wl,
|
|||
DEBUGFS_ADD(dtim_interval, rootdir);
|
||||
DEBUGFS_ADD(beacon_interval, rootdir);
|
||||
|
||||
streaming = debugfs_create_dir("rx_streaming", rootdir);
|
||||
if (!streaming || IS_ERR(streaming))
|
||||
goto err;
|
||||
|
||||
DEBUGFS_ADD_PREFIX(rx_streaming, interval, streaming);
|
||||
DEBUGFS_ADD_PREFIX(rx_streaming, always, streaming);
|
||||
|
||||
|
||||
return 0;
|
||||
|
||||
err:
|
||||
|
|
|
@ -133,10 +133,13 @@ static int wl1271_event_ps_report(struct wl1271 *wl,
|
|||
if (ret < 0)
|
||||
break;
|
||||
|
||||
/* enable beacon early termination */
|
||||
ret = wl1271_acx_bet_enable(wl, true);
|
||||
if (ret < 0)
|
||||
break;
|
||||
/*
|
||||
* BET has only a minor effect in 5GHz and masks
|
||||
* channel switch IEs, so we only enable BET on 2.4GHz
|
||||
*/
|
||||
if (wl->band == IEEE80211_BAND_2GHZ)
|
||||
/* enable beacon early termination */
|
||||
ret = wl1271_acx_bet_enable(wl, true);
|
||||
|
||||
if (wl->ps_compl) {
|
||||
complete(wl->ps_compl);
|
||||
|
@ -183,6 +186,21 @@ static void wl1271_stop_ba_event(struct wl1271 *wl, u8 ba_allowed)
|
|||
ieee80211_stop_rx_ba_session(wl->vif, wl->ba_rx_bitmap, wl->bssid);
|
||||
}
|
||||
|
||||
static void wl12xx_event_soft_gemini_sense(struct wl1271 *wl,
|
||||
u8 enable)
|
||||
{
|
||||
if (enable) {
|
||||
/* disable dynamic PS when requested by the firmware */
|
||||
ieee80211_disable_dyn_ps(wl->vif);
|
||||
set_bit(WL1271_FLAG_SOFT_GEMINI, &wl->flags);
|
||||
} else {
|
||||
ieee80211_enable_dyn_ps(wl->vif);
|
||||
clear_bit(WL1271_FLAG_SOFT_GEMINI, &wl->flags);
|
||||
wl1271_recalc_rx_streaming(wl);
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
static void wl1271_event_mbox_dump(struct event_mailbox *mbox)
|
||||
{
|
||||
wl1271_debug(DEBUG_EVENT, "MBOX DUMP:");
|
||||
|
@ -226,14 +244,10 @@ static int wl1271_event_process(struct wl1271 *wl, struct event_mailbox *mbox)
|
|||
}
|
||||
}
|
||||
|
||||
/* disable dynamic PS when requested by the firmware */
|
||||
if (vector & SOFT_GEMINI_SENSE_EVENT_ID &&
|
||||
wl->bss_type == BSS_TYPE_STA_BSS) {
|
||||
if (mbox->soft_gemini_sense_info)
|
||||
ieee80211_disable_dyn_ps(wl->vif);
|
||||
else
|
||||
ieee80211_enable_dyn_ps(wl->vif);
|
||||
}
|
||||
wl->bss_type == BSS_TYPE_STA_BSS)
|
||||
wl12xx_event_soft_gemini_sense(wl,
|
||||
mbox->soft_gemini_sense_info);
|
||||
|
||||
/*
|
||||
* The BSS_LOSE_EVENT_ID is only needed while psm (and hence beacon
|
||||
|
|
|
@ -24,6 +24,9 @@
|
|||
#ifndef __INI_H__
|
||||
#define __INI_H__
|
||||
|
||||
#define GENERAL_SETTINGS_DRPW_LPD 0xc0
|
||||
#define SCRATCH_ENABLE_LPD BIT(25)
|
||||
|
||||
#define WL1271_INI_MAX_SMART_REFLEX_PARAM 16
|
||||
|
||||
struct wl1271_ini_general_params {
|
||||
|
|
|
@ -321,6 +321,20 @@ static int wl1271_init_beacon_broadcast(struct wl1271 *wl)
|
|||
return 0;
|
||||
}
|
||||
|
||||
static int wl12xx_init_fwlog(struct wl1271 *wl)
|
||||
{
|
||||
int ret;
|
||||
|
||||
if (wl->quirks & WL12XX_QUIRK_FWLOG_NOT_IMPLEMENTED)
|
||||
return 0;
|
||||
|
||||
ret = wl12xx_cmd_config_fwlog(wl);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int wl1271_sta_hw_init(struct wl1271 *wl)
|
||||
{
|
||||
int ret;
|
||||
|
@ -382,6 +396,11 @@ static int wl1271_sta_hw_init(struct wl1271 *wl)
|
|||
if (ret < 0)
|
||||
return ret;
|
||||
|
||||
/* Configure the FW logger */
|
||||
ret = wl12xx_init_fwlog(wl);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
|
|
@ -23,7 +23,6 @@
|
|||
|
||||
#include <linux/module.h>
|
||||
#include <linux/platform_device.h>
|
||||
#include <linux/crc7.h>
|
||||
#include <linux/spi/spi.h>
|
||||
|
||||
#include "wl12xx.h"
|
||||
|
@ -128,12 +127,14 @@ EXPORT_SYMBOL_GPL(wl1271_set_partition);
|
|||
|
||||
void wl1271_io_reset(struct wl1271 *wl)
|
||||
{
|
||||
wl->if_ops->reset(wl);
|
||||
if (wl->if_ops->reset)
|
||||
wl->if_ops->reset(wl);
|
||||
}
|
||||
|
||||
void wl1271_io_init(struct wl1271 *wl)
|
||||
{
|
||||
wl->if_ops->init(wl);
|
||||
if (wl->if_ops->init)
|
||||
wl->if_ops->init(wl);
|
||||
}
|
||||
|
||||
void wl1271_top_reg_write(struct wl1271 *wl, int addr, u16 val)
|
||||
|
|
|
@ -129,6 +129,20 @@ static inline void wl1271_write(struct wl1271 *wl, int addr, void *buf,
|
|||
wl1271_raw_write(wl, physical, buf, len, fixed);
|
||||
}
|
||||
|
||||
static inline void wl1271_read_hwaddr(struct wl1271 *wl, int hwaddr,
|
||||
void *buf, size_t len, bool fixed)
|
||||
{
|
||||
int physical;
|
||||
int addr;
|
||||
|
||||
/* Addresses are stored internally as addresses to 32 bytes blocks */
|
||||
addr = hwaddr << 5;
|
||||
|
||||
physical = wl1271_translate_addr(wl, addr);
|
||||
|
||||
wl1271_raw_read(wl, physical, buf, len, fixed);
|
||||
}
|
||||
|
||||
static inline u32 wl1271_read32(struct wl1271 *wl, int addr)
|
||||
{
|
||||
return wl1271_raw_read32(wl, wl1271_translate_addr(wl, addr));
|
||||
|
|
|
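Illustration (not part of the patch): wl1271_read_hwaddr() above turns a hardware block handle into a byte address because firmware memory is handed out in 32-byte blocks (addr = hwaddr << 5). A trivial standalone restatement of that conversion, with example values only:

#include <stdint.h>
#include <stdio.h>

/* Firmware addresses are expressed in units of 32-byte blocks. */
static uint32_t hwaddr_to_bytes(uint32_t hwaddr)
{
	return hwaddr << 5;
}

int main(void)
{
	uint32_t hwaddr = 0x10;	/* example block handle */

	printf("block 0x%x -> byte address 0x%x\n", hwaddr, hwaddr_to_bytes(hwaddr));
	return 0;
}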
@ -31,6 +31,7 @@
|
|||
#include <linux/platform_device.h>
|
||||
#include <linux/slab.h>
|
||||
#include <linux/wl12xx.h>
|
||||
#include <linux/sched.h>
|
||||
|
||||
#include "wl12xx.h"
|
||||
#include "wl12xx_80211.h"
|
||||
|
@ -362,9 +363,25 @@ static struct conf_drv_settings default_conf = {
|
|||
.fm_disturbed_band_margin = 0xff, /* default */
|
||||
.swallow_clk_diff = 0xff, /* default */
|
||||
},
|
||||
.rx_streaming = {
|
||||
.duration = 150,
|
||||
.queues = 0x1,
|
||||
.interval = 20,
|
||||
.always = 0,
|
||||
},
|
||||
.fwlog = {
|
||||
.mode = WL12XX_FWLOG_ON_DEMAND,
|
||||
.mem_blocks = 2,
|
||||
.severity = 0,
|
||||
.timestamp = WL12XX_FWLOG_TIMESTAMP_DISABLED,
|
||||
.output = WL12XX_FWLOG_OUTPUT_HOST,
|
||||
.threshold = 0,
|
||||
},
|
||||
.hci_io_ds = HCI_IO_DS_6MA,
|
||||
};
|
||||
|
||||
static char *fwlog_param;
|
||||
|
||||
static void __wl1271_op_remove_interface(struct wl1271 *wl,
|
||||
bool reset_tx_queues);
|
||||
static void wl1271_free_ap_keys(struct wl1271 *wl);
|
||||
|
@ -388,6 +405,22 @@ static struct platform_device wl1271_device = {
|
|||
static DEFINE_MUTEX(wl_list_mutex);
|
||||
static LIST_HEAD(wl_list);
|
||||
|
||||
static int wl1271_check_operstate(struct wl1271 *wl, unsigned char operstate)
|
||||
{
|
||||
int ret;
|
||||
if (operstate != IF_OPER_UP)
|
||||
return 0;
|
||||
|
||||
if (test_and_set_bit(WL1271_FLAG_STA_STATE_SENT, &wl->flags))
|
||||
return 0;
|
||||
|
||||
ret = wl1271_cmd_set_sta_state(wl);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
|
||||
wl1271_info("Association completed.");
|
||||
return 0;
|
||||
}
|
||||
static int wl1271_dev_notify(struct notifier_block *me, unsigned long what,
|
||||
void *arg)
|
||||
{
|
||||
|
@ -437,11 +470,7 @@ static int wl1271_dev_notify(struct notifier_block *me, unsigned long what,
|
|||
if (ret < 0)
|
||||
goto out;
|
||||
|
||||
if ((dev->operstate == IF_OPER_UP) &&
|
||||
!test_and_set_bit(WL1271_FLAG_STA_STATE_SENT, &wl->flags)) {
|
||||
wl1271_cmd_set_sta_state(wl);
|
||||
wl1271_info("Association completed.");
|
||||
}
|
||||
wl1271_check_operstate(wl, dev->operstate);
|
||||
|
||||
wl1271_ps_elp_sleep(wl);
|
||||
|
||||
|
@ -473,6 +502,117 @@ static int wl1271_reg_notify(struct wiphy *wiphy,
|
|||
return 0;
|
||||
}
|
||||
|
||||
static int wl1271_set_rx_streaming(struct wl1271 *wl, bool enable)
|
||||
{
|
||||
int ret = 0;
|
||||
|
||||
/* we should hold wl->mutex */
|
||||
ret = wl1271_acx_ps_rx_streaming(wl, enable);
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
|
||||
if (enable)
|
||||
set_bit(WL1271_FLAG_RX_STREAMING_STARTED, &wl->flags);
|
||||
else
|
||||
clear_bit(WL1271_FLAG_RX_STREAMING_STARTED, &wl->flags);
|
||||
out:
|
||||
return ret;
|
||||
}
|
||||
|
||||
/*
|
||||
* this function is being called when the rx_streaming interval
|
||||
* has been changed or rx_streaming should be disabled
|
||||
*/
|
||||
int wl1271_recalc_rx_streaming(struct wl1271 *wl)
|
||||
{
|
||||
int ret = 0;
|
||||
int period = wl->conf.rx_streaming.interval;
|
||||
|
||||
/* don't reconfigure if rx_streaming is disabled */
|
||||
if (!test_bit(WL1271_FLAG_RX_STREAMING_STARTED, &wl->flags))
|
||||
goto out;
|
||||
|
||||
/* reconfigure/disable according to new streaming_period */
|
||||
if (period &&
|
||||
test_bit(WL1271_FLAG_STA_ASSOCIATED, &wl->flags) &&
|
||||
(wl->conf.rx_streaming.always ||
|
||||
test_bit(WL1271_FLAG_SOFT_GEMINI, &wl->flags)))
|
||||
ret = wl1271_set_rx_streaming(wl, true);
|
||||
else {
|
||||
ret = wl1271_set_rx_streaming(wl, false);
|
||||
/* don't cancel_work_sync since we might deadlock */
|
||||
del_timer_sync(&wl->rx_streaming_timer);
|
||||
}
|
||||
out:
|
||||
return ret;
|
||||
}
|
||||
|
||||
static void wl1271_rx_streaming_enable_work(struct work_struct *work)
|
||||
{
|
||||
int ret;
|
||||
struct wl1271 *wl =
|
||||
container_of(work, struct wl1271, rx_streaming_enable_work);
|
||||
|
||||
mutex_lock(&wl->mutex);
|
||||
|
||||
if (test_bit(WL1271_FLAG_RX_STREAMING_STARTED, &wl->flags) ||
|
||||
!test_bit(WL1271_FLAG_STA_ASSOCIATED, &wl->flags) ||
|
||||
(!wl->conf.rx_streaming.always &&
|
||||
!test_bit(WL1271_FLAG_SOFT_GEMINI, &wl->flags)))
|
||||
goto out;
|
||||
|
||||
if (!wl->conf.rx_streaming.interval)
|
||||
goto out;
|
||||
|
||||
ret = wl1271_ps_elp_wakeup(wl);
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
|
||||
ret = wl1271_set_rx_streaming(wl, true);
|
||||
if (ret < 0)
|
||||
goto out_sleep;
|
||||
|
||||
/* stop it after some time of inactivity */
|
||||
mod_timer(&wl->rx_streaming_timer,
|
||||
jiffies + msecs_to_jiffies(wl->conf.rx_streaming.duration));
|
||||
|
||||
out_sleep:
|
||||
wl1271_ps_elp_sleep(wl);
|
||||
out:
|
||||
mutex_unlock(&wl->mutex);
|
||||
}
|
||||
|
||||
static void wl1271_rx_streaming_disable_work(struct work_struct *work)
|
||||
{
|
||||
int ret;
|
||||
struct wl1271 *wl =
|
||||
container_of(work, struct wl1271, rx_streaming_disable_work);
|
||||
|
||||
mutex_lock(&wl->mutex);
|
||||
|
||||
if (!test_bit(WL1271_FLAG_RX_STREAMING_STARTED, &wl->flags))
|
||||
goto out;
|
||||
|
||||
ret = wl1271_ps_elp_wakeup(wl);
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
|
||||
ret = wl1271_set_rx_streaming(wl, false);
|
||||
if (ret)
|
||||
goto out_sleep;
|
||||
|
||||
out_sleep:
|
||||
wl1271_ps_elp_sleep(wl);
|
||||
out:
|
||||
mutex_unlock(&wl->mutex);
|
||||
}
|
||||
|
||||
static void wl1271_rx_streaming_timer(unsigned long data)
|
||||
{
|
||||
struct wl1271 *wl = (struct wl1271 *)data;
|
||||
ieee80211_queue_work(wl->hw, &wl->rx_streaming_disable_work);
|
||||
}
|
||||
|
||||
static void wl1271_conf_init(struct wl1271 *wl)
|
||||
{
|
||||
|
||||
|
@ -488,8 +628,24 @@ static void wl1271_conf_init(struct wl1271 *wl)
|
|||
|
||||
/* apply driver default configuration */
|
||||
memcpy(&wl->conf, &default_conf, sizeof(default_conf));
|
||||
}
|
||||
|
||||
/* Adjust settings according to optional module parameters */
|
||||
if (fwlog_param) {
|
||||
if (!strcmp(fwlog_param, "continuous")) {
|
||||
wl->conf.fwlog.mode = WL12XX_FWLOG_CONTINUOUS;
|
||||
} else if (!strcmp(fwlog_param, "ondemand")) {
|
||||
wl->conf.fwlog.mode = WL12XX_FWLOG_ON_DEMAND;
|
||||
} else if (!strcmp(fwlog_param, "dbgpins")) {
|
||||
wl->conf.fwlog.mode = WL12XX_FWLOG_CONTINUOUS;
|
||||
wl->conf.fwlog.output = WL12XX_FWLOG_OUTPUT_DBG_PINS;
|
||||
} else if (!strcmp(fwlog_param, "disable")) {
|
||||
wl->conf.fwlog.mem_blocks = 0;
|
||||
wl->conf.fwlog.output = WL12XX_FWLOG_OUTPUT_NONE;
|
||||
} else {
|
||||
wl1271_error("Unknown fwlog parameter %s", fwlog_param);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
static int wl1271_plt_init(struct wl1271 *wl)
|
||||
{
|
||||
|
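Illustration (not part of the patch): the fwlog= module parameter parsed above accepts continuous, ondemand, dbgpins or disable. A small standalone model of that string-to-setting mapping, using stand-in constants rather than the driver's enums:

#include <stdio.h>
#include <string.h>

/* Stand-ins for the patch's WL12XX_FWLOG_* values, in the same order. */
enum { FWLOG_CONTINUOUS, FWLOG_ON_DEMAND };
enum { FWLOG_OUT_NONE, FWLOG_OUT_DBG_PINS, FWLOG_OUT_HOST };

struct fwlog_conf { int mode; int output; int mem_blocks; };

static void apply_fwlog_param(struct fwlog_conf *c, const char *param)
{
	if (!strcmp(param, "continuous")) {
		c->mode = FWLOG_CONTINUOUS;
	} else if (!strcmp(param, "ondemand")) {
		c->mode = FWLOG_ON_DEMAND;
	} else if (!strcmp(param, "dbgpins")) {
		c->mode = FWLOG_CONTINUOUS;
		c->output = FWLOG_OUT_DBG_PINS;
	} else if (!strcmp(param, "disable")) {
		c->mem_blocks = 0;
		c->output = FWLOG_OUT_NONE;
	} else {
		fprintf(stderr, "Unknown fwlog parameter %s\n", param);
	}
}

int main(void)
{
	struct fwlog_conf c = { FWLOG_ON_DEMAND, FWLOG_OUT_HOST, 2 };

	apply_fwlog_param(&c, "dbgpins");
	printf("mode=%d output=%d mem_blocks=%d\n", c.mode, c.output, c.mem_blocks);
	return 0;
}

With the module_param_named() hook added above, passing fwlog=continuous on the modprobe command line would take the first branch.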
@ -741,7 +897,7 @@ static void wl1271_flush_deferred_work(struct wl1271 *wl)
|
|||
|
||||
/* Return sent skbs to the network stack */
|
||||
while ((skb = skb_dequeue(&wl->deferred_tx_queue)))
|
||||
ieee80211_tx_status(wl->hw, skb);
|
||||
ieee80211_tx_status_ni(wl->hw, skb);
|
||||
}
|
||||
|
||||
static void wl1271_netstack_work(struct work_struct *work)
|
||||
|
@ -808,7 +964,7 @@ irqreturn_t wl1271_irq(int irq, void *cookie)
|
|||
if (unlikely(intr & WL1271_ACX_INTR_WATCHDOG)) {
|
||||
wl1271_error("watchdog interrupt received! "
|
||||
"starting recovery.");
|
||||
ieee80211_queue_work(wl->hw, &wl->recovery_work);
|
||||
wl12xx_queue_recovery_work(wl);
|
||||
|
||||
/* restarting the chip. ignore any other interrupt. */
|
||||
goto out;
|
||||
|
@ -970,6 +1126,89 @@ out:
|
|||
return ret;
|
||||
}
|
||||
|
||||
void wl12xx_queue_recovery_work(struct wl1271 *wl)
|
||||
{
|
||||
if (!test_bit(WL1271_FLAG_RECOVERY_IN_PROGRESS, &wl->flags))
|
||||
ieee80211_queue_work(wl->hw, &wl->recovery_work);
|
||||
}
|
||||
|
||||
size_t wl12xx_copy_fwlog(struct wl1271 *wl, u8 *memblock, size_t maxlen)
|
||||
{
|
||||
size_t len = 0;
|
||||
|
||||
/* The FW log is a length-value list; find where the log ends */
|
||||
while (len < maxlen) {
|
||||
if (memblock[len] == 0)
|
||||
break;
|
||||
if (len + memblock[len] + 1 > maxlen)
|
||||
break;
|
||||
len += memblock[len] + 1;
|
||||
}
|
||||
|
||||
/* Make sure we have enough room */
|
||||
len = min(len, (size_t)(PAGE_SIZE - wl->fwlog_size));
|
||||
|
||||
/* Fill the FW log file, consumed by the sysfs fwlog entry */
|
||||
memcpy(wl->fwlog + wl->fwlog_size, memblock, len);
|
||||
wl->fwlog_size += len;
|
||||
|
||||
return len;
|
||||
}
|
||||
|
||||
static void wl12xx_read_fwlog_panic(struct wl1271 *wl)
|
||||
{
|
||||
u32 addr;
|
||||
u32 first_addr;
|
||||
u8 *block;
|
||||
|
||||
if ((wl->quirks & WL12XX_QUIRK_FWLOG_NOT_IMPLEMENTED) ||
|
||||
(wl->conf.fwlog.mode != WL12XX_FWLOG_ON_DEMAND) ||
|
||||
(wl->conf.fwlog.mem_blocks == 0))
|
||||
return;
|
||||
|
||||
wl1271_info("Reading FW panic log");
|
||||
|
||||
block = kmalloc(WL12XX_HW_BLOCK_SIZE, GFP_KERNEL);
|
||||
if (!block)
|
||||
return;
|
||||
|
||||
/*
|
||||
* Make sure the chip is awake and the logger isn't active.
|
||||
* This might fail if the firmware hanged.
|
||||
*/
|
||||
if (!wl1271_ps_elp_wakeup(wl))
|
||||
wl12xx_cmd_stop_fwlog(wl);
|
||||
|
||||
/* Read the first memory block address */
|
||||
wl1271_fw_status(wl, wl->fw_status);
|
||||
first_addr = __le32_to_cpu(wl->fw_status->sta.log_start_addr);
|
||||
if (!first_addr)
|
||||
goto out;
|
||||
|
||||
/* Traverse the memory blocks linked list */
|
||||
addr = first_addr;
|
||||
do {
|
||||
memset(block, 0, WL12XX_HW_BLOCK_SIZE);
|
||||
wl1271_read_hwaddr(wl, addr, block, WL12XX_HW_BLOCK_SIZE,
|
||||
false);
|
||||
|
||||
/*
|
||||
* Memory blocks are linked to one another. The first 4 bytes
|
||||
* of each memory block hold the hardware address of the next
|
||||
* one. The last memory block points to the first one.
|
||||
*/
|
||||
addr = __le32_to_cpup((__le32 *)block);
|
||||
if (!wl12xx_copy_fwlog(wl, block + sizeof(addr),
|
||||
WL12XX_HW_BLOCK_SIZE - sizeof(addr)))
|
||||
break;
|
||||
} while (addr && (addr != first_addr));
|
||||
|
||||
wake_up_interruptible(&wl->fwlog_waitq);
|
||||
|
||||
out:
|
||||
kfree(block);
|
||||
}
|
||||
|
||||
static void wl1271_recovery_work(struct work_struct *work)
|
||||
{
|
||||
struct wl1271 *wl =
|
||||
|
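Illustration (not part of the patch): wl12xx_copy_fwlog() above treats each memory block as a run of length-prefixed records, where a zero length byte (or a record overrunning the block) ends the log. The standalone sketch below models that scan; the helper name and the sample data are invented for the example.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Return how many bytes of 'memblock' form complete length-value records:
 * each record is one length byte followed by that many payload bytes.
 */
static size_t fwlog_valid_len(const uint8_t *memblock, size_t maxlen)
{
	size_t len = 0;

	while (len < maxlen) {
		if (memblock[len] == 0)
			break;
		if (len + memblock[len] + 1 > maxlen)
			break;
		len += memblock[len] + 1;
	}
	return len;
}

int main(void)
{
	/* two records (3 and 2 payload bytes), then a terminating zero */
	const uint8_t block[] = { 3, 'a', 'b', 'c', 2, 'x', 'y', 0, 0xff };

	printf("usable bytes: %zu\n", fwlog_valid_len(block, sizeof(block)));
	return 0;
}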
@ -980,6 +1219,11 @@ static void wl1271_recovery_work(struct work_struct *work)
|
|||
if (wl->state != WL1271_STATE_ON)
|
||||
goto out;
|
||||
|
||||
/* Avoid a recursive recovery */
|
||||
set_bit(WL1271_FLAG_RECOVERY_IN_PROGRESS, &wl->flags);
|
||||
|
||||
wl12xx_read_fwlog_panic(wl);
|
||||
|
||||
wl1271_info("Hardware recovery in progress. FW ver: %s pc: 0x%x",
|
||||
wl->chip.fw_ver_str, wl1271_read32(wl, SCR_PAD4));
|
||||
|
||||
|
@ -996,6 +1240,9 @@ static void wl1271_recovery_work(struct work_struct *work)
|
|||
|
||||
/* reboot the chipset */
|
||||
__wl1271_op_remove_interface(wl, false);
|
||||
|
||||
clear_bit(WL1271_FLAG_RECOVERY_IN_PROGRESS, &wl->flags);
|
||||
|
||||
ieee80211_restart_hw(wl->hw);
|
||||
|
||||
/*
|
||||
|
@ -1074,9 +1321,13 @@ static int wl1271_chip_wakeup(struct wl1271 *wl)
|
|||
wl1271_debug(DEBUG_BOOT, "chip id 0x%x (1271 PG20)",
|
||||
wl->chip.id);
|
||||
|
||||
/* end-of-transaction flag should be set in wl127x AP mode */
|
||||
/*
|
||||
* 'end-of-transaction flag' and 'LPD mode flag'
|
||||
* should be set in wl127x AP mode only
|
||||
*/
|
||||
if (wl->bss_type == BSS_TYPE_AP_BSS)
|
||||
wl->quirks |= WL12XX_QUIRK_END_OF_TRANSACTION;
|
||||
wl->quirks |= (WL12XX_QUIRK_END_OF_TRANSACTION |
|
||||
WL12XX_QUIRK_LPD_MODE);
|
||||
|
||||
ret = wl1271_setup(wl);
|
||||
if (ret < 0)
|
||||
|
@ -1089,6 +1340,7 @@ static int wl1271_chip_wakeup(struct wl1271 *wl)
|
|||
ret = wl1271_setup(wl);
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
|
||||
if (wl1271_set_block_size(wl))
|
||||
wl->quirks |= WL12XX_QUIRK_BLOCKSIZE_ALIGNMENT;
|
||||
break;
|
||||
|
@ -1117,24 +1369,6 @@ out:
|
|||
return ret;
|
||||
}
|
||||
|
||||
static unsigned int wl1271_get_fw_ver_quirks(struct wl1271 *wl)
|
||||
{
|
||||
unsigned int quirks = 0;
|
||||
unsigned int *fw_ver = wl->chip.fw_ver;
|
||||
|
||||
/* Only for wl127x */
|
||||
if ((fw_ver[FW_VER_CHIP] == FW_VER_CHIP_WL127X) &&
|
||||
/* Check STA version */
|
||||
(((fw_ver[FW_VER_IF_TYPE] == FW_VER_IF_TYPE_STA) &&
|
||||
(fw_ver[FW_VER_MINOR] < FW_VER_MINOR_1_SPARE_STA_MIN)) ||
|
||||
/* Check AP version */
|
||||
((fw_ver[FW_VER_IF_TYPE] == FW_VER_IF_TYPE_AP) &&
|
||||
(fw_ver[FW_VER_MINOR] < FW_VER_MINOR_1_SPARE_AP_MIN))))
|
||||
quirks |= WL12XX_QUIRK_USE_2_SPARE_BLOCKS;
|
||||
|
||||
return quirks;
|
||||
}
|
||||
|
||||
int wl1271_plt_start(struct wl1271 *wl)
|
||||
{
|
||||
int retries = WL1271_BOOT_RETRIES;
|
||||
|
@ -1171,8 +1405,6 @@ int wl1271_plt_start(struct wl1271 *wl)
|
|||
wl1271_notice("firmware booted in PLT mode (%s)",
|
||||
wl->chip.fw_ver_str);
|
||||
|
||||
/* Check if any quirks are needed with older fw versions */
|
||||
wl->quirks |= wl1271_get_fw_ver_quirks(wl);
|
||||
goto out;
|
||||
|
||||
irq_disable:
|
||||
|
@ -1352,13 +1584,10 @@ static struct notifier_block wl1271_dev_notifier = {
|
|||
};
|
||||
|
||||
#ifdef CONFIG_PM
|
||||
static int wl1271_configure_suspend(struct wl1271 *wl)
|
||||
static int wl1271_configure_suspend_sta(struct wl1271 *wl)
|
||||
{
|
||||
int ret;
|
||||
|
||||
if (wl->bss_type != BSS_TYPE_STA_BSS)
|
||||
return 0;
|
||||
|
||||
mutex_lock(&wl->mutex);
|
||||
|
||||
ret = wl1271_ps_elp_wakeup(wl);
|
||||
|
@ -1403,11 +1632,41 @@ out:
|
|||
|
||||
}
|
||||
|
||||
static void wl1271_configure_resume(struct wl1271 *wl)
|
||||
static int wl1271_configure_suspend_ap(struct wl1271 *wl)
|
||||
{
|
||||
int ret;
|
||||
|
||||
if (wl->bss_type != BSS_TYPE_STA_BSS)
|
||||
mutex_lock(&wl->mutex);
|
||||
|
||||
ret = wl1271_ps_elp_wakeup(wl);
|
||||
if (ret < 0)
|
||||
goto out_unlock;
|
||||
|
||||
ret = wl1271_acx_set_ap_beacon_filter(wl, true);
|
||||
|
||||
wl1271_ps_elp_sleep(wl);
|
||||
out_unlock:
|
||||
mutex_unlock(&wl->mutex);
|
||||
return ret;
|
||||
|
||||
}
|
||||
|
||||
static int wl1271_configure_suspend(struct wl1271 *wl)
|
||||
{
|
||||
if (wl->bss_type == BSS_TYPE_STA_BSS)
|
||||
return wl1271_configure_suspend_sta(wl);
|
||||
if (wl->bss_type == BSS_TYPE_AP_BSS)
|
||||
return wl1271_configure_suspend_ap(wl);
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void wl1271_configure_resume(struct wl1271 *wl)
|
||||
{
|
||||
int ret;
|
||||
bool is_sta = wl->bss_type == BSS_TYPE_STA_BSS;
|
||||
bool is_ap = wl->bss_type == BSS_TYPE_AP_BSS;
|
||||
|
||||
if (!is_sta && !is_ap)
|
||||
return;
|
||||
|
||||
mutex_lock(&wl->mutex);
|
||||
|
@ -1415,10 +1674,14 @@ static void wl1271_configure_resume(struct wl1271 *wl)
|
|||
if (ret < 0)
|
||||
goto out;
|
||||
|
||||
/* exit psm if it wasn't configured */
|
||||
if (!test_bit(WL1271_FLAG_PSM_REQUESTED, &wl->flags))
|
||||
wl1271_ps_set_mode(wl, STATION_ACTIVE_MODE,
|
||||
wl->basic_rate, true);
|
||||
if (is_sta) {
|
||||
/* exit psm if it wasn't configured */
|
||||
if (!test_bit(WL1271_FLAG_PSM_REQUESTED, &wl->flags))
|
||||
wl1271_ps_set_mode(wl, STATION_ACTIVE_MODE,
|
||||
wl->basic_rate, true);
|
||||
} else if (is_ap) {
|
||||
wl1271_acx_set_ap_beacon_filter(wl, false);
|
||||
}
|
||||
|
||||
wl1271_ps_elp_sleep(wl);
|
||||
out:
|
||||
|
@ -1429,69 +1692,69 @@ static int wl1271_op_suspend(struct ieee80211_hw *hw,
|
|||
struct cfg80211_wowlan *wow)
|
||||
{
|
||||
struct wl1271 *wl = hw->priv;
|
||||
int ret;
|
||||
|
||||
wl1271_debug(DEBUG_MAC80211, "mac80211 suspend wow=%d", !!wow);
|
||||
wl->wow_enabled = !!wow;
|
||||
if (wl->wow_enabled) {
|
||||
int ret;
|
||||
ret = wl1271_configure_suspend(wl);
|
||||
if (ret < 0) {
|
||||
wl1271_warning("couldn't prepare device to suspend");
|
||||
return ret;
|
||||
}
|
||||
/* flush any remaining work */
|
||||
wl1271_debug(DEBUG_MAC80211, "flushing remaining works");
|
||||
flush_delayed_work(&wl->scan_complete_work);
|
||||
WARN_ON(!wow || !wow->any);
|
||||
|
||||
/*
|
||||
* disable and re-enable interrupts in order to flush
|
||||
* the threaded_irq
|
||||
*/
|
||||
wl1271_disable_interrupts(wl);
|
||||
|
||||
/*
|
||||
* set suspended flag to avoid triggering a new threaded_irq
|
||||
* work. no need for spinlock as interrupts are disabled.
|
||||
*/
|
||||
set_bit(WL1271_FLAG_SUSPENDED, &wl->flags);
|
||||
|
||||
wl1271_enable_interrupts(wl);
|
||||
flush_work(&wl->tx_work);
|
||||
flush_delayed_work(&wl->pspoll_work);
|
||||
flush_delayed_work(&wl->elp_work);
|
||||
wl->wow_enabled = true;
|
||||
ret = wl1271_configure_suspend(wl);
|
||||
if (ret < 0) {
|
||||
wl1271_warning("couldn't prepare device to suspend");
|
||||
return ret;
|
||||
}
|
||||
/* flush any remaining work */
|
||||
wl1271_debug(DEBUG_MAC80211, "flushing remaining works");
|
||||
flush_delayed_work(&wl->scan_complete_work);
|
||||
|
||||
/*
|
||||
* disable and re-enable interrupts in order to flush
|
||||
* the threaded_irq
|
||||
*/
|
||||
wl1271_disable_interrupts(wl);
|
||||
|
||||
/*
|
||||
* set suspended flag to avoid triggering a new threaded_irq
|
||||
* work. no need for spinlock as interrupts are disabled.
|
||||
*/
|
||||
set_bit(WL1271_FLAG_SUSPENDED, &wl->flags);
|
||||
|
||||
wl1271_enable_interrupts(wl);
|
||||
flush_work(&wl->tx_work);
|
||||
flush_delayed_work(&wl->pspoll_work);
|
||||
flush_delayed_work(&wl->elp_work);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int wl1271_op_resume(struct ieee80211_hw *hw)
|
||||
{
|
||||
struct wl1271 *wl = hw->priv;
|
||||
unsigned long flags;
|
||||
bool run_irq_work = false;
|
||||
|
||||
wl1271_debug(DEBUG_MAC80211, "mac80211 resume wow=%d",
|
||||
wl->wow_enabled);
|
||||
WARN_ON(!wl->wow_enabled);
|
||||
|
||||
/*
|
||||
* re-enable irq_work enqueuing, and call irq_work directly if
|
||||
* there is a pending work.
|
||||
*/
|
||||
if (wl->wow_enabled) {
|
||||
struct wl1271 *wl = hw->priv;
|
||||
unsigned long flags;
|
||||
bool run_irq_work = false;
|
||||
spin_lock_irqsave(&wl->wl_lock, flags);
|
||||
clear_bit(WL1271_FLAG_SUSPENDED, &wl->flags);
|
||||
if (test_and_clear_bit(WL1271_FLAG_PENDING_WORK, &wl->flags))
|
||||
run_irq_work = true;
|
||||
spin_unlock_irqrestore(&wl->wl_lock, flags);
|
||||
|
||||
spin_lock_irqsave(&wl->wl_lock, flags);
|
||||
clear_bit(WL1271_FLAG_SUSPENDED, &wl->flags);
|
||||
if (test_and_clear_bit(WL1271_FLAG_PENDING_WORK, &wl->flags))
|
||||
run_irq_work = true;
|
||||
spin_unlock_irqrestore(&wl->wl_lock, flags);
|
||||
|
||||
if (run_irq_work) {
|
||||
wl1271_debug(DEBUG_MAC80211,
|
||||
"run postponed irq_work directly");
|
||||
wl1271_irq(0, wl);
|
||||
wl1271_enable_interrupts(wl);
|
||||
}
|
||||
|
||||
wl1271_configure_resume(wl);
|
||||
if (run_irq_work) {
|
||||
wl1271_debug(DEBUG_MAC80211,
|
||||
"run postponed irq_work directly");
|
||||
wl1271_irq(0, wl);
|
||||
wl1271_enable_interrupts(wl);
|
||||
}
|
||||
wl1271_configure_resume(wl);
|
||||
wl->wow_enabled = false;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
@ -1629,9 +1892,6 @@ power_off:
|
|||
strncpy(wiphy->fw_version, wl->chip.fw_ver_str,
|
||||
sizeof(wiphy->fw_version));
|
||||
|
||||
/* Check if any quirks are needed with older fw versions */
|
||||
wl->quirks |= wl1271_get_fw_ver_quirks(wl);
|
||||
|
||||
/*
|
||||
* Now we know if 11a is supported (info from the NVS), so disable
|
||||
* 11a channels if not supported
|
||||
|
@ -1694,6 +1954,9 @@ static void __wl1271_op_remove_interface(struct wl1271 *wl,
|
|||
cancel_delayed_work_sync(&wl->scan_complete_work);
|
||||
cancel_work_sync(&wl->netstack_work);
|
||||
cancel_work_sync(&wl->tx_work);
|
||||
del_timer_sync(&wl->rx_streaming_timer);
|
||||
cancel_work_sync(&wl->rx_streaming_enable_work);
|
||||
cancel_work_sync(&wl->rx_streaming_disable_work);
|
||||
cancel_delayed_work_sync(&wl->pspoll_work);
|
||||
cancel_delayed_work_sync(&wl->elp_work);
|
||||
|
||||
|
@ -2780,24 +3043,6 @@ static void wl1271_bss_info_changed_ap(struct wl1271 *wl,
|
|||
}
|
||||
}
|
||||
|
||||
if (changed & BSS_CHANGED_IBSS) {
|
||||
wl1271_debug(DEBUG_ADHOC, "ibss_joined: %d",
|
||||
bss_conf->ibss_joined);
|
||||
|
||||
if (bss_conf->ibss_joined) {
|
||||
u32 rates = bss_conf->basic_rates;
|
||||
wl->basic_rate_set = wl1271_tx_enabled_rates_get(wl,
|
||||
rates);
|
||||
wl->basic_rate = wl1271_tx_min_rate_get(wl);
|
||||
|
||||
/* by default, use 11b rates */
|
||||
wl->rate_set = CONF_TX_IBSS_DEFAULT_RATES;
|
||||
ret = wl1271_acx_sta_rate_policies(wl);
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
}
|
||||
}
|
||||
|
||||
ret = wl1271_bss_erp_info_changed(wl, bss_conf, changed);
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
|
@ -3023,6 +3268,24 @@ static void wl1271_bss_info_changed_sta(struct wl1271 *wl,
|
|||
}
|
||||
}
|
||||
|
||||
if (changed & BSS_CHANGED_IBSS) {
|
||||
wl1271_debug(DEBUG_ADHOC, "ibss_joined: %d",
|
||||
bss_conf->ibss_joined);
|
||||
|
||||
if (bss_conf->ibss_joined) {
|
||||
u32 rates = bss_conf->basic_rates;
|
||||
wl->basic_rate_set = wl1271_tx_enabled_rates_get(wl,
|
||||
rates);
|
||||
wl->basic_rate = wl1271_tx_min_rate_get(wl);
|
||||
|
||||
/* by default, use 11b rates */
|
||||
wl->rate_set = CONF_TX_IBSS_DEFAULT_RATES;
|
||||
ret = wl1271_acx_sta_rate_policies(wl);
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
}
|
||||
}
|
||||
|
||||
ret = wl1271_bss_erp_info_changed(wl, bss_conf, changed);
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
|
@ -3061,6 +3324,7 @@ static void wl1271_bss_info_changed_sta(struct wl1271 *wl,
|
|||
wl1271_warning("cmd join failed %d", ret);
|
||||
goto out;
|
||||
}
|
||||
wl1271_check_operstate(wl, ieee80211_get_operstate(vif));
|
||||
}
|
||||
|
||||
out:
|
||||
|
@ -3784,6 +4048,69 @@ static ssize_t wl1271_sysfs_show_hw_pg_ver(struct device *dev,
|
|||
static DEVICE_ATTR(hw_pg_ver, S_IRUGO | S_IWUSR,
|
||||
wl1271_sysfs_show_hw_pg_ver, NULL);
|
||||
|
||||
static ssize_t wl1271_sysfs_read_fwlog(struct file *filp, struct kobject *kobj,
|
||||
struct bin_attribute *bin_attr,
|
||||
char *buffer, loff_t pos, size_t count)
|
||||
{
|
||||
struct device *dev = container_of(kobj, struct device, kobj);
|
||||
struct wl1271 *wl = dev_get_drvdata(dev);
|
||||
ssize_t len;
|
||||
int ret;
|
||||
|
||||
ret = mutex_lock_interruptible(&wl->mutex);
|
||||
if (ret < 0)
|
||||
return -ERESTARTSYS;
|
||||
|
||||
/* Let only one thread read the log at a time, blocking others */
|
||||
while (wl->fwlog_size == 0) {
|
||||
DEFINE_WAIT(wait);
|
||||
|
||||
prepare_to_wait_exclusive(&wl->fwlog_waitq,
|
||||
&wait,
|
||||
TASK_INTERRUPTIBLE);
|
||||
|
||||
if (wl->fwlog_size != 0) {
|
||||
finish_wait(&wl->fwlog_waitq, &wait);
|
||||
break;
|
||||
}
|
||||
|
||||
mutex_unlock(&wl->mutex);
|
||||
|
||||
schedule();
|
||||
finish_wait(&wl->fwlog_waitq, &wait);
|
||||
|
||||
if (signal_pending(current))
|
||||
return -ERESTARTSYS;
|
||||
|
||||
ret = mutex_lock_interruptible(&wl->mutex);
|
||||
if (ret < 0)
|
||||
return -ERESTARTSYS;
|
||||
}
|
||||
|
||||
/* Check if the fwlog is still valid */
|
||||
if (wl->fwlog_size < 0) {
|
||||
mutex_unlock(&wl->mutex);
|
||||
return 0;
|
||||
}
|
||||
|
||||
/* Seeking is not supported - old logs are not kept. Disregard pos. */
|
||||
len = min(count, (size_t)wl->fwlog_size);
|
||||
wl->fwlog_size -= len;
|
||||
memcpy(buffer, wl->fwlog, len);
|
||||
|
||||
/* Make room for new messages */
|
||||
memmove(wl->fwlog, wl->fwlog + len, wl->fwlog_size);
|
||||
|
||||
mutex_unlock(&wl->mutex);
|
||||
|
||||
return len;
|
||||
}
|
||||
|
||||
static struct bin_attribute fwlog_attr = {
|
||||
.attr = {.name = "fwlog", .mode = S_IRUSR},
|
||||
.read = wl1271_sysfs_read_fwlog,
|
||||
};
|
||||
|
||||
int wl1271_register_hw(struct wl1271 *wl)
|
||||
{
|
||||
int ret;
|
||||
|
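Illustration (not part of the patch): the "fwlog" binary sysfs attribute added above blocks readers until log data arrives, returns it, and discards what was read. A minimal userspace consumer could look like the sketch below; the sysfs path depends on how the wl12xx platform device is named on a given board, so the path used here is only an example.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* Example path only; locate the attribute under the wl12xx platform device. */
	const char *path = "/sys/devices/platform/wl1271/fwlog";
	char buf[4096];
	ssize_t n;
	int fd = open(path, O_RDONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* read() blocks until the driver has log data, then drains it */
	while ((n = read(fd, buf, sizeof(buf))) > 0)
		fwrite(buf, 1, (size_t)n, stdout);

	close(fd);
	return 0;
}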
@ -3964,6 +4291,17 @@ struct ieee80211_hw *wl1271_alloc_hw(void)
|
|||
INIT_WORK(&wl->tx_work, wl1271_tx_work);
|
||||
INIT_WORK(&wl->recovery_work, wl1271_recovery_work);
|
||||
INIT_DELAYED_WORK(&wl->scan_complete_work, wl1271_scan_complete_work);
|
||||
INIT_WORK(&wl->rx_streaming_enable_work,
|
||||
wl1271_rx_streaming_enable_work);
|
||||
INIT_WORK(&wl->rx_streaming_disable_work,
|
||||
wl1271_rx_streaming_disable_work);
|
||||
|
||||
wl->freezable_wq = create_freezable_workqueue("wl12xx_wq");
|
||||
if (!wl->freezable_wq) {
|
||||
ret = -ENOMEM;
|
||||
goto err_hw;
|
||||
}
|
||||
|
||||
wl->channel = WL1271_DEFAULT_CHANNEL;
|
||||
wl->beacon_int = WL1271_DEFAULT_BEACON_INT;
|
||||
wl->default_key = 0;
|
||||
|
@ -3989,6 +4327,10 @@ struct ieee80211_hw *wl1271_alloc_hw(void)
|
|||
wl->quirks = 0;
|
||||
wl->platform_quirks = 0;
|
||||
wl->sched_scanning = false;
|
||||
setup_timer(&wl->rx_streaming_timer, wl1271_rx_streaming_timer,
|
||||
(unsigned long) wl);
|
||||
wl->fwlog_size = 0;
|
||||
init_waitqueue_head(&wl->fwlog_waitq);
|
||||
|
||||
memset(wl->tx_frames_map, 0, sizeof(wl->tx_frames_map));
|
||||
for (i = 0; i < ACX_TX_DESCRIPTORS; i++)
|
||||
|
@ -4006,7 +4348,7 @@ struct ieee80211_hw *wl1271_alloc_hw(void)
|
|||
wl->aggr_buf = (u8 *)__get_free_pages(GFP_KERNEL, order);
|
||||
if (!wl->aggr_buf) {
|
||||
ret = -ENOMEM;
|
||||
goto err_hw;
|
||||
goto err_wq;
|
||||
}
|
||||
|
||||
wl->dummy_packet = wl12xx_alloc_dummy_packet(wl);
|
||||
|
@ -4015,11 +4357,18 @@ struct ieee80211_hw *wl1271_alloc_hw(void)
|
|||
goto err_aggr;
|
||||
}
|
||||
|
||||
/* Allocate one page for the FW log */
|
||||
wl->fwlog = (u8 *)get_zeroed_page(GFP_KERNEL);
|
||||
if (!wl->fwlog) {
|
||||
ret = -ENOMEM;
|
||||
goto err_dummy_packet;
|
||||
}
|
||||
|
||||
/* Register platform device */
|
||||
ret = platform_device_register(wl->plat_dev);
|
||||
if (ret) {
|
||||
wl1271_error("couldn't register platform device");
|
||||
goto err_dummy_packet;
|
||||
goto err_fwlog;
|
||||
}
|
||||
dev_set_drvdata(&wl->plat_dev->dev, wl);
|
||||
|
||||
|
@ -4037,20 +4386,36 @@ struct ieee80211_hw *wl1271_alloc_hw(void)
|
|||
goto err_bt_coex_state;
|
||||
}
|
||||
|
||||
/* Create sysfs file for the FW log */
|
||||
ret = device_create_bin_file(&wl->plat_dev->dev, &fwlog_attr);
|
||||
if (ret < 0) {
|
||||
wl1271_error("failed to create sysfs file fwlog");
|
||||
goto err_hw_pg_ver;
|
||||
}
|
||||
|
||||
return hw;
|
||||
|
||||
err_hw_pg_ver:
|
||||
device_remove_file(&wl->plat_dev->dev, &dev_attr_hw_pg_ver);
|
||||
|
||||
err_bt_coex_state:
|
||||
device_remove_file(&wl->plat_dev->dev, &dev_attr_bt_coex_state);
|
||||
|
||||
err_platform:
|
||||
platform_device_unregister(wl->plat_dev);
|
||||
|
||||
err_fwlog:
|
||||
free_page((unsigned long)wl->fwlog);
|
||||
|
||||
err_dummy_packet:
|
||||
dev_kfree_skb(wl->dummy_packet);
|
||||
|
||||
err_aggr:
|
||||
free_pages((unsigned long)wl->aggr_buf, order);
|
||||
|
||||
err_wq:
|
||||
destroy_workqueue(wl->freezable_wq);
|
||||
|
||||
err_hw:
|
||||
wl1271_debugfs_exit(wl);
|
||||
kfree(plat_dev);
|
||||
|
@ -4066,7 +4431,15 @@ EXPORT_SYMBOL_GPL(wl1271_alloc_hw);
|
|||
|
||||
int wl1271_free_hw(struct wl1271 *wl)
|
||||
{
|
||||
/* Unblock any fwlog readers */
|
||||
mutex_lock(&wl->mutex);
|
||||
wl->fwlog_size = -1;
|
||||
wake_up_interruptible_all(&wl->fwlog_waitq);
|
||||
mutex_unlock(&wl->mutex);
|
||||
|
||||
device_remove_bin_file(&wl->plat_dev->dev, &fwlog_attr);
|
||||
platform_device_unregister(wl->plat_dev);
|
||||
free_page((unsigned long)wl->fwlog);
|
||||
dev_kfree_skb(wl->dummy_packet);
|
||||
free_pages((unsigned long)wl->aggr_buf,
|
||||
get_order(WL1271_AGGR_BUFFER_SIZE));
|
||||
|
@ -4081,6 +4454,7 @@ int wl1271_free_hw(struct wl1271 *wl)
|
|||
|
||||
kfree(wl->fw_status);
|
||||
kfree(wl->tx_res_if);
|
||||
destroy_workqueue(wl->freezable_wq);
|
||||
|
||||
ieee80211_free_hw(wl->hw);
|
||||
|
||||
|
@ -4093,6 +4467,10 @@ EXPORT_SYMBOL_GPL(wl12xx_debug_level);
|
|||
module_param_named(debug_level, wl12xx_debug_level, uint, S_IRUSR | S_IWUSR);
|
||||
MODULE_PARM_DESC(debug_level, "wl12xx debugging level");
|
||||
|
||||
module_param_named(fwlog, fwlog_param, charp, 0);
MODULE_PARM_DESC(fwlog,
"FW logger options: continuous, ondemand, dbgpins or disable");

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Luciano Coelho <coelho@ti.com>");
MODULE_AUTHOR("Juuso Oikarinen <juuso.oikarinen@nokia.com>");

@@ -118,7 +118,7 @@ int wl1271_ps_elp_wakeup(struct wl1271 *wl)
&compl, msecs_to_jiffies(WL1271_WAKEUP_TIMEOUT));
if (ret == 0) {
wl1271_error("ELP wakeup timeout!");
ieee80211_queue_work(wl->hw, &wl->recovery_work);
wl12xx_queue_recovery_work(wl);
ret = -ETIMEDOUT;
goto err;
} else if (ret < 0) {

@@ -169,9 +169,11 @@ int wl1271_ps_set_mode(struct wl1271 *wl, enum wl1271_cmd_ps_mode mode,
wl1271_debug(DEBUG_PSM, "leaving psm");

/* disable beacon early termination */
ret = wl1271_acx_bet_enable(wl, false);
if (ret < 0)
return ret;
if (wl->band == IEEE80211_BAND_2GHZ) {
ret = wl1271_acx_bet_enable(wl, false);
if (ret < 0)
return ret;
}

/* disable beacon filtering */
ret = wl1271_acx_beacon_filter_opt(wl, false);

@@ -202,7 +204,7 @@ static void wl1271_ps_filter_frames(struct wl1271 *wl, u8 hlid)
info = IEEE80211_SKB_CB(skb);
info->flags |= IEEE80211_TX_STAT_TX_FILTERED;
info->status.rates[0].idx = -1;
ieee80211_tx_status(wl->hw, skb);
ieee80211_tx_status_ni(wl->hw, skb);
filtered++;
}
}

@@ -22,6 +22,7 @@
*/

#include <linux/gfp.h>
#include <linux/sched.h>

#include "wl12xx.h"
#include "acx.h"

@@ -95,6 +96,7 @@ static int wl1271_rx_handle_data(struct wl1271 *wl, u8 *data, u32 length)
struct ieee80211_hdr *hdr;
u8 *buf;
u8 beacon = 0;
u8 is_data = 0;

/*
* In PLT mode we seem to get frames and mac80211 warns about them,

@@ -106,6 +108,13 @@ static int wl1271_rx_handle_data(struct wl1271 *wl, u8 *data, u32 length)
/* the data read starts with the descriptor */
desc = (struct wl1271_rx_descriptor *) data;

if (desc->packet_class == WL12XX_RX_CLASS_LOGGER) {
size_t len = length - sizeof(*desc);
wl12xx_copy_fwlog(wl, data + sizeof(*desc), len);
wake_up_interruptible(&wl->fwlog_waitq);
return 0;
}

switch (desc->status & WL1271_RX_DESC_STATUS_MASK) {
/* discard corrupted packets */
case WL1271_RX_DESC_DRIVER_RX_Q_FAIL:

@@ -137,6 +146,8 @@ static int wl1271_rx_handle_data(struct wl1271 *wl, u8 *data, u32 length)
hdr = (struct ieee80211_hdr *)skb->data;
if (ieee80211_is_beacon(hdr->frame_control))
beacon = 1;
if (ieee80211_is_data_present(hdr->frame_control))
is_data = 1;

wl1271_rx_status(wl, desc, IEEE80211_SKB_RXCB(skb), beacon);

@@ -147,9 +158,9 @@ static int wl1271_rx_handle_data(struct wl1271 *wl, u8 *data, u32 length)
skb_trim(skb, skb->len - desc->pad_len);

skb_queue_tail(&wl->deferred_rx_queue, skb);
ieee80211_queue_work(wl->hw, &wl->netstack_work);
queue_work(wl->freezable_wq, &wl->netstack_work);

return 0;
return is_data;
}

void wl1271_rx(struct wl1271 *wl, struct wl1271_fw_common_status *status)

@@ -162,6 +173,8 @@ void wl1271_rx(struct wl1271 *wl, struct wl1271_fw_common_status *status)
u32 mem_block;
u32 pkt_length;
u32 pkt_offset;
bool is_ap = (wl->bss_type == BSS_TYPE_AP_BSS);
bool had_data = false;

while (drv_rx_counter != fw_rx_counter) {
buf_size = 0;

@@ -214,9 +227,11 @@ void wl1271_rx(struct wl1271 *wl, struct wl1271_fw_common_status *status)
* conditions, in that case the received frame will just
* be dropped.
*/
wl1271_rx_handle_data(wl,
wl->aggr_buf + pkt_offset,
pkt_length);
if (wl1271_rx_handle_data(wl,
wl->aggr_buf + pkt_offset,
pkt_length) == 1)
had_data = true;

wl->rx_counter++;
drv_rx_counter++;
drv_rx_counter &= NUM_RX_PKT_DESC_MOD_MASK;

@@ -230,6 +245,20 @@ void wl1271_rx(struct wl1271 *wl, struct wl1271_fw_common_status *status)
*/
if (wl->quirks & WL12XX_QUIRK_END_OF_TRANSACTION)
wl1271_write32(wl, RX_DRIVER_COUNTER_ADDRESS, wl->rx_counter);

if (!is_ap && wl->conf.rx_streaming.interval && had_data &&
(wl->conf.rx_streaming.always ||
test_bit(WL1271_FLAG_SOFT_GEMINI, &wl->flags))) {
u32 timeout = wl->conf.rx_streaming.duration;

/* restart rx streaming */
if (!test_bit(WL1271_FLAG_RX_STREAMING_STARTED, &wl->flags))
ieee80211_queue_work(wl->hw,
&wl->rx_streaming_enable_work);

mod_timer(&wl->rx_streaming_timer,
jiffies + msecs_to_jiffies(timeout));
}
}

void wl1271_set_default_filters(struct wl1271 *wl)

@@ -97,6 +97,18 @@
#define RX_BUF_SIZE_MASK 0xFFF00
#define RX_BUF_SIZE_SHIFT_DIV 6

enum {
WL12XX_RX_CLASS_UNKNOWN,
WL12XX_RX_CLASS_MANAGEMENT,
WL12XX_RX_CLASS_DATA,
WL12XX_RX_CLASS_QOS_DATA,
WL12XX_RX_CLASS_BCN_PRBRSP,
WL12XX_RX_CLASS_EAPOL,
WL12XX_RX_CLASS_BA_EVENT,
WL12XX_RX_CLASS_AMSDU,
WL12XX_RX_CLASS_LOGGER,
};

struct wl1271_rx_descriptor {
__le16 length;
u8 status;

@@ -62,7 +62,7 @@ void wl1271_scan_complete_work(struct work_struct *work)

if (wl->scan.failed) {
wl1271_info("Scan completed due to error.");
ieee80211_queue_work(wl->hw, &wl->recovery_work);
wl12xx_queue_recovery_work(wl);
}

out:

@@ -326,7 +326,7 @@ wl1271_scan_get_sched_scan_channels(struct wl1271 *wl,
struct cfg80211_sched_scan_request *req,
struct conn_scan_ch_params *channels,
u32 band, bool radar, bool passive,
int start)
int start, int max_channels)
{
struct conf_sched_scan_settings *c = &wl->conf.sched_scan;
int i, j;

@@ -334,7 +334,7 @@ wl1271_scan_get_sched_scan_channels(struct wl1271 *wl,
bool force_passive = !req->n_ssids;

for (i = 0, j = start;
i < req->n_channels && j < MAX_CHANNELS_ALL_BANDS;
i < req->n_channels && j < max_channels;
i++) {
flags = req->channels[i]->flags;

@@ -380,46 +380,42 @@ wl1271_scan_get_sched_scan_channels(struct wl1271 *wl,
return j - start;
}

static int
static bool
wl1271_scan_sched_scan_channels(struct wl1271 *wl,
struct cfg80211_sched_scan_request *req,
struct wl1271_cmd_sched_scan_config *cfg)
{
int idx = 0;

cfg->passive[0] =
wl1271_scan_get_sched_scan_channels(wl, req, cfg->channels,
wl1271_scan_get_sched_scan_channels(wl, req, cfg->channels_2,
IEEE80211_BAND_2GHZ,
false, true, idx);
idx += cfg->passive[0];

false, true, 0,
MAX_CHANNELS_2GHZ);
cfg->active[0] =
wl1271_scan_get_sched_scan_channels(wl, req, cfg->channels,
wl1271_scan_get_sched_scan_channels(wl, req, cfg->channels_2,
IEEE80211_BAND_2GHZ,
false, false, idx);
/*
* 5GHz channels always start at position 14, not immediately
* after the last 2.4GHz channel
*/
idx = 14;

false, false,
cfg->passive[0],
MAX_CHANNELS_2GHZ);
cfg->passive[1] =
wl1271_scan_get_sched_scan_channels(wl, req, cfg->channels,
wl1271_scan_get_sched_scan_channels(wl, req, cfg->channels_5,
IEEE80211_BAND_5GHZ,
false, true, idx);
idx += cfg->passive[1];

false, true, 0,
MAX_CHANNELS_5GHZ);
cfg->dfs =
wl1271_scan_get_sched_scan_channels(wl, req, cfg->channels,
wl1271_scan_get_sched_scan_channels(wl, req, cfg->channels_5,
IEEE80211_BAND_5GHZ,
true, true, idx);
idx += cfg->dfs;

true, true,
cfg->passive[1],
MAX_CHANNELS_5GHZ);
cfg->active[1] =
wl1271_scan_get_sched_scan_channels(wl, req, cfg->channels,
wl1271_scan_get_sched_scan_channels(wl, req, cfg->channels_5,
IEEE80211_BAND_5GHZ,
false, false, idx);
idx += cfg->active[1];
false, false,
cfg->passive[1] + cfg->dfs,
MAX_CHANNELS_5GHZ);
/* 802.11j channels are not supported yet */
cfg->passive[2] = 0;
cfg->active[2] = 0;

wl1271_debug(DEBUG_SCAN, " 2.4GHz: active %d passive %d",
cfg->active[0], cfg->passive[0]);

@@ -427,7 +423,9 @@ wl1271_scan_sched_scan_channels(struct wl1271 *wl,
cfg->active[1], cfg->passive[1]);
wl1271_debug(DEBUG_SCAN, " DFS: %d", cfg->dfs);

return idx;
return cfg->passive[0] || cfg->active[0] ||
cfg->passive[1] || cfg->active[1] || cfg->dfs ||
cfg->passive[2] || cfg->active[2];
}

int wl1271_scan_sched_scan_config(struct wl1271 *wl,

@@ -436,7 +434,7 @@ int wl1271_scan_sched_scan_config(struct wl1271 *wl,
{
struct wl1271_cmd_sched_scan_config *cfg = NULL;
struct conf_sched_scan_settings *c = &wl->conf.sched_scan;
int i, total_channels, ret;
int i, ret;
bool force_passive = !req->n_ssids;

wl1271_debug(DEBUG_CMD, "cmd sched_scan scan config");

@@ -471,8 +469,7 @@ int wl1271_scan_sched_scan_config(struct wl1271 *wl,
cfg->ssid_len = 0;
}

total_channels = wl1271_scan_sched_scan_channels(wl, req, cfg);
if (total_channels == 0) {
if (!wl1271_scan_sched_scan_channels(wl, req, cfg)) {
wl1271_error("scan channel list is empty");
ret = -EINVAL;
goto out;

@@ -112,18 +112,13 @@ struct wl1271_cmd_trigger_scan_to {
__le32 timeout;
} __packed;

#define MAX_CHANNELS_ALL_BANDS 41
#define MAX_CHANNELS_2GHZ 14
#define MAX_CHANNELS_5GHZ 23
#define MAX_CHANNELS_4GHZ 4

#define SCAN_MAX_CYCLE_INTERVALS 16
#define SCAN_MAX_BANDS 3

enum {
SCAN_CHANNEL_TYPE_2GHZ_PASSIVE,
SCAN_CHANNEL_TYPE_2GHZ_ACTIVE,
SCAN_CHANNEL_TYPE_5GHZ_PASSIVE,
SCAN_CHANNEL_TYPE_5GHZ_ACTIVE,
SCAN_CHANNEL_TYPE_5GHZ_DFS,
};

enum {
SCAN_SSID_FILTER_ANY = 0,
SCAN_SSID_FILTER_SPECIFIC = 1,

@@ -182,7 +177,9 @@ struct wl1271_cmd_sched_scan_config {

u8 padding[3];

struct conn_scan_ch_params channels[MAX_CHANNELS_ALL_BANDS];
struct conn_scan_ch_params channels_2[MAX_CHANNELS_2GHZ];
struct conn_scan_ch_params channels_5[MAX_CHANNELS_5GHZ];
struct conn_scan_ch_params channels_4[MAX_CHANNELS_4GHZ];
} __packed;

@@ -23,7 +23,6 @@

#include <linux/irq.h>
#include <linux/module.h>
#include <linux/crc7.h>
#include <linux/vmalloc.h>
#include <linux/mmc/sdio_func.h>
#include <linux/mmc/sdio_ids.h>

@@ -45,7 +44,7 @@
#define SDIO_DEVICE_ID_TI_WL1271 0x4076
#endif

static const struct sdio_device_id wl1271_devices[] = {
static const struct sdio_device_id wl1271_devices[] __devinitconst = {
{ SDIO_DEVICE(SDIO_VENDOR_ID_TI, SDIO_DEVICE_ID_TI_WL1271) },
{}
};

@@ -107,14 +106,6 @@ static void wl1271_sdio_enable_interrupts(struct wl1271 *wl)
enable_irq(wl->irq);
}

static void wl1271_sdio_reset(struct wl1271 *wl)
{
}

static void wl1271_sdio_init(struct wl1271 *wl)
{
}

static void wl1271_sdio_raw_read(struct wl1271 *wl, int addr, void *buf,
size_t len, bool fixed)
{

@@ -170,10 +161,12 @@ static int wl1271_sdio_power_on(struct wl1271 *wl)
struct sdio_func *func = wl_to_func(wl);
int ret;

/* Make sure the card will not be powered off by runtime PM */
ret = pm_runtime_get_sync(&func->dev);
if (ret < 0)
goto out;
/* If enabled, tell runtime PM not to power off the card */
if (pm_runtime_enabled(&func->dev)) {
ret = pm_runtime_get_sync(&func->dev);
if (ret)
goto out;
}

/* Runtime PM might be disabled, so power up the card manually */
ret = mmc_power_restore_host(func->card->host);

@@ -200,8 +193,11 @@ static int wl1271_sdio_power_off(struct wl1271 *wl)
if (ret < 0)
return ret;

/* Let runtime PM know the card is powered off */
return pm_runtime_put_sync(&func->dev);
/* If enabled, let runtime PM know the card is powered off */
if (pm_runtime_enabled(&func->dev))
ret = pm_runtime_put_sync(&func->dev);

return ret;
}

static int wl1271_sdio_set_power(struct wl1271 *wl, bool enable)

@@ -215,8 +211,6 @@ static int wl1271_sdio_set_power(struct wl1271 *wl, bool enable)
static struct wl1271_if_operations sdio_ops = {
.read = wl1271_sdio_raw_read,
.write = wl1271_sdio_raw_write,
.reset = wl1271_sdio_reset,
.init = wl1271_sdio_init,
.power = wl1271_sdio_set_power,
.dev = wl1271_sdio_wl_to_dev,
.enable_irq = wl1271_sdio_enable_interrupts,

@@ -278,18 +272,20 @@ static int __devinit wl1271_probe(struct sdio_func *func,
goto out_free;
}

enable_irq_wake(wl->irq);
device_init_wakeup(wl1271_sdio_wl_to_dev(wl), 1);
ret = enable_irq_wake(wl->irq);
if (!ret) {
wl->irq_wake_enabled = true;
device_init_wakeup(wl1271_sdio_wl_to_dev(wl), 1);

/* if sdio can keep power while host is suspended, enable wow */
mmcflags = sdio_get_host_pm_caps(func);
wl1271_debug(DEBUG_SDIO, "sdio PM caps = 0x%x", mmcflags);

if (mmcflags & MMC_PM_KEEP_POWER)
hw->wiphy->wowlan.flags = WIPHY_WOWLAN_ANY;
}
disable_irq(wl->irq);

/* if sdio can keep power while host is suspended, enable wow */
mmcflags = sdio_get_host_pm_caps(func);
wl1271_debug(DEBUG_SDIO, "sdio PM caps = 0x%x", mmcflags);

if (mmcflags & MMC_PM_KEEP_POWER)
hw->wiphy->wowlan.flags = WIPHY_WOWLAN_ANY;

ret = wl1271_init_ieee80211(wl);
if (ret)
goto out_irq;

@@ -303,8 +299,6 @@ static int __devinit wl1271_probe(struct sdio_func *func,
/* Tell PM core that we don't need the card to be powered now */
pm_runtime_put_noidle(&func->dev);

wl1271_notice("initialized");

return 0;

out_irq:

@@ -324,8 +318,10 @@ static void __devexit wl1271_remove(struct sdio_func *func)
pm_runtime_get_noresume(&func->dev);

wl1271_unregister_hw(wl);
device_init_wakeup(wl1271_sdio_wl_to_dev(wl), 0);
disable_irq_wake(wl->irq);
if (wl->irq_wake_enabled) {
device_init_wakeup(wl1271_sdio_wl_to_dev(wl), 0);
disable_irq_wake(wl->irq);
}
free_irq(wl->irq, wl);
wl1271_free_hw(wl);
}

@@ -402,23 +398,12 @@ static struct sdio_driver wl1271_sdio_driver = {

static int __init wl1271_init(void)
{
int ret;

ret = sdio_register_driver(&wl1271_sdio_driver);
if (ret < 0) {
wl1271_error("failed to register sdio driver: %d", ret);
goto out;
}

out:
return ret;
return sdio_register_driver(&wl1271_sdio_driver);
}

static void __exit wl1271_exit(void)
{
sdio_unregister_driver(&wl1271_sdio_driver);

wl1271_notice("unloaded");
}

module_init(wl1271_init);

@@ -436,8 +436,6 @@ static int __devinit wl1271_probe(struct spi_device *spi)
if (ret)
goto out_irq;

wl1271_notice("initialized");

return 0;

out_irq:

@@ -474,23 +472,12 @@ static struct spi_driver wl1271_spi_driver = {

static int __init wl1271_init(void)
{
int ret;

ret = spi_register_driver(&wl1271_spi_driver);
if (ret < 0) {
wl1271_error("failed to register spi driver: %d", ret);
goto out;
}

out:
return ret;
return spi_register_driver(&wl1271_spi_driver);
}

static void __exit wl1271_exit(void)
{
spi_unregister_driver(&wl1271_spi_driver);

wl1271_notice("unloaded");
}

module_init(wl1271_init);

@@ -260,7 +260,7 @@ static int wl1271_tm_cmd_recover(struct wl1271 *wl, struct nlattr *tb[])
{
wl1271_debug(DEBUG_TESTMODE, "testmode cmd recover");

ieee80211_queue_work(wl->hw, &wl->recovery_work);
wl12xx_queue_recovery_work(wl);

return 0;
}

@@ -562,17 +562,29 @@ static void wl1271_skb_queue_head(struct wl1271 *wl, struct sk_buff *skb)
spin_unlock_irqrestore(&wl->wl_lock, flags);
}

static bool wl1271_tx_is_data_present(struct sk_buff *skb)
{
struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)(skb->data);

return ieee80211_is_data_present(hdr->frame_control);
}

void wl1271_tx_work_locked(struct wl1271 *wl)
{
struct sk_buff *skb;
u32 buf_offset = 0;
bool sent_packets = false;
bool had_data = false;
bool is_ap = (wl->bss_type == BSS_TYPE_AP_BSS);
int ret;

if (unlikely(wl->state == WL1271_STATE_OFF))
return;

while ((skb = wl1271_skb_dequeue(wl))) {
if (wl1271_tx_is_data_present(skb))
had_data = true;

ret = wl1271_prepare_tx_frame(wl, skb, buf_offset);
if (ret == -EAGAIN) {
/*

@@ -619,6 +631,19 @@ out_ack:

wl1271_handle_tx_low_watermark(wl);
}
if (!is_ap && wl->conf.rx_streaming.interval && had_data &&
(wl->conf.rx_streaming.always ||
test_bit(WL1271_FLAG_SOFT_GEMINI, &wl->flags))) {
u32 timeout = wl->conf.rx_streaming.duration;

/* enable rx streaming */
if (!test_bit(WL1271_FLAG_RX_STREAMING_STARTED, &wl->flags))
ieee80211_queue_work(wl->hw,
&wl->rx_streaming_enable_work);

mod_timer(&wl->rx_streaming_timer,
jiffies + msecs_to_jiffies(timeout));
}
}

void wl1271_tx_work(struct work_struct *work)

@@ -702,7 +727,7 @@ static void wl1271_tx_complete_packet(struct wl1271 *wl,

/* return the packet to the stack */
skb_queue_tail(&wl->deferred_tx_queue, skb);
ieee80211_queue_work(wl->hw, &wl->netstack_work);
queue_work(wl->freezable_wq, &wl->netstack_work);
wl1271_free_tx_id(wl, result->id);
}

@@ -757,7 +782,7 @@ void wl1271_tx_reset_link_queues(struct wl1271 *wl, u8 hlid)
info = IEEE80211_SKB_CB(skb);
info->status.rates[0].idx = -1;
info->status.rates[0].count = 0;
ieee80211_tx_status(wl->hw, skb);
ieee80211_tx_status_ni(wl->hw, skb);
total++;
}
}

@@ -795,7 +820,7 @@ void wl1271_tx_reset(struct wl1271 *wl, bool reset_tx_queues)
info = IEEE80211_SKB_CB(skb);
info->status.rates[0].idx = -1;
info->status.rates[0].count = 0;
ieee80211_tx_status(wl->hw, skb);
ieee80211_tx_status_ni(wl->hw, skb);
}
}
}

@@ -838,7 +863,7 @@ void wl1271_tx_reset(struct wl1271 *wl, bool reset_tx_queues)
info->status.rates[0].idx = -1;
info->status.rates[0].count = 0;

ieee80211_tx_status(wl->hw, skb);
ieee80211_tx_status_ni(wl->hw, skb);
}
}
}

@@ -226,6 +226,8 @@ enum {
#define FW_VER_MINOR_1_SPARE_STA_MIN 58
#define FW_VER_MINOR_1_SPARE_AP_MIN 47

#define FW_VER_MINOR_FWLOG_STA_MIN 70

struct wl1271_chip {
u32 id;
char fw_ver_str[ETHTOOL_BUSINFO_LEN];

@@ -284,8 +286,7 @@ struct wl1271_fw_sta_status {
u8 tx_total;
u8 reserved1;
__le16 reserved2;
/* Total structure size is 68 bytes */
u32 padding;
__le32 log_start_addr;
} __packed;

struct wl1271_fw_full_status {

@@ -359,6 +360,9 @@ enum wl12xx_flags {
WL1271_FLAG_DUMMY_PACKET_PENDING,
WL1271_FLAG_SUSPENDED,
WL1271_FLAG_PENDING_WORK,
WL1271_FLAG_SOFT_GEMINI,
WL1271_FLAG_RX_STREAMING_STARTED,
WL1271_FLAG_RECOVERY_IN_PROGRESS,
};

struct wl1271_link {

@@ -443,6 +447,7 @@ struct wl1271 {
struct sk_buff_head deferred_tx_queue;

struct work_struct tx_work;
struct workqueue_struct *freezable_wq;

/* Pending TX frames */
unsigned long tx_frames_map[BITS_TO_LONGS(ACX_TX_DESCRIPTORS)];

@@ -468,6 +473,15 @@ struct wl1271 {
/* Network stack work */
struct work_struct netstack_work;

/* FW log buffer */
u8 *fwlog;

/* Number of valid bytes in the FW log buffer */
ssize_t fwlog_size;

/* Sysfs FW log entry readers wait queue */
wait_queue_head_t fwlog_waitq;

/* Hardware recovery work */
struct work_struct recovery_work;

@@ -508,6 +522,11 @@ struct wl1271 {
/* Default key (for WEP) */
u32 default_key;

/* Rx Streaming */
struct work_struct rx_streaming_enable_work;
struct work_struct rx_streaming_disable_work;
struct timer_list rx_streaming_timer;

unsigned int filters;
unsigned int rx_config;
unsigned int rx_filter;

@@ -573,6 +592,7 @@ struct wl1271 {
* (currently, only "ANY" trigger is supported)
*/
bool wow_enabled;
bool irq_wake_enabled;

/*
* AP-mode - links indexed by HLID. The global and broadcast links

@@ -602,6 +622,9 @@ struct wl1271_station {

int wl1271_plt_start(struct wl1271 *wl);
int wl1271_plt_stop(struct wl1271 *wl);
int wl1271_recalc_rx_streaming(struct wl1271 *wl);
void wl12xx_queue_recovery_work(struct wl1271 *wl);
size_t wl12xx_copy_fwlog(struct wl1271 *wl, u8 *memblock, size_t maxlen);

#define JOIN_TIMEOUT 5000 /* 5000 milliseconds to join */

@@ -637,4 +660,15 @@ int wl1271_plt_stop(struct wl1271 *wl);
/* WL128X requires aggregated packets to be aligned to the SDIO block size */
#define WL12XX_QUIRK_BLOCKSIZE_ALIGNMENT BIT(2)

/*
* WL127X AP mode requires Low Power DRPw (LPD) enable to reduce power
* consumption
*/
#define WL12XX_QUIRK_LPD_MODE BIT(3)

/* Older firmwares did not implement the FW logger over bus feature */
#define WL12XX_QUIRK_FWLOG_NOT_IMPLEMENTED BIT(4)

#define WL12XX_HW_BLOCK_SIZE 256

#endif

@@ -2,17 +2,8 @@
# Near Field Communication (NFC) devices
#

menuconfig NFC_DEVICES
bool "Near Field Communication (NFC) devices"
default n
---help---
You'll have to say Y if your computer contains an NFC device that
you want to use under Linux.

You can say N here if you don't have any Near Field Communication
devices connected to your computer.

if NFC_DEVICES
menu "Near Field Communication (NFC) devices"
depends on NFC

config PN544_NFC
tristate "PN544 NFC driver"

@@ -26,5 +17,14 @@ config PN544_NFC
To compile this driver as a module, choose m here. The module will
be called pn544.

config NFC_PN533
tristate "NXP PN533 USB driver"
depends on USB
help
NXP PN533 USB driver.
This driver provides support for NFC NXP PN533 devices.

endif # NFC_DEVICES
Say Y here to compile support for PN533 devices into the
kernel or say M to compile it as module (pn533).

endmenu

@@ -3,3 +3,6 @@
#

obj-$(CONFIG_PN544_NFC) += pn544.o
obj-$(CONFIG_NFC_PN533) += pn533.o

ccflags-$(CONFIG_NFC_DEBUG) := -DDEBUG

File diff suppressed because it is too large
@@ -734,12 +734,9 @@ out_free:
static void ssb_pci_get_boardinfo(struct ssb_bus *bus,
struct ssb_boardinfo *bi)
{
pci_read_config_word(bus->host_pci, PCI_SUBSYSTEM_VENDOR_ID,
&bi->vendor);
pci_read_config_word(bus->host_pci, PCI_SUBSYSTEM_ID,
&bi->type);
pci_read_config_word(bus->host_pci, PCI_REVISION_ID,
&bi->rev);
bi->vendor = bus->host_pci->subsystem_vendor;
bi->type = bus->host_pci->subsystem_device;
bi->rev = bus->host_pci->revision;
}

int ssb_pci_get_invariants(struct ssb_bus *bus,

@@ -0,0 +1,126 @@
/*
* Copyright (C) 2011 Instituto Nokia de Tecnologia
*
* Authors:
*    Lauro Ramos Venancio <lauro.venancio@openbossa.org>
*    Aloisio Almeida Jr <aloisio.almeida@openbossa.org>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the
* Free Software Foundation, Inc.,
* 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*/

#ifndef __LINUX_NFC_H
#define __LINUX_NFC_H

#include <linux/types.h>
#include <linux/socket.h>

#define NFC_GENL_NAME "nfc"
#define NFC_GENL_VERSION 1

#define NFC_GENL_MCAST_EVENT_NAME "events"

/**
* enum nfc_commands - supported nfc commands
*
* @NFC_CMD_UNSPEC: unspecified command
*
* @NFC_CMD_GET_DEVICE: request information about a device (requires
* %NFC_ATTR_DEVICE_INDEX) or dump request to get a list of all nfc devices
* @NFC_CMD_START_POLL: start polling for targets using the given protocols
* (requires %NFC_ATTR_DEVICE_INDEX and %NFC_ATTR_PROTOCOLS)
* @NFC_CMD_STOP_POLL: stop polling for targets (requires
* %NFC_ATTR_DEVICE_INDEX)
* @NFC_CMD_GET_TARGET: dump all targets found by the previous poll (requires
* %NFC_ATTR_DEVICE_INDEX)
* @NFC_EVENT_TARGETS_FOUND: event emitted when a new target is found
* (it sends %NFC_ATTR_DEVICE_INDEX)
* @NFC_EVENT_DEVICE_ADDED: event emitted when a new device is registred
* (it sends %NFC_ATTR_DEVICE_NAME, %NFC_ATTR_DEVICE_INDEX and
* %NFC_ATTR_PROTOCOLS)
* @NFC_EVENT_DEVICE_REMOVED: event emitted when a device is removed
* (it sends %NFC_ATTR_DEVICE_INDEX)
*/
enum nfc_commands {
NFC_CMD_UNSPEC,
NFC_CMD_GET_DEVICE,
NFC_CMD_START_POLL,
NFC_CMD_STOP_POLL,
NFC_CMD_GET_TARGET,
NFC_EVENT_TARGETS_FOUND,
NFC_EVENT_DEVICE_ADDED,
NFC_EVENT_DEVICE_REMOVED,
/* private: internal use only */
__NFC_CMD_AFTER_LAST
};
#define NFC_CMD_MAX (__NFC_CMD_AFTER_LAST - 1)

/**
* enum nfc_attrs - supported nfc attributes
*
* @NFC_ATTR_UNSPEC: unspecified attribute
*
* @NFC_ATTR_DEVICE_INDEX: index of nfc device
* @NFC_ATTR_DEVICE_NAME: device name, max 8 chars
* @NFC_ATTR_PROTOCOLS: nfc protocols - bitwise or-ed combination from
* NFC_PROTO_*_MASK constants
* @NFC_ATTR_TARGET_INDEX: index of the nfc target
* @NFC_ATTR_TARGET_SENS_RES: NFC-A targets extra information such as NFCID
* @NFC_ATTR_TARGET_SEL_RES: NFC-A targets extra information (useful if the
* target is not NFC-Forum compliant)
*/
enum nfc_attrs {
NFC_ATTR_UNSPEC,
NFC_ATTR_DEVICE_INDEX,
NFC_ATTR_DEVICE_NAME,
NFC_ATTR_PROTOCOLS,
NFC_ATTR_TARGET_INDEX,
NFC_ATTR_TARGET_SENS_RES,
NFC_ATTR_TARGET_SEL_RES,
/* private: internal use only */
__NFC_ATTR_AFTER_LAST
};
#define NFC_ATTR_MAX (__NFC_ATTR_AFTER_LAST - 1)

#define NFC_DEVICE_NAME_MAXSIZE 8

/* NFC protocols */
#define NFC_PROTO_JEWEL 1
#define NFC_PROTO_MIFARE 2
#define NFC_PROTO_FELICA 3
#define NFC_PROTO_ISO14443 4
#define NFC_PROTO_NFC_DEP 5

#define NFC_PROTO_MAX 6

/* NFC protocols masks used in bitsets */
#define NFC_PROTO_JEWEL_MASK (1 << NFC_PROTO_JEWEL)
#define NFC_PROTO_MIFARE_MASK (1 << NFC_PROTO_MIFARE)
#define NFC_PROTO_FELICA_MASK (1 << NFC_PROTO_FELICA)
#define NFC_PROTO_ISO14443_MASK (1 << NFC_PROTO_ISO14443)
#define NFC_PROTO_NFC_DEP_MASK (1 << NFC_PROTO_NFC_DEP)

struct sockaddr_nfc {
sa_family_t sa_family;
__u32 dev_idx;
__u32 target_idx;
__u32 nfc_protocol;
};

/* NFC socket protocols */
#define NFC_SOCKPROTO_RAW 0
#define NFC_SOCKPROTO_MAX 1

#endif /*__LINUX_NFC_H */

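The header above is everything a userspace client needs for the low-level data exchange path. The following is a minimal, hypothetical sketch (not part of this commit) of opening a raw NFC socket against a target previously reported over netlink; the socket type (SOCK_SEQPACKET) and the use of connect() with struct sockaddr_nfc are assumptions here, and the AF_NFC fallback value is taken from the include/linux/socket.h hunk later in this same diff.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <linux/nfc.h>

    /* AF_NFC may be missing from older libc headers; the value below
     * comes from the include/linux/socket.h change in this series. */
    #ifndef AF_NFC
    #define AF_NFC 39
    #endif

    static int open_nfc_raw(unsigned int dev_idx, unsigned int target_idx)
    {
            struct sockaddr_nfc addr;
            int fd;

            /* Socket type is an assumption in this sketch. */
            fd = socket(AF_NFC, SOCK_SEQPACKET, NFC_SOCKPROTO_RAW);
            if (fd < 0)
                    return -1;

            memset(&addr, 0, sizeof(addr));
            addr.sa_family = AF_NFC;
            addr.dev_idx = dev_idx;        /* from NFC_ATTR_DEVICE_INDEX */
            addr.target_idx = target_idx;  /* from NFC_ATTR_TARGET_INDEX */
            addr.nfc_protocol = NFC_PROTO_MIFARE;

            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
                    close(fd);
                    return -1;
            }

            return fd; /* read()/write() would then carry raw frames */
    }

The dev_idx and target_idx values would come from NFC_CMD_GET_DEVICE and NFC_CMD_GET_TARGET dumps; error handling is kept minimal for brevity.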
@@ -483,6 +483,14 @@
* more background information, see
* http://wireless.kernel.org/en/users/Documentation/WoWLAN.
*
* @NL80211_CMD_SET_REKEY_OFFLOAD: This command is used give the driver
* the necessary information for supporting GTK rekey offload. This
* feature is typically used during WoWLAN. The configuration data
* is contained in %NL80211_ATTR_REKEY_DATA (which is nested and
* contains the data in sub-attributes). After rekeying happened,
* this command may also be sent by the driver as an MLME event to
* inform userspace of the new replay counter.
*
* @NL80211_CMD_MAX: highest used command number
* @__NL80211_CMD_AFTER_LAST: internal use
*/

@@ -605,6 +613,8 @@ enum nl80211_commands {
NL80211_CMD_SCHED_SCAN_RESULTS,
NL80211_CMD_SCHED_SCAN_STOPPED,

NL80211_CMD_SET_REKEY_OFFLOAD,

/* add new commands above here */

/* used to define NL80211_CMD_MAX below */

@@ -996,6 +1006,9 @@ enum nl80211_commands {
* are managed in software: interfaces of these types aren't subject to
* any restrictions in their number or combinations.
*
* @%NL80211_ATTR_REKEY_DATA: nested attribute containing the information
* necessary for GTK rekeying in the device, see &enum nl80211_rekey_data.
*
* @NL80211_ATTR_MAX: highest attribute number currently defined
* @__NL80211_ATTR_AFTER_LAST: internal use
*/

@@ -1194,6 +1207,8 @@ enum nl80211_attrs {
NL80211_ATTR_INTERFACE_COMBINATIONS,
NL80211_ATTR_SOFTWARE_IFTYPES,

NL80211_ATTR_REKEY_DATA,

/* add attributes here, update the policy in nl80211.c */

__NL80211_ATTR_AFTER_LAST,

@@ -2361,4 +2376,28 @@ enum nl80211_plink_state {
MAX_NL80211_PLINK_STATES = NUM_NL80211_PLINK_STATES - 1
};

#define NL80211_KCK_LEN 16
#define NL80211_KEK_LEN 16
#define NL80211_REPLAY_CTR_LEN 8

/**
* enum nl80211_rekey_data - attributes for GTK rekey offload
* @__NL80211_REKEY_DATA_INVALID: invalid number for nested attributes
* @NL80211_REKEY_DATA_KEK: key encryption key (binary)
* @NL80211_REKEY_DATA_KCK: key confirmation key (binary)
* @NL80211_REKEY_DATA_REPLAY_CTR: replay counter (binary)
* @NUM_NL80211_REKEY_DATA: number of rekey attributes (internal)
* @MAX_NL80211_REKEY_DATA: highest rekey attribute (internal)
*/
enum nl80211_rekey_data {
__NL80211_REKEY_DATA_INVALID,
NL80211_REKEY_DATA_KEK,
NL80211_REKEY_DATA_KCK,
NL80211_REKEY_DATA_REPLAY_CTR,

/* keep last */
NUM_NL80211_REKEY_DATA,
MAX_NL80211_REKEY_DATA = NUM_NL80211_REKEY_DATA - 1
};

#endif /* __LINUX_NL80211_H */

@@ -192,7 +192,8 @@ struct ucred {
#define AF_IEEE802154 36 /* IEEE802154 sockets */
#define AF_CAIF 37 /* CAIF sockets */
#define AF_ALG 38 /* Algorithm sockets */
#define AF_MAX 39 /* For now.. */
#define AF_NFC 39 /* NFC sockets */
#define AF_MAX 40 /* For now.. */

/* Protocol families, same as address families. */
#define PF_UNSPEC AF_UNSPEC

@@ -234,6 +235,7 @@ struct ucred {
#define PF_IEEE802154 AF_IEEE802154
#define PF_CAIF AF_CAIF
#define PF_ALG AF_ALG
#define PF_NFC AF_NFC
#define PF_MAX AF_MAX

/* Maximum queue length specifiable by listen. */

@@ -99,7 +99,7 @@ struct ssb_sprom {
struct ssb_boardinfo {
u16 vendor;
u16 type;
u16 rev;
u8 rev;
};

@@ -1153,6 +1153,18 @@ struct cfg80211_wowlan {
int n_patterns;
};

/**
* struct cfg80211_gtk_rekey_data - rekey data
* @kek: key encryption key
* @kck: key confirmation key
* @replay_ctr: replay counter
*/
struct cfg80211_gtk_rekey_data {
u8 kek[NL80211_KEK_LEN];
u8 kck[NL80211_KCK_LEN];
u8 replay_ctr[NL80211_REPLAY_CTR_LEN];
};

/**
* struct cfg80211_ops - backend description for wireless configuration
*

@@ -1197,6 +1209,8 @@ struct cfg80211_wowlan {
*
* @set_default_mgmt_key: set the default management frame key on an interface
*
* @set_rekey_data: give the data necessary for GTK rekeying to the driver
*
* @add_beacon: Add a beacon with given parameters, @head, @interval
* and @dtim_period will be valid, @tail is optional.
* @set_beacon: Change the beacon parameters for an access point mode

@@ -1499,6 +1513,9 @@ struct cfg80211_ops {
struct net_device *dev,
struct cfg80211_sched_scan_request *request);
int (*sched_scan_stop)(struct wiphy *wiphy, struct net_device *dev);

int (*set_rekey_data)(struct wiphy *wiphy, struct net_device *dev,
struct cfg80211_gtk_rekey_data *data);
};

/*

@@ -3033,6 +3050,15 @@ void cfg80211_cqm_rssi_notify(struct net_device *dev,
void cfg80211_cqm_pktloss_notify(struct net_device *dev,
const u8 *peer, u32 num_packets, gfp_t gfp);

/**
* cfg80211_gtk_rekey_notify - notify userspace about driver rekeying
* @dev: network device
* @bssid: BSSID of AP (to avoid races)
* @replay_ctr: new replay counter
*/
void cfg80211_gtk_rekey_notify(struct net_device *dev, const u8 *bssid,
const u8 *replay_ctr, gfp_t gfp);

/* Logging, debugging and troubleshooting/diagnostic helpers. */

/* wiphy_printk helpers, similar to dev_printk */

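The new set_rekey_data op and struct cfg80211_gtk_rekey_data above show the driver-facing side of the offload. Below is a small, hypothetical sketch (not from this commit) of a fullmac-style driver caching the rekey material handed down through that op; the my_priv structure and function names are invented for illustration, and a real cfg80211_ops table would of course carry many more callbacks.

    #include <linux/string.h>
    #include <net/cfg80211.h>

    /* Hypothetical driver-private state, not part of this commit. */
    struct my_priv {
            u8 kek[NL80211_KEK_LEN];
            u8 kck[NL80211_KCK_LEN];
            u8 replay_ctr[NL80211_REPLAY_CTR_LEN];
            bool have_rekey_data;
    };

    static int my_set_rekey_data(struct wiphy *wiphy, struct net_device *dev,
                                 struct cfg80211_gtk_rekey_data *data)
    {
            struct my_priv *priv = wiphy_priv(wiphy);

            /* Cache KEK, KCK and the replay counter so they can be
             * programmed into the firmware before a WoWLAN suspend. */
            memcpy(priv->kek, data->kek, NL80211_KEK_LEN);
            memcpy(priv->kck, data->kck, NL80211_KCK_LEN);
            memcpy(priv->replay_ctr, data->replay_ctr, NL80211_REPLAY_CTR_LEN);
            priv->have_rekey_data = true;

            return 0;
    }

    /* Wired into the driver's ops table alongside its other callbacks: */
    static const struct cfg80211_ops my_cfg_ops = {
            .set_rekey_data = my_set_rekey_data,
    };

After the firmware performs a rekey on its own (for example while the host was suspended), the driver would report the new replay counter back with cfg80211_gtk_rekey_notify(), also added in this hunk.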
@@ -1628,6 +1628,10 @@ enum ieee80211_ampdu_mlme_action {
* ask the device to suspend. This is only invoked when WoWLAN is
* configured, otherwise the device is deconfigured completely and
* reconfigured at resume time.
* The driver may also impose special conditions under which it
* wants to use the "normal" suspend (deconfigure), say if it only
* supports WoWLAN when the device is associated. In this case, it
* must return 1 from this function.
*
* @resume: If WoWLAN was configured, this indicates that mac80211 is
* now resuming its operation, after this the device must be fully

@@ -1696,6 +1700,12 @@ enum ieee80211_ampdu_mlme_action {
* which set IEEE80211_KEY_FLAG_TKIP_REQ_RX_P1_KEY.
* The callback must be atomic.
*
* @set_rekey_data: If the device supports GTK rekeying, for example while the
* host is suspended, it can assign this callback to retrieve the data
* necessary to do GTK rekeying, this is the KEK, KCK and replay counter.
* After rekeying was done it should (for example during resume) notify
* userspace of the new replay counter using ieee80211_gtk_rekey_notify().
*
* @hw_scan: Ask the hardware to service the scan request, no need to start
* the scan state machine in stack. The scan must honour the channel
* configuration done by the regulatory agent in the wiphy's

@@ -1908,6 +1918,9 @@ struct ieee80211_ops {
struct ieee80211_key_conf *conf,
struct ieee80211_sta *sta,
u32 iv32, u16 *phase1key);
void (*set_rekey_data)(struct ieee80211_hw *hw,
struct ieee80211_vif *vif,
struct cfg80211_gtk_rekey_data *data);
int (*hw_scan)(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
struct cfg80211_scan_request *req);
void (*cancel_hw_scan)(struct ieee80211_hw *hw,

@@ -2581,6 +2594,17 @@ ieee80211_get_buffered_bc(struct ieee80211_hw *hw, struct ieee80211_vif *vif);
void ieee80211_get_tkip_key(struct ieee80211_key_conf *keyconf,
struct sk_buff *skb,
enum ieee80211_tkip_key_type type, u8 *key);

/**
* ieee80211_gtk_rekey_notify - notify userspace supplicant of rekeying
* @vif: virtual interface the rekeying was done on
* @bssid: The BSSID of the AP, for checking association
* @replay_ctr: the new replay counter after GTK rekeying
* @gfp: allocation flags
*/
void ieee80211_gtk_rekey_notify(struct ieee80211_vif *vif, const u8 *bssid,
const u8 *replay_ctr, gfp_t gfp);

/**
* ieee80211_wake_queue - wake specific queue
* @hw: pointer as obtained from ieee80211_alloc_hw().

@@ -2845,6 +2869,29 @@ struct ieee80211_sta *ieee80211_find_sta_by_ifaddr(struct ieee80211_hw *hw,
void ieee80211_sta_block_awake(struct ieee80211_hw *hw,
struct ieee80211_sta *pubsta, bool block);

/**
* ieee80211_iter_keys - iterate keys programmed into the device
* @hw: pointer obtained from ieee80211_alloc_hw()
* @vif: virtual interface to iterate, may be %NULL for all
* @iter: iterator function that will be called for each key
* @iter_data: custom data to pass to the iterator function
*
* This function can be used to iterate all the keys known to
* mac80211, even those that weren't previously programmed into
* the device. This is intended for use in WoWLAN if the device
* needs reprogramming of the keys during suspend. Note that due
* to locking reasons, it is also only safe to call this at few
* spots since it must hold the RTNL and be able to sleep.
*/
void ieee80211_iter_keys(struct ieee80211_hw *hw,
struct ieee80211_vif *vif,
void (*iter)(struct ieee80211_hw *hw,
struct ieee80211_vif *vif,
struct ieee80211_sta *sta,
struct ieee80211_key_conf *key,
void *data),
void *iter_data);

/**
* ieee80211_ap_probereq_get - retrieve a Probe Request template
* @hw: pointer obtained from ieee80211_alloc_hw().

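As a usage illustration only (not part of the diff), a WoWLAN-capable mac80211 driver could use the ieee80211_iter_keys() helper declared above to walk every key during suspend and push it to its firmware; the firmware hook below is a made-up placeholder, and the calling context must satisfy the RTNL/sleeping requirements stated in the kernel-doc.

    #include <net/mac80211.h>

    /* Hypothetical firmware hook; stands in for whatever the driver uses. */
    static void my_fw_program_key(struct ieee80211_hw *hw,
                                  struct ieee80211_key_conf *key)
    {
            /* write key->key / key->keyidx into the device here */
    }

    static void my_suspend_key_iter(struct ieee80211_hw *hw,
                                    struct ieee80211_vif *vif,
                                    struct ieee80211_sta *sta,
                                    struct ieee80211_key_conf *key,
                                    void *data)
    {
            int *count = data;

            my_fw_program_key(hw, key);
            (*count)++;
    }

    static void my_wowlan_reprogram_keys(struct ieee80211_hw *hw,
                                         struct ieee80211_vif *vif)
    {
            int programmed = 0;

            /* Called with the RTNL held, in a context that may sleep,
             * as the kernel-doc above requires. */
            ieee80211_iter_keys(hw, vif, my_suspend_key_iter, &programmed);
    }
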
Some files were not shown because too many files have changed in this diff.