/*
 * Copyright (c) 2007 Cisco Systems, Inc. All rights reserved.
 * Copyright (c) 2007, 2008 Mellanox Technologies. All rights reserved.
 *
 * This software is available to you under a choice of one of two
 * licenses.  You may choose to be licensed under the terms of the GNU
 * General Public License (GPL) Version 2, available from the file
 * COPYING in the main directory of this source tree, or the
 * OpenIB.org BSD license below:
 *
 *     Redistribution and use in source and binary forms, with or
 *     without modification, are permitted provided that the following
 *     conditions are met:
 *
 *      - Redistributions of source code must retain the above
 *        copyright notice, this list of conditions and the following
 *        disclaimer.
 *
 *      - Redistributions in binary form must reproduce the above
 *        copyright notice, this list of conditions and the following
 *        disclaimer in the documentation and/or other materials
 *        provided with the distribution.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
 * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
 * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
 * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
 * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
 * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
 * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 * SOFTWARE.
 */

#include <linux/log2.h>
#include <linux/slab.h>
#include <linux/netdevice.h>

#include <rdma/ib_cache.h>
#include <rdma/ib_pack.h>
#include <rdma/ib_addr.h>
#include <rdma/ib_mad.h>

#include <linux/mlx4/qp.h>

#include "mlx4_ib.h"
#include "user.h"

enum {
	MLX4_IB_ACK_REQ_FREQ	= 8,
};

enum {
	MLX4_IB_DEFAULT_SCHED_QUEUE	= 0x83,
	MLX4_IB_DEFAULT_QP0_SCHED_QUEUE	= 0x3f,
	MLX4_IB_LINK_TYPE_IB		= 0,
	MLX4_IB_LINK_TYPE_ETH		= 1
};

enum {
	/*
	 * Largest possible UD header: send with GRH and immediate
	 * data plus 18 bytes for an Ethernet header with VLAN/802.1Q
	 * tag.  (LRH would only use 8 bytes, so Ethernet is the
	 * biggest case)
	 */
	MLX4_IB_UD_HEADER_SIZE		= 82,
	MLX4_IB_LSO_HEADER_SPARE	= 128,
};

enum {
	MLX4_IB_IBOE_ETHERTYPE		= 0x8915
};

struct mlx4_ib_sqp {
	struct mlx4_ib_qp	qp;
	int			pkey_index;
	u32			qkey;
	u32			send_psn;
	struct ib_ud_header	ud_header;
	u8			header_buf[MLX4_IB_UD_HEADER_SIZE];
};

enum {
	MLX4_IB_MIN_SQ_STRIDE	= 6,
	MLX4_IB_CACHE_LINE_SIZE	= 64,
};

enum {
	MLX4_RAW_QP_MTU		= 7,
	MLX4_RAW_QP_MSGMAX	= 31,
};

static const __be32 mlx4_ib_opcode[] = {
	[IB_WR_SEND]				= cpu_to_be32(MLX4_OPCODE_SEND),
	[IB_WR_LSO]				= cpu_to_be32(MLX4_OPCODE_LSO),
	[IB_WR_SEND_WITH_IMM]			= cpu_to_be32(MLX4_OPCODE_SEND_IMM),
	[IB_WR_RDMA_WRITE]			= cpu_to_be32(MLX4_OPCODE_RDMA_WRITE),
	[IB_WR_RDMA_WRITE_WITH_IMM]		= cpu_to_be32(MLX4_OPCODE_RDMA_WRITE_IMM),
	[IB_WR_RDMA_READ]			= cpu_to_be32(MLX4_OPCODE_RDMA_READ),
	[IB_WR_ATOMIC_CMP_AND_SWP]		= cpu_to_be32(MLX4_OPCODE_ATOMIC_CS),
	[IB_WR_ATOMIC_FETCH_AND_ADD]		= cpu_to_be32(MLX4_OPCODE_ATOMIC_FA),
	[IB_WR_SEND_WITH_INV]			= cpu_to_be32(MLX4_OPCODE_SEND_INVAL),
	[IB_WR_LOCAL_INV]			= cpu_to_be32(MLX4_OPCODE_LOCAL_INVAL),
	[IB_WR_FAST_REG_MR]			= cpu_to_be32(MLX4_OPCODE_FMR),
	[IB_WR_MASKED_ATOMIC_CMP_AND_SWP]	= cpu_to_be32(MLX4_OPCODE_MASKED_ATOMIC_CS),
	[IB_WR_MASKED_ATOMIC_FETCH_AND_ADD]	= cpu_to_be32(MLX4_OPCODE_MASKED_ATOMIC_FA),
	[IB_WR_BIND_MW]				= cpu_to_be32(MLX4_OPCODE_BIND_MW),
};

static struct mlx4_ib_sqp *to_msqp(struct mlx4_ib_qp *mqp)
{
	return container_of(mqp, struct mlx4_ib_sqp, qp);
}

static int is_tunnel_qp(struct mlx4_ib_dev *dev, struct mlx4_ib_qp *qp)
{
	if (!mlx4_is_master(dev->dev))
		return 0;

	return qp->mqp.qpn >= dev->dev->phys_caps.base_tunnel_sqpn &&
	       qp->mqp.qpn < dev->dev->phys_caps.base_tunnel_sqpn +
	       8 * MLX4_MFUNC_MAX;
}

static int is_sqp(struct mlx4_ib_dev *dev, struct mlx4_ib_qp *qp)
{
	int proxy_sqp = 0;
	int real_sqp = 0;
	int i;
	/* PPF or Native -- real SQP */
	real_sqp = ((mlx4_is_master(dev->dev) || !mlx4_is_mfunc(dev->dev)) &&
		    qp->mqp.qpn >= dev->dev->phys_caps.base_sqpn &&
		    qp->mqp.qpn <= dev->dev->phys_caps.base_sqpn + 3);
	if (real_sqp)
		return 1;
	/* VF or PF -- proxy SQP */
	if (mlx4_is_mfunc(dev->dev)) {
		for (i = 0; i < dev->dev->caps.num_ports; i++) {
			if (qp->mqp.qpn == dev->dev->caps.qp0_proxy[i] ||
			    qp->mqp.qpn == dev->dev->caps.qp1_proxy[i]) {
				proxy_sqp = 1;
				break;
			}
		}
	}
	return proxy_sqp;
}

/* used for INIT/CLOSE port logic */
static int is_qp0(struct mlx4_ib_dev *dev, struct mlx4_ib_qp *qp)
{
	int proxy_qp0 = 0;
	int real_qp0 = 0;
	int i;
	/* PPF or Native -- real QP0 */
	real_qp0 = ((mlx4_is_master(dev->dev) || !mlx4_is_mfunc(dev->dev)) &&
		    qp->mqp.qpn >= dev->dev->phys_caps.base_sqpn &&
		    qp->mqp.qpn <= dev->dev->phys_caps.base_sqpn + 1);
	if (real_qp0)
		return 1;
	/* VF or PF -- proxy QP0 */
	if (mlx4_is_mfunc(dev->dev)) {
		for (i = 0; i < dev->dev->caps.num_ports; i++) {
			if (qp->mqp.qpn == dev->dev->caps.qp0_proxy[i]) {
				proxy_qp0 = 1;
				break;
			}
		}
	}
	return proxy_qp0;
}

static void *get_wqe(struct mlx4_ib_qp *qp, int offset)
{
	return mlx4_buf_offset(&qp->buf, offset);
}

static void *get_recv_wqe(struct mlx4_ib_qp *qp, int n)
{
	return get_wqe(qp, qp->rq.offset + (n << qp->rq.wqe_shift));
}

static void *get_send_wqe(struct mlx4_ib_qp *qp, int n)
{
	return get_wqe(qp, qp->sq.offset + (n << qp->sq.wqe_shift));
}

/*
 * Stamp a SQ WQE so that it is invalid if prefetched by marking the
 * first four bytes of every 64 byte chunk with
 * 0x7FFFFFF | (invalid_ownership_value << 31).
 *
 * When the max work request size is less than or equal to the WQE
 * basic block size, as an optimization, we can stamp all WQEs with
 * 0xffffffff, and skip the very first chunk of each WQE.
 */
static void stamp_send_wqe(struct mlx4_ib_qp *qp, int n, int size)
{
	__be32 *wqe;
	int i;
	int s;
	int ind;
	void *buf;
	__be32 stamp;
	struct mlx4_wqe_ctrl_seg *ctrl;

	if (qp->sq_max_wqes_per_wr > 1) {
		s = roundup(size, 1U << qp->sq.wqe_shift);
		for (i = 0; i < s; i += 64) {
			ind = (i >> qp->sq.wqe_shift) + n;
			stamp = ind & qp->sq.wqe_cnt ? cpu_to_be32(0x7fffffff) :
						       cpu_to_be32(0xffffffff);
			buf = get_send_wqe(qp, ind & (qp->sq.wqe_cnt - 1));
			wqe = buf + (i & ((1 << qp->sq.wqe_shift) - 1));
			*wqe = stamp;
		}
	} else {
		ctrl = buf = get_send_wqe(qp, n & (qp->sq.wqe_cnt - 1));
		s = (ctrl->fence_size & 0x3f) << 4;
		for (i = 64; i < s; i += 64) {
			wqe = buf + i;
			*wqe = cpu_to_be32(0xffffffff);
		}
	}
}

static void post_nop_wqe(struct mlx4_ib_qp *qp, int n, int size)
{
	struct mlx4_wqe_ctrl_seg *ctrl;
	struct mlx4_wqe_inline_seg *inl;
	void *wqe;
	int s;

	ctrl = wqe = get_send_wqe(qp, n & (qp->sq.wqe_cnt - 1));
	s = sizeof(struct mlx4_wqe_ctrl_seg);

	if (qp->ibqp.qp_type == IB_QPT_UD) {
		struct mlx4_wqe_datagram_seg *dgram = wqe + sizeof *ctrl;
		struct mlx4_av *av = (struct mlx4_av *)dgram->av;
		memset(dgram, 0, sizeof *dgram);
		av->port_pd = cpu_to_be32((qp->port << 24) | to_mpd(qp->ibqp.pd)->pdn);
		s += sizeof(struct mlx4_wqe_datagram_seg);
	}

	/* Pad the remainder of the WQE with an inline data segment. */
	if (size > s) {
		inl = wqe + s;
		inl->byte_count = cpu_to_be32(1 << 31 | (size - s - sizeof *inl));
	}
	ctrl->srcrb_flags = 0;
	ctrl->fence_size = size / 16;
	/*
	 * Make sure descriptor is fully written before setting ownership bit
	 * (because HW can start executing as soon as we do).
	 */
	wmb();

	ctrl->owner_opcode = cpu_to_be32(MLX4_OPCODE_NOP | MLX4_WQE_CTRL_NEC) |
		(n & qp->sq.wqe_cnt ? cpu_to_be32(1 << 31) : 0);

	stamp_send_wqe(qp, n + qp->sq_spare_wqes, size);
}
|
|
|
|
|
|
|
|
/* Post NOP WQE to prevent wrap-around in the middle of WR */
|
|
|
|
static inline unsigned pad_wraparound(struct mlx4_ib_qp *qp, int ind)
|
|
|
|
{
|
|
|
|
unsigned s = qp->sq.wqe_cnt - (ind & (qp->sq.wqe_cnt - 1));
|
|
|
|
if (unlikely(s < qp->sq_max_wqes_per_wr)) {
|
|
|
|
post_nop_wqe(qp, ind, s << qp->sq.wqe_shift);
|
|
|
|
ind += s;
|
|
|
|
}
|
|
|
|
return ind;
|
2007-06-18 23:13:48 +08:00
|
|
|
}
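
/*
 * Worked example (illustrative numbers, not from the code above): with
 * wqe_cnt = 256 and ind = 254, only s = 256 - (254 & 255) = 2 WQE slots
 * remain before the queue wraps.  If the next WR could need more than
 * 2 slots (sq_max_wqes_per_wr > 2), pad_wraparound() fills those 2
 * slots with a single NOP WQE and returns ind = 256, so the real WR
 * starts cleanly at the beginning of the ring.
 */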

static void mlx4_ib_qp_event(struct mlx4_qp *qp, enum mlx4_event type)
{
	struct ib_event event;
	struct ib_qp *ibqp = &to_mibqp(qp)->ibqp;

	if (type == MLX4_EVENT_TYPE_PATH_MIG)
		to_mibqp(qp)->port = to_mibqp(qp)->alt_port;

	if (ibqp->event_handler) {
		event.device     = ibqp->device;
		event.element.qp = ibqp;
		switch (type) {
		case MLX4_EVENT_TYPE_PATH_MIG:
			event.event = IB_EVENT_PATH_MIG;
			break;
		case MLX4_EVENT_TYPE_COMM_EST:
			event.event = IB_EVENT_COMM_EST;
			break;
		case MLX4_EVENT_TYPE_SQ_DRAINED:
			event.event = IB_EVENT_SQ_DRAINED;
			break;
		case MLX4_EVENT_TYPE_SRQ_QP_LAST_WQE:
			event.event = IB_EVENT_QP_LAST_WQE_REACHED;
			break;
		case MLX4_EVENT_TYPE_WQ_CATAS_ERROR:
			event.event = IB_EVENT_QP_FATAL;
			break;
		case MLX4_EVENT_TYPE_PATH_MIG_FAILED:
			event.event = IB_EVENT_PATH_MIG_ERR;
			break;
		case MLX4_EVENT_TYPE_WQ_INVAL_REQ_ERROR:
			event.event = IB_EVENT_QP_REQ_ERR;
			break;
		case MLX4_EVENT_TYPE_WQ_ACCESS_ERROR:
			event.event = IB_EVENT_QP_ACCESS_ERR;
			break;
		default:
			pr_warn("Unexpected event type %d "
				"on QP %06x\n", type, qp->qpn);
			return;
		}

		ibqp->event_handler(&event, ibqp->qp_context);
	}
}

static int send_wqe_overhead(enum mlx4_ib_qp_type type, u32 flags)
{
	/*
	 * UD WQEs must have a datagram segment.
	 * RC and UC WQEs might have a remote address segment.
	 * MLX WQEs need two extra inline data segments (for the UD
	 * header and space for the ICRC).
	 */
	switch (type) {
	case MLX4_IB_QPT_UD:
		return sizeof (struct mlx4_wqe_ctrl_seg) +
			sizeof (struct mlx4_wqe_datagram_seg) +
			((flags & MLX4_IB_QP_LSO) ? MLX4_IB_LSO_HEADER_SPARE : 0);
	case MLX4_IB_QPT_PROXY_SMI_OWNER:
	case MLX4_IB_QPT_PROXY_SMI:
	case MLX4_IB_QPT_PROXY_GSI:
		return sizeof (struct mlx4_wqe_ctrl_seg) +
			sizeof (struct mlx4_wqe_datagram_seg) + 64;
	case MLX4_IB_QPT_TUN_SMI_OWNER:
	case MLX4_IB_QPT_TUN_GSI:
		return sizeof (struct mlx4_wqe_ctrl_seg) +
			sizeof (struct mlx4_wqe_datagram_seg);

	case MLX4_IB_QPT_UC:
		return sizeof (struct mlx4_wqe_ctrl_seg) +
			sizeof (struct mlx4_wqe_raddr_seg);
	case MLX4_IB_QPT_RC:
		return sizeof (struct mlx4_wqe_ctrl_seg) +
			sizeof (struct mlx4_wqe_atomic_seg) +
			sizeof (struct mlx4_wqe_raddr_seg);
	case MLX4_IB_QPT_SMI:
	case MLX4_IB_QPT_GSI:
		return sizeof (struct mlx4_wqe_ctrl_seg) +
			ALIGN(MLX4_IB_UD_HEADER_SIZE +
			      DIV_ROUND_UP(MLX4_IB_UD_HEADER_SIZE,
					   MLX4_INLINE_ALIGN) *
			      sizeof (struct mlx4_wqe_inline_seg),
			      sizeof (struct mlx4_wqe_data_seg)) +
			ALIGN(4 +
			      sizeof (struct mlx4_wqe_inline_seg),
			      sizeof (struct mlx4_wqe_data_seg));
	default:
		return sizeof (struct mlx4_wqe_ctrl_seg);
	}
}

static int set_rq_size(struct mlx4_ib_dev *dev, struct ib_qp_cap *cap,
		       int is_user, int has_rq, struct mlx4_ib_qp *qp)
{
	/* Sanity check RQ size before proceeding */
	if (cap->max_recv_wr > dev->dev->caps.max_wqes - MLX4_IB_SQ_MAX_SPARE ||
	    cap->max_recv_sge > min(dev->dev->caps.max_sq_sg, dev->dev->caps.max_rq_sg))
		return -EINVAL;

	if (!has_rq) {
		if (cap->max_recv_wr)
			return -EINVAL;

		qp->rq.wqe_cnt = qp->rq.max_gs = 0;
	} else {
		/* HW requires >= 1 RQ entry with >= 1 gather entry */
		if (is_user && (!cap->max_recv_wr || !cap->max_recv_sge))
			return -EINVAL;

		qp->rq.wqe_cnt	 = roundup_pow_of_two(max(1U, cap->max_recv_wr));
		qp->rq.max_gs	 = roundup_pow_of_two(max(1U, cap->max_recv_sge));
		qp->rq.wqe_shift = ilog2(qp->rq.max_gs * sizeof (struct mlx4_wqe_data_seg));
	}

	/* leave userspace return values as they were, so as not to break ABI */
	if (is_user) {
		cap->max_recv_wr  = qp->rq.max_post = qp->rq.wqe_cnt;
		cap->max_recv_sge = qp->rq.max_gs;
	} else {
		cap->max_recv_wr  = qp->rq.max_post =
			min(dev->dev->caps.max_wqes - MLX4_IB_SQ_MAX_SPARE, qp->rq.wqe_cnt);
		cap->max_recv_sge = min(qp->rq.max_gs,
					min(dev->dev->caps.max_sq_sg,
					    dev->dev->caps.max_rq_sg));
	}

	return 0;
}
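
/*
 * Example (illustrative numbers): a kernel caller asking for
 * max_recv_wr = 100 and max_recv_sge = 3 gets wqe_cnt =
 * roundup_pow_of_two(100) = 128 and max_gs = roundup_pow_of_two(3) = 4.
 * Assuming a 16-byte mlx4_wqe_data_seg, wqe_shift = ilog2(4 * 16) = 6,
 * i.e. 64-byte receive WQEs, and the rounded-up values are reported
 * back through cap so the caller sees what was actually allocated.
 */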

static int set_kernel_sq_size(struct mlx4_ib_dev *dev, struct ib_qp_cap *cap,
			      enum mlx4_ib_qp_type type, struct mlx4_ib_qp *qp)
{
	int s;

	/* Sanity check SQ size before proceeding */
	if (cap->max_send_wr > (dev->dev->caps.max_wqes - MLX4_IB_SQ_MAX_SPARE) ||
	    cap->max_send_sge > min(dev->dev->caps.max_sq_sg, dev->dev->caps.max_rq_sg) ||
	    cap->max_inline_data + send_wqe_overhead(type, qp->flags) +
	    sizeof (struct mlx4_wqe_inline_seg) > dev->dev->caps.max_sq_desc_sz)
		return -EINVAL;

	/*
	 * For MLX transport we need 2 extra S/G entries:
	 * one for the header and one for the checksum at the end
	 */
	if ((type == MLX4_IB_QPT_SMI || type == MLX4_IB_QPT_GSI ||
	     type & (MLX4_IB_QPT_PROXY_SMI_OWNER | MLX4_IB_QPT_TUN_SMI_OWNER)) &&
	    cap->max_send_sge + 2 > dev->dev->caps.max_sq_sg)
		return -EINVAL;

	s = max(cap->max_send_sge * sizeof (struct mlx4_wqe_data_seg),
		cap->max_inline_data + sizeof (struct mlx4_wqe_inline_seg)) +
		send_wqe_overhead(type, qp->flags);

	if (s > dev->dev->caps.max_sq_desc_sz)
		return -EINVAL;

	/*
	 * Hermon supports shrinking WQEs, such that a single work
	 * request can include multiple units of 1 << wqe_shift.  This
	 * way, work requests can differ in size, and do not have to
	 * be a power of 2 in size, saving memory and speeding up send
	 * WR posting.  Unfortunately, if we do this then the
	 * wqe_index field in CQEs can't be used to look up the WR ID
	 * anymore, so we do this only if selective signaling is off.
	 *
	 * Further, on 32-bit platforms, we can't use vmap() to make
	 * the QP buffer virtually contiguous.  Thus we have to use
	 * constant-sized WRs to make sure a WR is always fully within
	 * a single page-sized chunk.
	 *
	 * Finally, we use NOP work requests to pad the end of the
	 * work queue, to avoid wrap-around in the middle of WR.  We
	 * set NEC bit to avoid getting completions with error for
	 * these NOP WRs, but since NEC is only supported starting
	 * with firmware 2.2.232, we use constant-sized WRs for older
	 * firmware.
	 *
	 * And, since MLX QPs only support SEND, we use constant-sized
	 * WRs in this case.
	 *
	 * We look for the smallest value of wqe_shift such that the
	 * resulting number of wqes does not exceed device
	 * capabilities.
	 *
	 * We set WQE size to at least 64 bytes, this way stamping
	 * invalidates each WQE.
	 */
	if (dev->dev->caps.fw_ver >= MLX4_FW_VER_WQE_CTRL_NEC &&
	    qp->sq_signal_bits && BITS_PER_LONG == 64 &&
	    type != MLX4_IB_QPT_SMI && type != MLX4_IB_QPT_GSI &&
	    !(type & (MLX4_IB_QPT_PROXY_SMI_OWNER | MLX4_IB_QPT_PROXY_SMI |
		      MLX4_IB_QPT_PROXY_GSI | MLX4_IB_QPT_TUN_SMI_OWNER)))
		qp->sq.wqe_shift = ilog2(64);
	else
		qp->sq.wqe_shift = ilog2(roundup_pow_of_two(s));

	for (;;) {
		qp->sq_max_wqes_per_wr = DIV_ROUND_UP(s, 1U << qp->sq.wqe_shift);

		/*
		 * We need to leave 2 KB + 1 WR of headroom in the SQ to
		 * allow HW to prefetch.
		 */
		qp->sq_spare_wqes = (2048 >> qp->sq.wqe_shift) + qp->sq_max_wqes_per_wr;
		qp->sq.wqe_cnt = roundup_pow_of_two(cap->max_send_wr *
						    qp->sq_max_wqes_per_wr +
						    qp->sq_spare_wqes);

		if (qp->sq.wqe_cnt <= dev->dev->caps.max_wqes)
			break;

		if (qp->sq_max_wqes_per_wr <= 1)
			return -EINVAL;

		++qp->sq.wqe_shift;
	}

	qp->sq.max_gs = (min(dev->dev->caps.max_sq_desc_sz,
			     (qp->sq_max_wqes_per_wr << qp->sq.wqe_shift)) -
			 send_wqe_overhead(type, qp->flags)) /
		sizeof (struct mlx4_wqe_data_seg);

	qp->buf_size = (qp->rq.wqe_cnt << qp->rq.wqe_shift) +
		(qp->sq.wqe_cnt << qp->sq.wqe_shift);
	if (qp->rq.wqe_shift > qp->sq.wqe_shift) {
		qp->rq.offset = 0;
		qp->sq.offset = qp->rq.wqe_cnt << qp->rq.wqe_shift;
	} else {
		qp->rq.offset = qp->sq.wqe_cnt << qp->sq.wqe_shift;
		qp->sq.offset = 0;
	}

	cap->max_send_wr  = qp->sq.max_post =
		(qp->sq.wqe_cnt - qp->sq_spare_wqes) / qp->sq_max_wqes_per_wr;
	cap->max_send_sge = min(qp->sq.max_gs,
				min(dev->dev->caps.max_sq_sg,
				    dev->dev->caps.max_rq_sg));
	/* We don't support inline sends for kernel QPs (yet) */
	cap->max_inline_data = 0;

	return 0;
}
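
/*
 * Worked example of the wqe_shift search above (illustrative numbers):
 * suppose the largest WR needs s = 208 bytes and selective signaling
 * is off, so we start with wqe_shift = ilog2(64) = 6.  Then
 * sq_max_wqes_per_wr = DIV_ROUND_UP(208, 64) = 4 units per WR.  With
 * cap->max_send_wr = 1000, sq_spare_wqes = (2048 >> 6) + 4 = 36 and
 * wqe_cnt = roundup_pow_of_two(1000 * 4 + 36) = 4096.  If 4096
 * exceeded caps.max_wqes, the loop would retry with wqe_shift = 7
 * (128-byte units, 2 per WR), and so on until the count fits or WRs
 * can no longer shrink (sq_max_wqes_per_wr == 1 -> -EINVAL).
 */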

static int set_user_sq_size(struct mlx4_ib_dev *dev,
			    struct mlx4_ib_qp *qp,
			    struct mlx4_ib_create_qp *ucmd)
{
	/* Sanity check SQ size before proceeding */
	if ((1 << ucmd->log_sq_bb_count) > dev->dev->caps.max_wqes ||
	    ucmd->log_sq_stride >
		ilog2(roundup_pow_of_two(dev->dev->caps.max_sq_desc_sz)) ||
	    ucmd->log_sq_stride < MLX4_IB_MIN_SQ_STRIDE)
		return -EINVAL;

	qp->sq.wqe_cnt   = 1 << ucmd->log_sq_bb_count;
	qp->sq.wqe_shift = ucmd->log_sq_stride;

	qp->buf_size = (qp->rq.wqe_cnt << qp->rq.wqe_shift) +
		(qp->sq.wqe_cnt << qp->sq.wqe_shift);

	return 0;
}

static int alloc_proxy_bufs(struct ib_device *dev, struct mlx4_ib_qp *qp)
{
	int i;

	qp->sqp_proxy_rcv =
		kmalloc(sizeof (struct mlx4_ib_buf) * qp->rq.wqe_cnt,
			GFP_KERNEL);
	if (!qp->sqp_proxy_rcv)
		return -ENOMEM;
	for (i = 0; i < qp->rq.wqe_cnt; i++) {
		qp->sqp_proxy_rcv[i].addr =
			kmalloc(sizeof (struct mlx4_ib_proxy_sqp_hdr),
				GFP_KERNEL);
		if (!qp->sqp_proxy_rcv[i].addr)
			goto err;
		qp->sqp_proxy_rcv[i].map =
			ib_dma_map_single(dev, qp->sqp_proxy_rcv[i].addr,
					  sizeof (struct mlx4_ib_proxy_sqp_hdr),
					  DMA_FROM_DEVICE);
	}
	return 0;

err:
	while (i > 0) {
		--i;
		ib_dma_unmap_single(dev, qp->sqp_proxy_rcv[i].map,
				    sizeof (struct mlx4_ib_proxy_sqp_hdr),
				    DMA_FROM_DEVICE);
		kfree(qp->sqp_proxy_rcv[i].addr);
	}
	kfree(qp->sqp_proxy_rcv);
	qp->sqp_proxy_rcv = NULL;
	return -ENOMEM;
}

static void free_proxy_bufs(struct ib_device *dev, struct mlx4_ib_qp *qp)
{
	int i;

	for (i = 0; i < qp->rq.wqe_cnt; i++) {
		ib_dma_unmap_single(dev, qp->sqp_proxy_rcv[i].map,
				    sizeof (struct mlx4_ib_proxy_sqp_hdr),
				    DMA_FROM_DEVICE);
		kfree(qp->sqp_proxy_rcv[i].addr);
	}
	kfree(qp->sqp_proxy_rcv);
}

static int qp_has_rq(struct ib_qp_init_attr *attr)
{
	if (attr->qp_type == IB_QPT_XRC_INI || attr->qp_type == IB_QPT_XRC_TGT)
		return 0;

	return !attr->srq;
}

static int create_qp_common(struct mlx4_ib_dev *dev, struct ib_pd *pd,
			    struct ib_qp_init_attr *init_attr,
			    struct ib_udata *udata, int sqpn, struct mlx4_ib_qp **caller_qp)
{
	int qpn;
	int err;
	struct mlx4_ib_sqp *sqp;
	struct mlx4_ib_qp *qp;
	enum mlx4_ib_qp_type qp_type = (enum mlx4_ib_qp_type) init_attr->qp_type;

	/* When tunneling special qps, we use a plain UD qp */
	if (sqpn) {
		if (mlx4_is_mfunc(dev->dev) &&
		    (!mlx4_is_master(dev->dev) ||
		     !(init_attr->create_flags & MLX4_IB_SRIOV_SQP))) {
			if (init_attr->qp_type == IB_QPT_GSI)
				qp_type = MLX4_IB_QPT_PROXY_GSI;
			else if (mlx4_is_master(dev->dev))
				qp_type = MLX4_IB_QPT_PROXY_SMI_OWNER;
			else
				qp_type = MLX4_IB_QPT_PROXY_SMI;
		}
		qpn = sqpn;
		/* add extra sg entry for tunneling */
		init_attr->cap.max_recv_sge++;
	} else if (init_attr->create_flags & MLX4_IB_SRIOV_TUNNEL_QP) {
		struct mlx4_ib_qp_tunnel_init_attr *tnl_init =
			container_of(init_attr,
				     struct mlx4_ib_qp_tunnel_init_attr, init_attr);
		if ((tnl_init->proxy_qp_type != IB_QPT_SMI &&
		     tnl_init->proxy_qp_type != IB_QPT_GSI) ||
		    !mlx4_is_master(dev->dev))
			return -EINVAL;
		if (tnl_init->proxy_qp_type == IB_QPT_GSI)
			qp_type = MLX4_IB_QPT_TUN_GSI;
		else if (tnl_init->slave == mlx4_master_func_num(dev->dev))
			qp_type = MLX4_IB_QPT_TUN_SMI_OWNER;
		else
			qp_type = MLX4_IB_QPT_TUN_SMI;
		/* we are definitely in the PPF here, since we are creating
		 * tunnel QPs. base_tunnel_sqpn is therefore valid. */
		qpn = dev->dev->phys_caps.base_tunnel_sqpn + 8 * tnl_init->slave
			+ tnl_init->proxy_qp_type * 2 + tnl_init->port - 1;
		sqpn = qpn;
	}

	if (!*caller_qp) {
		if (qp_type == MLX4_IB_QPT_SMI || qp_type == MLX4_IB_QPT_GSI ||
		    (qp_type & (MLX4_IB_QPT_PROXY_SMI | MLX4_IB_QPT_PROXY_SMI_OWNER |
				MLX4_IB_QPT_PROXY_GSI | MLX4_IB_QPT_TUN_SMI_OWNER))) {
			sqp = kzalloc(sizeof (struct mlx4_ib_sqp), GFP_KERNEL);
			if (!sqp)
				return -ENOMEM;
			qp = &sqp->qp;
		} else {
			qp = kzalloc(sizeof (struct mlx4_ib_qp), GFP_KERNEL);
			if (!qp)
				return -ENOMEM;
		}
	} else
		qp = *caller_qp;

	qp->mlx4_ib_qp_type = qp_type;

	mutex_init(&qp->mutex);
	spin_lock_init(&qp->sq.lock);
	spin_lock_init(&qp->rq.lock);
	INIT_LIST_HEAD(&qp->gid_list);
	INIT_LIST_HEAD(&qp->steering_rules);

	qp->state = IB_QPS_RESET;
	if (init_attr->sq_sig_type == IB_SIGNAL_ALL_WR)
		qp->sq_signal_bits = cpu_to_be32(MLX4_WQE_CTRL_CQ_UPDATE);

	err = set_rq_size(dev, &init_attr->cap, !!pd->uobject, qp_has_rq(init_attr), qp);
	if (err)
		goto err;

	if (pd->uobject) {
		struct mlx4_ib_create_qp ucmd;

		if (ib_copy_from_udata(&ucmd, udata, sizeof ucmd)) {
			err = -EFAULT;
			goto err;
		}

		qp->sq_no_prefetch = ucmd.sq_no_prefetch;

		err = set_user_sq_size(dev, qp, &ucmd);
		if (err)
			goto err;

		qp->umem = ib_umem_get(pd->uobject->context, ucmd.buf_addr,
				       qp->buf_size, 0, 0);
		if (IS_ERR(qp->umem)) {
			err = PTR_ERR(qp->umem);
			goto err;
		}

		err = mlx4_mtt_init(dev->dev, ib_umem_page_count(qp->umem),
				    ilog2(qp->umem->page_size), &qp->mtt);
		if (err)
			goto err_buf;

		err = mlx4_ib_umem_write_mtt(dev, &qp->mtt, qp->umem);
		if (err)
			goto err_mtt;

		if (qp_has_rq(init_attr)) {
			err = mlx4_ib_db_map_user(to_mucontext(pd->uobject->context),
						  ucmd.db_addr, &qp->db);
			if (err)
				goto err_mtt;
		}
	} else {
		qp->sq_no_prefetch = 0;

		if (init_attr->create_flags & IB_QP_CREATE_BLOCK_MULTICAST_LOOPBACK)
			qp->flags |= MLX4_IB_QP_BLOCK_MULTICAST_LOOPBACK;

		if (init_attr->create_flags & IB_QP_CREATE_IPOIB_UD_LSO)
			qp->flags |= MLX4_IB_QP_LSO;

		err = set_kernel_sq_size(dev, &init_attr->cap, qp_type, qp);
		if (err)
			goto err;

		if (qp_has_rq(init_attr)) {
			err = mlx4_db_alloc(dev->dev, &qp->db, 0);
			if (err)
				goto err;

			*qp->db.db = 0;
		}

		if (mlx4_buf_alloc(dev->dev, qp->buf_size, PAGE_SIZE * 2, &qp->buf)) {
			err = -ENOMEM;
			goto err_db;
		}

		err = mlx4_mtt_init(dev->dev, qp->buf.npages, qp->buf.page_shift,
				    &qp->mtt);
		if (err)
			goto err_buf;

		err = mlx4_buf_write_mtt(dev->dev, &qp->mtt, &qp->buf);
		if (err)
			goto err_mtt;

		qp->sq.wrid = kmalloc(qp->sq.wqe_cnt * sizeof (u64), GFP_KERNEL);
|
|
|
|
qp->rq.wrid = kmalloc(qp->rq.wqe_cnt * sizeof (u64), GFP_KERNEL);
|
2007-05-09 09:00:38 +08:00
|
|
|
|
|
|
|
if (!qp->sq.wrid || !qp->rq.wrid) {
|
|
|
|
err = -ENOMEM;
|
|
|
|
goto err_wrid;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2008-10-11 03:01:37 +08:00
|
|
|
if (sqpn) {
|
2012-08-03 16:40:40 +08:00
|
|
|
if (qp->mlx4_ib_qp_type & (MLX4_IB_QPT_PROXY_SMI_OWNER |
|
|
|
|
MLX4_IB_QPT_PROXY_SMI | MLX4_IB_QPT_PROXY_GSI)) {
|
|
|
|
if (alloc_proxy_bufs(pd->device, qp)) {
|
|
|
|
err = -ENOMEM;
|
|
|
|
goto err_wrid;
|
|
|
|
}
|
|
|
|
}
|
2008-10-11 03:01:37 +08:00
|
|
|
} else {
|
2012-01-17 19:39:07 +08:00
|
|
|
/* Raw packet QPNs must be aligned to 8 bits. If not, the WQE
|
|
|
|
* BlueFlame setup flow wrongly causes VLAN insertion. */
|
|
|
|
if (init_attr->qp_type == IB_QPT_RAW_PACKET)
|
|
|
|
err = mlx4_qp_reserve_range(dev->dev, 1, 1 << 8, &qpn);
|
|
|
|
else
|
|
|
|
err = mlx4_qp_reserve_range(dev->dev, 1, 1, &qpn);
|
2008-10-11 03:01:37 +08:00
|
|
|
if (err)
|
2012-08-03 16:40:40 +08:00
|
|
|
goto err_proxy;
|
2008-10-11 03:01:37 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
err = mlx4_qp_alloc(dev->dev, qpn, &qp->mqp);
|
2007-05-09 09:00:38 +08:00
|
|
|
if (err)
|
2008-10-11 03:01:37 +08:00
|
|
|
goto err_qpn;
|
2007-05-09 09:00:38 +08:00
|
|
|
|
2011-06-03 02:32:15 +08:00
|
|
|
if (init_attr->qp_type == IB_QPT_XRC_TGT)
|
|
|
|
qp->mqp.qpn |= (1 << 23);
|
|
|
|
|
2007-05-09 09:00:38 +08:00
|
|
|
/*
|
|
|
|
* Hardware wants QPN written in big-endian order (after
|
|
|
|
* shifting) for send doorbell. Precompute this value to save
|
|
|
|
* a little bit when posting sends.
|
|
|
|
*/
|
|
|
|
qp->doorbell_qpn = swab32(qp->mqp.qpn << 8);
|
|
|
|
|
|
|
|
qp->mqp.event = mlx4_ib_qp_event;
|
2012-08-03 16:40:40 +08:00
|
|
|
if (!*caller_qp)
|
|
|
|
*caller_qp = qp;
|
2007-05-09 09:00:38 +08:00
|
|
|
return 0;
|
|
|
|
|
2008-10-11 03:01:37 +08:00
|
|
|
err_qpn:
|
|
|
|
if (!sqpn)
|
|
|
|
mlx4_qp_release_range(dev->dev, qpn, 1);
|
2012-08-03 16:40:40 +08:00
|
|
|
err_proxy:
|
|
|
|
if (qp->mlx4_ib_qp_type == MLX4_IB_QPT_PROXY_GSI)
|
|
|
|
free_proxy_bufs(pd->device, qp);
|
2007-05-09 09:00:38 +08:00
|
|
|
err_wrid:
|
2007-07-21 12:19:43 +08:00
|
|
|
if (pd->uobject) {
|
2011-06-03 02:32:15 +08:00
|
|
|
if (qp_has_rq(init_attr))
|
|
|
|
mlx4_ib_db_unmap_user(to_mucontext(pd->uobject->context), &qp->db);
|
2007-07-21 12:19:43 +08:00
|
|
|
} else {
|
2007-05-09 09:00:38 +08:00
|
|
|
kfree(qp->sq.wrid);
|
|
|
|
kfree(qp->rq.wrid);
|
|
|
|
}
|
|
|
|
|
|
|
|
err_mtt:
|
|
|
|
mlx4_mtt_cleanup(dev->dev, &qp->mtt);
|
|
|
|
|
|
|
|
err_buf:
|
|
|
|
if (pd->uobject)
|
|
|
|
ib_umem_release(qp->umem);
|
|
|
|
else
|
|
|
|
mlx4_buf_free(dev->dev, qp->buf_size, &qp->buf);
|
|
|
|
|
|
|
|
err_db:
|
2011-06-03 02:32:15 +08:00
|
|
|
if (!pd->uobject && qp_has_rq(init_attr))
|
2008-04-24 02:55:45 +08:00
|
|
|
mlx4_db_free(dev->dev, &qp->db);
|
2007-05-09 09:00:38 +08:00
|
|
|
|
|
|
|
err:
|
2012-08-03 16:40:40 +08:00
|
|
|
if (!*caller_qp)
|
|
|
|
kfree(qp);
|
2007-05-09 09:00:38 +08:00
|
|
|
return err;
|
|
|
|
}
|
|
|
|
|
|
|
|
static enum mlx4_qp_state to_mlx4_state(enum ib_qp_state state)
|
|
|
|
{
|
|
|
|
switch (state) {
|
|
|
|
case IB_QPS_RESET: return MLX4_QP_STATE_RST;
|
|
|
|
case IB_QPS_INIT: return MLX4_QP_STATE_INIT;
|
|
|
|
case IB_QPS_RTR: return MLX4_QP_STATE_RTR;
|
|
|
|
case IB_QPS_RTS: return MLX4_QP_STATE_RTS;
|
|
|
|
case IB_QPS_SQD: return MLX4_QP_STATE_SQD;
|
|
|
|
case IB_QPS_SQE: return MLX4_QP_STATE_SQER;
|
|
|
|
case IB_QPS_ERR: return MLX4_QP_STATE_ERR;
|
|
|
|
default: return -1;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
static void mlx4_ib_lock_cqs(struct mlx4_ib_cq *send_cq, struct mlx4_ib_cq *recv_cq)
|
2009-09-06 11:24:49 +08:00
|
|
|
__acquires(&send_cq->lock) __acquires(&recv_cq->lock)
|
2007-05-09 09:00:38 +08:00
|
|
|
{
|
2009-09-06 11:24:49 +08:00
|
|
|
if (send_cq == recv_cq) {
|
2007-05-09 09:00:38 +08:00
|
|
|
spin_lock_irq(&send_cq->lock);
|
2009-09-06 11:24:49 +08:00
|
|
|
__acquire(&recv_cq->lock);
|
|
|
|
} else if (send_cq->mcq.cqn < recv_cq->mcq.cqn) {
|
2007-05-09 09:00:38 +08:00
|
|
|
spin_lock_irq(&send_cq->lock);
|
|
|
|
spin_lock_nested(&recv_cq->lock, SINGLE_DEPTH_NESTING);
|
|
|
|
} else {
|
|
|
|
spin_lock_irq(&recv_cq->lock);
|
|
|
|
spin_lock_nested(&send_cq->lock, SINGLE_DEPTH_NESTING);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
static void mlx4_ib_unlock_cqs(struct mlx4_ib_cq *send_cq, struct mlx4_ib_cq *recv_cq)
|
2009-09-06 11:24:49 +08:00
|
|
|
__releases(&send_cq->lock) __releases(&recv_cq->lock)
|
2007-05-09 09:00:38 +08:00
|
|
|
{
|
2009-09-06 11:24:49 +08:00
|
|
|
if (send_cq == recv_cq) {
|
|
|
|
__release(&recv_cq->lock);
|
2007-05-09 09:00:38 +08:00
|
|
|
spin_unlock_irq(&send_cq->lock);
|
2009-09-06 11:24:49 +08:00
|
|
|
} else if (send_cq->mcq.cqn < recv_cq->mcq.cqn) {
|
2007-05-09 09:00:38 +08:00
|
|
|
spin_unlock(&recv_cq->lock);
|
|
|
|
spin_unlock_irq(&send_cq->lock);
|
|
|
|
} else {
|
|
|
|
spin_unlock(&send_cq->lock);
|
|
|
|
spin_unlock_irq(&recv_cq->lock);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2010-10-25 12:08:52 +08:00
|
|
|
static void del_gid_entries(struct mlx4_ib_qp *qp)
|
|
|
|
{
|
|
|
|
struct mlx4_ib_gid_entry *ge, *tmp;
|
|
|
|
|
|
|
|
list_for_each_entry_safe(ge, tmp, &qp->gid_list, list) {
|
|
|
|
list_del(&ge->list);
|
|
|
|
kfree(ge);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2011-06-03 02:32:15 +08:00
|
|
|
static struct mlx4_ib_pd *get_pd(struct mlx4_ib_qp *qp)
|
|
|
|
{
|
|
|
|
if (qp->ibqp.qp_type == IB_QPT_XRC_TGT)
|
|
|
|
return to_mpd(to_mxrcd(qp->ibqp.xrcd)->pd);
|
|
|
|
else
|
|
|
|
return to_mpd(qp->ibqp.pd);
|
|
|
|
}
|
|
|
|
|
|
|
|
static void get_cqs(struct mlx4_ib_qp *qp,
|
|
|
|
struct mlx4_ib_cq **send_cq, struct mlx4_ib_cq **recv_cq)
|
|
|
|
{
|
|
|
|
switch (qp->ibqp.qp_type) {
|
|
|
|
case IB_QPT_XRC_TGT:
|
|
|
|
*send_cq = to_mcq(to_mxrcd(qp->ibqp.xrcd)->cq);
|
|
|
|
*recv_cq = *send_cq;
|
|
|
|
break;
|
|
|
|
case IB_QPT_XRC_INI:
|
|
|
|
*send_cq = to_mcq(qp->ibqp.send_cq);
|
|
|
|
*recv_cq = *send_cq;
|
|
|
|
break;
|
|
|
|
default:
|
|
|
|
*send_cq = to_mcq(qp->ibqp.send_cq);
|
|
|
|
*recv_cq = to_mcq(qp->ibqp.recv_cq);
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2007-05-09 09:00:38 +08:00
|
|
|
static void destroy_qp_common(struct mlx4_ib_dev *dev, struct mlx4_ib_qp *qp,
|
|
|
|
int is_user)
|
|
|
|
{
|
|
|
|
struct mlx4_ib_cq *send_cq, *recv_cq;
|
|
|
|
|
|
|
|
if (qp->state != IB_QPS_RESET)
|
|
|
|
if (mlx4_qp_modify(dev->dev, NULL, to_mlx4_state(qp->state),
|
|
|
|
MLX4_QP_STATE_RST, NULL, 0, 0, &qp->mqp))
|
2012-04-29 22:04:26 +08:00
|
|
|
pr_warn("modify QP %06x to RESET failed.\n",
|
2007-05-09 09:00:38 +08:00
|
|
|
qp->mqp.qpn);
|
|
|
|
|
2011-06-03 02:32:15 +08:00
|
|
|
get_cqs(qp, &send_cq, &recv_cq);
|
2007-05-09 09:00:38 +08:00
|
|
|
|
|
|
|
mlx4_ib_lock_cqs(send_cq, recv_cq);
|
|
|
|
|
|
|
|
if (!is_user) {
|
|
|
|
__mlx4_ib_cq_clean(recv_cq, qp->mqp.qpn,
|
|
|
|
qp->ibqp.srq ? to_msrq(qp->ibqp.srq): NULL);
|
|
|
|
if (send_cq != recv_cq)
|
|
|
|
__mlx4_ib_cq_clean(send_cq, qp->mqp.qpn, NULL);
|
|
|
|
}
|
|
|
|
|
|
|
|
mlx4_qp_remove(dev->dev, &qp->mqp);
|
|
|
|
|
|
|
|
mlx4_ib_unlock_cqs(send_cq, recv_cq);
|
|
|
|
|
|
|
|
mlx4_qp_free(dev->dev, &qp->mqp);
|
2008-10-11 03:01:37 +08:00
|
|
|
|
2012-08-03 16:40:40 +08:00
|
|
|
if (!is_sqp(dev, qp) && !is_tunnel_qp(dev, qp))
|
2008-10-11 03:01:37 +08:00
|
|
|
mlx4_qp_release_range(dev->dev, qp->mqp.qpn, 1);
|
|
|
|
|
2007-05-09 09:00:38 +08:00
|
|
|
mlx4_mtt_cleanup(dev->dev, &qp->mtt);
|
|
|
|
|
|
|
|
if (is_user) {
|
2011-06-03 02:32:15 +08:00
|
|
|
if (qp->rq.wqe_cnt)
|
2007-05-24 06:16:08 +08:00
|
|
|
mlx4_ib_db_unmap_user(to_mucontext(qp->ibqp.uobject->context),
|
|
|
|
&qp->db);
|
2007-05-09 09:00:38 +08:00
|
|
|
ib_umem_release(qp->umem);
|
|
|
|
} else {
|
|
|
|
kfree(qp->sq.wrid);
|
|
|
|
kfree(qp->rq.wrid);
|
2012-08-03 16:40:40 +08:00
|
|
|
if (qp->mlx4_ib_qp_type & (MLX4_IB_QPT_PROXY_SMI_OWNER |
|
|
|
|
MLX4_IB_QPT_PROXY_SMI | MLX4_IB_QPT_PROXY_GSI))
|
|
|
|
free_proxy_bufs(&dev->ib_dev, qp);
|
2007-05-09 09:00:38 +08:00
|
|
|
mlx4_buf_free(dev->dev, qp->buf_size, &qp->buf);
|
2011-06-03 02:32:15 +08:00
|
|
|
if (qp->rq.wqe_cnt)
|
2008-04-24 02:55:45 +08:00
|
|
|
mlx4_db_free(dev->dev, &qp->db);
|
2007-05-09 09:00:38 +08:00
|
|
|
}
|
2010-10-25 12:08:52 +08:00
|
|
|
|
|
|
|
del_gid_entries(qp);
|
2007-05-09 09:00:38 +08:00
|
|
|
}
|
|
|
|
|
mlx4: Modify proxy/tunnel QP mechanism so that guests do no calculations
Previously, the structure of a guest's proxy QPs followed the
structure of the PPF special qps (qp0 port 1, qp0 port 2, qp1 port 1,
qp1 port 2, ...). The guest then did offset calculations on the
sqp_base qp number that the PPF passed to it in QUERY_FUNC_CAP().
This is now changed so that the guest does no offset calculations
regarding proxy or tunnel QPs to use. This change frees the PPF from
needing to adhere to a specific order in allocating proxy and tunnel
QPs.
Now QUERY_FUNC_CAP provides each port individually with its proxy
qp0, proxy qp1, tunnel qp0, and tunnel qp1 QP numbers, and these are
used directly where required (with no offset calculations).
To accomplish this change, several fields were added to the phys_caps
structure for use by the PPF and by non-SR-IOV mode:
base_sqpn -- in non-sriov mode, this was formerly sqp_start.
base_proxy_sqpn -- the first physical proxy qp number -- used by PPF
base_tunnel_sqpn -- the first physical tunnel qp number -- used by PPF.
The current code in the PPF still adheres to the previous layout of
sqps, proxy-sqps and tunnel-sqps. However, the PPF can change this
layout without affecting VF or (paravirtualized) PF code.
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Roland Dreier <roland@purestorage.com>
2012-08-03 16:40:57 +08:00
|
|
|
static u32 get_sqp_num(struct mlx4_ib_dev *dev, struct ib_qp_init_attr *attr)
|
|
|
|
{
|
|
|
|
/* Native or PPF */
|
|
|
|
if (!mlx4_is_mfunc(dev->dev) ||
|
|
|
|
(mlx4_is_master(dev->dev) &&
|
|
|
|
attr->create_flags & MLX4_IB_SRIOV_SQP)) {
|
|
|
|
return dev->dev->phys_caps.base_sqpn +
|
|
|
|
(attr->qp_type == IB_QPT_SMI ? 0 : 2) +
|
|
|
|
attr->port_num - 1;
|
|
|
|
}
|
|
|
|
/* PF or VF -- creating proxies */
|
|
|
|
if (attr->qp_type == IB_QPT_SMI)
|
|
|
|
return dev->dev->caps.qp0_proxy[attr->port_num - 1];
|
|
|
|
else
|
|
|
|
return dev->dev->caps.qp1_proxy[attr->port_num - 1];
|
|
|
|
}
|
|
|
|
|
2007-05-09 09:00:38 +08:00
|
|
|
struct ib_qp *mlx4_ib_create_qp(struct ib_pd *pd,
|
|
|
|
struct ib_qp_init_attr *init_attr,
|
|
|
|
struct ib_udata *udata)
|
|
|
|
{
|
2012-08-03 16:40:40 +08:00
|
|
|
struct mlx4_ib_qp *qp = NULL;
|
2007-05-09 09:00:38 +08:00
|
|
|
int err;
|
2011-06-03 02:32:15 +08:00
|
|
|
u16 xrcdn = 0;
|
2007-05-09 09:00:38 +08:00
|
|
|
|
2008-07-15 14:48:48 +08:00
|
|
|
/*
|
2012-08-03 16:40:40 +08:00
|
|
|
* We only support LSO, vendor flag1, and multicast loopback blocking,
|
|
|
|
* and only for kernel UD QPs.
|
2008-07-15 14:48:48 +08:00
|
|
|
*/
|
2012-08-03 16:40:40 +08:00
|
|
|
if (init_attr->create_flags & ~(MLX4_IB_QP_LSO |
|
|
|
|
MLX4_IB_QP_BLOCK_MULTICAST_LOOPBACK |
|
|
|
|
MLX4_IB_SRIOV_TUNNEL_QP | MLX4_IB_SRIOV_SQP))
|
2008-04-17 12:09:27 +08:00
|
|
|
return ERR_PTR(-EINVAL);
|
2008-07-15 14:48:48 +08:00
|
|
|
|
|
|
|
if (init_attr->create_flags &&
|
2012-08-03 16:40:40 +08:00
|
|
|
(udata ||
|
|
|
|
((init_attr->create_flags & ~MLX4_IB_SRIOV_SQP) &&
|
|
|
|
init_attr->qp_type != IB_QPT_UD) ||
|
|
|
|
((init_attr->create_flags & MLX4_IB_SRIOV_SQP) &&
|
|
|
|
init_attr->qp_type > IB_QPT_GSI)))
|
2008-04-17 12:09:27 +08:00
|
|
|
return ERR_PTR(-EINVAL);
|
|
|
|
|
2007-05-09 09:00:38 +08:00
|
|
|
switch (init_attr->qp_type) {
|
2011-06-03 02:32:15 +08:00
|
|
|
case IB_QPT_XRC_TGT:
|
|
|
|
pd = to_mxrcd(init_attr->xrcd)->pd;
|
|
|
|
xrcdn = to_mxrcd(init_attr->xrcd)->xrcdn;
|
|
|
|
init_attr->send_cq = to_mxrcd(init_attr->xrcd)->cq;
|
|
|
|
/* fall through */
|
|
|
|
case IB_QPT_XRC_INI:
|
|
|
|
if (!(to_mdev(pd->device)->dev->caps.flags & MLX4_DEV_CAP_FLAG_XRC))
|
|
|
|
return ERR_PTR(-ENOSYS);
|
|
|
|
init_attr->recv_cq = init_attr->send_cq;
|
|
|
|
/* fall through */
|
2007-05-09 09:00:38 +08:00
|
|
|
case IB_QPT_RC:
|
|
|
|
case IB_QPT_UC:
|
2012-01-17 19:39:07 +08:00
|
|
|
case IB_QPT_RAW_PACKET:
|
2008-07-15 14:48:53 +08:00
|
|
|
qp = kzalloc(sizeof *qp, GFP_KERNEL);
|
2007-05-09 09:00:38 +08:00
|
|
|
if (!qp)
|
|
|
|
return ERR_PTR(-ENOMEM);
|
2012-08-03 16:40:40 +08:00
|
|
|
/* fall through */
|
|
|
|
case IB_QPT_UD:
|
|
|
|
{
|
|
|
|
err = create_qp_common(to_mdev(pd->device), pd, init_attr,
|
|
|
|
udata, 0, &qp);
|
|
|
|
if (err)
|
2007-05-09 09:00:38 +08:00
|
|
|
return ERR_PTR(err);
|
|
|
|
|
|
|
|
qp->ibqp.qp_num = qp->mqp.qpn;
|
2011-06-03 02:32:15 +08:00
|
|
|
qp->xrcdn = xrcdn;
|
2007-05-09 09:00:38 +08:00
|
|
|
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
case IB_QPT_SMI:
|
|
|
|
case IB_QPT_GSI:
|
|
|
|
{
|
|
|
|
/* Userspace is not allowed to create special QPs: */
|
2011-06-03 02:32:15 +08:00
|
|
|
if (udata)
|
2007-05-09 09:00:38 +08:00
|
|
|
return ERR_PTR(-EINVAL);
|
|
|
|
|
2011-06-03 02:32:15 +08:00
|
|
|
err = create_qp_common(to_mdev(pd->device), pd, init_attr, udata,
|
2012-08-03 16:40:57 +08:00
|
|
|
get_sqp_num(to_mdev(pd->device), init_attr),
|
2012-08-03 16:40:40 +08:00
|
|
|
&qp);
|
|
|
|
if (err)
|
2007-05-09 09:00:38 +08:00
|
|
|
return ERR_PTR(err);
|
|
|
|
|
|
|
|
qp->port = init_attr->port_num;
|
|
|
|
qp->ibqp.qp_num = init_attr->qp_type == IB_QPT_SMI ? 0 : 1;
|
|
|
|
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
default:
|
|
|
|
/* Don't support raw QPs */
|
|
|
|
return ERR_PTR(-EINVAL);
|
|
|
|
}
|
|
|
|
|
|
|
|
return &qp->ibqp;
|
|
|
|
}
|
|
|
|
|
|
|
|
int mlx4_ib_destroy_qp(struct ib_qp *qp)
|
|
|
|
{
|
|
|
|
struct mlx4_ib_dev *dev = to_mdev(qp->device);
|
|
|
|
struct mlx4_ib_qp *mqp = to_mqp(qp);
|
2011-06-03 02:32:15 +08:00
|
|
|
struct mlx4_ib_pd *pd;
|
2007-05-09 09:00:38 +08:00
|
|
|
|
|
|
|
if (is_qp0(dev, mqp))
|
|
|
|
mlx4_CLOSE_PORT(dev->dev, mqp->port);
|
|
|
|
|
2011-06-03 02:32:15 +08:00
|
|
|
pd = get_pd(mqp);
|
|
|
|
destroy_qp_common(dev, mqp, !!pd->ibpd.uobject);
|
2007-05-09 09:00:38 +08:00
|
|
|
|
|
|
|
if (is_sqp(dev, mqp))
|
|
|
|
kfree(to_msqp(mqp));
|
|
|
|
else
|
|
|
|
kfree(mqp);
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2012-08-03 16:40:40 +08:00
|
|
|
static int to_mlx4_st(struct mlx4_ib_dev *dev, enum mlx4_ib_qp_type type)
|
2007-05-09 09:00:38 +08:00
|
|
|
{
|
|
|
|
switch (type) {
|
2012-08-03 16:40:40 +08:00
|
|
|
case MLX4_IB_QPT_RC: return MLX4_QP_ST_RC;
|
|
|
|
case MLX4_IB_QPT_UC: return MLX4_QP_ST_UC;
|
|
|
|
case MLX4_IB_QPT_UD: return MLX4_QP_ST_UD;
|
|
|
|
case MLX4_IB_QPT_XRC_INI:
|
|
|
|
case MLX4_IB_QPT_XRC_TGT: return MLX4_QP_ST_XRC;
|
|
|
|
case MLX4_IB_QPT_SMI:
|
|
|
|
case MLX4_IB_QPT_GSI:
|
|
|
|
case MLX4_IB_QPT_RAW_PACKET: return MLX4_QP_ST_MLX;
|
|
|
|
|
|
|
|
case MLX4_IB_QPT_PROXY_SMI_OWNER:
|
|
|
|
case MLX4_IB_QPT_TUN_SMI_OWNER: return (mlx4_is_mfunc(dev->dev) ?
|
|
|
|
MLX4_QP_ST_MLX : -1);
|
|
|
|
case MLX4_IB_QPT_PROXY_SMI:
|
|
|
|
case MLX4_IB_QPT_TUN_SMI:
|
|
|
|
case MLX4_IB_QPT_PROXY_GSI:
|
|
|
|
case MLX4_IB_QPT_TUN_GSI: return (mlx4_is_mfunc(dev->dev) ?
|
|
|
|
MLX4_QP_ST_UD : -1);
|
|
|
|
default: return -1;
|
2007-05-09 09:00:38 +08:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2007-05-14 12:26:51 +08:00
|
|
|
static __be32 to_mlx4_access_flags(struct mlx4_ib_qp *qp, const struct ib_qp_attr *attr,
|
2007-05-09 09:00:38 +08:00
|
|
|
int attr_mask)
|
|
|
|
{
|
|
|
|
u8 dest_rd_atomic;
|
|
|
|
u32 access_flags;
|
|
|
|
u32 hw_access_flags = 0;
|
|
|
|
|
|
|
|
if (attr_mask & IB_QP_MAX_DEST_RD_ATOMIC)
|
|
|
|
dest_rd_atomic = attr->max_dest_rd_atomic;
|
|
|
|
else
|
|
|
|
dest_rd_atomic = qp->resp_depth;
|
|
|
|
|
|
|
|
if (attr_mask & IB_QP_ACCESS_FLAGS)
|
|
|
|
access_flags = attr->qp_access_flags;
|
|
|
|
else
|
|
|
|
access_flags = qp->atomic_rd_en;
|
|
|
|
|
|
|
|
if (!dest_rd_atomic)
|
|
|
|
access_flags &= IB_ACCESS_REMOTE_WRITE;
|
|
|
|
|
|
|
|
if (access_flags & IB_ACCESS_REMOTE_READ)
|
|
|
|
hw_access_flags |= MLX4_QP_BIT_RRE;
|
|
|
|
if (access_flags & IB_ACCESS_REMOTE_ATOMIC)
|
|
|
|
hw_access_flags |= MLX4_QP_BIT_RAE;
|
|
|
|
if (access_flags & IB_ACCESS_REMOTE_WRITE)
|
|
|
|
hw_access_flags |= MLX4_QP_BIT_RWE;
|
|
|
|
|
|
|
|
return cpu_to_be32(hw_access_flags);
|
|
|
|
}
|
|
|
|
|
2007-05-14 12:26:51 +08:00
|
|
|
static void store_sqp_attrs(struct mlx4_ib_sqp *sqp, const struct ib_qp_attr *attr,
|
2007-05-09 09:00:38 +08:00
|
|
|
int attr_mask)
|
|
|
|
{
|
|
|
|
if (attr_mask & IB_QP_PKEY_INDEX)
|
|
|
|
sqp->pkey_index = attr->pkey_index;
|
|
|
|
if (attr_mask & IB_QP_QKEY)
|
|
|
|
sqp->qkey = attr->qkey;
|
|
|
|
if (attr_mask & IB_QP_SQ_PSN)
|
|
|
|
sqp->send_psn = attr->sq_psn;
|
|
|
|
}
|
|
|
|
|
|
|
|
static void mlx4_set_sched(struct mlx4_qp_path *path, u8 port)
|
|
|
|
{
|
|
|
|
path->sched_queue = (path->sched_queue & 0xbf) | ((port - 1) << 6);
|
|
|
|
}
|
|
|
|
|
2007-05-14 12:26:51 +08:00
|
|
|
static int mlx4_set_path(struct mlx4_ib_dev *dev, const struct ib_ah_attr *ah,
|
2007-05-09 09:00:38 +08:00
|
|
|
struct mlx4_qp_path *path, u8 port)
|
|
|
|
{
|
2010-10-25 12:08:52 +08:00
|
|
|
int err;
|
|
|
|
int is_eth = rdma_port_get_link_layer(&dev->ib_dev, port) ==
|
|
|
|
IB_LINK_LAYER_ETHERNET;
|
|
|
|
u8 mac[6];
|
|
|
|
int is_mcast;
|
2010-08-26 22:19:22 +08:00
|
|
|
u16 vlan_tag;
|
|
|
|
int vidx;
|
2010-10-25 12:08:52 +08:00
|
|
|
|
2007-05-09 09:00:38 +08:00
|
|
|
path->grh_mylmc = ah->src_path_bits & 0x7f;
|
|
|
|
path->rlid = cpu_to_be16(ah->dlid);
|
|
|
|
if (ah->static_rate) {
|
|
|
|
path->static_rate = ah->static_rate + MLX4_STAT_RATE_OFFSET;
|
|
|
|
while (path->static_rate > IB_RATE_2_5_GBPS + MLX4_STAT_RATE_OFFSET &&
|
|
|
|
!(1 << path->static_rate & dev->dev->caps.stat_rate_support))
|
|
|
|
--path->static_rate;
|
|
|
|
} else
|
|
|
|
path->static_rate = 0;
|
|
|
|
|
|
|
|
if (ah->ah_flags & IB_AH_GRH) {
|
2007-06-18 23:15:02 +08:00
|
|
|
if (ah->grh.sgid_index >= dev->dev->caps.gid_table_len[port]) {
|
2012-04-29 22:04:26 +08:00
|
|
|
pr_err("sgid_index (%u) too large. max is %d\n",
|
2007-06-18 23:15:02 +08:00
|
|
|
ah->grh.sgid_index, dev->dev->caps.gid_table_len[port] - 1);
|
2007-05-09 09:00:38 +08:00
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
|
|
|
|
path->grh_mylmc |= 1 << 7;
|
|
|
|
path->mgid_index = ah->grh.sgid_index;
|
|
|
|
path->hop_limit = ah->grh.hop_limit;
|
|
|
|
path->tclass_flowlabel =
|
|
|
|
cpu_to_be32((ah->grh.traffic_class << 20) |
|
|
|
|
(ah->grh.flow_label));
|
|
|
|
memcpy(path->rgid, ah->grh.dgid.raw, 16);
|
|
|
|
}
|
|
|
|
|
2010-10-25 12:08:52 +08:00
|
|
|
if (is_eth) {
|
2010-08-26 22:19:22 +08:00
|
|
|
path->sched_queue = MLX4_IB_DEFAULT_SCHED_QUEUE |
|
2011-12-11 22:40:05 +08:00
|
|
|
((port - 1) << 6) | ((ah->sl & 7) << 3);
|
2010-08-26 22:19:22 +08:00
|
|
|
|
2010-10-25 12:08:52 +08:00
|
|
|
if (!(ah->ah_flags & IB_AH_GRH))
|
|
|
|
return -1;
|
|
|
|
|
|
|
|
err = mlx4_ib_resolve_grh(dev, ah, mac, &is_mcast, port);
|
|
|
|
if (err)
|
|
|
|
return err;
|
|
|
|
|
|
|
|
memcpy(path->dmac, mac, 6);
|
|
|
|
path->ackto = MLX4_IB_LINK_TYPE_ETH;
|
|
|
|
/* use index 0 into MAC table for IBoE */
|
|
|
|
path->grh_mylmc &= 0x80;
|
2010-08-26 22:19:22 +08:00
|
|
|
|
|
|
|
vlan_tag = rdma_get_vlan_id(&dev->iboe.gid_table[port - 1][ah->grh.sgid_index]);
|
|
|
|
if (vlan_tag < 0x1000) {
|
|
|
|
if (mlx4_find_cached_vlan(dev->dev, port, vlan_tag, &vidx))
|
|
|
|
return -ENOENT;
|
|
|
|
|
|
|
|
path->vlan_index = vidx;
|
|
|
|
path->fl = 1 << 6;
|
|
|
|
}
|
|
|
|
} else
|
|
|
|
path->sched_queue = MLX4_IB_DEFAULT_SCHED_QUEUE |
|
|
|
|
((port - 1) << 6) | ((ah->sl & 0xf) << 2);
|
2010-10-25 12:08:52 +08:00
|
|
|
|
2007-05-09 09:00:38 +08:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2010-10-25 12:08:52 +08:00
|
|
|
static void update_mcg_macs(struct mlx4_ib_dev *dev, struct mlx4_ib_qp *qp)
|
|
|
|
{
|
|
|
|
struct mlx4_ib_gid_entry *ge, *tmp;
|
|
|
|
|
|
|
|
list_for_each_entry_safe(ge, tmp, &qp->gid_list, list) {
|
|
|
|
if (!ge->added && mlx4_ib_add_mc(dev, qp, &ge->gid)) {
|
|
|
|
ge->added = 1;
|
|
|
|
ge->port = qp->port;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2007-05-14 12:26:51 +08:00
|
|
|
static int __mlx4_ib_modify_qp(struct ib_qp *ibqp,
|
|
|
|
const struct ib_qp_attr *attr, int attr_mask,
|
|
|
|
enum ib_qp_state cur_state, enum ib_qp_state new_state)
|
2007-05-09 09:00:38 +08:00
|
|
|
{
|
|
|
|
struct mlx4_ib_dev *dev = to_mdev(ibqp->device);
|
|
|
|
struct mlx4_ib_qp *qp = to_mqp(ibqp);
|
2011-06-03 02:32:15 +08:00
|
|
|
struct mlx4_ib_pd *pd;
|
|
|
|
struct mlx4_ib_cq *send_cq, *recv_cq;
|
2007-05-09 09:00:38 +08:00
|
|
|
struct mlx4_qp_context *context;
|
|
|
|
enum mlx4_qp_optpar optpar = 0;
|
|
|
|
int sqd_event;
|
|
|
|
int err = -EINVAL;
|
|
|
|
|
|
|
|
context = kzalloc(sizeof *context, GFP_KERNEL);
|
|
|
|
if (!context)
|
|
|
|
return -ENOMEM;
|
|
|
|
|
|
|
|
context->flags = cpu_to_be32((to_mlx4_state(new_state) << 28) |
|
2012-08-03 16:40:40 +08:00
|
|
|
(to_mlx4_st(dev, qp->mlx4_ib_qp_type) << 16));
|
2007-05-09 09:00:38 +08:00
|
|
|
|
|
|
|
if (!(attr_mask & IB_QP_PATH_MIG_STATE))
|
|
|
|
context->flags |= cpu_to_be32(MLX4_QP_PM_MIGRATED << 11);
|
|
|
|
else {
|
|
|
|
optpar |= MLX4_QP_OPTPAR_PM_STATE;
|
|
|
|
switch (attr->path_mig_state) {
|
|
|
|
case IB_MIG_MIGRATED:
|
|
|
|
context->flags |= cpu_to_be32(MLX4_QP_PM_MIGRATED << 11);
|
|
|
|
break;
|
|
|
|
case IB_MIG_REARM:
|
|
|
|
context->flags |= cpu_to_be32(MLX4_QP_PM_REARM << 11);
|
|
|
|
break;
|
|
|
|
case IB_MIG_ARMED:
|
|
|
|
context->flags |= cpu_to_be32(MLX4_QP_PM_ARMED << 11);
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2008-04-17 12:09:27 +08:00
|
|
|
if (ibqp->qp_type == IB_QPT_GSI || ibqp->qp_type == IB_QPT_SMI)
|
2007-05-09 09:00:38 +08:00
|
|
|
context->mtu_msgmax = (IB_MTU_4096 << 5) | 11;
|
2012-01-17 19:39:07 +08:00
|
|
|
else if (ibqp->qp_type == IB_QPT_RAW_PACKET)
|
|
|
|
context->mtu_msgmax = (MLX4_RAW_QP_MTU << 5) | MLX4_RAW_QP_MSGMAX;
|
2008-04-17 12:09:27 +08:00
|
|
|
else if (ibqp->qp_type == IB_QPT_UD) {
|
|
|
|
if (qp->flags & MLX4_IB_QP_LSO)
|
|
|
|
context->mtu_msgmax = (IB_MTU_4096 << 5) |
|
|
|
|
ilog2(dev->dev->caps.max_gso_sz);
|
|
|
|
else
|
2008-08-08 05:06:50 +08:00
|
|
|
context->mtu_msgmax = (IB_MTU_4096 << 5) | 12;
|
2008-04-17 12:09:27 +08:00
|
|
|
} else if (attr_mask & IB_QP_PATH_MTU) {
|
2007-05-09 09:00:38 +08:00
|
|
|
if (attr->path_mtu < IB_MTU_256 || attr->path_mtu > IB_MTU_4096) {
|
2012-04-29 22:04:26 +08:00
|
|
|
pr_err("path MTU (%u) is invalid\n",
|
2007-05-09 09:00:38 +08:00
|
|
|
attr->path_mtu);
|
2007-07-20 03:58:09 +08:00
|
|
|
goto out;
|
2007-05-09 09:00:38 +08:00
|
|
|
}
|
2008-07-15 14:48:45 +08:00
|
|
|
context->mtu_msgmax = (attr->path_mtu << 5) |
|
|
|
|
ilog2(dev->dev->caps.max_msg_sz);
|
2007-05-09 09:00:38 +08:00
|
|
|
}
|
|
|
|
|
2007-06-18 23:13:48 +08:00
|
|
|
if (qp->rq.wqe_cnt)
|
|
|
|
context->rq_size_stride = ilog2(qp->rq.wqe_cnt) << 3;
|
2007-05-09 09:00:38 +08:00
|
|
|
context->rq_size_stride |= qp->rq.wqe_shift - 4;
|
|
|
|
|
2007-06-18 23:13:48 +08:00
|
|
|
if (qp->sq.wqe_cnt)
|
|
|
|
context->sq_size_stride = ilog2(qp->sq.wqe_cnt) << 3;
|
2007-05-09 09:00:38 +08:00
|
|
|
context->sq_size_stride |= qp->sq.wqe_shift - 4;
|
|
|
|
|
2011-06-03 02:32:15 +08:00
|
|
|
if (cur_state == IB_QPS_RESET && new_state == IB_QPS_INIT) {
|
2007-06-18 23:13:48 +08:00
|
|
|
context->sq_size_stride |= !!qp->sq_no_prefetch << 7;
|
2011-06-03 02:32:15 +08:00
|
|
|
context->xrcd = cpu_to_be32((u32) qp->xrcdn);
|
2013-04-21 23:10:00 +08:00
|
|
|
if (ibqp->qp_type == IB_QPT_RAW_PACKET)
|
|
|
|
context->param3 |= cpu_to_be32(1 << 30);
|
2011-06-03 02:32:15 +08:00
|
|
|
}
|
2007-06-18 23:13:48 +08:00
|
|
|
|
2007-05-09 09:00:38 +08:00
|
|
|
if (qp->ibqp.uobject)
|
|
|
|
context->usr_page = cpu_to_be32(to_mucontext(ibqp->uobject->context)->uar.index);
|
|
|
|
else
|
|
|
|
context->usr_page = cpu_to_be32(dev->priv_uar.index);
|
|
|
|
|
|
|
|
if (attr_mask & IB_QP_DEST_QPN)
|
|
|
|
context->remote_qpn = cpu_to_be32(attr->dest_qp_num);
|
|
|
|
|
|
|
|
if (attr_mask & IB_QP_PORT) {
|
|
|
|
if (cur_state == IB_QPS_SQD && new_state == IB_QPS_SQD &&
|
|
|
|
!(attr_mask & IB_QP_AV)) {
|
|
|
|
mlx4_set_sched(&context->pri_path, attr->port_num);
|
|
|
|
optpar |= MLX4_QP_OPTPAR_SCHED_QUEUE;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2011-06-15 22:49:57 +08:00
|
|
|
if (cur_state == IB_QPS_INIT && new_state == IB_QPS_RTR) {
|
|
|
|
if (dev->counters[qp->port - 1] != -1) {
|
|
|
|
context->pri_path.counter_index =
|
|
|
|
dev->counters[qp->port - 1];
|
|
|
|
optpar |= MLX4_QP_OPTPAR_COUNTER_INDEX;
|
|
|
|
} else
|
|
|
|
context->pri_path.counter_index = 0xff;
|
|
|
|
}
|
|
|
|
|
2007-05-09 09:00:38 +08:00
|
|
|
if (attr_mask & IB_QP_PKEY_INDEX) {
|
2012-08-03 16:40:40 +08:00
|
|
|
if (qp->mlx4_ib_qp_type & MLX4_IB_QPT_ANY_SRIOV)
|
|
|
|
context->pri_path.disable_pkey_check = 0x40;
|
2007-05-09 09:00:38 +08:00
|
|
|
context->pri_path.pkey_index = attr->pkey_index;
|
|
|
|
optpar |= MLX4_QP_OPTPAR_PKEY_INDEX;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (attr_mask & IB_QP_AV) {
|
|
|
|
if (mlx4_set_path(dev, &attr->ah_attr, &context->pri_path,
|
2012-08-03 16:40:40 +08:00
|
|
|
attr_mask & IB_QP_PORT ?
|
|
|
|
attr->port_num : qp->port))
|
2007-05-09 09:00:38 +08:00
|
|
|
goto out;
|
|
|
|
|
|
|
|
optpar |= (MLX4_QP_OPTPAR_PRIMARY_ADDR_PATH |
|
|
|
|
MLX4_QP_OPTPAR_SCHED_QUEUE);
|
|
|
|
}
|
|
|
|
|
|
|
|
if (attr_mask & IB_QP_TIMEOUT) {
|
2010-10-25 12:08:52 +08:00
|
|
|
context->pri_path.ackto |= attr->timeout << 3;
|
2007-05-09 09:00:38 +08:00
|
|
|
optpar |= MLX4_QP_OPTPAR_ACK_TIMEOUT;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (attr_mask & IB_QP_ALT_PATH) {
|
|
|
|
if (attr->alt_port_num == 0 ||
|
|
|
|
attr->alt_port_num > dev->dev->caps.num_ports)
|
2007-07-20 03:58:09 +08:00
|
|
|
goto out;
|
2007-05-09 09:00:38 +08:00
|
|
|
|
2007-06-18 23:15:02 +08:00
|
|
|
if (attr->alt_pkey_index >=
|
|
|
|
dev->dev->caps.pkey_table_len[attr->alt_port_num])
|
2007-07-20 03:58:09 +08:00
|
|
|
goto out;
|
2007-06-18 23:15:02 +08:00
|
|
|
|
2007-05-09 09:00:38 +08:00
|
|
|
if (mlx4_set_path(dev, &attr->alt_ah_attr, &context->alt_path,
|
|
|
|
attr->alt_port_num))
|
2007-07-20 03:58:09 +08:00
|
|
|
goto out;
|
2007-05-09 09:00:38 +08:00
|
|
|
|
|
|
|
context->alt_path.pkey_index = attr->alt_pkey_index;
|
|
|
|
context->alt_path.ackto = attr->alt_timeout << 3;
|
|
|
|
optpar |= MLX4_QP_OPTPAR_ALT_ADDR_PATH;
|
|
|
|
}
|
|
|
|
|
2011-06-03 02:32:15 +08:00
|
|
|
pd = get_pd(qp);
|
|
|
|
get_cqs(qp, &send_cq, &recv_cq);
|
|
|
|
context->pd = cpu_to_be32(pd->pdn);
|
|
|
|
context->cqn_send = cpu_to_be32(send_cq->mcq.cqn);
|
|
|
|
context->cqn_recv = cpu_to_be32(recv_cq->mcq.cqn);
|
|
|
|
context->params1 = cpu_to_be32(MLX4_IB_ACK_REQ_FREQ << 28);
|
2007-06-07 00:35:04 +08:00
|
|
|
|
2008-07-23 23:12:26 +08:00
|
|
|
/* Set "fast registration enabled" for all kernel QPs */
|
|
|
|
if (!qp->ibqp.uobject)
|
|
|
|
context->params1 |= cpu_to_be32(1 << 11);
|
|
|
|
|
2007-06-07 00:35:04 +08:00
|
|
|
if (attr_mask & IB_QP_RNR_RETRY) {
|
|
|
|
context->params1 |= cpu_to_be32(attr->rnr_retry << 13);
|
|
|
|
optpar |= MLX4_QP_OPTPAR_RNR_RETRY;
|
|
|
|
}
|
|
|
|
|
2007-05-09 09:00:38 +08:00
|
|
|
if (attr_mask & IB_QP_RETRY_CNT) {
|
|
|
|
context->params1 |= cpu_to_be32(attr->retry_cnt << 16);
|
|
|
|
optpar |= MLX4_QP_OPTPAR_RETRY_COUNT;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (attr_mask & IB_QP_MAX_QP_RD_ATOMIC) {
|
|
|
|
if (attr->max_rd_atomic)
|
|
|
|
context->params1 |=
|
|
|
|
cpu_to_be32(fls(attr->max_rd_atomic - 1) << 21);
|
|
|
|
optpar |= MLX4_QP_OPTPAR_SRA_MAX;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (attr_mask & IB_QP_SQ_PSN)
|
|
|
|
context->next_send_psn = cpu_to_be32(attr->sq_psn);
|
|
|
|
|
|
|
|
if (attr_mask & IB_QP_MAX_DEST_RD_ATOMIC) {
|
|
|
|
if (attr->max_dest_rd_atomic)
|
|
|
|
context->params2 |=
|
|
|
|
cpu_to_be32(fls(attr->max_dest_rd_atomic - 1) << 21);
|
|
|
|
optpar |= MLX4_QP_OPTPAR_RRA_MAX;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (attr_mask & (IB_QP_ACCESS_FLAGS | IB_QP_MAX_DEST_RD_ATOMIC)) {
|
|
|
|
context->params2 |= to_mlx4_access_flags(qp, attr, attr_mask);
|
|
|
|
optpar |= MLX4_QP_OPTPAR_RWE | MLX4_QP_OPTPAR_RRE | MLX4_QP_OPTPAR_RAE;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (ibqp->srq)
|
|
|
|
context->params2 |= cpu_to_be32(MLX4_QP_BIT_RIC);
|
|
|
|
|
|
|
|
if (attr_mask & IB_QP_MIN_RNR_TIMER) {
|
|
|
|
context->rnr_nextrecvpsn |= cpu_to_be32(attr->min_rnr_timer << 24);
|
|
|
|
optpar |= MLX4_QP_OPTPAR_RNR_TIMEOUT;
|
|
|
|
}
|
|
|
|
if (attr_mask & IB_QP_RQ_PSN)
|
|
|
|
context->rnr_nextrecvpsn |= cpu_to_be32(attr->rq_psn);
|
|
|
|
|
	/* proxy and tunnel qp qkeys will be changed in modify-qp wrappers */
	if (attr_mask & IB_QP_QKEY) {
		if (qp->mlx4_ib_qp_type &
		    (MLX4_IB_QPT_PROXY_SMI_OWNER | MLX4_IB_QPT_TUN_SMI_OWNER))
			context->qkey = cpu_to_be32(IB_QP_SET_QKEY);
		else {
			if (mlx4_is_mfunc(dev->dev) &&
			    !(qp->mlx4_ib_qp_type & MLX4_IB_QPT_ANY_SRIOV) &&
			    (attr->qkey & MLX4_RESERVED_QKEY_MASK) ==
			    MLX4_RESERVED_QKEY_BASE) {
				pr_err("Cannot use reserved QKEY"
				       " 0x%x (range 0xffff0000..0xffffffff"
				       " is reserved)\n", attr->qkey);
				err = -EINVAL;
				goto out;
			}
			context->qkey = cpu_to_be32(attr->qkey);
		}
		optpar |= MLX4_QP_OPTPAR_Q_KEY;
	}

	if (ibqp->srq)
		context->srqn = cpu_to_be32(1 << 24 | to_msrq(ibqp->srq)->msrq.srqn);
	if (qp->rq.wqe_cnt && cur_state == IB_QPS_RESET && new_state == IB_QPS_INIT)
		context->db_rec_addr = cpu_to_be64(qp->db.dma);

	if (cur_state == IB_QPS_INIT &&
	    new_state == IB_QPS_RTR &&
	    (ibqp->qp_type == IB_QPT_GSI || ibqp->qp_type == IB_QPT_SMI ||
	     ibqp->qp_type == IB_QPT_UD ||
	     ibqp->qp_type == IB_QPT_RAW_PACKET)) {
		context->pri_path.sched_queue = (qp->port - 1) << 6;
		if (qp->mlx4_ib_qp_type == MLX4_IB_QPT_SMI ||
		    qp->mlx4_ib_qp_type &
		    (MLX4_IB_QPT_PROXY_SMI_OWNER | MLX4_IB_QPT_TUN_SMI_OWNER)) {
			context->pri_path.sched_queue |= MLX4_IB_DEFAULT_QP0_SCHED_QUEUE;
			if (qp->mlx4_ib_qp_type != MLX4_IB_QPT_SMI)
				context->pri_path.fl = 0x80;
		} else {
			if (qp->mlx4_ib_qp_type & MLX4_IB_QPT_ANY_SRIOV)
				context->pri_path.fl = 0x80;
			context->pri_path.sched_queue |= MLX4_IB_DEFAULT_SCHED_QUEUE;
		}
	}

	if (qp->ibqp.qp_type == IB_QPT_RAW_PACKET)
		context->pri_path.ackto = (context->pri_path.ackto & 0xf8) |
					MLX4_IB_LINK_TYPE_ETH;
	if (cur_state == IB_QPS_RTS && new_state == IB_QPS_SQD &&
	    attr_mask & IB_QP_EN_SQD_ASYNC_NOTIFY && attr->en_sqd_async_notify)
		sqd_event = 1;
	else
		sqd_event = 0;

	if (!ibqp->uobject && cur_state == IB_QPS_RESET && new_state == IB_QPS_INIT)
		context->rlkey |= (1 << 4);

	/*
	 * Before passing a kernel QP to the HW, make sure that the
	 * ownership bits of the send queue are set and the SQ
	 * headroom is stamped so that the hardware doesn't start
	 * processing stale work requests.
	 */
	if (!ibqp->uobject && cur_state == IB_QPS_RESET && new_state == IB_QPS_INIT) {
		struct mlx4_wqe_ctrl_seg *ctrl;
		int i;

		for (i = 0; i < qp->sq.wqe_cnt; ++i) {
			ctrl = get_send_wqe(qp, i);
			ctrl->owner_opcode = cpu_to_be32(1 << 31);
			if (qp->sq_max_wqes_per_wr == 1)
				ctrl->fence_size = 1 << (qp->sq.wqe_shift - 4);

			stamp_send_wqe(qp, i, 1 << qp->sq.wqe_shift);
		}
	}
	err = mlx4_qp_modify(dev->dev, &qp->mtt, to_mlx4_state(cur_state),
			     to_mlx4_state(new_state), context, optpar,
			     sqd_event, &qp->mqp);
	if (err)
		goto out;

	qp->state = new_state;

	if (attr_mask & IB_QP_ACCESS_FLAGS)
		qp->atomic_rd_en = attr->qp_access_flags;
	if (attr_mask & IB_QP_MAX_DEST_RD_ATOMIC)
		qp->resp_depth = attr->max_dest_rd_atomic;
	if (attr_mask & IB_QP_PORT) {
		qp->port = attr->port_num;
		update_mcg_macs(dev, qp);
	}
	if (attr_mask & IB_QP_ALT_PATH)
		qp->alt_port = attr->alt_port_num;

	if (is_sqp(dev, qp))
		store_sqp_attrs(to_msqp(qp), attr, attr_mask);

	/*
	 * If we moved QP0 to RTR, bring the IB link up; if we moved
	 * QP0 to RESET or ERROR, bring the link back down.
	 */
	if (is_qp0(dev, qp)) {
		if (cur_state != IB_QPS_RTR && new_state == IB_QPS_RTR)
			if (mlx4_INIT_PORT(dev->dev, qp->port))
				pr_warn("INIT_PORT failed for port %d\n",
					qp->port);

		if (cur_state != IB_QPS_RESET && cur_state != IB_QPS_ERR &&
		    (new_state == IB_QPS_RESET || new_state == IB_QPS_ERR))
			mlx4_CLOSE_PORT(dev->dev, qp->port);
	}

	/*
	 * If we moved a kernel QP to RESET, clean up all old CQ
	 * entries and reinitialize the QP.
	 */
	if (new_state == IB_QPS_RESET && !ibqp->uobject) {
		mlx4_ib_cq_clean(recv_cq, qp->mqp.qpn,
				 ibqp->srq ? to_msrq(ibqp->srq) : NULL);
		if (send_cq != recv_cq)
			mlx4_ib_cq_clean(send_cq, qp->mqp.qpn, NULL);

		qp->rq.head = 0;
		qp->rq.tail = 0;
		qp->sq.head = 0;
		qp->sq.tail = 0;
		qp->sq_next_wqe = 0;
		if (qp->rq.wqe_cnt)
			*qp->db.db = 0;
	}

out:
	kfree(context);
	return err;
}

int mlx4_ib_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
		      int attr_mask, struct ib_udata *udata)
{
	struct mlx4_ib_dev *dev = to_mdev(ibqp->device);
	struct mlx4_ib_qp *qp = to_mqp(ibqp);
	enum ib_qp_state cur_state, new_state;
	int err = -EINVAL;

	mutex_lock(&qp->mutex);

	cur_state = attr_mask & IB_QP_CUR_STATE ? attr->cur_qp_state : qp->state;
	new_state = attr_mask & IB_QP_STATE ? attr->qp_state : cur_state;

	if (!ib_modify_qp_is_ok(cur_state, new_state, ibqp->qp_type, attr_mask)) {
		pr_debug("qpn 0x%x: invalid attribute mask specified "
			 "for transition %d to %d. qp_type %d,"
			 " attr_mask 0x%x\n",
			 ibqp->qp_num, cur_state, new_state,
			 ibqp->qp_type, attr_mask);
		goto out;
	}

	if ((attr_mask & IB_QP_PORT) &&
	    (attr->port_num == 0 || attr->port_num > dev->num_ports)) {
		pr_debug("qpn 0x%x: invalid port number (%d) specified "
			 "for transition %d to %d. qp_type %d\n",
			 ibqp->qp_num, attr->port_num, cur_state,
			 new_state, ibqp->qp_type);
		goto out;
	}

	if ((attr_mask & IB_QP_PORT) && (ibqp->qp_type == IB_QPT_RAW_PACKET) &&
	    (rdma_port_get_link_layer(&dev->ib_dev, attr->port_num) !=
	     IB_LINK_LAYER_ETHERNET))
		goto out;

	if (attr_mask & IB_QP_PKEY_INDEX) {
		int p = attr_mask & IB_QP_PORT ? attr->port_num : qp->port;
		if (attr->pkey_index >= dev->dev->caps.pkey_table_len[p]) {
			pr_debug("qpn 0x%x: invalid pkey index (%d) specified "
				 "for transition %d to %d. qp_type %d\n",
				 ibqp->qp_num, attr->pkey_index, cur_state,
				 new_state, ibqp->qp_type);
			goto out;
		}
	}

	if (attr_mask & IB_QP_MAX_QP_RD_ATOMIC &&
	    attr->max_rd_atomic > dev->dev->caps.max_qp_init_rdma) {
		pr_debug("qpn 0x%x: max_rd_atomic (%d) too large. "
			 "Transition %d to %d. qp_type %d\n",
			 ibqp->qp_num, attr->max_rd_atomic, cur_state,
			 new_state, ibqp->qp_type);
		goto out;
	}

	if (attr_mask & IB_QP_MAX_DEST_RD_ATOMIC &&
	    attr->max_dest_rd_atomic > dev->dev->caps.max_qp_dest_rdma) {
		pr_debug("qpn 0x%x: max_dest_rd_atomic (%d) too large. "
			 "Transition %d to %d. qp_type %d\n",
			 ibqp->qp_num, attr->max_dest_rd_atomic, cur_state,
			 new_state, ibqp->qp_type);
		goto out;
	}

	if (cur_state == new_state && cur_state == IB_QPS_RESET) {
		err = 0;
		goto out;
	}

	err = __mlx4_ib_modify_qp(ibqp, attr, attr_mask, cur_state, new_state);

out:
	mutex_unlock(&qp->mutex);
	return err;
}

static int build_sriov_qp0_header(struct mlx4_ib_sqp *sqp,
				  struct ib_send_wr *wr,
				  void *wqe, unsigned *mlx_seg_len)
{
	struct mlx4_ib_dev *mdev = to_mdev(sqp->qp.ibqp.device);
	struct ib_device *ib_dev = &mdev->ib_dev;
	struct mlx4_wqe_mlx_seg *mlx = wqe;
	struct mlx4_wqe_inline_seg *inl = wqe + sizeof *mlx;
	struct mlx4_ib_ah *ah = to_mah(wr->wr.ud.ah);
	u16 pkey;
	u32 qkey;
	int send_size;
	int header_size;
	int spc;
	int i;

	if (wr->opcode != IB_WR_SEND)
		return -EINVAL;

	send_size = 0;

	for (i = 0; i < wr->num_sge; ++i)
		send_size += wr->sg_list[i].length;

	/* for proxy-qp0 sends, need to add in size of tunnel header */
	/* for tunnel-qp0 sends, tunnel header is already in s/g list */
	if (sqp->qp.mlx4_ib_qp_type == MLX4_IB_QPT_PROXY_SMI_OWNER)
		send_size += sizeof (struct mlx4_ib_tunnel_header);

	ib_ud_header_init(send_size, 1, 0, 0, 0, 0, &sqp->ud_header);

	if (sqp->qp.mlx4_ib_qp_type == MLX4_IB_QPT_PROXY_SMI_OWNER) {
		sqp->ud_header.lrh.service_level =
			be32_to_cpu(ah->av.ib.sl_tclass_flowlabel) >> 28;
		sqp->ud_header.lrh.destination_lid =
			cpu_to_be16(ah->av.ib.g_slid & 0x7f);
		sqp->ud_header.lrh.source_lid =
			cpu_to_be16(ah->av.ib.g_slid & 0x7f);
	}

	mlx->flags &= cpu_to_be32(MLX4_WQE_CTRL_CQ_UPDATE);

	/* force loopback */
	mlx->flags |= cpu_to_be32(MLX4_WQE_MLX_VL15 | 0x1 | MLX4_WQE_MLX_SLR);
	mlx->rlid = sqp->ud_header.lrh.destination_lid;

	sqp->ud_header.lrh.virtual_lane    = 0;
	sqp->ud_header.bth.solicited_event = !!(wr->send_flags & IB_SEND_SOLICITED);
	ib_get_cached_pkey(ib_dev, sqp->qp.port, 0, &pkey);
	sqp->ud_header.bth.pkey = cpu_to_be16(pkey);
	if (sqp->qp.mlx4_ib_qp_type == MLX4_IB_QPT_TUN_SMI_OWNER)
		sqp->ud_header.bth.destination_qpn = cpu_to_be32(wr->wr.ud.remote_qpn);
	else
		sqp->ud_header.bth.destination_qpn =
			cpu_to_be32(mdev->dev->caps.qp0_tunnel[sqp->qp.port - 1]);

	sqp->ud_header.bth.psn = cpu_to_be32((sqp->send_psn++) & ((1 << 24) - 1));
	if (mlx4_get_parav_qkey(mdev->dev, sqp->qp.mqp.qpn, &qkey))
		return -EINVAL;
	sqp->ud_header.deth.qkey = cpu_to_be32(qkey);
	sqp->ud_header.deth.source_qpn = cpu_to_be32(sqp->qp.mqp.qpn);

	sqp->ud_header.bth.opcode = IB_OPCODE_UD_SEND_ONLY;
	sqp->ud_header.immediate_present = 0;

	header_size = ib_ud_header_pack(&sqp->ud_header, sqp->header_buf);

	/*
	 * Inline data segments may not cross a 64 byte boundary.  If
	 * our UD header is bigger than the space available up to the
	 * next 64 byte boundary in the WQE, use two inline data
	 * segments to hold the UD header.
	 */
	spc = MLX4_INLINE_ALIGN -
	      ((unsigned long) (inl + 1) & (MLX4_INLINE_ALIGN - 1));
	if (header_size <= spc) {
		inl->byte_count = cpu_to_be32(1 << 31 | header_size);
		memcpy(inl + 1, sqp->header_buf, header_size);
		i = 1;
	} else {
		inl->byte_count = cpu_to_be32(1 << 31 | spc);
		memcpy(inl + 1, sqp->header_buf, spc);

		inl = (void *) (inl + 1) + spc;
		memcpy(inl + 1, sqp->header_buf + spc, header_size - spc);
		/*
		 * Need a barrier here to make sure all the data is
		 * visible before the byte_count field is set.
		 * Otherwise the HCA prefetcher could grab the 64-byte
		 * chunk with this inline segment and get a valid (!=
		 * 0xffffffff) byte count but stale data, and end up
		 * generating a packet with bad headers.
		 *
		 * The first inline segment's byte_count field doesn't
		 * need a barrier, because it comes after a
		 * control/MLX segment and therefore is at an offset
		 * of 16 mod 64.
		 */
		wmb();
		inl->byte_count = cpu_to_be32(1 << 31 | (header_size - spc));
		i = 2;
	}

	*mlx_seg_len =
		ALIGN(i * sizeof (struct mlx4_wqe_inline_seg) + header_size, 16);
	return 0;
}

static int build_mlx_header(struct mlx4_ib_sqp *sqp, struct ib_send_wr *wr,
			    void *wqe, unsigned *mlx_seg_len)
{
	struct ib_device *ib_dev = sqp->qp.ibqp.device;
	struct mlx4_wqe_mlx_seg *mlx = wqe;
	struct mlx4_wqe_inline_seg *inl = wqe + sizeof *mlx;
	struct mlx4_ib_ah *ah = to_mah(wr->wr.ud.ah);
	struct net_device *ndev;
	union ib_gid sgid;
	u16 pkey;
	int send_size;
	int header_size;
	int spc;
	int i;
	int err = 0;
	u16 vlan = 0xffff;
	bool is_eth;
	bool is_vlan = false;
	bool is_grh;

	send_size = 0;
	for (i = 0; i < wr->num_sge; ++i)
		send_size += wr->sg_list[i].length;

	is_eth = rdma_port_get_link_layer(sqp->qp.ibqp.device, sqp->qp.port) == IB_LINK_LAYER_ETHERNET;
	is_grh = mlx4_ib_ah_grh_present(ah);
	if (is_eth) {
		if (mlx4_is_mfunc(to_mdev(ib_dev)->dev)) {
			/* When multi-function is enabled, the ib_core gid
			 * indexes don't necessarily match the hw ones, so
			 * we must use our own cache */
			sgid.global.subnet_prefix =
				to_mdev(ib_dev)->sriov.demux[sqp->qp.port - 1].
				subnet_prefix;
			sgid.global.interface_id =
				to_mdev(ib_dev)->sriov.demux[sqp->qp.port - 1].
				guid_cache[ah->av.ib.gid_index];
		} else {
			err = ib_get_cached_gid(ib_dev,
						be32_to_cpu(ah->av.ib.port_pd) >> 24,
						ah->av.ib.gid_index, &sgid);
			if (err)
				return err;
		}

		vlan = rdma_get_vlan_id(&sgid);
		is_vlan = vlan < 0x1000;
	}
	ib_ud_header_init(send_size, !is_eth, is_eth, is_vlan, is_grh, 0, &sqp->ud_header);

	if (!is_eth) {
		sqp->ud_header.lrh.service_level =
			be32_to_cpu(ah->av.ib.sl_tclass_flowlabel) >> 28;
		sqp->ud_header.lrh.destination_lid = ah->av.ib.dlid;
		sqp->ud_header.lrh.source_lid = cpu_to_be16(ah->av.ib.g_slid & 0x7f);
	}

	if (is_grh) {
		sqp->ud_header.grh.traffic_class =
			(be32_to_cpu(ah->av.ib.sl_tclass_flowlabel) >> 20) & 0xff;
		sqp->ud_header.grh.flow_label    =
			ah->av.ib.sl_tclass_flowlabel & cpu_to_be32(0xfffff);
		sqp->ud_header.grh.hop_limit     = ah->av.ib.hop_limit;
		if (mlx4_is_mfunc(to_mdev(ib_dev)->dev)) {
			/* When multi-function is enabled, the ib_core gid
			 * indexes don't necessarily match the hw ones, so
			 * we must use our own cache */
			sqp->ud_header.grh.source_gid.global.subnet_prefix =
				to_mdev(ib_dev)->sriov.demux[sqp->qp.port - 1].
				subnet_prefix;
			sqp->ud_header.grh.source_gid.global.interface_id =
				to_mdev(ib_dev)->sriov.demux[sqp->qp.port - 1].
				guid_cache[ah->av.ib.gid_index];
		} else
			ib_get_cached_gid(ib_dev,
					  be32_to_cpu(ah->av.ib.port_pd) >> 24,
					  ah->av.ib.gid_index,
					  &sqp->ud_header.grh.source_gid);
		memcpy(sqp->ud_header.grh.destination_gid.raw,
		       ah->av.ib.dgid, 16);
	}

	mlx->flags &= cpu_to_be32(MLX4_WQE_CTRL_CQ_UPDATE);

	if (!is_eth) {
		mlx->flags |= cpu_to_be32((!sqp->qp.ibqp.qp_num ? MLX4_WQE_MLX_VL15 : 0) |
					  (sqp->ud_header.lrh.destination_lid ==
					   IB_LID_PERMISSIVE ? MLX4_WQE_MLX_SLR : 0) |
					  (sqp->ud_header.lrh.service_level << 8));
		if (ah->av.ib.port_pd & cpu_to_be32(0x80000000))
			mlx->flags |= cpu_to_be32(0x1); /* force loopback */
		mlx->rlid = sqp->ud_header.lrh.destination_lid;
	}

	switch (wr->opcode) {
	case IB_WR_SEND:
		sqp->ud_header.bth.opcode	 = IB_OPCODE_UD_SEND_ONLY;
		sqp->ud_header.immediate_present = 0;
		break;
	case IB_WR_SEND_WITH_IMM:
		sqp->ud_header.bth.opcode	 = IB_OPCODE_UD_SEND_ONLY_WITH_IMMEDIATE;
		sqp->ud_header.immediate_present = 1;
		sqp->ud_header.immediate_data    = wr->ex.imm_data;
		break;
	default:
		return -EINVAL;
	}

	if (is_eth) {
		u8 *smac;
		u16 pcp = (be32_to_cpu(ah->av.ib.sl_tclass_flowlabel) >> 29) << 13;

		mlx->sched_prio = cpu_to_be16(pcp);

		memcpy(sqp->ud_header.eth.dmac_h, ah->av.eth.mac, 6);
		/* FIXME: cache smac value? */
		ndev = to_mdev(sqp->qp.ibqp.device)->iboe.netdevs[sqp->qp.port - 1];
		if (!ndev)
			return -ENODEV;
		smac = ndev->dev_addr;
		memcpy(sqp->ud_header.eth.smac_h, smac, 6);
		if (!memcmp(sqp->ud_header.eth.smac_h, sqp->ud_header.eth.dmac_h, 6))
			mlx->flags |= cpu_to_be32(MLX4_WQE_CTRL_FORCE_LOOPBACK);
		if (!is_vlan) {
			sqp->ud_header.eth.type = cpu_to_be16(MLX4_IB_IBOE_ETHERTYPE);
		} else {
			sqp->ud_header.vlan.type = cpu_to_be16(MLX4_IB_IBOE_ETHERTYPE);
			sqp->ud_header.vlan.tag = cpu_to_be16(vlan | pcp);
		}
	} else {
		sqp->ud_header.lrh.virtual_lane    = !sqp->qp.ibqp.qp_num ? 15 : 0;
		if (sqp->ud_header.lrh.destination_lid == IB_LID_PERMISSIVE)
			sqp->ud_header.lrh.source_lid = IB_LID_PERMISSIVE;
	}
	sqp->ud_header.bth.solicited_event = !!(wr->send_flags & IB_SEND_SOLICITED);
	if (!sqp->qp.ibqp.qp_num)
		ib_get_cached_pkey(ib_dev, sqp->qp.port, sqp->pkey_index, &pkey);
	else
		ib_get_cached_pkey(ib_dev, sqp->qp.port, wr->wr.ud.pkey_index, &pkey);
	sqp->ud_header.bth.pkey = cpu_to_be16(pkey);
	sqp->ud_header.bth.destination_qpn = cpu_to_be32(wr->wr.ud.remote_qpn);
	sqp->ud_header.bth.psn = cpu_to_be32((sqp->send_psn++) & ((1 << 24) - 1));
	sqp->ud_header.deth.qkey = cpu_to_be32(wr->wr.ud.remote_qkey & 0x80000000 ?
					       sqp->qkey : wr->wr.ud.remote_qkey);
	sqp->ud_header.deth.source_qpn = cpu_to_be32(sqp->qp.ibqp.qp_num);

	header_size = ib_ud_header_pack(&sqp->ud_header, sqp->header_buf);

	if (0) {
		pr_err("built UD header of size %d:\n", header_size);
		for (i = 0; i < header_size / 4; ++i) {
			if (i % 8 == 0)
				pr_err("  [%02x] ", i * 4);
			pr_cont(" %08x",
				be32_to_cpu(((__be32 *) sqp->header_buf)[i]));
			if ((i + 1) % 8 == 0)
				pr_cont("\n");
		}
		pr_err("\n");
	}

	/*
	 * Inline data segments may not cross a 64 byte boundary.  If
	 * our UD header is bigger than the space available up to the
	 * next 64 byte boundary in the WQE, use two inline data
	 * segments to hold the UD header.
	 */
	spc = MLX4_INLINE_ALIGN -
		((unsigned long) (inl + 1) & (MLX4_INLINE_ALIGN - 1));
	if (header_size <= spc) {
		inl->byte_count = cpu_to_be32(1 << 31 | header_size);
		memcpy(inl + 1, sqp->header_buf, header_size);
		i = 1;
	} else {
		inl->byte_count = cpu_to_be32(1 << 31 | spc);
		memcpy(inl + 1, sqp->header_buf, spc);

		inl = (void *) (inl + 1) + spc;
		memcpy(inl + 1, sqp->header_buf + spc, header_size - spc);
		/*
		 * Need a barrier here to make sure all the data is
		 * visible before the byte_count field is set.
		 * Otherwise the HCA prefetcher could grab the 64-byte
		 * chunk with this inline segment and get a valid (!=
		 * 0xffffffff) byte count but stale data, and end up
		 * generating a packet with bad headers.
		 *
		 * The first inline segment's byte_count field doesn't
		 * need a barrier, because it comes after a
		 * control/MLX segment and therefore is at an offset
		 * of 16 mod 64.
		 */
		wmb();
		inl->byte_count = cpu_to_be32(1 << 31 | (header_size - spc));
		i = 2;
	}

	*mlx_seg_len =
		ALIGN(i * sizeof (struct mlx4_wqe_inline_seg) + header_size, 16);
	return 0;
}

static int mlx4_wq_overflow(struct mlx4_ib_wq *wq, int nreq, struct ib_cq *ib_cq)
{
	unsigned cur;
	struct mlx4_ib_cq *cq;

	cur = wq->head - wq->tail;
	if (likely(cur + nreq < wq->max_post))
		return 0;

	cq = to_mcq(ib_cq);
	spin_lock(&cq->lock);
	cur = wq->head - wq->tail;
	spin_unlock(&cq->lock);

	return cur + nreq >= wq->max_post;
}

static __be32 convert_access(int acc)
{
	return (acc & IB_ACCESS_REMOTE_ATOMIC ?
		cpu_to_be32(MLX4_WQE_FMR_AND_BIND_PERM_ATOMIC)       : 0) |
	       (acc & IB_ACCESS_REMOTE_WRITE  ?
		cpu_to_be32(MLX4_WQE_FMR_AND_BIND_PERM_REMOTE_WRITE) : 0) |
	       (acc & IB_ACCESS_REMOTE_READ   ?
		cpu_to_be32(MLX4_WQE_FMR_AND_BIND_PERM_REMOTE_READ)  : 0) |
	       (acc & IB_ACCESS_LOCAL_WRITE   ? cpu_to_be32(MLX4_WQE_FMR_PERM_LOCAL_WRITE) : 0) |
	       cpu_to_be32(MLX4_WQE_FMR_PERM_LOCAL_READ);
}

static void set_fmr_seg(struct mlx4_wqe_fmr_seg *fseg, struct ib_send_wr *wr)
{
	struct mlx4_ib_fast_reg_page_list *mfrpl = to_mfrpl(wr->wr.fast_reg.page_list);
	int i;

	for (i = 0; i < wr->wr.fast_reg.page_list_len; ++i)
		mfrpl->mapped_page_list[i] =
			cpu_to_be64(wr->wr.fast_reg.page_list->page_list[i] |
				    MLX4_MTT_FLAG_PRESENT);

	fseg->flags		= convert_access(wr->wr.fast_reg.access_flags);
	fseg->mem_key		= cpu_to_be32(wr->wr.fast_reg.rkey);
	fseg->buf_list		= cpu_to_be64(mfrpl->map);
	fseg->start_addr	= cpu_to_be64(wr->wr.fast_reg.iova_start);
	fseg->reg_len		= cpu_to_be64(wr->wr.fast_reg.length);
	fseg->offset		= 0; /* XXX -- is this just for ZBVA? */
	fseg->page_size		= cpu_to_be32(wr->wr.fast_reg.page_shift);
	fseg->reserved[0]	= 0;
	fseg->reserved[1]	= 0;
}

static void set_bind_seg(struct mlx4_wqe_bind_seg *bseg, struct ib_send_wr *wr)
{
	bseg->flags1 =
		convert_access(wr->wr.bind_mw.bind_info.mw_access_flags) &
		cpu_to_be32(MLX4_WQE_FMR_AND_BIND_PERM_REMOTE_READ  |
			    MLX4_WQE_FMR_AND_BIND_PERM_REMOTE_WRITE |
			    MLX4_WQE_FMR_AND_BIND_PERM_ATOMIC);
	bseg->flags2 = 0;
	if (wr->wr.bind_mw.mw->type == IB_MW_TYPE_2)
		bseg->flags2 |= cpu_to_be32(MLX4_WQE_BIND_TYPE_2);
	if (wr->wr.bind_mw.bind_info.mw_access_flags & IB_ZERO_BASED)
		bseg->flags2 |= cpu_to_be32(MLX4_WQE_BIND_ZERO_BASED);
	bseg->new_rkey = cpu_to_be32(wr->wr.bind_mw.rkey);
	bseg->lkey = cpu_to_be32(wr->wr.bind_mw.bind_info.mr->lkey);
	bseg->addr = cpu_to_be64(wr->wr.bind_mw.bind_info.addr);
	bseg->length = cpu_to_be64(wr->wr.bind_mw.bind_info.length);
}

static void set_local_inv_seg(struct mlx4_wqe_local_inval_seg *iseg, u32 rkey)
{
	memset(iseg, 0, sizeof(*iseg));
	iseg->mem_key = cpu_to_be32(rkey);
}
|
|
|
|
|
2007-07-19 02:47:55 +08:00
|
|
|
static __always_inline void set_raddr_seg(struct mlx4_wqe_raddr_seg *rseg,
|
|
|
|
u64 remote_addr, u32 rkey)
|
|
|
|
{
|
|
|
|
rseg->raddr = cpu_to_be64(remote_addr);
|
|
|
|
rseg->rkey = cpu_to_be32(rkey);
|
|
|
|
rseg->reserved = 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static void set_atomic_seg(struct mlx4_wqe_atomic_seg *aseg, struct ib_send_wr *wr)
|
|
|
|
{
|
|
|
|
if (wr->opcode == IB_WR_ATOMIC_CMP_AND_SWP) {
|
|
|
|
aseg->swap_add = cpu_to_be64(wr->wr.atomic.swap);
|
|
|
|
aseg->compare = cpu_to_be64(wr->wr.atomic.compare_add);
|
2010-04-14 22:23:39 +08:00
|
|
|
} else if (wr->opcode == IB_WR_MASKED_ATOMIC_FETCH_AND_ADD) {
|
|
|
|
aseg->swap_add = cpu_to_be64(wr->wr.atomic.compare_add);
|
|
|
|
aseg->compare = cpu_to_be64(wr->wr.atomic.compare_add_mask);
|
2007-07-19 02:47:55 +08:00
|
|
|
} else {
|
|
|
|
aseg->swap_add = cpu_to_be64(wr->wr.atomic.compare_add);
|
|
|
|
aseg->compare = 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
}
|
|
|
|
|
2010-04-14 22:23:39 +08:00
|
|
|
static void set_masked_atomic_seg(struct mlx4_wqe_masked_atomic_seg *aseg,
|
|
|
|
struct ib_send_wr *wr)
|
|
|
|
{
|
|
|
|
aseg->swap_add = cpu_to_be64(wr->wr.atomic.swap);
|
|
|
|
aseg->swap_add_mask = cpu_to_be64(wr->wr.atomic.swap_mask);
|
|
|
|
aseg->compare = cpu_to_be64(wr->wr.atomic.compare_add);
|
|
|
|
aseg->compare_mask = cpu_to_be64(wr->wr.atomic.compare_add_mask);
|
|
|
|
}

static void set_datagram_seg(struct mlx4_wqe_datagram_seg *dseg,
                             struct ib_send_wr *wr)
{
        memcpy(dseg->av, &to_mah(wr->wr.ud.ah)->av, sizeof (struct mlx4_av));
        dseg->dqpn = cpu_to_be32(wr->wr.ud.remote_qpn);
        dseg->qkey = cpu_to_be32(wr->wr.ud.remote_qkey);
        dseg->vlan = to_mah(wr->wr.ud.ah)->av.eth.vlan;
        memcpy(dseg->mac, to_mah(wr->wr.ud.ah)->av.eth.mac, 6);
}

static void set_tunnel_datagram_seg(struct mlx4_ib_dev *dev,
                                    struct mlx4_wqe_datagram_seg *dseg,
                                    struct ib_send_wr *wr, enum ib_qp_type qpt)
{
        union mlx4_ext_av *av = &to_mah(wr->wr.ud.ah)->av;
        struct mlx4_av sqp_av = {0};
        int port = *((u8 *) &av->ib.port_pd) & 0x3;

        /* force loopback */
        sqp_av.port_pd = av->ib.port_pd | cpu_to_be32(0x80000000);
        sqp_av.g_slid = av->ib.g_slid & 0x7f; /* no GRH */
        sqp_av.sl_tclass_flowlabel = av->ib.sl_tclass_flowlabel &
                        cpu_to_be32(0xf0000000);

        memcpy(dseg->av, &sqp_av, sizeof (struct mlx4_av));
        /* This function used only for sending on QP1 proxies */
        dseg->dqpn = cpu_to_be32(dev->dev->caps.qp1_tunnel[port - 1]);
        /* Use QKEY from the QP context, which is set by master */
        dseg->qkey = cpu_to_be32(IB_QP_SET_QKEY);
}

static void build_tunnel_header(struct ib_send_wr *wr, void *wqe, unsigned *mlx_seg_len)
{
        struct mlx4_wqe_inline_seg *inl = wqe;
        struct mlx4_ib_tunnel_header hdr;
        struct mlx4_ib_ah *ah = to_mah(wr->wr.ud.ah);
        int spc;
        int i;

        memcpy(&hdr.av, &ah->av, sizeof hdr.av);
        hdr.remote_qpn = cpu_to_be32(wr->wr.ud.remote_qpn);
        hdr.pkey_index = cpu_to_be16(wr->wr.ud.pkey_index);
        hdr.qkey = cpu_to_be32(wr->wr.ud.remote_qkey);

        spc = MLX4_INLINE_ALIGN -
                ((unsigned long) (inl + 1) & (MLX4_INLINE_ALIGN - 1));
        if (sizeof (hdr) <= spc) {
                memcpy(inl + 1, &hdr, sizeof (hdr));
                wmb();
                inl->byte_count = cpu_to_be32(1 << 31 | sizeof (hdr));
                i = 1;
        } else {
                memcpy(inl + 1, &hdr, spc);
                wmb();
                inl->byte_count = cpu_to_be32(1 << 31 | spc);

                inl = (void *) (inl + 1) + spc;
                memcpy(inl + 1, (void *) &hdr + spc, sizeof (hdr) - spc);
                wmb();
                inl->byte_count = cpu_to_be32(1 << 31 | (sizeof (hdr) - spc));
                i = 2;
        }

        *mlx_seg_len =
                ALIGN(i * sizeof (struct mlx4_wqe_inline_seg) + sizeof (hdr), 16);
}

static void set_mlx_icrc_seg(void *dseg)
{
        u32 *t = dseg;
        struct mlx4_wqe_inline_seg *iseg = dseg;

        t[1] = 0;

        /*
         * Need a barrier here before writing the byte_count field to
         * make sure that all the data is visible before the
         * byte_count field is set.  Otherwise, if the segment begins
         * a new cacheline, the HCA prefetcher could grab the 64-byte
         * chunk and get a valid (!= 0xffffffff) byte count but
         * stale data, and end up sending the wrong data.
         */
        wmb();

        iseg->byte_count = cpu_to_be32((1 << 31) | 4);
}

static void set_data_seg(struct mlx4_wqe_data_seg *dseg, struct ib_sge *sg)
{
        dseg->lkey       = cpu_to_be32(sg->lkey);
        dseg->addr       = cpu_to_be64(sg->addr);

        /*
         * Need a barrier here before writing the byte_count field to
         * make sure that all the data is visible before the
         * byte_count field is set.  Otherwise, if the segment begins
         * a new cacheline, the HCA prefetcher could grab the 64-byte
         * chunk and get a valid (!= 0xffffffff) byte count but
         * stale data, and end up sending the wrong data.
         */
        wmb();

        dseg->byte_count = cpu_to_be32(sg->length);
}

static void __set_data_seg(struct mlx4_wqe_data_seg *dseg, struct ib_sge *sg)
{
        dseg->byte_count = cpu_to_be32(sg->length);
        dseg->lkey       = cpu_to_be32(sg->lkey);
        dseg->addr       = cpu_to_be64(sg->addr);
}

static int build_lso_seg(struct mlx4_wqe_lso_seg *wqe, struct ib_send_wr *wr,
                         struct mlx4_ib_qp *qp, unsigned *lso_seg_len,
                         __be32 *lso_hdr_sz, __be32 *blh)
{
        unsigned halign = ALIGN(sizeof *wqe + wr->wr.ud.hlen, 16);

        if (unlikely(halign > MLX4_IB_CACHE_LINE_SIZE))
                *blh = cpu_to_be32(1 << 6);

        if (unlikely(!(qp->flags & MLX4_IB_QP_LSO) &&
                     wr->num_sge > qp->sq.max_gs - (halign >> 4)))
                return -EINVAL;

        memcpy(wqe->header, wr->wr.ud.header, wr->wr.ud.hlen);

        *lso_hdr_sz  = cpu_to_be32((wr->wr.ud.mss - wr->wr.ud.hlen) << 16 |
                                   wr->wr.ud.hlen);
        *lso_seg_len = halign;
        return 0;
}

static __be32 send_ieth(struct ib_send_wr *wr)
{
        switch (wr->opcode) {
        case IB_WR_SEND_WITH_IMM:
        case IB_WR_RDMA_WRITE_WITH_IMM:
                return wr->ex.imm_data;

        case IB_WR_SEND_WITH_INV:
                return cpu_to_be32(wr->ex.invalidate_rkey);

        default:
                return 0;
        }
}

static void add_zero_len_inline(void *wqe)
{
        struct mlx4_wqe_inline_seg *inl = wqe;
        memset(wqe, 0, 16);
        inl->byte_count = cpu_to_be32(1 << 31);
}

int mlx4_ib_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr,
                      struct ib_send_wr **bad_wr)
{
        struct mlx4_ib_qp *qp = to_mqp(ibqp);
        void *wqe;
        struct mlx4_wqe_ctrl_seg *ctrl;
        struct mlx4_wqe_data_seg *dseg;
        unsigned long flags;
        int nreq;
        int err = 0;
        unsigned ind;
        int uninitialized_var(stamp);
        int uninitialized_var(size);
        unsigned uninitialized_var(seglen);
        __be32 dummy;
        __be32 *lso_wqe;
        __be32 uninitialized_var(lso_hdr_sz);
        __be32 blh;
        int i;

        spin_lock_irqsave(&qp->sq.lock, flags);

        ind = qp->sq_next_wqe;

        for (nreq = 0; wr; ++nreq, wr = wr->next) {
                lso_wqe = &dummy;
                blh = 0;

                if (mlx4_wq_overflow(&qp->sq, nreq, qp->ibqp.send_cq)) {
                        err = -ENOMEM;
                        *bad_wr = wr;
                        goto out;
                }

                if (unlikely(wr->num_sge > qp->sq.max_gs)) {
                        err = -EINVAL;
                        *bad_wr = wr;
                        goto out;
                }

                ctrl = wqe = get_send_wqe(qp, ind & (qp->sq.wqe_cnt - 1));
                qp->sq.wrid[(qp->sq.head + nreq) & (qp->sq.wqe_cnt - 1)] = wr->wr_id;

                ctrl->srcrb_flags =
                        (wr->send_flags & IB_SEND_SIGNALED ?
                         cpu_to_be32(MLX4_WQE_CTRL_CQ_UPDATE) : 0) |
                        (wr->send_flags & IB_SEND_SOLICITED ?
                         cpu_to_be32(MLX4_WQE_CTRL_SOLICITED) : 0) |
                        ((wr->send_flags & IB_SEND_IP_CSUM) ?
                         cpu_to_be32(MLX4_WQE_CTRL_IP_CSUM |
                                     MLX4_WQE_CTRL_TCP_UDP_CSUM) : 0) |
                        qp->sq_signal_bits;

                ctrl->imm = send_ieth(wr);

                wqe += sizeof *ctrl;
                size = sizeof *ctrl / 16;

                switch (qp->mlx4_ib_qp_type) {
                case MLX4_IB_QPT_RC:
                case MLX4_IB_QPT_UC:
                        switch (wr->opcode) {
                        case IB_WR_ATOMIC_CMP_AND_SWP:
                        case IB_WR_ATOMIC_FETCH_AND_ADD:
                        case IB_WR_MASKED_ATOMIC_FETCH_AND_ADD:
                                set_raddr_seg(wqe, wr->wr.atomic.remote_addr,
                                              wr->wr.atomic.rkey);
                                wqe  += sizeof (struct mlx4_wqe_raddr_seg);

                                set_atomic_seg(wqe, wr);
                                wqe  += sizeof (struct mlx4_wqe_atomic_seg);

                                size += (sizeof (struct mlx4_wqe_raddr_seg) +
                                         sizeof (struct mlx4_wqe_atomic_seg)) / 16;

                                break;

                        case IB_WR_MASKED_ATOMIC_CMP_AND_SWP:
                                set_raddr_seg(wqe, wr->wr.atomic.remote_addr,
                                              wr->wr.atomic.rkey);
                                wqe  += sizeof (struct mlx4_wqe_raddr_seg);

                                set_masked_atomic_seg(wqe, wr);
                                wqe  += sizeof (struct mlx4_wqe_masked_atomic_seg);

                                size += (sizeof (struct mlx4_wqe_raddr_seg) +
                                         sizeof (struct mlx4_wqe_masked_atomic_seg)) / 16;

                                break;

                        case IB_WR_RDMA_READ:
                        case IB_WR_RDMA_WRITE:
                        case IB_WR_RDMA_WRITE_WITH_IMM:
                                set_raddr_seg(wqe, wr->wr.rdma.remote_addr,
                                              wr->wr.rdma.rkey);
                                wqe  += sizeof (struct mlx4_wqe_raddr_seg);
                                size += sizeof (struct mlx4_wqe_raddr_seg) / 16;
                                break;

                        case IB_WR_LOCAL_INV:
                                ctrl->srcrb_flags |=
                                        cpu_to_be32(MLX4_WQE_CTRL_STRONG_ORDER);
                                set_local_inv_seg(wqe, wr->ex.invalidate_rkey);
                                wqe  += sizeof (struct mlx4_wqe_local_inval_seg);
                                size += sizeof (struct mlx4_wqe_local_inval_seg) / 16;
                                break;

                        case IB_WR_FAST_REG_MR:
                                ctrl->srcrb_flags |=
                                        cpu_to_be32(MLX4_WQE_CTRL_STRONG_ORDER);
                                set_fmr_seg(wqe, wr);
                                wqe  += sizeof (struct mlx4_wqe_fmr_seg);
                                size += sizeof (struct mlx4_wqe_fmr_seg) / 16;
                                break;

                        case IB_WR_BIND_MW:
                                ctrl->srcrb_flags |=
                                        cpu_to_be32(MLX4_WQE_CTRL_STRONG_ORDER);
                                set_bind_seg(wqe, wr);
                                wqe  += sizeof(struct mlx4_wqe_bind_seg);
                                size += sizeof(struct mlx4_wqe_bind_seg) / 16;
                                break;

                        default:
                                /* No extra segments required for sends */
                                break;
                        }
                        break;

                case MLX4_IB_QPT_TUN_SMI_OWNER:
                        err = build_sriov_qp0_header(to_msqp(qp), wr, ctrl, &seglen);
                        if (unlikely(err)) {
                                *bad_wr = wr;
                                goto out;
                        }
                        wqe += seglen;
                        size += seglen / 16;
                        break;
                case MLX4_IB_QPT_TUN_SMI:
                case MLX4_IB_QPT_TUN_GSI:
                        /* this is a UD qp used in MAD responses to slaves. */
                        set_datagram_seg(wqe, wr);
                        /* set the forced-loopback bit in the data seg av */
                        *(__be32 *) wqe |= cpu_to_be32(0x80000000);
                        wqe += sizeof (struct mlx4_wqe_datagram_seg);
                        size += sizeof (struct mlx4_wqe_datagram_seg) / 16;
                        break;
                case MLX4_IB_QPT_UD:
                        set_datagram_seg(wqe, wr);
                        wqe  += sizeof (struct mlx4_wqe_datagram_seg);
                        size += sizeof (struct mlx4_wqe_datagram_seg) / 16;

                        if (wr->opcode == IB_WR_LSO) {
                                err = build_lso_seg(wqe, wr, qp, &seglen, &lso_hdr_sz, &blh);
                                if (unlikely(err)) {
                                        *bad_wr = wr;
                                        goto out;
                                }
                                lso_wqe = (__be32 *) wqe;
                                wqe += seglen;
                                size += seglen / 16;
                        }
                        break;

                case MLX4_IB_QPT_PROXY_SMI_OWNER:
                        if (unlikely(!mlx4_is_master(to_mdev(ibqp->device)->dev))) {
                                err = -ENOSYS;
                                *bad_wr = wr;
                                goto out;
                        }
                        err = build_sriov_qp0_header(to_msqp(qp), wr, ctrl, &seglen);
                        if (unlikely(err)) {
                                *bad_wr = wr;
                                goto out;
                        }
                        wqe += seglen;
                        size += seglen / 16;
                        /* to start tunnel header on a cache-line boundary */
                        add_zero_len_inline(wqe);
                        wqe += 16;
                        size++;
                        build_tunnel_header(wr, wqe, &seglen);
                        wqe += seglen;
                        size += seglen / 16;
                        break;
                case MLX4_IB_QPT_PROXY_SMI:
                        /* don't allow QP0 sends on guests */
                        err = -ENOSYS;
                        *bad_wr = wr;
                        goto out;
                case MLX4_IB_QPT_PROXY_GSI:
                        /* If we are tunneling special qps, this is a UD qp.
                         * In this case we first add a UD segment targeting
                         * the tunnel qp, and then add a header with address
                         * information */
                        set_tunnel_datagram_seg(to_mdev(ibqp->device), wqe, wr, ibqp->qp_type);
                        wqe += sizeof (struct mlx4_wqe_datagram_seg);
                        size += sizeof (struct mlx4_wqe_datagram_seg) / 16;
                        build_tunnel_header(wr, wqe, &seglen);
                        wqe += seglen;
                        size += seglen / 16;
                        break;

                case MLX4_IB_QPT_SMI:
                case MLX4_IB_QPT_GSI:
                        err = build_mlx_header(to_msqp(qp), wr, ctrl, &seglen);
                        if (unlikely(err)) {
                                *bad_wr = wr;
                                goto out;
                        }
                        wqe  += seglen;
                        size += seglen / 16;
                        break;

                default:
                        break;
                }

                /*
                 * Write data segments in reverse order, so as to
                 * overwrite cacheline stamp last within each
                 * cacheline.  This avoids issues with WQE
                 * prefetching.
                 */

                dseg = wqe;
                dseg += wr->num_sge - 1;
                size += wr->num_sge * (sizeof (struct mlx4_wqe_data_seg) / 16);

                /* Add one more inline data segment for ICRC for MLX sends */
                if (unlikely(qp->mlx4_ib_qp_type == MLX4_IB_QPT_SMI ||
                             qp->mlx4_ib_qp_type == MLX4_IB_QPT_GSI ||
                             qp->mlx4_ib_qp_type &
                             (MLX4_IB_QPT_PROXY_SMI_OWNER | MLX4_IB_QPT_TUN_SMI_OWNER))) {
                        set_mlx_icrc_seg(dseg + 1);
                        size += sizeof (struct mlx4_wqe_data_seg) / 16;
                }

                for (i = wr->num_sge - 1; i >= 0; --i, --dseg)
                        set_data_seg(dseg, wr->sg_list + i);

                /*
                 * Possibly overwrite stamping in cacheline with LSO
                 * segment only after making sure all data segments
                 * are written.
                 */
                wmb();
                *lso_wqe = lso_hdr_sz;

                ctrl->fence_size = (wr->send_flags & IB_SEND_FENCE ?
                                    MLX4_WQE_CTRL_FENCE : 0) | size;

                /*
                 * Make sure descriptor is fully written before
                 * setting ownership bit (because HW can start
                 * executing as soon as we do).
                 */
                wmb();

                if (wr->opcode < 0 || wr->opcode >= ARRAY_SIZE(mlx4_ib_opcode)) {
                        *bad_wr = wr;
                        err = -EINVAL;
                        goto out;
                }

                ctrl->owner_opcode = mlx4_ib_opcode[wr->opcode] |
                        (ind & qp->sq.wqe_cnt ? cpu_to_be32(1 << 31) : 0) | blh;

                stamp = ind + qp->sq_spare_wqes;
                ind += DIV_ROUND_UP(size * 16, 1U << qp->sq.wqe_shift);

                /*
                 * We can improve latency by not stamping the last
                 * send queue WQE until after ringing the doorbell, so
                 * only stamp here if there are still more WQEs to post.
                 *
                 * Same optimization applies to padding with NOP wqe
                 * in case of WQE shrinking (used to prevent wrap-around
                 * in the middle of WR).
                 */
                if (wr->next) {
                        stamp_send_wqe(qp, stamp, size * 16);
                        ind = pad_wraparound(qp, ind);
                }
        }

out:
        if (likely(nreq)) {
                qp->sq.head += nreq;

                /*
                 * Make sure that descriptors are written before
                 * doorbell record.
                 */
                wmb();

                writel(qp->doorbell_qpn,
                       to_mdev(ibqp->device)->uar_map + MLX4_SEND_DOORBELL);

                /*
                 * Make sure doorbells don't leak out of SQ spinlock
                 * and reach the HCA out of order.
                 */
                mmiowb();

                stamp_send_wqe(qp, stamp, size * 16);

                ind = pad_wraparound(qp, ind);
                qp->sq_next_wqe = ind;
        }

        spin_unlock_irqrestore(&qp->sq.lock, flags);

        return err;
}

int mlx4_ib_post_recv(struct ib_qp *ibqp, struct ib_recv_wr *wr,
                      struct ib_recv_wr **bad_wr)
{
        struct mlx4_ib_qp *qp = to_mqp(ibqp);
        struct mlx4_wqe_data_seg *scat;
        unsigned long flags;
        int err = 0;
        int nreq;
        int ind;
        int max_gs;
        int i;

        max_gs = qp->rq.max_gs;
        spin_lock_irqsave(&qp->rq.lock, flags);

        ind = qp->rq.head & (qp->rq.wqe_cnt - 1);

        for (nreq = 0; wr; ++nreq, wr = wr->next) {
                if (mlx4_wq_overflow(&qp->rq, nreq, qp->ibqp.recv_cq)) {
                        err = -ENOMEM;
                        *bad_wr = wr;
                        goto out;
                }

                if (unlikely(wr->num_sge > qp->rq.max_gs)) {
                        err = -EINVAL;
                        *bad_wr = wr;
                        goto out;
                }

                scat = get_recv_wqe(qp, ind);

                if (qp->mlx4_ib_qp_type & (MLX4_IB_QPT_PROXY_SMI_OWNER |
                    MLX4_IB_QPT_PROXY_SMI | MLX4_IB_QPT_PROXY_GSI)) {
                        ib_dma_sync_single_for_device(ibqp->device,
                                                      qp->sqp_proxy_rcv[ind].map,
                                                      sizeof (struct mlx4_ib_proxy_sqp_hdr),
                                                      DMA_FROM_DEVICE);
                        scat->byte_count =
                                cpu_to_be32(sizeof (struct mlx4_ib_proxy_sqp_hdr));
                        /* use dma lkey from upper layer entry */
                        scat->lkey = cpu_to_be32(wr->sg_list->lkey);
                        scat->addr = cpu_to_be64(qp->sqp_proxy_rcv[ind].map);
                        scat++;
                        max_gs--;
                }

                for (i = 0; i < wr->num_sge; ++i)
                        __set_data_seg(scat + i, wr->sg_list + i);

                if (i < max_gs) {
                        scat[i].byte_count = 0;
                        scat[i].lkey = cpu_to_be32(MLX4_INVALID_LKEY);
                        scat[i].addr = 0;
                }

                qp->rq.wrid[ind] = wr->wr_id;

                ind = (ind + 1) & (qp->rq.wqe_cnt - 1);
        }

out:
        if (likely(nreq)) {
                qp->rq.head += nreq;

                /*
                 * Make sure that descriptors are written before
                 * doorbell record.
                 */
                wmb();

                *qp->db.db = cpu_to_be32(qp->rq.head & 0xffff);
        }

        spin_unlock_irqrestore(&qp->rq.lock, flags);

        return err;
}

static inline enum ib_qp_state to_ib_qp_state(enum mlx4_qp_state mlx4_state)
{
        switch (mlx4_state) {
        case MLX4_QP_STATE_RST:      return IB_QPS_RESET;
        case MLX4_QP_STATE_INIT:     return IB_QPS_INIT;
        case MLX4_QP_STATE_RTR:      return IB_QPS_RTR;
        case MLX4_QP_STATE_RTS:      return IB_QPS_RTS;
        case MLX4_QP_STATE_SQ_DRAINING:
        case MLX4_QP_STATE_SQD:      return IB_QPS_SQD;
        case MLX4_QP_STATE_SQER:     return IB_QPS_SQE;
        case MLX4_QP_STATE_ERR:      return IB_QPS_ERR;
        default:                     return -1;
        }
}

static inline enum ib_mig_state to_ib_mig_state(int mlx4_mig_state)
{
        switch (mlx4_mig_state) {
        case MLX4_QP_PM_ARMED:          return IB_MIG_ARMED;
        case MLX4_QP_PM_REARM:          return IB_MIG_REARM;
        case MLX4_QP_PM_MIGRATED:       return IB_MIG_MIGRATED;
        default: return -1;
        }
}
|
|
|
|
|
|
|
|
static int to_ib_qp_access_flags(int mlx4_flags)
|
|
|
|
{
|
|
|
|
int ib_flags = 0;
|
|
|
|
|
|
|
|
if (mlx4_flags & MLX4_QP_BIT_RRE)
|
|
|
|
ib_flags |= IB_ACCESS_REMOTE_READ;
|
|
|
|
if (mlx4_flags & MLX4_QP_BIT_RWE)
|
|
|
|
ib_flags |= IB_ACCESS_REMOTE_WRITE;
|
|
|
|
if (mlx4_flags & MLX4_QP_BIT_RAE)
|
|
|
|
ib_flags |= IB_ACCESS_REMOTE_ATOMIC;
|
|
|
|
|
|
|
|
return ib_flags;
|
|
|
|
}
|
|
|
|
|
2010-08-26 22:19:22 +08:00
|
|
|
static void to_ib_ah_attr(struct mlx4_ib_dev *ibdev, struct ib_ah_attr *ib_ah_attr,
|
2007-06-21 17:27:47 +08:00
|
|
|
struct mlx4_qp_path *path)
|
|
|
|
{
|
2010-08-26 22:19:22 +08:00
|
|
|
struct mlx4_dev *dev = ibdev->dev;
|
|
|
|
int is_eth;
|
|
|
|
|
2007-07-15 20:00:09 +08:00
|
|
|
memset(ib_ah_attr, 0, sizeof *ib_ah_attr);
|
2007-06-21 17:27:47 +08:00
|
|
|
ib_ah_attr->port_num = path->sched_queue & 0x40 ? 2 : 1;
|
|
|
|
|
|
|
|
if (ib_ah_attr->port_num == 0 || ib_ah_attr->port_num > dev->caps.num_ports)
|
|
|
|
return;
|
|
|
|
|
2010-08-26 22:19:22 +08:00
|
|
|
is_eth = rdma_port_get_link_layer(&ibdev->ib_dev, ib_ah_attr->port_num) ==
|
|
|
|
IB_LINK_LAYER_ETHERNET;
|
|
|
|
if (is_eth)
|
|
|
|
ib_ah_attr->sl = ((path->sched_queue >> 3) & 0x7) |
|
|
|
|
((path->sched_queue & 4) << 1);
|
|
|
|
else
|
|
|
|
ib_ah_attr->sl = (path->sched_queue >> 2) & 0xf;
|
|
|
|
|
2007-06-21 17:27:47 +08:00
|
|
|
ib_ah_attr->dlid = be16_to_cpu(path->rlid);
|
|
|
|
ib_ah_attr->src_path_bits = path->grh_mylmc & 0x7f;
|
|
|
|
ib_ah_attr->static_rate = path->static_rate ? path->static_rate - 5 : 0;
|
|
|
|
ib_ah_attr->ah_flags = (path->grh_mylmc & (1 << 7)) ? IB_AH_GRH : 0;
|
|
|
|
if (ib_ah_attr->ah_flags) {
|
|
|
|
ib_ah_attr->grh.sgid_index = path->mgid_index;
|
|
|
|
ib_ah_attr->grh.hop_limit = path->hop_limit;
|
|
|
|
ib_ah_attr->grh.traffic_class =
|
|
|
|
(be32_to_cpu(path->tclass_flowlabel) >> 20) & 0xff;
|
|
|
|
ib_ah_attr->grh.flow_label =
|
2007-07-18 09:37:38 +08:00
|
|
|
be32_to_cpu(path->tclass_flowlabel) & 0xfffff;
|
2007-06-21 17:27:47 +08:00
|
|
|
memcpy(ib_ah_attr->grh.dgid.raw,
|
|
|
|
path->rgid, sizeof ib_ah_attr->grh.dgid.raw);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
int mlx4_ib_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr, int qp_attr_mask,
|
|
|
|
struct ib_qp_init_attr *qp_init_attr)
|
|
|
|
{
|
|
|
|
struct mlx4_ib_dev *dev = to_mdev(ibqp->device);
|
|
|
|
struct mlx4_ib_qp *qp = to_mqp(ibqp);
|
|
|
|
struct mlx4_qp_context context;
|
|
|
|
int mlx4_state;
|
2008-04-17 12:09:34 +08:00
|
|
|
int err = 0;
|
|
|
|
|
|
|
|
mutex_lock(&qp->mutex);
|
2007-06-21 17:27:47 +08:00
|
|
|
|
|
|
|
if (qp->state == IB_QPS_RESET) {
|
|
|
|
qp_attr->qp_state = IB_QPS_RESET;
|
|
|
|
goto done;
|
|
|
|
}
|
|
|
|
|
|
|
|
err = mlx4_qp_query(dev->dev, &qp->mqp, &context);
|
2008-04-17 12:09:34 +08:00
|
|
|
if (err) {
|
|
|
|
err = -EINVAL;
|
|
|
|
goto out;
|
|
|
|
}
|
2007-06-21 17:27:47 +08:00
|
|
|
|
|
|
|
mlx4_state = be32_to_cpu(context.flags) >> 28;
|
|
|
|
|
2008-04-17 12:09:34 +08:00
|
|
|
qp->state = to_ib_qp_state(mlx4_state);
|
|
|
|
qp_attr->qp_state = qp->state;
|
2007-06-21 17:27:47 +08:00
|
|
|
qp_attr->path_mtu = context.mtu_msgmax >> 5;
|
|
|
|
qp_attr->path_mig_state =
|
|
|
|
to_ib_mig_state((be32_to_cpu(context.flags) >> 11) & 0x3);
|
|
|
|
qp_attr->qkey = be32_to_cpu(context.qkey);
|
|
|
|
qp_attr->rq_psn = be32_to_cpu(context.rnr_nextrecvpsn) & 0xffffff;
|
|
|
|
qp_attr->sq_psn = be32_to_cpu(context.next_send_psn) & 0xffffff;
|
|
|
|
qp_attr->dest_qp_num = be32_to_cpu(context.remote_qpn) & 0xffffff;
|
|
|
|
qp_attr->qp_access_flags =
|
|
|
|
to_ib_qp_access_flags(be32_to_cpu(context.params2));
|
|
|
|
|
|
|
|
if (qp->ibqp.qp_type == IB_QPT_RC || qp->ibqp.qp_type == IB_QPT_UC) {
|
2010-08-26 22:19:22 +08:00
|
|
|
to_ib_ah_attr(dev, &qp_attr->ah_attr, &context.pri_path);
|
|
|
|
to_ib_ah_attr(dev, &qp_attr->alt_ah_attr, &context.alt_path);
|
2007-06-21 17:27:47 +08:00
|
|
|
qp_attr->alt_pkey_index = context.alt_path.pkey_index & 0x7f;
|
|
|
|
qp_attr->alt_port_num = qp_attr->alt_ah_attr.port_num;
|
|
|
|
}
|
|
|
|
|
|
|
|
qp_attr->pkey_index = context.pri_path.pkey_index & 0x7f;
|
2007-07-18 09:37:38 +08:00
|
|
|
if (qp_attr->qp_state == IB_QPS_INIT)
|
|
|
|
qp_attr->port_num = qp->port;
|
|
|
|
else
|
|
|
|
qp_attr->port_num = context.pri_path.sched_queue & 0x40 ? 2 : 1;
|
2007-06-21 17:27:47 +08:00
|
|
|
|
|
|
|
/* qp_attr->en_sqd_async_notify is only applicable in modify qp */
|
|
|
|
qp_attr->sq_draining = mlx4_state == MLX4_QP_STATE_SQ_DRAINING;
|
|
|
|
|
|
|
|
qp_attr->max_rd_atomic = 1 << ((be32_to_cpu(context.params1) >> 21) & 0x7);
|
|
|
|
|
|
|
|
qp_attr->max_dest_rd_atomic =
|
|
|
|
1 << ((be32_to_cpu(context.params2) >> 21) & 0x7);
|
|
|
|
qp_attr->min_rnr_timer =
|
|
|
|
(be32_to_cpu(context.rnr_nextrecvpsn) >> 24) & 0x1f;
|
|
|
|
qp_attr->timeout = context.pri_path.ackto >> 3;
|
|
|
|
qp_attr->retry_cnt = (be32_to_cpu(context.params1) >> 16) & 0x7;
|
|
|
|
qp_attr->rnr_retry = (be32_to_cpu(context.params1) >> 13) & 0x7;
|
|
|
|
qp_attr->alt_timeout = context.alt_path.ackto >> 3;
|
|
|
|
|
|
|
|
done:
|
|
|
|
qp_attr->cur_qp_state = qp_attr->qp_state;
|
2007-07-18 11:59:02 +08:00
|
|
|
qp_attr->cap.max_recv_wr = qp->rq.wqe_cnt;
|
|
|
|
qp_attr->cap.max_recv_sge = qp->rq.max_gs;
|
|
|
|
|
2007-06-21 17:27:47 +08:00
|
|
|
if (!ibqp->uobject) {
|
2007-07-18 11:59:02 +08:00
|
|
|
qp_attr->cap.max_send_wr = qp->sq.wqe_cnt;
|
|
|
|
qp_attr->cap.max_send_sge = qp->sq.max_gs;
|
|
|
|
} else {
|
|
|
|
qp_attr->cap.max_send_wr = 0;
|
|
|
|
qp_attr->cap.max_send_sge = 0;
|
2007-06-21 17:27:47 +08:00
|
|
|
}
|
|
|
|
|
2007-07-18 11:59:02 +08:00
|
|
|
/*
|
|
|
|
* We don't support inline sends for kernel QPs (yet), and we
|
|
|
|
* don't know what userspace's value should be.
|
|
|
|
*/
|
|
|
|
qp_attr->cap.max_inline_data = 0;
|
|
|
|
|
|
|
|
qp_init_attr->cap = qp_attr->cap;
|
|
|
|
|
2008-07-15 14:48:48 +08:00
|
|
|
qp_init_attr->create_flags = 0;
|
|
|
|
if (qp->flags & MLX4_IB_QP_BLOCK_MULTICAST_LOOPBACK)
|
|
|
|
qp_init_attr->create_flags |= IB_QP_CREATE_BLOCK_MULTICAST_LOOPBACK;
|
|
|
|
|
|
|
|
if (qp->flags & MLX4_IB_QP_LSO)
|
|
|
|
qp_init_attr->create_flags |= IB_QP_CREATE_IPOIB_UD_LSO;
|
|
|
|
|
2012-08-23 22:09:03 +08:00
|
|
|
qp_init_attr->sq_sig_type =
|
|
|
|
qp->sq_signal_bits == cpu_to_be32(MLX4_WQE_CTRL_CQ_UPDATE) ?
|
|
|
|
IB_SIGNAL_ALL_WR : IB_SIGNAL_REQ_WR;
|
|
|
|
|
2008-04-17 12:09:34 +08:00
|
|
|
out:
|
|
|
|
mutex_unlock(&qp->mutex);
|
|
|
|
return err;
|
2007-06-21 17:27:47 +08:00
|
|
|
}
|
|
|
|
|