/*
 * Copyright (c) 2016-2017, Mellanox Technologies. All rights reserved.
 * Copyright (c) 2016-2017, Dave Watson <davejwatson@fb.com>. All rights reserved.
 * Copyright (c) 2016-2017, Lance Chao <lancerchao@fb.com>. All rights reserved.
 * Copyright (c) 2016, Fridolin Pokorny <fridolin.pokorny@gmail.com>. All rights reserved.
 * Copyright (c) 2016, Nikos Mavrogiannopoulos <nmav@gnutls.org>. All rights reserved.
 * Copyright (c) 2018, Covalent IO, Inc. http://covalent.io
 *
 * This software is available to you under a choice of one of two
 * licenses. You may choose to be licensed under the terms of the GNU
 * General Public License (GPL) Version 2, available from the file
 * COPYING in the main directory of this source tree, or the
 * OpenIB.org BSD license below:
 *
 * Redistribution and use in source and binary forms, with or
 * without modification, are permitted provided that the following
 * conditions are met:
 *
 * - Redistributions of source code must retain the above
 *   copyright notice, this list of conditions and the following
 *   disclaimer.
 *
 * - Redistributions in binary form must reproduce the above
 *   copyright notice, this list of conditions and the following
 *   disclaimer in the documentation and/or other materials
 *   provided with the distribution.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
 * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
 * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
 * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
 * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
 * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
 * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 * SOFTWARE.
 */

#include <linux/bug.h>
|
/* From commit "tls: RX path for ktls" (2018-03-23):
 *
 * Add rx path for tls software implementation. recvmsg, splice_read,
 * and poll are implemented.
 *
 * An additional sockopt TLS_RX is added, with the same interface as
 * TLS_TX. Either TLS_RX or TLS_TX may be provided separately, or
 * together (with two different setsockopt calls with appropriate keys).
 *
 * Control messages are passed via CMSG in a similar way to transmit.
 * If no cmsg buffer is passed, then only application data records
 * are passed to userspace, and EIO is returned for other record types.
 *
 * EBADMSG is returned for decryption errors, EMSGSIZE for framing that
 * is too big, and EBADMSG for framing that is too small (matching
 * OpenSSL semantics). EINVAL is returned for TLS versions that do not
 * match the original setsockopt call. All are unrecoverable.
 *
 * strparser is used to parse TLS framing. Decryption is done directly
 * into userspace buffers if they are large enough to support it;
 * otherwise skb_cow_data() is called (similar to IPsec), and buffers
 * are decrypted in place and copied. splice_read always decrypts in
 * place, since no buffers are provided to decrypt into.
 *
 * sk_poll is overridden, and only returns POLLIN once a full TLS
 * message has been received; otherwise we wait for strparser to finish
 * reading a full frame. Actual decryption is only done during recvmsg
 * or splice_read calls.
 *
 * Signed-off-by: Dave Watson <davejwatson@fb.com>
 * Signed-off-by: David S. Miller <davem@davemloft.net>
 */
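The cmsg interface described above can be exercised entirely from userspace with the standard CMSG macros. The sketch below shows how a caller would pull the TLS record type out of a `recvmsg()` control buffer; the `SOL_TLS` and `TLS_GET_RECORD_TYPE` constants are defined locally as assumptions mirroring the uapi `<linux/tls.h>` values, in case that header is unavailable.

```c
#include <assert.h>
#include <string.h>
#include <sys/socket.h>

#ifndef SOL_TLS
#define SOL_TLS 282			/* assumed: matches <linux/socket.h> */
#endif
#ifndef TLS_GET_RECORD_TYPE
#define TLS_GET_RECORD_TYPE 2		/* assumed: matches <linux/tls.h> */
#endif
#define TLS_RECORD_TYPE_DATA 23		/* TLS ContentType application_data */

/* Walk the control buffer filled in by recvmsg() and return the TLS
 * record type, or -1 if no TLS_GET_RECORD_TYPE cmsg was present. */
static int tls_record_type(struct msghdr *msg)
{
	struct cmsghdr *cmsg;

	for (cmsg = CMSG_FIRSTHDR(msg); cmsg; cmsg = CMSG_NXTHDR(msg, cmsg)) {
		if (cmsg->cmsg_level == SOL_TLS &&
		    cmsg->cmsg_type == TLS_GET_RECORD_TYPE)
			return *(unsigned char *)CMSG_DATA(cmsg);
	}
	return -1;
}
```

After a successful `recvmsg()` on a kTLS socket with a control buffer attached, a result of 23 (application_data) indicates an ordinary data record; any other type is a control record the application must handle itself.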

#include <linux/sched/signal.h>
#include <linux/module.h>
#include <linux/splice.h>
#include <crypto/aead.h>
#include <net/strparser.h>
#include <net/tls.h>

struct tls_decrypt_arg {
	bool zc;
	bool async;
	u8 tail;
};

struct tls_decrypt_ctx {
	u8 iv[MAX_IV_SIZE];
	u8 aad[TLS_MAX_AAD_SIZE];
	u8 tail;
	struct scatterlist sg[];
};

noinline void tls_err_abort(struct sock *sk, int err)
{
	WARN_ON_ONCE(err >= 0);
	/* sk->sk_err should contain a positive error code. */
	sk->sk_err = -err;
	sk_error_report(sk);
}

static int __skb_nsg(struct sk_buff *skb, int offset, int len,
		     unsigned int recursion_level)
{
	int start = skb_headlen(skb);
	int i, chunk = start - offset;
	struct sk_buff *frag_iter;
	int elt = 0;

	if (unlikely(recursion_level >= 24))
		return -EMSGSIZE;

	if (chunk > 0) {
		if (chunk > len)
			chunk = len;
		elt++;
		len -= chunk;
		if (len == 0)
			return elt;
		offset += chunk;
	}

	for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
		int end;

		WARN_ON(start > offset + len);

		end = start + skb_frag_size(&skb_shinfo(skb)->frags[i]);
		chunk = end - offset;
		if (chunk > 0) {
			if (chunk > len)
				chunk = len;
			elt++;
			len -= chunk;
			if (len == 0)
				return elt;
			offset += chunk;
		}
		start = end;
	}

	if (unlikely(skb_has_frag_list(skb))) {
		skb_walk_frags(skb, frag_iter) {
			int end, ret;

			WARN_ON(start > offset + len);

			end = start + frag_iter->len;
			chunk = end - offset;
			if (chunk > 0) {
				if (chunk > len)
					chunk = len;
				ret = __skb_nsg(frag_iter, offset - start, chunk,
						recursion_level + 1);
				if (unlikely(ret < 0))
					return ret;
				elt += ret;
				len -= chunk;
				if (len == 0)
					return elt;
				offset += chunk;
			}
			start = end;
		}
	}
	BUG_ON(len);
	return elt;
}

/* Return the number of scatterlist elements required to completely map the
 * skb, or -EMSGSIZE if the recursion depth is exceeded.
 */
static int skb_nsg(struct sk_buff *skb, int offset, int len)
{
	return __skb_nsg(skb, offset, len, 0);
}
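The recursive walk above simply counts how many distinct fragments the byte range [offset, offset + len) touches. The same chunk accounting can be sketched in plain C over a flat array of fragment sizes (the `frag_nsg` name and signature are illustrative, not kernel API):

```c
#include <assert.h>

/* Count how many of the given fragments a range of len bytes starting
 * at offset touches; mirrors the elt accounting in __skb_nsg(). */
static int frag_nsg(const int *frag_sizes, int nr_frags, int offset, int len)
{
	int start = 0, elt = 0;

	for (int i = 0; i < nr_frags && len > 0; i++) {
		int end = start + frag_sizes[i];
		int chunk = end - offset;

		if (chunk > 0) {
			if (chunk > len)
				chunk = len;
			elt++;
			len -= chunk;
			offset += chunk;
		}
		start = end;
	}
	return elt;
}
```

The result is exactly the number of scatterlist entries a flat mapping of that range would need, which is why the kernel runs this count before allocating the scatterlist.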

static int tls_padding_length(struct tls_prot_info *prot, struct sk_buff *skb,
			      struct tls_decrypt_arg *darg)
{
	struct strp_msg *rxm = strp_msg(skb);
	struct tls_msg *tlm = tls_msg(skb);
	int sub = 0;

	/* Determine zero-padding length */
	if (prot->version == TLS_1_3_VERSION) {
		int offset = rxm->full_len - TLS_TAG_SIZE - 1;
		char content_type = darg->zc ? darg->tail : 0;
		int err;

		while (content_type == 0) {
			if (offset < prot->prepend_size)
				return -EBADMSG;
			err = skb_copy_bits(skb, rxm->offset + offset,
					    &content_type, 1);
			if (err)
				return err;
			if (content_type)
				break;
			sub++;
			offset--;
		}
		tlm->control = content_type;
	}
	return sub;
}
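TLS 1.3 hides the real content type behind optional zero padding at the end of the inner plaintext, so the loop above scans backwards for the first nonzero byte. The same scan over a plain decrypted buffer looks like this (a sketch of the record-format logic, not the kernel helper):

```c
#include <assert.h>

/* Scan a decrypted TLS 1.3 inner plaintext backwards: skip zero padding,
 * report the real content type, and return the number of padding bytes,
 * or -1 if the record was nothing but padding (malformed). */
static int tls13_strip_padding(const unsigned char *buf, int len,
			       unsigned char *content_type)
{
	int sub = 0;

	while (len > 0 && buf[len - 1] == 0) {
		sub++;
		len--;
	}
	if (len == 0)
		return -1;		/* no content type found */
	*content_type = buf[len - 1];
	return sub;
}
```

As in the kernel code, the padding count (`sub` here) is later subtracted from the record length so only the true payload is handed to the reader.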

static void tls_decrypt_done(struct crypto_async_request *req, int err)
{
	struct aead_request *aead_req = (struct aead_request *)req;
	struct scatterlist *sgout = aead_req->dst;
	struct scatterlist *sgin = aead_req->src;
	struct tls_sw_context_rx *ctx;
	struct tls_context *tls_ctx;
	struct tls_prot_info *prot;
	struct scatterlist *sg;
	struct sk_buff *skb;
	unsigned int pages;

	skb = (struct sk_buff *)req->data;
	tls_ctx = tls_get_ctx(skb->sk);
	ctx = tls_sw_ctx_rx(tls_ctx);
	prot = &tls_ctx->prot_info;

	/* Propagate if there was an err */
	if (err) {
		if (err == -EBADMSG)
			TLS_INC_STATS(sock_net(skb->sk),
				      LINUX_MIB_TLSDECRYPTERROR);
		ctx->async_wait.err = err;
		tls_err_abort(skb->sk, err);
	} else {
		struct strp_msg *rxm = strp_msg(skb);

		/* No TLS 1.3 support with async crypto */
		WARN_ON(prot->tail_size);

		rxm->offset += prot->prepend_size;
		rxm->full_len -= prot->overhead_size;
	}

	/* After using skb->sk to propagate sk through crypto async callback
	 * we need to NULL it again.
	 */
	skb->sk = NULL;

	/* Free the destination pages if skb was not decrypted inplace */
	if (sgout != sgin) {
		/* Skip the first S/G entry as it points to AAD */
		for_each_sg(sg_next(sgout), sg, UINT_MAX, pages) {
			if (!sg)
				break;
			put_page(sg_page(sg));
		}
	}

	kfree(aead_req);

	spin_lock_bh(&ctx->decrypt_compl_lock);
	if (!atomic_dec_return(&ctx->decrypt_pending))
		complete(&ctx->async_wait.completion);
	spin_unlock_bh(&ctx->decrypt_compl_lock);
}

static int tls_do_decryption(struct sock *sk,
			     struct sk_buff *skb,
			     struct scatterlist *sgin,
			     struct scatterlist *sgout,
			     char *iv_recv,
			     size_t data_len,
			     struct aead_request *aead_req,
			     struct tls_decrypt_arg *darg)
{
	struct tls_context *tls_ctx = tls_get_ctx(sk);
	struct tls_prot_info *prot = &tls_ctx->prot_info;
	struct tls_sw_context_rx *ctx = tls_sw_ctx_rx(tls_ctx);
	int ret;

	aead_request_set_tfm(aead_req, ctx->aead_recv);
	aead_request_set_ad(aead_req, prot->aad_size);
	aead_request_set_crypt(aead_req, sgin, sgout,
			       data_len + prot->tag_size,
			       (u8 *)iv_recv);

	if (darg->async) {
		/* Using skb->sk to push sk through to crypto async callback
		 * handler. This allows propagating errors up to the socket
		 * if needed. It _must_ be cleared in the async handler
		 * before consume_skb is called. We _know_ skb->sk is NULL
		 * because it is a clone from strparser.
		 */
		skb->sk = sk;
		aead_request_set_callback(aead_req,
					  CRYPTO_TFM_REQ_MAY_BACKLOG,
					  tls_decrypt_done, skb);
		atomic_inc(&ctx->decrypt_pending);
	} else {
		aead_request_set_callback(aead_req,
					  CRYPTO_TFM_REQ_MAY_BACKLOG,
					  crypto_req_done, &ctx->async_wait);
	}

	ret = crypto_aead_decrypt(aead_req);
	if (ret == -EINPROGRESS) {
		if (darg->async)
			return 0;

		ret = crypto_wait_req(ret, &ctx->async_wait);
	}
	darg->async = false;

	return ret;
}

static void tls_trim_both_msgs(struct sock *sk, int target_size)
{
	struct tls_context *tls_ctx = tls_get_ctx(sk);
	struct tls_prot_info *prot = &tls_ctx->prot_info;
	struct tls_sw_context_tx *ctx = tls_sw_ctx_tx(tls_ctx);
	struct tls_rec *rec = ctx->open_rec;

	sk_msg_trim(sk, &rec->msg_plaintext, target_size);
	if (target_size > 0)
		target_size += prot->overhead_size;
	sk_msg_trim(sk, &rec->msg_encrypted, target_size);
}
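tls_trim_both_msgs() grows the trim target for the encrypted copy by prot->overhead_size, because ciphertext carries the record header, explicit nonce, and authentication tag on top of the plaintext. For AES-128-GCM under TLS 1.2 those sizes are fixed, so the relationship can be checked with plain arithmetic (the constants below are the familiar TLS 1.2 framing values, stated here as assumptions rather than read out of the kernel structs):

```c
#include <assert.h>

/* TLS 1.2 AES-128-GCM framing constants (assumed, per RFC 5288) */
#define TLS_HEADER_LEN		5	/* type + version + length   */
#define GCM_EXPLICIT_NONCE	8	/* per-record explicit IV    */
#define GCM_TAG_LEN		16	/* authentication tag        */

/* prepend_size covers everything before the ciphertext payload. */
static int tls12_gcm_prepend_size(void)
{
	return TLS_HEADER_LEN + GCM_EXPLICIT_NONCE;
}

/* overhead_size is the full ciphertext-minus-plaintext difference
 * (TLS 1.2 has no trailing padding, so tail_size is zero). */
static int tls12_gcm_overhead_size(void)
{
	return tls12_gcm_prepend_size() + GCM_TAG_LEN;
}
```

So trimming the plaintext record to N bytes implies an encrypted target of N + 29 bytes for this cipher, which is exactly the adjustment the function above applies via prot->overhead_size.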

static int tls_alloc_encrypted_msg(struct sock *sk, int len)
{
	struct tls_context *tls_ctx = tls_get_ctx(sk);
	struct tls_sw_context_tx *ctx = tls_sw_ctx_tx(tls_ctx);
	struct tls_rec *rec = ctx->open_rec;
	struct sk_msg *msg_en = &rec->msg_encrypted;

	return sk_msg_alloc(sk, msg_en, len, 0);
}

static int tls_clone_plaintext_msg(struct sock *sk, int required)
{
	struct tls_context *tls_ctx = tls_get_ctx(sk);
	struct tls_prot_info *prot = &tls_ctx->prot_info;
	struct tls_sw_context_tx *ctx = tls_sw_ctx_tx(tls_ctx);
	struct tls_rec *rec = ctx->open_rec;
	struct sk_msg *msg_pl = &rec->msg_plaintext;
	struct sk_msg *msg_en = &rec->msg_encrypted;
	int skip, len;

	/* We add page references worth len bytes from encrypted sg
	 * at the end of plaintext sg. It is guaranteed that msg_en
	 * has enough required room (ensured by caller).
	 */
	len = required - msg_pl->sg.size;

	/* Skip initial bytes in msg_en's data to be able to use
	 * same offset of both plain and encrypted data.
	 */
	skip = prot->prepend_size + msg_pl->sg.size;

	return sk_msg_clone(sk, msg_pl, msg_en, skip, len);
}

static struct tls_rec *tls_get_rec(struct sock *sk)
{
	struct tls_context *tls_ctx = tls_get_ctx(sk);
	struct tls_prot_info *prot = &tls_ctx->prot_info;
	struct tls_sw_context_tx *ctx = tls_sw_ctx_tx(tls_ctx);
	struct sk_msg *msg_pl, *msg_en;
	struct tls_rec *rec;
	int mem_size;

	mem_size = sizeof(struct tls_rec) + crypto_aead_reqsize(ctx->aead_send);

	rec = kzalloc(mem_size, sk->sk_allocation);
	if (!rec)
		return NULL;

	msg_pl = &rec->msg_plaintext;
	msg_en = &rec->msg_encrypted;

	sk_msg_init(msg_pl);
	sk_msg_init(msg_en);

	sg_init_table(rec->sg_aead_in, 2);
	sg_set_buf(&rec->sg_aead_in[0], rec->aad_space, prot->aad_size);
	sg_unmark_end(&rec->sg_aead_in[1]);

	sg_init_table(rec->sg_aead_out, 2);
	sg_set_buf(&rec->sg_aead_out[0], rec->aad_space, prot->aad_size);
	sg_unmark_end(&rec->sg_aead_out[1]);

	return rec;
}
|
2018-09-21 12:16:13 +08:00
|
|
|
|
2018-10-13 08:46:01 +08:00
|
|
|
static void tls_free_rec(struct sock *sk, struct tls_rec *rec)
|
|
|
|
{
|
2018-10-13 08:45:59 +08:00
|
|
|
sk_msg_free(sk, &rec->msg_encrypted);
|
|
|
|
sk_msg_free(sk, &rec->msg_plaintext);
|
2018-09-25 22:51:51 +08:00
|
|
|
kfree(rec);
|
2018-09-21 12:16:13 +08:00
|
|
|
}

static void tls_free_open_rec(struct sock *sk)
{
	struct tls_context *tls_ctx = tls_get_ctx(sk);
	struct tls_sw_context_tx *ctx = tls_sw_ctx_tx(tls_ctx);
	struct tls_rec *rec = ctx->open_rec;

	if (rec) {
		tls_free_rec(sk, rec);
		ctx->open_rec = NULL;
	}
}
|
|
|
|
|
2018-09-21 12:16:13 +08:00
|
|
|
int tls_tx_records(struct sock *sk, int flags)
|
|
|
|
{
|
|
|
|
struct tls_context *tls_ctx = tls_get_ctx(sk);
|
|
|
|
struct tls_sw_context_tx *ctx = tls_sw_ctx_tx(tls_ctx);
|
|
|
|
struct tls_rec *rec, *tmp;
|
2018-10-13 08:45:59 +08:00
|
|
|
struct sk_msg *msg_en;
|
2018-09-21 12:16:13 +08:00
|
|
|
int tx_flags, rc = 0;
|
|
|
|
|
|
|
|
if (tls_is_partially_sent_record(tls_ctx)) {
|
net/tls: Fixed race condition in async encryption
On processors with multi-engine crypto accelerators, it is possible that
multiple records get encrypted in parallel and their encryption
completion is notified to different cpus in multicore processor. This
leads to the situation where tls_encrypt_done() starts executing in
parallel on different cores. In current implementation, encrypted
records are queued to tx_ready_list in tls_encrypt_done(). This requires
addition to linked list 'tx_ready_list' to be protected. As
tls_decrypt_done() could be executing in irq content, it is not possible
to protect linked list addition operation using a lock.
To fix the problem, we remove linked list addition operation from the
irq context. We do tx_ready_list addition/removal operation from
application context only and get rid of possible multiple access to
the linked list. Before starting encryption on the record, we add it to
the tail of tx_ready_list. To prevent tls_tx_records() from transmitting
it, we mark the record with a new flag 'tx_ready' in 'struct tls_rec'.
When record encryption gets completed, tls_encrypt_done() has to only
update the 'tx_ready' flag to true & linked list add operation is not
required.
The changed logic brings some other side benefits. Since the records
are always submitted in tls sequence number order for encryption, the
tx_ready_list always remains sorted and addition of new records to it
does not have to traverse the linked list.
Lastly, we renamed tx_ready_list in 'struct tls_sw_context_tx' to
'tx_list'. This is because now, the some of the records at the tail are
not ready to transmit.
Fixes: a42055e8d2c3 ("net/tls: Add support for async encryption")
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-09-24 18:05:56 +08:00
|
|
|
rec = list_first_entry(&ctx->tx_list,
|
2018-09-21 12:16:13 +08:00
|
|
|
struct tls_rec, list);
|
|
|
|
|
|
|
|
if (flags == -1)
|
|
|
|
tx_flags = rec->tx_flags;
|
|
|
|
else
|
|
|
|
tx_flags = flags;
|
|
|
|
|
|
|
|
rc = tls_push_partial_record(sk, tls_ctx, tx_flags);
|
|
|
|
if (rc)
|
|
|
|
goto tx_err;
|
|
|
|
|
|
|
|
/* Full record has been transmitted.
|
		 * Remove the head of tx_list
		 */
		list_del(&rec->list);
		sk_msg_free(sk, &rec->msg_plaintext);
		kfree(rec);
	}

	/* Tx all ready records */
	list_for_each_entry_safe(rec, tmp, &ctx->tx_list, list) {
		if (READ_ONCE(rec->tx_ready)) {
			if (flags == -1)
				tx_flags = rec->tx_flags;
			else
				tx_flags = flags;

			msg_en = &rec->msg_encrypted;
			rc = tls_push_sg(sk, tls_ctx,
					 &msg_en->sg.data[msg_en->sg.curr],
					 0, tx_flags);
			if (rc)
				goto tx_err;

			list_del(&rec->list);
			sk_msg_free(sk, &rec->msg_plaintext);
			kfree(rec);
		} else {
			break;
		}
	}

tx_err:
	if (rc < 0 && rc != -EAGAIN)
		tls_err_abort(sk, -EBADMSG);

	return rc;
}

static void tls_encrypt_done(struct crypto_async_request *req, int err)
{
	struct aead_request *aead_req = (struct aead_request *)req;
	struct sock *sk = req->data;
	struct tls_context *tls_ctx = tls_get_ctx(sk);
	struct tls_prot_info *prot = &tls_ctx->prot_info;
	struct tls_sw_context_tx *ctx = tls_sw_ctx_tx(tls_ctx);
	struct scatterlist *sge;
	struct sk_msg *msg_en;
	struct tls_rec *rec;
	bool ready = false;
	int pending;

	rec = container_of(aead_req, struct tls_rec, aead_req);
	msg_en = &rec->msg_encrypted;

	sge = sk_msg_elem(msg_en, msg_en->sg.curr);
	sge->offset -= prot->prepend_size;
	sge->length += prot->prepend_size;

	/* Check if error is previously set on socket */
	if (err || sk->sk_err) {
		rec = NULL;

		/* If err is already set on socket, return the same code */
		if (sk->sk_err) {
			ctx->async_wait.err = -sk->sk_err;
		} else {
			ctx->async_wait.err = err;
			tls_err_abort(sk, err);
		}
	}

	if (rec) {
		struct tls_rec *first_rec;

		/* Mark the record as ready for transmission */
		smp_store_mb(rec->tx_ready, true);

		/* If received record is at head of tx_list, schedule tx */
		first_rec = list_first_entry(&ctx->tx_list,
					     struct tls_rec, list);
		if (rec == first_rec)
			ready = true;
	}

	spin_lock_bh(&ctx->encrypt_compl_lock);
	pending = atomic_dec_return(&ctx->encrypt_pending);

	if (!pending && ctx->async_notify)
		complete(&ctx->async_wait.completion);
	spin_unlock_bh(&ctx->encrypt_compl_lock);

	if (!ready)
		return;

	/* Schedule the transmission */
	if (!test_and_set_bit(BIT_TX_SCHEDULED, &ctx->tx_bitmask))
		schedule_delayed_work(&ctx->tx_work.work, 1);
}

2018-09-21 12:16:13 +08:00
|
|
|
static int tls_do_encryption(struct sock *sk,
|
|
|
|
struct tls_context *tls_ctx,
|
2018-06-15 09:07:45 +08:00
|
|
|
struct tls_sw_context_tx *ctx,
|
|
|
|
struct aead_request *aead_req,
|
2018-10-13 08:45:59 +08:00
|
|
|
size_t data_len, u32 start)
|
2017-06-15 02:37:39 +08:00
|
|
|
{
|
2019-02-14 15:11:35 +08:00
|
|
|
struct tls_prot_info *prot = &tls_ctx->prot_info;
|
2018-09-21 12:16:13 +08:00
|
|
|
struct tls_rec *rec = ctx->open_rec;
|
2018-10-13 08:45:59 +08:00
|
|
|
struct sk_msg *msg_en = &rec->msg_encrypted;
|
|
|
|
struct scatterlist *sge = sk_msg_elem(msg_en, start);
|
2019-03-20 10:03:36 +08:00
|
|
|
int rc, iv_offset = 0;
|
|
|
|
|
|
|
|
/* For CCM based ciphers, first byte of IV is a constant */
|
2021-09-28 14:28:43 +08:00
|
|
|
switch (prot->cipher_type) {
|
|
|
|
case TLS_CIPHER_AES_CCM_128:
|
2019-03-20 10:03:36 +08:00
|
|
|
rec->iv_data[0] = TLS_AES_CCM_IV_B0_BYTE;
|
|
|
|
iv_offset = 1;
|
2021-09-28 14:28:43 +08:00
|
|
|
break;
|
|
|
|
case TLS_CIPHER_SM4_CCM:
|
|
|
|
rec->iv_data[0] = TLS_SM4_CCM_IV_B0_BYTE;
|
|
|
|
iv_offset = 1;
|
|
|
|
break;
|
2019-03-20 10:03:36 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
memcpy(&rec->iv_data[iv_offset], tls_ctx->tx.iv,
|
|
|
|
prot->iv_size + prot->salt_size);
|
2017-06-15 02:37:39 +08:00
|
|
|
|
2021-11-29 17:32:12 +08:00
|
|
|
xor_iv_with_seq(prot, rec->iv_data + iv_offset, tls_ctx->tx.rec_seq);
|
2019-01-27 08:57:38 +08:00
|
|
|
|
2019-02-14 15:11:35 +08:00
|
|
|
sge->offset += prot->prepend_size;
|
|
|
|
sge->length -= prot->prepend_size;
|
2017-06-15 02:37:39 +08:00
|
|
|
|
2018-10-13 08:45:59 +08:00
|
|
|
msg_en->sg.curr = start;
|
2018-09-30 10:34:35 +08:00
|
|
|
|
2017-06-15 02:37:39 +08:00
|
|
|
aead_request_set_tfm(aead_req, ctx->aead_send);
|
2019-02-14 15:11:35 +08:00
|
|
|
aead_request_set_ad(aead_req, prot->aad_size);
|
2018-10-13 08:45:59 +08:00
|
|
|
aead_request_set_crypt(aead_req, rec->sg_aead_in,
|
|
|
|
rec->sg_aead_out,
|
2019-01-27 08:57:38 +08:00
|
|
|
data_len, rec->iv_data);
|
2018-02-01 00:04:37 +08:00
|
|
|
|
|
|
|
aead_request_set_callback(aead_req, CRYPTO_TFM_REQ_MAY_BACKLOG,
|
2018-09-21 12:16:13 +08:00
|
|
|
tls_encrypt_done, sk);
|
|
|
|
|
	/* Add the record in tx_list */
	list_add_tail((struct list_head *)&rec->list, &ctx->tx_list);
	atomic_inc(&ctx->encrypt_pending);

	rc = crypto_aead_encrypt(aead_req);
	if (!rc || rc != -EINPROGRESS) {
		atomic_dec(&ctx->encrypt_pending);
		sge->offset -= prot->prepend_size;
		sge->length += prot->prepend_size;
	}

	if (!rc) {
		WRITE_ONCE(rec->tx_ready, true);
	} else if (rc != -EINPROGRESS) {
		list_del(&rec->list);
		return rc;
	}

	/* Unhook the record from context if encryption is not failure */
	ctx->open_rec = NULL;
	tls_advance_record_sn(sk, prot, &tls_ctx->tx);
	return rc;
}

static int tls_split_open_record(struct sock *sk, struct tls_rec *from,
				 struct tls_rec **to, struct sk_msg *msg_opl,
				 struct sk_msg *msg_oen, u32 split_point,
				 u32 tx_overhead_size, u32 *orig_end)
{
	u32 i, j, bytes = 0, apply = msg_opl->apply_bytes;
	struct scatterlist *sge, *osge, *nsge;
	u32 orig_size = msg_opl->sg.size;
	struct scatterlist tmp = { };
	struct sk_msg *msg_npl;
	struct tls_rec *new;
	int ret;

	new = tls_get_rec(sk);
	if (!new)
		return -ENOMEM;
	ret = sk_msg_alloc(sk, &new->msg_encrypted, msg_opl->sg.size +
			   tx_overhead_size, 0);
	if (ret < 0) {
		tls_free_rec(sk, new);
		return ret;
	}

	*orig_end = msg_opl->sg.end;
	i = msg_opl->sg.start;
	sge = sk_msg_elem(msg_opl, i);
	while (apply && sge->length) {
		if (sge->length > apply) {
			u32 len = sge->length - apply;

			get_page(sg_page(sge));
			sg_set_page(&tmp, sg_page(sge), len,
				    sge->offset + apply);
			sge->length = apply;
			bytes += apply;
			apply = 0;
		} else {
			apply -= sge->length;
			bytes += sge->length;
		}

		sk_msg_iter_var_next(i);
		if (i == msg_opl->sg.end)
			break;
		sge = sk_msg_elem(msg_opl, i);
	}

	msg_opl->sg.end = i;
	msg_opl->sg.curr = i;
	msg_opl->sg.copybreak = 0;
	msg_opl->apply_bytes = 0;
	msg_opl->sg.size = bytes;

	msg_npl = &new->msg_plaintext;
	msg_npl->apply_bytes = apply;
	msg_npl->sg.size = orig_size - bytes;

	j = msg_npl->sg.start;
	nsge = sk_msg_elem(msg_npl, j);
	if (tmp.length) {
		memcpy(nsge, &tmp, sizeof(*nsge));
		sk_msg_iter_var_next(j);
		nsge = sk_msg_elem(msg_npl, j);
	}

	osge = sk_msg_elem(msg_opl, i);
	while (osge->length) {
		memcpy(nsge, osge, sizeof(*nsge));
		sg_unmark_end(nsge);
		sk_msg_iter_var_next(i);
		sk_msg_iter_var_next(j);
		if (i == *orig_end)
			break;
		osge = sk_msg_elem(msg_opl, i);
		nsge = sk_msg_elem(msg_npl, j);
	}

	msg_npl->sg.end = j;
	msg_npl->sg.curr = j;
	msg_npl->sg.copybreak = 0;

	*to = new;
	return 0;
}

static void tls_merge_open_record(struct sock *sk, struct tls_rec *to,
				  struct tls_rec *from, u32 orig_end)
{
	struct sk_msg *msg_npl = &from->msg_plaintext;
	struct sk_msg *msg_opl = &to->msg_plaintext;
	struct scatterlist *osge, *nsge;
	u32 i, j;

	i = msg_opl->sg.end;
	sk_msg_iter_var_prev(i);
	j = msg_npl->sg.start;

	osge = sk_msg_elem(msg_opl, i);
	nsge = sk_msg_elem(msg_npl, j);

	if (sg_page(osge) == sg_page(nsge) &&
	    osge->offset + osge->length == nsge->offset) {
		osge->length += nsge->length;
		put_page(sg_page(nsge));
	}

	msg_opl->sg.end = orig_end;
	msg_opl->sg.curr = orig_end;
	msg_opl->sg.copybreak = 0;
	msg_opl->apply_bytes = msg_opl->sg.size + msg_npl->sg.size;
	msg_opl->sg.size += msg_npl->sg.size;

	sk_msg_free(sk, &to->msg_encrypted);
	sk_msg_xfer_full(&to->msg_encrypted, &from->msg_encrypted);

	kfree(from);
}

static int tls_push_record(struct sock *sk, int flags,
			   unsigned char record_type)
{
	struct tls_context *tls_ctx = tls_get_ctx(sk);
	struct tls_prot_info *prot = &tls_ctx->prot_info;
	struct tls_sw_context_tx *ctx = tls_sw_ctx_tx(tls_ctx);
	struct tls_rec *rec = ctx->open_rec, *tmp = NULL;
treewide: Remove uninitialized_var() usage
Using uninitialized_var() is dangerous as it papers over real bugs[1]
(or can in the future), and suppresses unrelated compiler warnings
(e.g. "unused variable"). If the compiler thinks it is uninitialized,
either simply initialize the variable or make compiler changes.
In preparation for removing[2] the[3] macro[4], remove all remaining
needless uses with the following script:
git grep '\buninitialized_var\b' | cut -d: -f1 | sort -u | \
xargs perl -pi -e \
's/\buninitialized_var\(([^\)]+)\)/\1/g;
s:\s*/\* (GCC be quiet|to make compiler happy) \*/$::g;'
drivers/video/fbdev/riva/riva_hw.c was manually tweaked to avoid
pathological white-space.
No outstanding warnings were found building allmodconfig with GCC 9.3.0
for x86_64, i386, arm64, arm, powerpc, powerpc64le, s390x, mips, sparc64,
alpha, and m68k.
[1] https://lore.kernel.org/lkml/20200603174714.192027-1-glider@google.com/
[2] https://lore.kernel.org/lkml/CA+55aFw+Vbj0i=1TGqCR5vQkCzWJ0QxK6CernOU6eedsudAixw@mail.gmail.com/
[3] https://lore.kernel.org/lkml/CA+55aFwgbgqhbp1fkxvRKEpzyR5J8n1vKT1VZdz9knmPuXhOeg@mail.gmail.com/
[4] https://lore.kernel.org/lkml/CA+55aFz2500WfbKXAx8s67wrm9=yVJu65TpLgN_ybYNv0VEOKA@mail.gmail.com/
Reviewed-by: Leon Romanovsky <leonro@mellanox.com> # drivers/infiniband and mlx4/mlx5
Acked-by: Jason Gunthorpe <jgg@mellanox.com> # IB
Acked-by: Kalle Valo <kvalo@codeaurora.org> # wireless drivers
Reviewed-by: Chao Yu <yuchao0@huawei.com> # erofs
Signed-off-by: Kees Cook <keescook@chromium.org>
2020-06-04 04:09:38 +08:00
	u32 i, split_point, orig_end;
	struct sk_msg *msg_pl, *msg_en;
	struct aead_request *req;
	bool split;
	int rc;

	if (!rec)
		return 0;

	msg_pl = &rec->msg_plaintext;
	msg_en = &rec->msg_encrypted;

	split_point = msg_pl->apply_bytes;
	split = split_point && split_point < msg_pl->sg.size;
bpf: Sockmap/tls, tls_sw can create a plaintext buf > encrypt buf
It is possible to build a plaintext buffer using push helper that is larger
than the allocated encrypt buffer. When this record is pushed to crypto
layers this can result in a NULL pointer dereference because the crypto
API expects the encrypt buffer is large enough to fit the plaintext
buffer. Kernel splat below.
To resolve this, catch the cases where this can happen and split the buffer
into two records to send individually. Unfortunately, there is still one
case to handle where the split creates a zero sized buffer. In this case we
merge the buffers and unmark the split. This happens when apply is zero and
the user pushed data beyond the encrypt buffer. This fixes the original case
as well because the split allocated an encrypt buffer larger than the
plaintext buffer and the merge simply moves the pointers around so we now
have a reference to the new (larger) encrypt buffer.
Perhaps it's not ideal, but it seems the best solution for a fixes branch
and avoids handling these two cases, (a) apply that needs split and (b) the
non-apply case. These are edge cases anyway, so optimizing them seems
unnecessary unless someone wants to later in next branches.
[ 306.719107] BUG: kernel NULL pointer dereference, address: 0000000000000008
[...]
[ 306.747260] RIP: 0010:scatterwalk_copychunks+0x12f/0x1b0
[...]
[ 306.770350] Call Trace:
[ 306.770956] scatterwalk_map_and_copy+0x6c/0x80
[ 306.772026] gcm_enc_copy_hash+0x4b/0x50
[ 306.772925] gcm_hash_crypt_remain_continue+0xef/0x110
[ 306.774138] gcm_hash_crypt_continue+0xa1/0xb0
[ 306.775103] ? gcm_hash_crypt_continue+0xa1/0xb0
[ 306.776103] gcm_hash_assoc_remain_continue+0x94/0xa0
[ 306.777170] gcm_hash_assoc_continue+0x9d/0xb0
[ 306.778239] gcm_hash_init_continue+0x8f/0xa0
[ 306.779121] gcm_hash+0x73/0x80
[ 306.779762] gcm_encrypt_continue+0x6d/0x80
[ 306.780582] crypto_gcm_encrypt+0xcb/0xe0
[ 306.781474] crypto_aead_encrypt+0x1f/0x30
[ 306.782353] tls_push_record+0x3b9/0xb20 [tls]
[ 306.783314] ? sk_psock_msg_verdict+0x199/0x300
[ 306.784287] bpf_exec_tx_verdict+0x3f2/0x680 [tls]
[ 306.785357] tls_sw_sendmsg+0x4a3/0x6a0 [tls]
test_sockmap test signature to trigger bug,
[TEST]: (1, 1, 1, sendmsg, pass,redir,start 1,end 2,pop (1,2),ktls,):
Fixes: d3b18ad31f93d ("tls: add bpf support to sk_msg handling")
Signed-off-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Jonathan Lemon <jonathan.lemon@gmail.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/bpf/20200111061206.8028-7-john.fastabend@gmail.com
2020-01-11 14:12:04 +08:00
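The split decision described above can be sketched with plain integers: split at apply_bytes when it covers a strict prefix, and force a split at the encrypt-buffer boundary when the plaintext plus record overhead would not fit. This mirrors the shape of the checks in tls_push_record() below, but the names (decide_split, split_decision) are illustrative, not kernel API.

```c
#include <assert.h>
#include <stdbool.h>

struct split_decision {
	bool split;
	unsigned int split_point;
};

static struct split_decision decide_split(unsigned int pl_size,
					  unsigned int en_size,
					  unsigned int overhead,
					  unsigned int apply_bytes)
{
	struct split_decision d;

	/* Split where the applied prefix ends, if it is a strict prefix. */
	d.split_point = apply_bytes;
	d.split = d.split_point && d.split_point < pl_size;

	/* The plaintext (or the applied prefix) plus record overhead would
	 * overflow the encrypt buffer: force a split at the encrypt-buffer
	 * boundary so each piece fits.
	 */
	if ((!d.split && pl_size + overhead > en_size) ||
	    (d.split && d.split_point + overhead > en_size)) {
		d.split = true;
		d.split_point = en_size;
	}
	return d;
}
```

The forced-split branch is what prevents pushing a plaintext larger than the allocated encrypt buffer into the crypto layer.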
	if (unlikely((!split &&
		      msg_pl->sg.size +
		      prot->overhead_size > msg_en->sg.size) ||
		     (split &&
		      split_point +
		      prot->overhead_size > msg_en->sg.size))) {
		split = true;
		split_point = msg_en->sg.size;
	}
	if (split) {
		rc = tls_split_open_record(sk, rec, &tmp, msg_pl, msg_en,
					   split_point, prot->overhead_size,
					   &orig_end);
		if (rc < 0)
			return rc;
		/* This can happen if above tls_split_open_record allocates
		 * a single large encryption buffer instead of two smaller
		 * ones. In this case adjust pointers and continue without
		 * split.
		 */
		if (!msg_pl->sg.size) {
			tls_merge_open_record(sk, rec, tmp, orig_end);
			msg_pl = &rec->msg_plaintext;
			msg_en = &rec->msg_encrypted;
			split = false;
		}
		sk_msg_trim(sk, msg_en, msg_pl->sg.size +
			    prot->overhead_size);
	}

	rec->tx_flags = flags;
	req = &rec->aead_req;

	i = msg_pl->sg.end;
	sk_msg_iter_var_prev(i);

	rec->content_type = record_type;
	if (prot->version == TLS_1_3_VERSION) {
		/* Add content type to end of message. No padding added */
		sg_set_buf(&rec->sg_content_type, &rec->content_type, 1);
		sg_mark_end(&rec->sg_content_type);
		sg_chain(msg_pl->sg.data, msg_pl->sg.end + 1,
			 &rec->sg_content_type);
	} else {
		sg_mark_end(sk_msg_elem(msg_pl, i));
	}

bpf: Sockmap/tls, skmsg can have wrapped skmsg that needs extra chaining
Its possible through a set of push, pop, apply helper calls to construct
a skmsg, which is just a ring of scatterlist elements, with the start
value larger than the end value. For example,
end start
|_0_|_1_| ... |_n_|_n+1_|
Where end points at 1 and start points at n, so that the set of valid
elements is {n, n+1, 0, 1}.
Currently, because we don't build the correct chain, only {n, n+1} will
be sent. This adds a check and sg_chain call to correctly submit the
above to the crypto and tls send path.
Fixes: d3b18ad31f93d ("tls: add bpf support to sk_msg handling")
Signed-off-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Jonathan Lemon <jonathan.lemon@gmail.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/bpf/20200111061206.8028-8-john.fastabend@gmail.com
2020-01-11 14:12:05 +08:00
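The wrapped-ring case from the message above can be illustrated with a minimal sketch: in a ring of NELEM slots, the valid elements run from start up to end (exclusive), and when end < start the run wraps past the last slot, so it must be handled as two segments — the same reason the fix below chains the tail run back to the head with sg_chain(). NELEM and ring_count are illustrative names, not kernel code.

```c
#include <assert.h>

#define NELEM 8

/* Count valid elements between start and end (exclusive) in a ring of
 * NELEM slots. A wrapped ring (end < start) contributes two runs: the
 * tail run [start, NELEM) and the head run [0, end).
 */
static unsigned int ring_count(unsigned int start, unsigned int end)
{
	if (end >= start)
		return end - start;
	return (NELEM - start) + end;
}
```

Code that only walks from start to the array end (the first run) silently drops the head run — the exact bug the check and extra sg_chain() fix.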
	if (msg_pl->sg.end < msg_pl->sg.start) {
		sg_chain(&msg_pl->sg.data[msg_pl->sg.start],
			 MAX_SKB_FRAGS - msg_pl->sg.start + 1,
			 msg_pl->sg.data);
	}

	i = msg_pl->sg.start;
	sg_chain(rec->sg_aead_in, 2, &msg_pl->sg.data[i]);

	i = msg_en->sg.end;
	sk_msg_iter_var_prev(i);
	sg_mark_end(sk_msg_elem(msg_en, i));

	i = msg_en->sg.start;
	sg_chain(rec->sg_aead_out, 2, &msg_en->sg.data[i]);

	tls_make_aad(rec->aad_space, msg_pl->sg.size + prot->tail_size,
		     tls_ctx->tx.rec_seq, record_type, prot);

	tls_fill_prepend(tls_ctx,
			 page_address(sg_page(&msg_en->sg.data[i])) +
			 msg_en->sg.data[i].offset,
			 msg_pl->sg.size + prot->tail_size,
			 record_type);

	tls_ctx->pending_open_record_frags = false;

	rc = tls_do_encryption(sk, tls_ctx, ctx, req,
			       msg_pl->sg.size + prot->tail_size, i);
	if (rc < 0) {
		if (rc != -EINPROGRESS) {
			tls_err_abort(sk, -EBADMSG);
			if (split) {
				tls_ctx->pending_open_record_frags = true;
				tls_merge_open_record(sk, rec, tmp, orig_end);
			}
		}
		ctx->async_capable = 1;
		return rc;
	} else if (split) {
		msg_pl = &tmp->msg_plaintext;
		msg_en = &tmp->msg_encrypted;
		sk_msg_trim(sk, msg_en, msg_pl->sg.size + prot->overhead_size);
		tls_ctx->pending_open_record_frags = true;
		ctx->open_rec = tmp;
	}
net/tls: Fixed race condition in async encryption

On processors with multi-engine crypto accelerators, it is possible that
multiple records get encrypted in parallel and their encryption
completion is notified to different CPUs in a multicore processor. This
leads to a situation where tls_encrypt_done() starts executing in
parallel on different cores. In the current implementation, encrypted
records are queued to tx_ready_list in tls_encrypt_done(). This requires
additions to the linked list 'tx_ready_list' to be protected. As
tls_encrypt_done() could be executing in irq context, it is not possible
to protect the linked list addition operation using a lock.

To fix the problem, we remove the linked list addition operation from
the irq context. We do tx_ready_list addition/removal operations from
application context only, and get rid of possible multiple accesses to
the linked list. Before starting encryption on a record, we add it to
the tail of tx_ready_list. To prevent tls_tx_records() from transmitting
it, we mark the record with a new flag 'tx_ready' in 'struct tls_rec'.
When record encryption completes, tls_encrypt_done() only has to update
the 'tx_ready' flag to true; a linked list add operation is not
required.

The changed logic brings some other side benefits. Since records are
always submitted in TLS sequence number order for encryption, the
tx_ready_list always remains sorted, and addition of new records to it
does not have to traverse the linked list.

Lastly, we renamed tx_ready_list in 'struct tls_sw_context_tx' to
'tx_list', because now some of the records at the tail are not ready to
transmit.

Fixes: a42055e8d2c3 ("net/tls: Add support for async encryption")
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
	return tls_tx_records(sk, flags);
}

static int bpf_exec_tx_verdict(struct sk_msg *msg, struct sock *sk,
			       bool full_record, u8 record_type,
			       ssize_t *copied, int flags)
{
	struct tls_context *tls_ctx = tls_get_ctx(sk);
	struct tls_sw_context_tx *ctx = tls_sw_ctx_tx(tls_ctx);
	struct sk_msg msg_redir = { };
	struct sk_psock *psock;
	struct sock *sk_redir;
	struct tls_rec *rec;
bpf: sk_msg, sock{map|hash} redirect through ULP

A sockmap program that redirects through a kTLS ULP enabled socket
will not work correctly because the ULP layer is skipped. This
fixes the behavior to call through the ULP layer on redirect to
ensure any operations required on the data stream at the ULP layer
continue to be applied.

To do this we add an internal flag MSG_SENDPAGE_NOPOLICY to avoid
calling the BPF layer on a redirected message. This is required to
avoid calling the BPF layer multiple times (possibly recursively),
which is not the current/expected behavior without ULPs. In the
future we may add a redirect flag if users _do_ want the policy
applied again, but this would need to work for both ULP and non-ULP
sockets and be opt-in to avoid breaking existing programs.

Also, to avoid polluting the flag space with an internal flag, we
reuse the flag space, overlapping MSG_SENDPAGE_NOPOLICY with
MSG_WAITFORONE. Here WAITFORONE is specific to the recv path and
SENDPAGE_NOPOLICY is only used for sendpage hooks. The last thing
to verify is that the user space API is masked correctly to ensure
the flag cannot be set by the user. (Note this needs to be true
regardless, because we have internal flags already in use that user
space should not be able to set.) For completeness, there are two
UAPI paths into sendpage: sendfile and splice.

In the sendfile case the function do_sendfile() zeros flags,
./fs/read_write.c:

  static ssize_t do_sendfile(int out_fd, int in_fd, loff_t *ppos,
			     size_t count, loff_t max)
  {
	...
	fl = 0;
  #if 0
	/*
	 * We need to debate whether we can enable this or not. The
	 * man page documents EAGAIN return for the output at least,
	 * and the application is arguably buggy if it doesn't expect
	 * EAGAIN on a non-blocking file descriptor.
	 */
	if (in.file->f_flags & O_NONBLOCK)
		fl = SPLICE_F_NONBLOCK;
  #endif
	file_start_write(out.file);
	retval = do_splice_direct(in.file, &pos, out.file, &out_pos, count, fl);
  }

In the splice case the pipe_to_sendpage "actor" is used, which
masks flags with SPLICE_F_MORE.
./fs/splice.c:

  static int pipe_to_sendpage(struct pipe_inode_info *pipe,
			      struct pipe_buffer *buf, struct splice_desc *sd)
  {
	...
	more = (sd->flags & SPLICE_F_MORE) ? MSG_MORE : 0;
	...
  }

This confirms, as expected, that internal flags are in fact internal
to the socket side.

Fixes: d3b18ad31f93 ("tls: add bpf support to sk_msg handling")
Signed-off-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
	bool enospc, policy;
	int err = 0, send;
	u32 delta = 0;
	policy = !(flags & MSG_SENDPAGE_NOPOLICY);
	psock = sk_psock_get(sk);
	if (!psock || !policy) {
		err = tls_push_record(sk, flags, record_type);
		if (err && sk->sk_err == EBADMSG) {
			*copied -= sk_msg_free(sk, msg);
			tls_free_open_rec(sk);
			err = -sk->sk_err;
		}
		if (psock)
			sk_psock_put(sk, psock);
		return err;
	}
more_data:
	enospc = sk_msg_full(msg);
	if (psock->eval == __SK_NONE) {
		delta = msg->sg.size;
		psock->eval = sk_psock_msg_verdict(sk, psock, msg);
		delta -= msg->sg.size;
	}
	if (msg->cork_bytes && msg->cork_bytes > msg->sg.size &&
	    !enospc && !full_record) {
		err = -ENOSPC;
		goto out_err;
	}
	msg->cork_bytes = 0;
	send = msg->sg.size;
	if (msg->apply_bytes && msg->apply_bytes < send)
		send = msg->apply_bytes;

	switch (psock->eval) {
	case __SK_PASS:
		err = tls_push_record(sk, flags, record_type);
		if (err && sk->sk_err == EBADMSG) {
			*copied -= sk_msg_free(sk, msg);
			tls_free_open_rec(sk);
			err = -sk->sk_err;
			goto out_err;
		}
		break;
	case __SK_REDIRECT:
		sk_redir = psock->sk_redir;
		memcpy(&msg_redir, msg, sizeof(*msg));
		if (msg->apply_bytes < send)
			msg->apply_bytes = 0;
		else
			msg->apply_bytes -= send;
		sk_msg_return_zero(sk, msg, send);
		msg->sg.size -= send;
		release_sock(sk);
		err = tcp_bpf_sendmsg_redir(sk_redir, &msg_redir, send, flags);
		lock_sock(sk);
		if (err < 0) {
			*copied -= sk_msg_free_nocharge(sk, &msg_redir);
			msg->sg.size = 0;
		}
		if (msg->sg.size == 0)
			tls_free_open_rec(sk);
		break;
	case __SK_DROP:
	default:
		sk_msg_free_partial(sk, msg, send);
		if (msg->apply_bytes < send)
			msg->apply_bytes = 0;
		else
			msg->apply_bytes -= send;
		if (msg->sg.size == 0)
			tls_free_open_rec(sk);
		*copied -= (send + delta);
		err = -EACCES;
	}

	if (likely(!err)) {
		bool reset_eval = !ctx->open_rec;

		rec = ctx->open_rec;
		if (rec) {
			msg = &rec->msg_plaintext;
			if (!msg->apply_bytes)
				reset_eval = true;
		}
		if (reset_eval) {
			psock->eval = __SK_NONE;
			if (psock->sk_redir) {
				sock_put(psock->sk_redir);
				psock->sk_redir = NULL;
			}
		}
		if (rec)
			goto more_data;
	}
out_err:
	sk_psock_put(sk, psock);
	return err;
}

static int tls_sw_push_pending_record(struct sock *sk, int flags)
{
	struct tls_context *tls_ctx = tls_get_ctx(sk);
	struct tls_sw_context_tx *ctx = tls_sw_ctx_tx(tls_ctx);
	struct tls_rec *rec = ctx->open_rec;
	struct sk_msg *msg_pl;
	size_t copied;

	if (!rec)
		return 0;

	msg_pl = &rec->msg_plaintext;
	copied = msg_pl->sg.size;
	if (!copied)
		return 0;

	return bpf_exec_tx_verdict(msg_pl, sk, true, TLS_RECORD_TYPE_DATA,
				   &copied, flags);
}

int tls_sw_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
{
	long timeo = sock_sndtimeo(sk, msg->msg_flags & MSG_DONTWAIT);
	struct tls_context *tls_ctx = tls_get_ctx(sk);
	struct tls_prot_info *prot = &tls_ctx->prot_info;
	struct tls_sw_context_tx *ctx = tls_sw_ctx_tx(tls_ctx);
	bool async_capable = ctx->async_capable;
	unsigned char record_type = TLS_RECORD_TYPE_DATA;
	bool is_kvec = iov_iter_is_kvec(&msg->msg_iter);
	bool eor = !(msg->msg_flags & MSG_MORE);
	size_t try_to_copy;
	ssize_t copied = 0;
	struct sk_msg *msg_pl, *msg_en;
	struct tls_rec *rec;
	int required_size;
	int num_async = 0;
	bool full_record;
	int record_room;
	int num_zc = 0;
	int orig_size;
	int ret = 0;
	int pending;

	if (msg->msg_flags & ~(MSG_MORE | MSG_DONTWAIT | MSG_NOSIGNAL |
			       MSG_CMSG_COMPAT))
		return -EOPNOTSUPP;

	mutex_lock(&tls_ctx->tx_lock);
	lock_sock(sk);

	if (unlikely(msg->msg_controllen)) {
		ret = tls_proccess_cmsg(sk, msg, &record_type);
		if (ret) {
			if (ret == -EINPROGRESS)
				num_async++;
			else if (ret != -EAGAIN)
				goto send_end;
		}
	}

	while (msg_data_left(msg)) {
		if (sk->sk_err) {
			ret = -sk->sk_err;
			goto send_end;
		}

		if (ctx->open_rec)
			rec = ctx->open_rec;
		else
			rec = ctx->open_rec = tls_get_rec(sk);
		if (!rec) {
			ret = -ENOMEM;
			goto send_end;
		}

		msg_pl = &rec->msg_plaintext;
		msg_en = &rec->msg_encrypted;

		orig_size = msg_pl->sg.size;
		full_record = false;
		try_to_copy = msg_data_left(msg);
		record_room = TLS_MAX_PAYLOAD_SIZE - msg_pl->sg.size;
		if (try_to_copy >= record_room) {
			try_to_copy = record_room;
			full_record = true;
		}

		required_size = msg_pl->sg.size + try_to_copy +
				prot->overhead_size;

		if (!sk_stream_memory_free(sk))
			goto wait_for_sndbuf;

alloc_encrypted:
		ret = tls_alloc_encrypted_msg(sk, required_size);
		if (ret) {
			if (ret != -ENOSPC)
				goto wait_for_memory;

			/* Adjust try_to_copy according to the amount that was
			 * actually allocated. The difference is due
			 * to max sg elements limit
			 */
			try_to_copy -= required_size - msg_en->sg.size;
			full_record = true;
		}

		if (!is_kvec && (full_record || eor) && !async_capable) {
			u32 first = msg_pl->sg.end;

			ret = sk_msg_zerocopy_from_iter(sk, &msg->msg_iter,
							msg_pl, try_to_copy);
			if (ret)
				goto fallback_to_reg_send;

			num_zc++;
			copied += try_to_copy;

			sk_msg_sg_copy_set(msg_pl, first);
			ret = bpf_exec_tx_verdict(msg_pl, sk, full_record,
						  record_type, &copied,
						  msg->msg_flags);
			if (ret) {
				if (ret == -EINPROGRESS)
					num_async++;
				else if (ret == -ENOMEM)
					goto wait_for_memory;
				else if (ctx->open_rec && ret == -ENOSPC)
					goto rollback_iter;
				else if (ret != -EAGAIN)
					goto send_end;
			}
			continue;
rollback_iter:
			copied -= try_to_copy;
			sk_msg_sg_copy_clear(msg_pl, first);
			iov_iter_revert(&msg->msg_iter,
					msg_pl->sg.size - orig_size);
fallback_to_reg_send:
			sk_msg_trim(sk, msg_pl, orig_size);
		}

		required_size = msg_pl->sg.size + try_to_copy;

		ret = tls_clone_plaintext_msg(sk, required_size);
		if (ret) {
			if (ret != -ENOSPC)
				goto send_end;

			/* Adjust try_to_copy according to the amount that was
			 * actually allocated. The difference is due
			 * to max sg elements limit
			 */
			try_to_copy -= required_size - msg_pl->sg.size;
			full_record = true;
			sk_msg_trim(sk, msg_en,
				    msg_pl->sg.size + prot->overhead_size);
		}

		if (try_to_copy) {
			ret = sk_msg_memcopy_from_iter(sk, &msg->msg_iter,
						       msg_pl, try_to_copy);
			if (ret < 0)
				goto trim_sgl;
		}

		/* Open records defined only if successfully copied, otherwise
		 * we would trim the sg but not reset the open record frags.
		 */
		tls_ctx->pending_open_record_frags = true;
		copied += try_to_copy;
		if (full_record || eor) {
			ret = bpf_exec_tx_verdict(msg_pl, sk, full_record,
						  record_type, &copied,
						  msg->msg_flags);
			if (ret) {
				if (ret == -EINPROGRESS)
					num_async++;
				else if (ret == -ENOMEM)
					goto wait_for_memory;
				else if (ret != -EAGAIN) {
					if (ret == -ENOSPC)
						ret = 0;
					goto send_end;
				}
			}
		}

		continue;

wait_for_sndbuf:
		set_bit(SOCK_NOSPACE, &sk->sk_socket->flags);
wait_for_memory:
		ret = sk_stream_wait_memory(sk, &timeo);
		if (ret) {
trim_sgl:
			if (ctx->open_rec)
				tls_trim_both_msgs(sk, orig_size);
			goto send_end;
		}

		if (ctx->open_rec && msg_en->sg.size < required_size)
			goto alloc_encrypted;
	}

	if (!num_async) {
		goto send_end;
	} else if (num_zc) {
		/* Wait for pending encryptions to get completed */
		spin_lock_bh(&ctx->encrypt_compl_lock);
		ctx->async_notify = true;

		pending = atomic_read(&ctx->encrypt_pending);
		spin_unlock_bh(&ctx->encrypt_compl_lock);
		if (pending)
			crypto_wait_req(-EINPROGRESS, &ctx->async_wait);
		else
			reinit_completion(&ctx->async_wait.completion);

		/* There can be no concurrent accesses, since we have no
		 * pending encrypt operations
		 */
		WRITE_ONCE(ctx->async_notify, false);

		if (ctx->async_wait.err) {
			ret = ctx->async_wait.err;
			copied = 0;
		}
	}

	/* Transmit if any encryptions have completed */
	if (test_and_clear_bit(BIT_TX_SCHEDULED, &ctx->tx_bitmask)) {
		cancel_delayed_work(&ctx->tx_work.work);
		tls_tx_records(sk, msg->msg_flags);
	}

send_end:
	ret = sk_stream_error(sk, msg->msg_flags, ret);

	release_sock(sk);
	mutex_unlock(&tls_ctx->tx_lock);
	return copied > 0 ? copied : ret;
}

static int tls_sw_do_sendpage(struct sock *sk, struct page *page,
			      int offset, size_t size, int flags)
{
	long timeo = sock_sndtimeo(sk, flags & MSG_DONTWAIT);
	struct tls_context *tls_ctx = tls_get_ctx(sk);
	struct tls_sw_context_tx *ctx = tls_sw_ctx_tx(tls_ctx);
	struct tls_prot_info *prot = &tls_ctx->prot_info;
	unsigned char record_type = TLS_RECORD_TYPE_DATA;
	struct sk_msg *msg_pl;
	struct tls_rec *rec;
	int num_async = 0;
	ssize_t copied = 0;
	bool full_record;
	int record_room;
	int ret = 0;
	bool eor;

	eor = !(flags & MSG_SENDPAGE_NOTLAST);
	sk_clear_bit(SOCKWQ_ASYNC_NOSPACE, sk);

	/* Call the sk_stream functions to manage the sndbuf mem. */
	while (size > 0) {
		size_t copy, required_size;

		if (sk->sk_err) {
			ret = -sk->sk_err;
			goto sendpage_end;
		}

		if (ctx->open_rec)
			rec = ctx->open_rec;
		else
			rec = ctx->open_rec = tls_get_rec(sk);
		if (!rec) {
			ret = -ENOMEM;
			goto sendpage_end;
		}

		msg_pl = &rec->msg_plaintext;

		full_record = false;
		record_room = TLS_MAX_PAYLOAD_SIZE - msg_pl->sg.size;
		copy = size;
		if (copy >= record_room) {
			copy = record_room;
			full_record = true;
		}

		required_size = msg_pl->sg.size + copy + prot->overhead_size;

		if (!sk_stream_memory_free(sk))
			goto wait_for_sndbuf;
alloc_payload:
		ret = tls_alloc_encrypted_msg(sk, required_size);
		if (ret) {
			if (ret != -ENOSPC)
				goto wait_for_memory;

			/* Adjust copy according to the amount that was
			 * actually allocated. The difference is due
			 * to max sg elements limit
			 */
			copy -= required_size - msg_pl->sg.size;
			full_record = true;
		}

		sk_msg_page_add(msg_pl, page, copy, offset);
		sk_mem_charge(sk, copy);

		offset += copy;
		size -= copy;
		copied += copy;

		tls_ctx->pending_open_record_frags = true;
		if (full_record || eor || sk_msg_full(msg_pl)) {
			ret = bpf_exec_tx_verdict(msg_pl, sk, full_record,
						  record_type, &copied, flags);
			if (ret) {
				if (ret == -EINPROGRESS)
					num_async++;
				else if (ret == -ENOMEM)
					goto wait_for_memory;
				else if (ret != -EAGAIN) {
					if (ret == -ENOSPC)
						ret = 0;
					goto sendpage_end;
				}
			}
		}
		continue;
wait_for_sndbuf:
		set_bit(SOCK_NOSPACE, &sk->sk_socket->flags);
wait_for_memory:
		ret = sk_stream_wait_memory(sk, &timeo);
		if (ret) {
			if (ctx->open_rec)
				tls_trim_both_msgs(sk, msg_pl->sg.size);
			goto sendpage_end;
		}

		if (ctx->open_rec)
			goto alloc_payload;
	}

	if (num_async) {
		/* Transmit if any encryptions have completed */
		if (test_and_clear_bit(BIT_TX_SCHEDULED, &ctx->tx_bitmask)) {
			cancel_delayed_work(&ctx->tx_work.work);
			tls_tx_records(sk, flags);
		}
	}
sendpage_end:
	ret = sk_stream_error(sk, flags, ret);
	return copied > 0 ? copied : ret;
}

int tls_sw_sendpage_locked(struct sock *sk, struct page *page,
			   int offset, size_t size, int flags)
{
	if (flags & ~(MSG_MORE | MSG_DONTWAIT | MSG_NOSIGNAL |
		      MSG_SENDPAGE_NOTLAST | MSG_SENDPAGE_NOPOLICY |
		      MSG_NO_SHARED_FRAGS))
		return -EOPNOTSUPP;

	return tls_sw_do_sendpage(sk, page, offset, size, flags);
}

int tls_sw_sendpage(struct sock *sk, struct page *page,
		    int offset, size_t size, int flags)
{
	struct tls_context *tls_ctx = tls_get_ctx(sk);
bpf: sk_msg, sock{map|hash} redirect through ULP
A sockmap program that redirects through a kTLS ULP enabled socket
will not work correctly because the ULP layer is skipped. This
fixes the behavior to call through the ULP layer on redirect to
ensure any operations required on the data stream at the ULP layer
continue to be applied.
To do this we add an internal flag MSG_SENDPAGE_NOPOLICY to avoid
calling the BPF layer on a redirected message. This is
required to avoid calling the BPF layer multiple times (possibly
recursively) which is not the current/expected behavior without
ULPs. In the future we may add a redirect flag if users _do_
want the policy applied again but this would need to work for both
ULP and non-ULP sockets and be opt-in to avoid breaking existing
programs.
Also to avoid polluting the flag space with an internal flag we
reuse the flag space overlapping MSG_SENDPAGE_NOPOLICY with
MSG_WAITFORONE. Here WAITFORONE is specific to recv path and
SENDPAGE_NOPOLICY is only used for sendpage hooks. The last thing
to verify is that the user space API is masked correctly to ensure the
flag cannot be set by a user. (Note this needs to be true regardless,
because we already have internal flags in use that user space
should not be able to set.) For completeness, there are two UAPI
paths into sendpage: sendfile and splice.
In the sendfile case the function do_sendfile() zeroes the flags,
./fs/read_write.c:
static ssize_t do_sendfile(int out_fd, int in_fd, loff_t *ppos,
size_t count, loff_t max)
{
...
fl = 0;
#if 0
/*
* We need to debate whether we can enable this or not. The
* man page documents EAGAIN return for the output at least,
* and the application is arguably buggy if it doesn't expect
* EAGAIN on a non-blocking file descriptor.
*/
if (in.file->f_flags & O_NONBLOCK)
fl = SPLICE_F_NONBLOCK;
#endif
file_start_write(out.file);
retval = do_splice_direct(in.file, &pos, out.file, &out_pos, count, fl);
}
In the splice case the pipe_to_sendpage "actor" is used which
masks flags with SPLICE_F_MORE.
./fs/splice.c:
static int pipe_to_sendpage(struct pipe_inode_info *pipe,
struct pipe_buffer *buf, struct splice_desc *sd)
{
...
more = (sd->flags & SPLICE_F_MORE) ? MSG_MORE : 0;
...
}
This confirms, as expected, that the internal flags are in fact internal
to the socket side.
Fixes: d3b18ad31f93 ("tls: add bpf support to sk_msg handling")
Signed-off-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-12-21 03:35:35 +08:00
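The flag gate described in the commit message above can be modeled in a standalone sketch. This is an illustrative userspace mirror of the check at the top of tls_sw_sendpage(), not kernel code: the flag values are copied from include/linux/socket.h for this kernel era, and MSG_SENDPAGE_NOPOLICY deliberately reuses MSG_WAITFORONE's bit (0x10000), which is recv-only and therefore free on the sendpage path. The helper name is hypothetical.

```c
#include <errno.h>

/* Values mirrored from include/linux/socket.h (assumption: this era's kernel) */
#define MSG_MORE              0x8000
#define MSG_DONTWAIT          0x40
#define MSG_NOSIGNAL          0x4000
#define MSG_SENDPAGE_NOTLAST  0x20000
#define MSG_WAITFORONE        0x10000
#define MSG_SENDPAGE_NOPOLICY MSG_WAITFORONE /* internal alias, recv-only bit reused */

/* Illustrative model of the mask check in tls_sw_sendpage():
 * returns 0 if all flags are in the allowed set, -EOPNOTSUPP otherwise. */
static int tls_sendpage_check_flags(int flags)
{
        if (flags & ~(MSG_MORE | MSG_DONTWAIT | MSG_NOSIGNAL |
                      MSG_SENDPAGE_NOTLAST | MSG_SENDPAGE_NOPOLICY))
                return -EOPNOTSUPP;
        return 0;
}
```

Any bit outside the allowed set, such as MSG_OOB (0x1), is rejected, which is why user space cannot smuggle the internal flag through sendfile or splice.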
|
|
|
int ret;
|
|
|
|
|
|
|
|
if (flags & ~(MSG_MORE | MSG_DONTWAIT | MSG_NOSIGNAL |
|
|
|
|
MSG_SENDPAGE_NOTLAST | MSG_SENDPAGE_NOPOLICY))
|
2019-12-05 14:41:18 +08:00
|
|
|
return -EOPNOTSUPP;
|
2018-12-21 03:35:35 +08:00
|
|
|
|
2019-11-06 06:24:35 +08:00
|
|
|
mutex_lock(&tls_ctx->tx_lock);
|
2018-12-21 03:35:35 +08:00
|
|
|
lock_sock(sk);
|
|
|
|
ret = tls_sw_do_sendpage(sk, page, offset, size, flags);
|
|
|
|
release_sock(sk);
|
2019-11-06 06:24:35 +08:00
|
|
|
mutex_unlock(&tls_ctx->tx_lock);
|
2018-12-21 03:35:35 +08:00
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2018-10-13 08:46:01 +08:00
|
|
|
static struct sk_buff *tls_wait_data(struct sock *sk, struct sk_psock *psock,
|
2021-05-14 11:11:02 +08:00
|
|
|
bool nonblock, long timeo, int *err)
|
tls: RX path for ktls
Add rx path for tls software implementation.
recvmsg, splice_read, and poll implemented.
An additional sockopt TLS_RX is added, with the same interface as
TLS_TX. Either TLS_RX or TLS_TX may be provided separately, or
together (with two different setsockopt calls with appropriate keys).
Control messages are passed via CMSG in a similar way to transmit.
If no cmsg buffer is passed, then only application data records
will be passed to userspace, and EIO is returned for other types of
alerts.
EBADMSG is passed for decryption errors, and EMSGSIZE is passed for
framing too big, and EBADMSG for framing too small (matching openssl
semantics). EINVAL is returned for TLS versions that do not match the
original setsockopt call. All are unrecoverable.
strparser is used to parse TLS framing. Decryption is done directly
into userspace buffers if they are large enough to support it; otherwise
skb_cow_data() is called (similar to IPsec), and buffers are decrypted in
place and copied. splice_read always decrypts in place, since no
buffers are provided to decrypt into.
sk_poll is overridden, and only returns POLLIN if a full TLS message is
received. Otherwise we wait for strparser to finish reading a full frame.
Actual decryption is only done during recvmsg or splice_read calls.
Signed-off-by: Dave Watson <davejwatson@fb.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-23 01:10:35 +08:00
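The record-type gating described above (application data is always deliverable; without a cmsg buffer, control records yield EIO) can be sketched as a small decision helper. The record-type values come from the TLS record layer (RFC 5246/8446); the helper name is illustrative, not a kernel symbol.

```c
#include <errno.h>

/* TLS record-layer content types (RFC 5246 / RFC 8446) */
#define TLS_RECORD_TYPE_ALERT     21
#define TLS_RECORD_TYPE_HANDSHAKE 22
#define TLS_RECORD_TYPE_DATA      23

/* Illustrative sketch of the recvmsg gating described in the commit
 * message: data records always pass; control records need a cmsg buffer
 * so their type can be reported, otherwise the read fails with -EIO. */
static int tls_rx_gate_record(int have_cmsg, unsigned char record_type)
{
        if (record_type == TLS_RECORD_TYPE_DATA)
                return 0;                    /* always deliverable */
        return have_cmsg ? 0 : -EIO;         /* control needs a cmsg */
}
```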
|
|
|
{
|
|
|
|
struct tls_context *tls_ctx = tls_get_ctx(sk);
|
2018-04-30 15:16:15 +08:00
|
|
|
struct tls_sw_context_rx *ctx = tls_sw_ctx_rx(tls_ctx);
|
2018-03-23 01:10:35 +08:00
|
|
|
struct sk_buff *skb;
|
|
|
|
DEFINE_WAIT_FUNC(wait, woken_wake_function);
|
|
|
|
|
2018-10-13 08:46:01 +08:00
|
|
|
while (!(skb = ctx->recv_pkt) && sk_psock_queue_empty(psock)) {
|
2018-03-23 01:10:35 +08:00
|
|
|
if (sk->sk_err) {
|
|
|
|
*err = sock_error(sk);
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
|
2020-11-19 23:59:48 +08:00
|
|
|
if (!skb_queue_empty(&sk->sk_receive_queue)) {
|
|
|
|
__strp_unpause(&ctx->strp);
|
|
|
|
if (ctx->recv_pkt)
|
|
|
|
return ctx->recv_pkt;
|
|
|
|
}
|
|
|
|
|
2018-07-19 07:22:27 +08:00
|
|
|
if (sk->sk_shutdown & RCV_SHUTDOWN)
|
|
|
|
return NULL;
|
|
|
|
|
2018-03-23 01:10:35 +08:00
|
|
|
if (sock_flag(sk, SOCK_DONE))
|
|
|
|
return NULL;
|
|
|
|
|
2021-05-14 11:11:02 +08:00
|
|
|
if (nonblock || !timeo) {
|
2018-03-23 01:10:35 +08:00
|
|
|
*err = -EAGAIN;
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
|
|
|
|
add_wait_queue(sk_sleep(sk), &wait);
|
|
|
|
sk_set_bit(SOCKWQ_ASYNC_WAITDATA, sk);
|
2018-10-13 08:46:01 +08:00
|
|
|
sk_wait_event(sk, &timeo,
|
|
|
|
ctx->recv_pkt != skb ||
|
|
|
|
!sk_psock_queue_empty(psock),
|
|
|
|
&wait);
|
2018-03-23 01:10:35 +08:00
|
|
|
sk_clear_bit(SOCKWQ_ASYNC_WAITDATA, sk);
|
|
|
|
remove_wait_queue(sk_sleep(sk), &wait);
|
|
|
|
|
|
|
|
/* Handle signals */
|
|
|
|
if (signal_pending(current)) {
|
|
|
|
*err = sock_intr_errno(timeo);
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
return skb;
|
|
|
|
}
|
|
|
|
|
2022-04-09 02:31:24 +08:00
|
|
|
static int tls_setup_from_iter(struct iov_iter *from,
|
2018-10-13 08:45:59 +08:00
|
|
|
int length, int *pages_used,
|
|
|
|
struct scatterlist *to,
|
|
|
|
int to_max_pages)
|
|
|
|
{
|
|
|
|
int rc = 0, i = 0, num_elem = *pages_used, maxpages;
|
|
|
|
struct page *pages[MAX_SKB_FRAGS];
|
2022-04-09 02:31:24 +08:00
|
|
|
unsigned int size = 0;
|
2018-10-13 08:45:59 +08:00
|
|
|
ssize_t copied, use;
|
|
|
|
size_t offset;
|
|
|
|
|
|
|
|
while (length > 0) {
|
|
|
|
i = 0;
|
|
|
|
maxpages = to_max_pages - num_elem;
|
|
|
|
if (maxpages == 0) {
|
|
|
|
rc = -EFAULT;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
copied = iov_iter_get_pages(from, pages,
|
|
|
|
length,
|
|
|
|
maxpages, &offset);
|
|
|
|
if (copied <= 0) {
|
|
|
|
rc = -EFAULT;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
|
|
|
iov_iter_advance(from, copied);
|
|
|
|
|
|
|
|
length -= copied;
|
|
|
|
size += copied;
|
|
|
|
while (copied) {
|
|
|
|
use = min_t(int, copied, PAGE_SIZE - offset);
|
|
|
|
|
|
|
|
sg_set_page(&to[num_elem],
|
|
|
|
pages[i], use, offset);
|
|
|
|
sg_unmark_end(&to[num_elem]);
|
|
|
|
/* We do not uncharge memory from this API */
|
|
|
|
|
|
|
|
offset = 0;
|
|
|
|
copied -= use;
|
|
|
|
|
|
|
|
i++;
|
|
|
|
num_elem++;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
/* Mark the end in the last sg entry if newly added */
|
|
|
|
if (num_elem > *pages_used)
|
|
|
|
sg_mark_end(&to[num_elem - 1]);
|
|
|
|
out:
|
|
|
|
if (rc)
|
2022-04-09 02:31:24 +08:00
|
|
|
iov_iter_revert(from, size);
|
2018-10-13 08:45:59 +08:00
|
|
|
*pages_used = num_elem;
|
|
|
|
|
|
|
|
return rc;
|
|
|
|
}
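The inner loop of tls_setup_from_iter() above splits a run of pinned bytes into per-page scatterlist segments: the first segment is limited by the offset within its page, later pages start at offset zero. A standalone model of that arithmetic (PAGE_SIZE fixed at 4096 for illustration; the function name is hypothetical):

```c
#define PAGE_SIZE 4096L

/* Model of the "use = min_t(int, copied, PAGE_SIZE - offset)" loop in
 * tls_setup_from_iter(): split 'copied' bytes starting at 'offset' within
 * the first page into page-bounded segments. Writes each segment length
 * into seg_len[] and returns the segment count. */
static int split_into_page_segs(long copied, long offset, long *seg_len)
{
        int n = 0;

        while (copied) {
                long use = copied < PAGE_SIZE - offset ? copied
                                                       : PAGE_SIZE - offset;
                seg_len[n++] = use;
                offset = 0;      /* pages after the first start at 0 */
                copied -= use;
        }
        return n;
}
```

For example, 10000 bytes starting at page offset 100 yield three segments of 3996, 4096, and 1908 bytes, matching how sg_set_page() is called once per page.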
|
|
|
|
|
2018-08-10 23:16:41 +08:00
|
|
|
/* This function decrypts the input skb into either out_iov or out_sg,
|
|
|
|
* or into the skb buffers themselves. The input parameter 'zc' indicates if
|
|
|
|
* zero-copy mode needs to be tried or not. With zero-copy mode, either
|
|
|
|
* out_iov or out_sg must be non-NULL. In case both out_iov and out_sg are
|
|
|
|
* NULL, then the decryption happens inside skb buffers itself, i.e.
|
|
|
|
* zero-copy gets disabled and 'zc' is updated.
|
|
|
|
*/
|
|
|
|
|
|
|
|
static int decrypt_internal(struct sock *sk, struct sk_buff *skb,
|
|
|
|
struct iov_iter *out_iov,
|
|
|
|
struct scatterlist *out_sg,
|
2022-04-09 02:31:26 +08:00
|
|
|
struct tls_decrypt_arg *darg)
|
2018-08-10 23:16:41 +08:00
|
|
|
{
|
|
|
|
struct tls_context *tls_ctx = tls_get_ctx(sk);
|
|
|
|
struct tls_sw_context_rx *ctx = tls_sw_ctx_rx(tls_ctx);
|
2019-02-14 15:11:35 +08:00
|
|
|
struct tls_prot_info *prot = &tls_ctx->prot_info;
|
2022-07-08 09:03:11 +08:00
|
|
|
int n_sgin, n_sgout, aead_size, err, pages = 0;
|
2018-08-10 23:16:41 +08:00
|
|
|
struct strp_msg *rxm = strp_msg(skb);
|
2022-04-08 11:38:16 +08:00
|
|
|
struct tls_msg *tlm = tls_msg(skb);
|
2018-08-10 23:16:41 +08:00
|
|
|
struct aead_request *aead_req;
|
|
|
|
struct sk_buff *unused;
|
|
|
|
struct scatterlist *sgin = NULL;
|
|
|
|
struct scatterlist *sgout = NULL;
|
2022-07-06 07:59:22 +08:00
|
|
|
const int data_len = rxm->full_len - prot->overhead_size;
|
2022-07-06 07:59:23 +08:00
|
|
|
int tail_pages = !!prot->tail_size;
|
2022-07-08 09:03:11 +08:00
|
|
|
struct tls_decrypt_ctx *dctx;
|
2019-03-20 10:03:36 +08:00
|
|
|
int iv_offset = 0;
|
2022-07-08 09:03:11 +08:00
|
|
|
u8 *mem;
|
2018-08-10 23:16:41 +08:00
|
|
|
|
2022-04-09 02:31:26 +08:00
|
|
|
if (darg->zc && (out_iov || out_sg)) {
|
2018-08-10 23:16:41 +08:00
|
|
|
if (out_iov)
|
2022-07-06 07:59:23 +08:00
|
|
|
n_sgout = 1 + tail_pages +
|
2022-02-03 06:20:31 +08:00
|
|
|
iov_iter_npages_cap(out_iov, INT_MAX, data_len);
|
2018-08-10 23:16:41 +08:00
|
|
|
else
|
|
|
|
n_sgout = sg_nents(out_sg);
|
2019-02-14 15:11:35 +08:00
|
|
|
n_sgin = skb_nsg(skb, rxm->offset + prot->prepend_size,
|
|
|
|
rxm->full_len - prot->prepend_size);
|
2018-08-10 23:16:41 +08:00
|
|
|
} else {
|
|
|
|
n_sgout = 0;
|
2022-04-09 02:31:26 +08:00
|
|
|
darg->zc = false;
|
2018-08-29 07:33:57 +08:00
|
|
|
n_sgin = skb_cow_data(skb, 0, &unused);
|
2018-08-10 23:16:41 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
if (n_sgin < 1)
|
|
|
|
return -EBADMSG;
|
|
|
|
|
|
|
|
/* Increment to accommodate AAD */
|
|
|
|
n_sgin = n_sgin + 1;
|
|
|
|
|
|
|
|
/* Allocate a single block of memory which contains
|
2022-07-08 09:03:11 +08:00
|
|
|
* aead_req || tls_decrypt_ctx.
|
|
|
|
* Both structs are variable length.
|
2018-08-10 23:16:41 +08:00
|
|
|
*/
|
2022-07-08 09:03:11 +08:00
|
|
|
aead_size = sizeof(*aead_req) + crypto_aead_reqsize(ctx->aead_recv);
|
|
|
|
mem = kmalloc(aead_size + struct_size(dctx, sg, n_sgin + n_sgout),
|
|
|
|
sk->sk_allocation);
|
2018-08-10 23:16:41 +08:00
|
|
|
if (!mem)
|
|
|
|
return -ENOMEM;
|
|
|
|
|
|
|
|
/* Segment the allocated memory */
|
|
|
|
aead_req = (struct aead_request *)mem;
|
2022-07-08 09:03:11 +08:00
|
|
|
dctx = (struct tls_decrypt_ctx *)(mem + aead_size);
|
|
|
|
sgin = &dctx->sg[0];
|
|
|
|
sgout = &dctx->sg[n_sgin];
|
2018-08-10 23:16:41 +08:00
|
|
|
|
2021-09-28 14:28:43 +08:00
|
|
|
/* For CCM based ciphers, first byte of nonce+iv is a constant */
|
|
|
|
switch (prot->cipher_type) {
|
|
|
|
case TLS_CIPHER_AES_CCM_128:
|
2022-07-08 09:03:11 +08:00
|
|
|
dctx->iv[0] = TLS_AES_CCM_IV_B0_BYTE;
|
2019-03-20 10:03:36 +08:00
|
|
|
iv_offset = 1;
|
2021-09-28 14:28:43 +08:00
|
|
|
break;
|
|
|
|
case TLS_CIPHER_SM4_CCM:
|
2022-07-08 09:03:11 +08:00
|
|
|
dctx->iv[0] = TLS_SM4_CCM_IV_B0_BYTE;
|
2021-09-28 14:28:43 +08:00
|
|
|
iv_offset = 1;
|
|
|
|
break;
|
2019-03-20 10:03:36 +08:00
|
|
|
}
|
|
|
|
|
2018-08-10 23:16:41 +08:00
|
|
|
/* Prepare IV */
|
2020-11-24 23:24:48 +08:00
|
|
|
if (prot->version == TLS_1_3_VERSION ||
|
2022-04-12 03:19:17 +08:00
|
|
|
prot->cipher_type == TLS_CIPHER_CHACHA20_POLY1305) {
|
2022-07-08 09:03:11 +08:00
|
|
|
memcpy(&dctx->iv[iv_offset], tls_ctx->rx.iv,
|
2022-03-31 15:04:28 +08:00
|
|
|
prot->iv_size + prot->salt_size);
|
2022-04-12 03:19:17 +08:00
|
|
|
} else {
|
|
|
|
err = skb_copy_bits(skb, rxm->offset + TLS_HEADER_SIZE,
|
2022-07-08 09:03:11 +08:00
|
|
|
&dctx->iv[iv_offset] + prot->salt_size,
|
2022-04-12 03:19:17 +08:00
|
|
|
prot->iv_size);
|
|
|
|
if (err < 0) {
|
|
|
|
kfree(mem);
|
|
|
|
return err;
|
|
|
|
}
|
2022-07-08 09:03:11 +08:00
|
|
|
memcpy(&dctx->iv[iv_offset], tls_ctx->rx.iv, prot->salt_size);
|
2022-04-12 03:19:17 +08:00
|
|
|
}
|
2022-07-08 09:03:11 +08:00
|
|
|
xor_iv_with_seq(prot, &dctx->iv[iv_offset], tls_ctx->rx.rec_seq);
|
2018-08-10 23:16:41 +08:00
|
|
|
|
|
|
|
/* Prepare AAD */
|
2022-07-08 09:03:11 +08:00
|
|
|
tls_make_aad(dctx->aad, rxm->full_len - prot->overhead_size +
|
2019-02-14 15:11:35 +08:00
|
|
|
prot->tail_size,
|
2022-04-08 11:38:16 +08:00
|
|
|
tls_ctx->rx.rec_seq, tlm->control, prot);
|
2018-08-10 23:16:41 +08:00
|
|
|
|
|
|
|
/* Prepare sgin */
|
|
|
|
sg_init_table(sgin, n_sgin);
|
2022-07-08 09:03:11 +08:00
|
|
|
sg_set_buf(&sgin[0], dctx->aad, prot->aad_size);
|
2018-08-10 23:16:41 +08:00
|
|
|
err = skb_to_sgvec(skb, &sgin[1],
|
2019-02-14 15:11:35 +08:00
|
|
|
rxm->offset + prot->prepend_size,
|
|
|
|
rxm->full_len - prot->prepend_size);
|
2018-08-10 23:16:41 +08:00
|
|
|
if (err < 0) {
|
|
|
|
kfree(mem);
|
|
|
|
return err;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (n_sgout) {
|
|
|
|
if (out_iov) {
|
|
|
|
sg_init_table(sgout, n_sgout);
|
2022-07-08 09:03:11 +08:00
|
|
|
sg_set_buf(&sgout[0], dctx->aad, prot->aad_size);
|
2018-08-10 23:16:41 +08:00
|
|
|
|
2022-07-06 07:59:23 +08:00
|
|
|
err = tls_setup_from_iter(out_iov, data_len,
|
2022-04-09 02:31:24 +08:00
|
|
|
&pages, &sgout[1],
|
2022-07-06 07:59:23 +08:00
|
|
|
(n_sgout - 1 - tail_pages));
|
2018-08-10 23:16:41 +08:00
|
|
|
if (err < 0)
|
|
|
|
goto fallback_to_reg_recv;
|
2022-07-06 07:59:23 +08:00
|
|
|
|
|
|
|
if (prot->tail_size) {
|
|
|
|
sg_unmark_end(&sgout[pages]);
|
2022-07-08 09:03:11 +08:00
|
|
|
sg_set_buf(&sgout[pages + 1], &dctx->tail,
|
2022-07-06 07:59:23 +08:00
|
|
|
prot->tail_size);
|
|
|
|
sg_mark_end(&sgout[pages + 1]);
|
|
|
|
}
|
2018-08-10 23:16:41 +08:00
|
|
|
} else if (out_sg) {
|
|
|
|
memcpy(sgout, out_sg, n_sgout * sizeof(*sgout));
|
|
|
|
} else {
|
|
|
|
goto fallback_to_reg_recv;
|
|
|
|
}
|
|
|
|
} else {
|
|
|
|
fallback_to_reg_recv:
|
|
|
|
sgout = sgin;
|
|
|
|
pages = 0;
|
2022-04-09 02:31:26 +08:00
|
|
|
darg->zc = false;
|
2018-08-10 23:16:41 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
/* Prepare and submit AEAD request */
|
2022-07-08 09:03:11 +08:00
|
|
|
err = tls_do_decryption(sk, skb, sgin, sgout, dctx->iv,
|
2022-07-06 07:59:22 +08:00
|
|
|
data_len + prot->tail_size, aead_req, darg);
|
2022-04-12 03:19:15 +08:00
|
|
|
if (darg->async)
|
|
|
|
return 0;
|
2018-08-10 23:16:41 +08:00
|
|
|
|
2022-07-06 07:59:23 +08:00
|
|
|
if (prot->tail_size)
|
2022-07-08 09:03:11 +08:00
|
|
|
darg->tail = dctx->tail;
|
2022-07-06 07:59:23 +08:00
|
|
|
|
2018-08-10 23:16:41 +08:00
|
|
|
/* Release the pages in case iov was mapped to pages */
|
|
|
|
for (; pages > 0; pages--)
|
|
|
|
put_page(sg_page(&sgout[pages]));
|
|
|
|
|
|
|
|
kfree(mem);
|
|
|
|
return err;
|
|
|
|
}
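The IV preparation in decrypt_internal() ends with xor_iv_with_seq(), which for TLS 1.3 (and ChaCha20-Poly1305) builds the per-record nonce per RFC 8446 section 5.3: the 64-bit record sequence number, left-padded to the 12-byte IV length, is XORed into the static IV. A minimal sketch of that derivation, assuming fixed TLS 1.3 sizes (the helper name is illustrative):

```c
#include <stddef.h>

#define TLS13_IV_LEN 12   /* AEAD nonce length (RFC 8446) */
#define TLS_SEQ_LEN   8   /* 64-bit record sequence number */

/* Sketch of the RFC 8446 sec. 5.3 nonce derivation behind
 * xor_iv_with_seq(): copy the static IV, then XOR the big-endian
 * sequence number into its last 8 bytes. */
static void tls13_record_nonce(unsigned char nonce[TLS13_IV_LEN],
                               const unsigned char static_iv[TLS13_IV_LEN],
                               const unsigned char seq[TLS_SEQ_LEN])
{
        size_t i;

        for (i = 0; i < TLS13_IV_LEN; i++)
                nonce[i] = static_iv[i];
        for (i = 0; i < TLS_SEQ_LEN; i++)
                nonce[TLS13_IV_LEN - TLS_SEQ_LEN + i] ^= seq[i];
}
```

Because the sequence number advances per record (tls_advance_record_sn() in the code above), every record gets a unique nonce without transmitting one on the wire.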
|
|
|
|
|
2018-07-13 19:33:40 +08:00
|
|
|
static int decrypt_skb_update(struct sock *sk, struct sk_buff *skb,
|
2022-04-09 02:31:26 +08:00
|
|
|
struct iov_iter *dest,
|
|
|
|
struct tls_decrypt_arg *darg)
|
2018-07-13 19:33:40 +08:00
|
|
|
{
|
|
|
|
struct tls_context *tls_ctx = tls_get_ctx(sk);
|
2019-02-14 15:11:35 +08:00
|
|
|
struct tls_prot_info *prot = &tls_ctx->prot_info;
|
2018-07-13 19:33:40 +08:00
|
|
|
struct strp_msg *rxm = strp_msg(skb);
|
2022-04-08 11:38:17 +08:00
|
|
|
struct tls_msg *tlm = tls_msg(skb);
|
2022-04-08 11:38:22 +08:00
|
|
|
int pad, err;
|
2018-07-13 19:33:40 +08:00
|
|
|
|
2022-04-08 11:38:22 +08:00
|
|
|
if (tlm->decrypted) {
|
2022-04-09 02:31:26 +08:00
|
|
|
darg->zc = false;
|
2022-04-26 07:33:09 +08:00
|
|
|
darg->async = false;
|
2022-04-08 11:38:22 +08:00
|
|
|
return 0;
|
|
|
|
}
|
2019-09-03 12:31:05 +08:00
|
|
|
|
2022-04-08 11:38:22 +08:00
|
|
|
if (tls_ctx->rx_conf == TLS_HW) {
|
|
|
|
err = tls_device_decrypted(sk, tls_ctx, skb, rxm);
|
|
|
|
if (err < 0)
|
|
|
|
return err;
|
2022-04-08 11:38:23 +08:00
|
|
|
if (err > 0) {
|
|
|
|
tlm->decrypted = 1;
|
2022-04-09 02:31:26 +08:00
|
|
|
darg->zc = false;
|
2022-04-26 07:33:09 +08:00
|
|
|
darg->async = false;
|
2022-04-08 11:38:22 +08:00
|
|
|
goto decrypt_done;
|
2018-08-29 17:56:55 +08:00
|
|
|
}
|
2022-04-08 11:38:22 +08:00
|
|
|
}
|
2019-01-31 05:58:31 +08:00
|
|
|
|
2022-04-09 02:31:26 +08:00
|
|
|
err = decrypt_internal(sk, skb, dest, NULL, darg);
|
2022-07-05 19:08:37 +08:00
|
|
|
if (err < 0) {
|
|
|
|
if (err == -EBADMSG)
|
|
|
|
TLS_INC_STATS(sock_net(sk), LINUX_MIB_TLSDECRYPTERROR);
|
2022-04-08 11:38:22 +08:00
|
|
|
return err;
|
2022-07-05 19:08:37 +08:00
|
|
|
}
|
2022-04-12 03:19:15 +08:00
|
|
|
if (darg->async)
|
|
|
|
goto decrypt_next;
|
2022-07-06 07:59:23 +08:00
|
|
|
/* If opportunistic TLS 1.3 ZC failed retry without ZC */
|
|
|
|
if (unlikely(darg->zc && prot->version == TLS_1_3_VERSION &&
|
|
|
|
darg->tail != TLS_RECORD_TYPE_DATA)) {
|
|
|
|
darg->zc = false;
|
2022-07-06 07:59:24 +08:00
|
|
|
TLS_INC_STATS(sock_net(sk), LINUX_MIB_TLSDECRYPTRETRY);
|
2022-07-06 07:59:23 +08:00
|
|
|
return decrypt_skb_update(sk, skb, dest, darg);
|
|
|
|
}
|
2018-07-13 19:33:40 +08:00
|
|
|
|
2022-04-08 11:38:22 +08:00
|
|
|
decrypt_done:
|
2022-07-06 07:59:23 +08:00
|
|
|
pad = tls_padding_length(prot, skb, darg);
|
2022-04-08 11:38:22 +08:00
|
|
|
if (pad < 0)
|
|
|
|
return pad;
|
|
|
|
|
|
|
|
rxm->full_len -= pad;
|
|
|
|
rxm->offset += prot->prepend_size;
|
|
|
|
rxm->full_len -= prot->overhead_size;
|
|
|
|
tlm->decrypted = 1;
|
2022-04-12 03:19:15 +08:00
|
|
|
decrypt_next:
|
|
|
|
tls_advance_record_sn(sk, prot, &tls_ctx->rx);
|
2022-04-08 11:38:22 +08:00
|
|
|
|
|
|
|
return 0;
|
2018-07-13 19:33:40 +08:00
|
|
|
}
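The darg->tail check and tls_padding_length() call above rely on the TLS 1.3 inner-plaintext format (RFC 8446 sec. 5.2): the decrypted record is content || type-byte || zero padding, so the real content type sits just before the run of trailing zeros. A standalone sketch of that depadding scan, assuming the whole plaintext is in one buffer (the function name is illustrative):

```c
/* Sketch of TLS 1.3 inner-plaintext parsing (RFC 8446 sec. 5.2):
 * scan back past the zero padding; the first nonzero byte from the end
 * is the inner content type. Returns the number of trailing bytes to
 * strip (type byte + padding), or -1 if the record is all zeros. */
static int tls13_strip_padding(const unsigned char *plain, int len,
                               unsigned char *type)
{
        int i = len;

        while (i > 0 && plain[i - 1] == 0)
                i--;
        if (i == 0)
                return -1;              /* no content type: malformed */
        *type = plain[i - 1];
        return len - (i - 1);           /* type byte plus padding */
}
```

This is also why the zero-copy retry above exists: with TLS 1.3 the record type is only known after decryption, so a record that turns out not to be application data must be re-decrypted into internal buffers.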
|
|
|
|
|
|
|
|
int decrypt_skb(struct sock *sk, struct sk_buff *skb,
|
|
|
|
struct scatterlist *sgout)
|
2018-03-23 01:10:35 +08:00
|
|
|
{
|
2022-04-09 02:31:26 +08:00
|
|
|
struct tls_decrypt_arg darg = { .zc = true, };

	return decrypt_internal(sk, skb, NULL, sgout, &darg);
}

static int tls_record_content_type(struct msghdr *msg, struct tls_msg *tlm,
				   u8 *control)
{
	int err;

	if (!*control) {
		*control = tlm->control;
		if (!*control)
			return -EBADMSG;

		err = put_cmsg(msg, SOL_TLS, TLS_GET_RECORD_TYPE,
			       sizeof(*control), control);
		if (*control != TLS_RECORD_TYPE_DATA) {
			if (err || msg->msg_flags & MSG_CTRUNC)
				return -EIO;
		}
	} else if (*control != tlm->control) {
		return 0;
	}

	return 1;
}

/* This function traverses the rx_list in the TLS receive context to copy
 * the decrypted records into the buffer provided by the caller when
 * zero-copy is not true. Further, the records are removed from the rx_list
 * if it is not a peek case and the record has been consumed completely.
 */
static int process_rx_list(struct tls_sw_context_rx *ctx,
			   struct msghdr *msg,
			   u8 *control,
			   size_t skip,
			   size_t len,
			   bool zc,
			   bool is_peek)
{
	struct sk_buff *skb = skb_peek(&ctx->rx_list);
	struct tls_msg *tlm;
	ssize_t copied = 0;
	int err;

	while (skip && skb) {
		struct strp_msg *rxm = strp_msg(skb);

		tlm = tls_msg(skb);

		err = tls_record_content_type(msg, tlm, control);
		if (err <= 0)
			goto out;

		if (skip < rxm->full_len)
			break;

		skip = skip - rxm->full_len;
		skb = skb_peek_next(skb, &ctx->rx_list);
	}

	while (len && skb) {
		struct sk_buff *next_skb;
		struct strp_msg *rxm = strp_msg(skb);
		int chunk = min_t(unsigned int, rxm->full_len - skip, len);

		tlm = tls_msg(skb);

		err = tls_record_content_type(msg, tlm, control);
		if (err <= 0)
			goto out;

		if (!zc || (rxm->full_len - skip) > len) {
			err = skb_copy_datagram_msg(skb, rxm->offset + skip,
						    msg, chunk);
			if (err < 0)
				goto out;
		}

		len = len - chunk;
		copied = copied + chunk;

		/* Consume the data from record if it is non-peek case */
		if (!is_peek) {
			rxm->offset = rxm->offset + chunk;
			rxm->full_len = rxm->full_len - chunk;

			/* Return if there is unconsumed data in the record */
			if (rxm->full_len - skip)
				break;
		}

		/* The remaining skip-bytes must lie in 1st record in rx_list.
		 * So from the 2nd record, 'skip' should be 0.
		 */
		skip = 0;

		if (msg)
			msg->msg_flags |= MSG_EOR;

		next_skb = skb_peek_next(skb, &ctx->rx_list);

		if (!is_peek) {
			__skb_unlink(skb, &ctx->rx_list);
			consume_skb(skb);
		}

		skb = next_skb;
	}
	err = 0;

out:
	return copied ? : err;
}

static void
tls_read_flush_backlog(struct sock *sk, struct tls_prot_info *prot,
		       size_t len_left, size_t decrypted, ssize_t done,
		       size_t *flushed_at)
{
	size_t max_rec;

	if (len_left <= decrypted)
		return;

	max_rec = prot->overhead_size - prot->tail_size + TLS_MAX_PAYLOAD_SIZE;
	if (done - *flushed_at < SZ_128K && tcp_inq(sk) > max_rec)
		return;

	*flushed_at = done;
	sk_flush_backlog(sk);
}

int tls_sw_recvmsg(struct sock *sk,
		   struct msghdr *msg,
		   size_t len,
		   int flags,
		   int *addr_len)
{
	struct tls_context *tls_ctx = tls_get_ctx(sk);
	struct tls_sw_context_rx *ctx = tls_sw_ctx_rx(tls_ctx);
	struct tls_prot_info *prot = &tls_ctx->prot_info;
	struct sk_psock *psock;
	unsigned char control = 0;
	ssize_t decrypted = 0;
	size_t flushed_at = 0;
	struct strp_msg *rxm;
	struct tls_msg *tlm;
	struct sk_buff *skb;
	ssize_t copied = 0;
	bool async = false;
	int target, err = 0;
	long timeo;
	bool is_kvec = iov_iter_is_kvec(&msg->msg_iter);
	bool is_peek = flags & MSG_PEEK;
	bool bpf_strp_enabled;
	bool zc_capable;

	if (unlikely(flags & MSG_ERRQUEUE))
		return sock_recv_errqueue(sk, msg, len, SOL_IP, IP_RECVERR);

	psock = sk_psock_get(sk);
	lock_sock(sk);
	bpf_strp_enabled = sk_psock_strp_enabled(psock);

	/* If crypto failed the connection is broken */
	err = ctx->async_wait.err;
	if (err)
		goto end;

	/* Process pending decrypted records. It must be non-zero-copy */
	err = process_rx_list(ctx, msg, &control, 0, len, false, is_peek);
	if (err < 0)
		goto end;

	copied = err;
	if (len <= copied)
		goto end;

	target = sock_rcvlowat(sk, flags & MSG_WAITALL, len);
	len = len - copied;
	timeo = sock_rcvtimeo(sk, flags & MSG_DONTWAIT);

	zc_capable = !bpf_strp_enabled && !is_kvec && !is_peek &&
		     ctx->zc_capable;
	decrypted = 0;
	while (len && (decrypted + copied < target || ctx->recv_pkt)) {
		struct tls_decrypt_arg darg = {};
		/* tls: rx: don't report text length from the bowels of decrypt
		 *
		 * We plumb a pointer to chunk all the way to the decryption
		 * method. It's set to the length of the text when
		 * decrypt_skb_update() returns.
		 *
		 * The code is written this way because the original TLS
		 * implementation passed &chunk to zerocopy_from_iter() and
		 * this was carried forward as the code got more complex,
		 * without any refactoring.
		 *
		 * The fix for peek() introduced a new variable - to_decrypt -
		 * which for all practical purposes is what chunk is going to
		 * get set to. Spare ourselves the pointer passing, use
		 * to_decrypt.
		 *
		 * Note that chunk / to_decrypt was mostly needed for the async
		 * path, since the sync path would access rxm->full_len
		 * (decryption transforms full_len from record size to text
		 * size). Use the right source of truth more explicitly.
		 *
		 * We have three cases:
		 * - async:  TLS 1.2 only, so chunk == to_decrypt, but we need
		 *           the min() because to_decrypt is a whole record and
		 *           we don't want to underflow len. Note that we can't
		 *           handle a partial record by falling back to sync as
		 *           it would introduce reordering against records in
		 *           flight.
		 * - zc:     again, TLS 1.2 only for now, so chunk ==
		 *           to_decrypt; we don't do zc if len < to_decrypt, no
		 *           need to check again.
		 * - normal: it already handles chunk > len, we can factor out
		 *           the assignment to rxm->full_len and share it with
		 *           zc.
		 *
		 * Signed-off-by: Jakub Kicinski <kuba@kernel.org>
		 * Signed-off-by: David S. Miller <davem@davemloft.net>
		 */
		int to_decrypt, chunk;

		skb = tls_wait_data(sk, psock, flags & MSG_DONTWAIT, timeo, &err);
		if (!skb) {
			if (psock) {
				chunk = sk_msg_recvmsg(sk, psock, msg, len,
						       flags);
				if (chunk > 0)
					goto leave_on_list;
			}
			goto recv_end;
		}

		rxm = strp_msg(skb);
		tlm = tls_msg(skb);

		to_decrypt = rxm->full_len - prot->overhead_size;

		if (zc_capable && to_decrypt <= len &&
		    tlm->control == TLS_RECORD_TYPE_DATA)
			darg.zc = true;

		/* Do not use async mode if record is non-data */
		if (tlm->control == TLS_RECORD_TYPE_DATA && !bpf_strp_enabled)
			darg.async = ctx->async_capable;
		else
			darg.async = false;

		err = decrypt_skb_update(sk, skb, &msg->msg_iter, &darg);
		if (err < 0) {
			tls_err_abort(sk, -EBADMSG);
			goto recv_end;
		}

		async |= darg.async;

		/* If the type of records being processed is not known yet,
		 * set it to record type just dequeued. If it is already known,
		 * but does not match the record type just dequeued, go to end.
		 * We always get record type here since for tls1.2, record type
		 * is known just after record is dequeued from stream parser.
		 * For tls1.3, we disable async.
		 */
		err = tls_record_content_type(msg, tlm, &control);
		if (err <= 0)
			goto recv_end;

		/* periodically flush backlog, and feed strparser */
		tls_read_flush_backlog(sk, prot, len, to_decrypt,
				       decrypted + copied, &flushed_at);

		ctx->recv_pkt = NULL;
		__strp_unpause(&ctx->strp);
		__skb_queue_tail(&ctx->rx_list, skb);
tls: rx: don't report text length from the bowels of decrypt
We plumb pointer to chunk all the way to the decryption method.
It's set to the length of the text when decrypt_skb_update()
returns.
I think the code is written this way because original TLS
implementation passed &chunk to zerocopy_from_iter() and this
was carried forward as the code gotten more complex, without
any refactoring.
The fix for peek() introduced a new variable - to_decrypt
which for all practical purposes is what chunk is going to
get set to. Spare ourselves the pointer passing, use to_decrypt.
Use this opportunity to clean things up a little further.
Note that chunk / to_decrypt was mostly needed for the async
path, since the sync path would access rxm->full_len (decryption
transforms full_len from record size to text size). Use the
right source of truth more explicitly.
We have three cases:
- async - it's TLS 1.2 only, so chunk == to_decrypt, but we
need the min() because to_decrypt is a whole record
and we don't want to underflow len. Note that we can't
handle partial record by falling back to sync as it
would introduce reordering against records in flight.
- zc - again, TLS 1.2 only for now, so chunk == to_decrypt,
we don't do zc if len < to_decrypt, no need to check again.
- normal - it already handles chunk > len, we can factor out the
assignment to rxm->full_len and share it with zc.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-04-09 02:31:25 +08:00
|
|
|
if (async) {
|
|
|
|
/* TLS 1.2-only, to_decrypt must be text length */
|
|
|
|
chunk = min_t(int, to_decrypt, len);
|
2022-04-09 02:31:34 +08:00
|
|
|
leave_on_list:
|
|
|
|
decrypted += chunk;
|
|
|
|
len -= chunk;
|
|
|
|
continue;
|
		}

		/* TLS 1.3 may have updated the length by more than overhead */
		chunk = rxm->full_len;

		if (!darg.zc) {
			bool partially_consumed = chunk > len;

			if (bpf_strp_enabled) {
				/* BPF may try to queue the skb */
				__skb_unlink(skb, &ctx->rx_list);
				err = sk_psock_tls_strp_read(psock, skb);
				if (err != __SK_PASS) {
					rxm->offset = rxm->offset + rxm->full_len;
					rxm->full_len = 0;
					if (err == __SK_DROP)
						consume_skb(skb);
					continue;
				}
				__skb_queue_tail(&ctx->rx_list, skb);
			}

			if (partially_consumed)
				chunk = len;

			err = skb_copy_datagram_msg(skb, rxm->offset,
						    msg, chunk);
			if (err < 0)
				goto recv_end;

			if (is_peek)
				goto leave_on_list;

			if (partially_consumed) {
				rxm->offset += chunk;
				rxm->full_len -= chunk;
				goto leave_on_list;
			}
tls: RX path for ktls
Add the rx path for the tls software implementation.
recvmsg, splice_read, and poll are implemented.
An additional sockopt TLS_RX is added, with the same interface as
TLS_TX. Either TLS_RX or TLS_TX may be provided separately, or
together (with two different setsockopt calls with appropriate keys).
Control messages are passed via CMSG in a similar way to transmit.
If no cmsg buffer is passed, then only application data records
will be passed to userspace, and EIO is returned for other record
types.
EBADMSG is returned for decryption errors, EMSGSIZE for framing that
is too big, and EBADMSG for framing that is too small (matching
openssl semantics). EINVAL is returned for TLS versions that do not
match the original setsockopt call. All are unrecoverable.
strparser is used to parse TLS framing. Decryption is done directly
into userspace buffers if they are large enough to support it; otherwise
sk_cow_data is called (similar to ipsec), and buffers are decrypted in
place and copied. splice_read always decrypts in place, since no
buffers are provided to decrypt into.
sk_poll is overridden, and only returns POLLIN if a full TLS message is
received. Otherwise we wait for strparser to finish reading a full frame.
Actual decryption is only done during recvmsg or splice_read calls.
Signed-off-by: Dave Watson <davejwatson@fb.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-23 01:10:35 +08:00
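From userspace, the TLS_RX sockopt described above is installed after the TLS handshake completes. A hedged sketch for TLS 1.2 with AES-128-GCM (the fd, key, iv, salt, and record sequence are assumed to come from an already-finished handshake; `enable_ktls_rx` is a hypothetical helper, and `SOL_TLS`/`TCP_ULP` fallback values are supplied in case older libc headers lack them):

```c
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <linux/tls.h>

#ifndef SOL_TLS
#define SOL_TLS 282
#endif
#ifndef TCP_ULP
#define TCP_ULP 31
#endif

/* Sketch: attach the "tls" ULP and install RX keys for kTLS.
 * Returns 0 on success, -1 with errno set otherwise. */
static int enable_ktls_rx(int fd, const unsigned char *key,
			  const unsigned char *iv, const unsigned char *salt,
			  const unsigned char *rec_seq)
{
	struct tls12_crypto_info_aes_gcm_128 ci;

	memset(&ci, 0, sizeof(ci));
	ci.info.version = TLS_1_2_VERSION;
	ci.info.cipher_type = TLS_CIPHER_AES_GCM_128;
	memcpy(ci.key, key, TLS_CIPHER_AES_GCM_128_KEY_SIZE);
	memcpy(ci.iv, iv, TLS_CIPHER_AES_GCM_128_IV_SIZE);
	memcpy(ci.salt, salt, TLS_CIPHER_AES_GCM_128_SALT_SIZE);
	memcpy(ci.rec_seq, rec_seq, TLS_CIPHER_AES_GCM_128_REC_SEQ_SIZE);

	if (setsockopt(fd, IPPROTO_TCP, TCP_ULP, "tls", sizeof("tls")))
		return -1;
	return setsockopt(fd, SOL_TLS, TLS_RX, &ci, sizeof(ci));
}
```

TLS_TX takes the same structure; setting both directions takes two setsockopt calls, as the commit message notes.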
		}

		decrypted += chunk;
		len -= chunk;

		__skb_unlink(skb, &ctx->rx_list);
		consume_skb(skb);

		/* Return full control message to userspace before trying
		 * to parse another message type
		 */
		msg->msg_flags |= MSG_EOR;
		if (control != TLS_RECORD_TYPE_DATA)
			break;
	}
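The MSG_EOR / control-record handling above is what userspace sees through the cmsg interface mentioned in the RX-path commit message. A minimal sketch of extracting the record type from a msghdr that recvmsg() filled in (`tls_record_type` is a hypothetical helper; the fallback constant values match the uapi header but are an assumption here):

```c
#include <sys/socket.h>

#ifndef SOL_TLS
#define SOL_TLS 282
#endif
#ifndef TLS_GET_RECORD_TYPE
#define TLS_GET_RECORD_TYPE 2
#endif

/* Walk the control data recvmsg() returned and pull out the TLS record
 * type; returns -1 if no TLS cmsg was attached (plain application data
 * when no cmsg buffer was supplied). */
static int tls_record_type(struct msghdr *msg)
{
	struct cmsghdr *cmsg;

	for (cmsg = CMSG_FIRSTHDR(msg); cmsg; cmsg = CMSG_NXTHDR(msg, cmsg))
		if (cmsg->cmsg_level == SOL_TLS &&
		    cmsg->cmsg_type == TLS_GET_RECORD_TYPE)
			return *(unsigned char *)CMSG_DATA(cmsg);
	return -1;
}
```

A caller would check the returned type against TLS_RECORD_TYPE_DATA before treating the payload as application bytes.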
recv_end:
	if (async) {
		int ret, pending;

		/* Wait for all previously submitted records to be decrypted */
		spin_lock_bh(&ctx->decrypt_compl_lock);
		reinit_completion(&ctx->async_wait.completion);
		pending = atomic_read(&ctx->decrypt_pending);
		spin_unlock_bh(&ctx->decrypt_compl_lock);
		if (pending) {
			ret = crypto_wait_req(-EINPROGRESS, &ctx->async_wait);
			if (ret) {
				if (err >= 0 || err == -EINPROGRESS)
					err = ret;
				decrypted = 0;
				goto end;
			}
		}

		/* Drain records from the rx_list & copy if required */
		if (is_peek || is_kvec)
			err = process_rx_list(ctx, msg, &control, copied,
					      decrypted, false, is_peek);
		else
			err = process_rx_list(ctx, msg, &control, 0,
					      decrypted, true, is_peek);
		decrypted = max(err, 0);
	}

	copied += decrypted;

end:
	release_sock(sk);
	if (psock)
		sk_psock_put(sk, psock);
	return copied ? : err;
}

ssize_t tls_sw_splice_read(struct socket *sock, loff_t *ppos,
			   struct pipe_inode_info *pipe,
			   size_t len, unsigned int flags)
{
	struct tls_context *tls_ctx = tls_get_ctx(sock->sk);
	struct tls_sw_context_rx *ctx = tls_sw_ctx_rx(tls_ctx);
	struct strp_msg *rxm = NULL;
	struct sock *sk = sock->sk;
	struct tls_msg *tlm;
	struct sk_buff *skb;
	ssize_t copied = 0;
	bool from_queue;
	int err = 0;
	long timeo;
	int chunk;

	lock_sock(sk);

	timeo = sock_rcvtimeo(sk, flags & SPLICE_F_NONBLOCK);
	from_queue = !skb_queue_empty(&ctx->rx_list);
	if (from_queue) {
		skb = __skb_dequeue(&ctx->rx_list);
	} else {
		struct tls_decrypt_arg darg = {};

		skb = tls_wait_data(sk, NULL, flags & SPLICE_F_NONBLOCK, timeo,
				    &err);
		if (!skb)
			goto splice_read_end;
		err = decrypt_skb_update(sk, skb, NULL, &darg);
		if (err < 0) {
			tls_err_abort(sk, -EBADMSG);
			goto splice_read_end;
		}
	}

	rxm = strp_msg(skb);
	tlm = tls_msg(skb);

	/* splice does not support reading control messages */
	if (tlm->control != TLS_RECORD_TYPE_DATA) {
		err = -EINVAL;
		goto splice_read_end;
	}
	chunk = min_t(unsigned int, rxm->full_len, len);
	copied = skb_splice_bits(skb, sk, rxm->offset, pipe, chunk, flags);
	if (copied < 0)
		goto splice_read_end;

	if (!from_queue) {
		ctx->recv_pkt = NULL;
		__strp_unpause(&ctx->strp);
	}
	if (chunk < rxm->full_len) {
		__skb_queue_head(&ctx->rx_list, skb);
		rxm->offset += len;
		rxm->full_len -= len;
	} else {
		consume_skb(skb);
	}
splice_read_end:
	release_sock(sk);
	return copied ? : err;
}

bool tls_sw_sock_is_readable(struct sock *sk)
{
	struct tls_context *tls_ctx = tls_get_ctx(sk);
	struct tls_sw_context_rx *ctx = tls_sw_ctx_rx(tls_ctx);
	bool ingress_empty = true;
	struct sk_psock *psock;
	rcu_read_lock();
	psock = sk_psock(sk);
	if (psock)
		ingress_empty = list_empty(&psock->ingress_msg);
	rcu_read_unlock();
	return !ingress_empty || ctx->recv_pkt ||
		!skb_queue_empty(&ctx->rx_list);
|
|
|
}
|
|
|
|
|
|
|
|
static int tls_read_size(struct strparser *strp, struct sk_buff *skb)
|
|
|
|
{
|
|
|
|
struct tls_context *tls_ctx = tls_get_ctx(strp->sk);
|
2019-02-14 15:11:35 +08:00
|
|
|
struct tls_prot_info *prot = &tls_ctx->prot_info;
|
2018-06-26 07:55:05 +08:00
|
|
|
char header[TLS_HEADER_SIZE + MAX_IV_SIZE];
|
tls: RX path for ktls
Add rx path for tls software implementation.
recvmsg, splice_read, and poll implemented.
An additional sockopt TLS_RX is added, with the same interface as
TLS_TX. Either TLX_RX or TLX_TX may be provided separately, or
together (with two different setsockopt calls with appropriate keys).
Control messages are passed via CMSG in a similar way to transmit.
If no cmsg buffer is passed, then only application data records
will be passed to userspace, and EIO is returned for other types of
alerts.
EBADMSG is passed for decryption errors, and EMSGSIZE is passed for
framing too big, and EBADMSG for framing too small (matching openssl
semantics). EINVAL is returned for TLS versions that do not match the
original setsockopt call. All are unrecoverable.
strparser is used to parse TLS framing. Decryption is done directly
in to userspace buffers if they are large enough to support it, otherwise
sk_cow_data is called (similar to ipsec), and buffers are decrypted in
place and copied. splice_read always decrypts in place, since no
buffers are provided to decrypt in to.
sk_poll is overridden, and only returns POLLIN if a full TLS message is
received. Otherwise we wait for strparser to finish reading a full frame.
Actual decryption is only done during recvmsg or splice_read calls.
Signed-off-by: Dave Watson <davejwatson@fb.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-23 01:10:35 +08:00
|
|
|
struct strp_msg *rxm = strp_msg(skb);
|
2022-04-08 11:38:16 +08:00
|
|
|
struct tls_msg *tlm = tls_msg(skb);
|
tls: RX path for ktls
Add rx path for tls software implementation.
recvmsg, splice_read, and poll implemented.
An additional sockopt TLS_RX is added, with the same interface as
TLS_TX. Either TLX_RX or TLX_TX may be provided separately, or
together (with two different setsockopt calls with appropriate keys).
Control messages are passed via CMSG in a similar way to transmit.
If no cmsg buffer is passed, then only application data records
will be passed to userspace, and EIO is returned for other types of
alerts.
EBADMSG is passed for decryption errors, and EMSGSIZE is passed for
framing too big, and EBADMSG for framing too small (matching openssl
semantics). EINVAL is returned for TLS versions that do not match the
original setsockopt call. All are unrecoverable.
strparser is used to parse TLS framing. Decryption is done directly
in to userspace buffers if they are large enough to support it, otherwise
sk_cow_data is called (similar to ipsec), and buffers are decrypted in
place and copied. splice_read always decrypts in place, since no
buffers are provided to decrypt in to.
sk_poll is overridden, and only returns POLLIN if a full TLS message is
received. Otherwise we wait for strparser to finish reading a full frame.
Actual decryption is only done during recvmsg or splice_read calls.
Signed-off-by: Dave Watson <davejwatson@fb.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-23 01:10:35 +08:00
|
|
|
size_t cipher_overhead;
|
|
|
|
size_t data_len = 0;
|
|
|
|
int ret;
|
|
|
|
|
|
|
|
/* Verify that we have a full TLS header, or wait for more data */
|
2019-02-14 15:11:35 +08:00
|
|
|
if (rxm->offset + prot->prepend_size > skb->len)
|
tls: RX path for ktls
Add rx path for tls software implementation.
recvmsg, splice_read, and poll implemented.
An additional sockopt TLS_RX is added, with the same interface as
TLS_TX. Either TLX_RX or TLX_TX may be provided separately, or
together (with two different setsockopt calls with appropriate keys).
Control messages are passed via CMSG in a similar way to transmit.
If no cmsg buffer is passed, then only application data records
will be passed to userspace, and EIO is returned for other types of
alerts.
EBADMSG is passed for decryption errors, and EMSGSIZE is passed for
framing too big, and EBADMSG for framing too small (matching openssl
semantics). EINVAL is returned for TLS versions that do not match the
original setsockopt call. All are unrecoverable.
strparser is used to parse TLS framing. Decryption is done directly
in to userspace buffers if they are large enough to support it, otherwise
sk_cow_data is called (similar to ipsec), and buffers are decrypted in
place and copied. splice_read always decrypts in place, since no
buffers are provided to decrypt in to.
sk_poll is overridden, and only returns POLLIN if a full TLS message is
received. Otherwise we wait for strparser to finish reading a full frame.
Actual decryption is only done during recvmsg or splice_read calls.
Signed-off-by: Dave Watson <davejwatson@fb.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-23 01:10:35 +08:00
|
|
|
return 0;
|
|
|
|
|
2018-06-26 07:55:05 +08:00
|
|
|
/* Sanity-check size of on-stack buffer. */
|
2019-02-14 15:11:35 +08:00
|
|
|
if (WARN_ON(prot->prepend_size > sizeof(header))) {
|
2018-06-26 07:55:05 +08:00
|
|
|
ret = -EINVAL;
|
|
|
|
goto read_failure;
|
|
|
|
}

	/* Linearize header to local buffer */
	ret = skb_copy_bits(skb, rxm->offset, header, prot->prepend_size);
	if (ret < 0)
		goto read_failure;

	tlm->decrypted = 0;
	tlm->control = header[0];

	data_len = ((header[4] & 0xFF) | (header[3] << 8));

	cipher_overhead = prot->tag_size;
	if (prot->version != TLS_1_3_VERSION &&
	    prot->cipher_type != TLS_CIPHER_CHACHA20_POLY1305)
		cipher_overhead += prot->iv_size;

	if (data_len > TLS_MAX_PAYLOAD_SIZE + cipher_overhead +
	    prot->tail_size) {
		ret = -EMSGSIZE;
		goto read_failure;
	}
	if (data_len < cipher_overhead) {
		ret = -EBADMSG;
		goto read_failure;
	}

	/* Note that both TLS1.3 and TLS1.2 use TLS_1_2 version here */
	if (header[1] != TLS_1_2_VERSION_MINOR ||
	    header[2] != TLS_1_2_VERSION_MAJOR) {
		ret = -EINVAL;
		goto read_failure;
	}

	tls_device_rx_resync_new_rec(strp->sk, data_len + TLS_HEADER_SIZE,
				     TCP_SKB_CB(skb)->seq + rxm->offset);
	return data_len + TLS_HEADER_SIZE;

read_failure:
	tls_err_abort(strp->sk, ret);

	return ret;
}

static void tls_queue(struct strparser *strp, struct sk_buff *skb)
{
	struct tls_context *tls_ctx = tls_get_ctx(strp->sk);
	struct tls_sw_context_rx *ctx = tls_sw_ctx_rx(tls_ctx);

	ctx->recv_pkt = skb;
	strp_pause(strp);

	ctx->saved_data_ready(strp->sk);
}

static void tls_data_ready(struct sock *sk)
{
	struct tls_context *tls_ctx = tls_get_ctx(sk);
	struct tls_sw_context_rx *ctx = tls_sw_ctx_rx(tls_ctx);
	struct sk_psock *psock;

	strp_data_ready(&ctx->strp);

	psock = sk_psock_get(sk);
	if (psock) {
		if (!list_empty(&psock->ingress_msg))
			ctx->saved_data_ready(sk);
		sk_psock_put(sk, psock);
	}
}

void tls_sw_cancel_work_tx(struct tls_context *tls_ctx)
{
	struct tls_sw_context_tx *ctx = tls_sw_ctx_tx(tls_ctx);

	set_bit(BIT_TX_CLOSING, &ctx->tx_bitmask);
	set_bit(BIT_TX_SCHEDULED, &ctx->tx_bitmask);
	cancel_delayed_work_sync(&ctx->tx_work.work);
}

void tls_sw_release_resources_tx(struct sock *sk)
{
	struct tls_context *tls_ctx = tls_get_ctx(sk);
	struct tls_sw_context_tx *ctx = tls_sw_ctx_tx(tls_ctx);
	struct tls_rec *rec, *tmp;
	int pending;

	/* Wait for any pending async encryptions to complete */
	spin_lock_bh(&ctx->encrypt_compl_lock);
	ctx->async_notify = true;
	pending = atomic_read(&ctx->encrypt_pending);
	spin_unlock_bh(&ctx->encrypt_compl_lock);

	if (pending)
		crypto_wait_req(-EINPROGRESS, &ctx->async_wait);

	tls_tx_records(sk, -1);

net/tls: Fixed race condition in async encryption
On processors with multi-engine crypto accelerators, it is possible that
multiple records get encrypted in parallel and their encryption
completion is notified to different cpus in multicore processor. This
leads to the situation where tls_encrypt_done() starts executing in
parallel on different cores. In current implementation, encrypted
records are queued to tx_ready_list in tls_encrypt_done(). This requires
addition to linked list 'tx_ready_list' to be protected. As
tls_decrypt_done() could be executing in irq context, it is not possible
to protect the linked list addition operation using a lock.
To fix the problem, we remove linked list addition operation from the
irq context. We do tx_ready_list addition/removal operation from
application context only and get rid of possible multiple access to
the linked list. Before starting encryption on the record, we add it to
the tail of tx_ready_list. To prevent tls_tx_records() from transmitting
it, we mark the record with a new flag 'tx_ready' in 'struct tls_rec'.
When record encryption gets completed, tls_encrypt_done() has to only
update the 'tx_ready' flag to true & linked list add operation is not
required.
The changed logic brings some other side benefits. Since the records
are always submitted in tls sequence number order for encryption, the
tx_ready_list always remains sorted and addition of new records to it
does not have to traverse the linked list.
Lastly, we renamed tx_ready_list in 'struct tls_sw_context_tx' to
'tx_list'. This is because now some of the records at the tail are
not ready to transmit.
Fixes: a42055e8d2c3 ("net/tls: Add support for async encryption")
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-09-24 18:05:56 +08:00

	/* Free up un-sent records in tx_list. First, free
	 * the partially sent record if any at head of tx_list.
	 */
	if (tls_ctx->partially_sent_record) {
		tls_free_partial_record(sk, tls_ctx);
		rec = list_first_entry(&ctx->tx_list,
				       struct tls_rec, list);
		list_del(&rec->list);
		sk_msg_free(sk, &rec->msg_plaintext);
		kfree(rec);
	}

	list_for_each_entry_safe(rec, tmp, &ctx->tx_list, list) {
		list_del(&rec->list);
		sk_msg_free(sk, &rec->msg_encrypted);
		sk_msg_free(sk, &rec->msg_plaintext);
		kfree(rec);
	}

	crypto_free_aead(ctx->aead_send);
	tls_free_open_rec(sk);
}

void tls_sw_free_ctx_tx(struct tls_context *tls_ctx)
{
	struct tls_sw_context_tx *ctx = tls_sw_ctx_tx(tls_ctx);

	kfree(ctx);
}
|
|
|
|
|
2018-07-13 19:33:41 +08:00
|
|
|
void tls_sw_release_resources_rx(struct sock *sk)
|
2018-04-30 15:16:15 +08:00
|
|
|
{
|
|
|
|
struct tls_context *tls_ctx = tls_get_ctx(sk);
|
|
|
|
struct tls_sw_context_rx *ctx = tls_sw_ctx_rx(tls_ctx);
|
|
|
|
|
2019-04-20 07:52:19 +08:00
|
|
|
kfree(tls_ctx->rx.rec_seq);
|
|
|
|
kfree(tls_ctx->rx.iv);
|
|
|
|
|
tls: RX path for ktls
Add the rx path for the tls software implementation.
recvmsg, splice_read, and poll are implemented.
An additional sockopt TLS_RX is added, with the same interface as
TLS_TX. Either TLS_RX or TLS_TX may be provided separately, or
together (with two different setsockopt calls with appropriate keys).
Control messages are passed via CMSG in a similar way to transmit.
If no cmsg buffer is passed, then only application data records
will be passed to userspace, and EIO is returned for other record
types.
EBADMSG is returned for decryption errors, EMSGSIZE for framing that
is too large, and EBADMSG for framing that is too small (matching
openssl semantics). EINVAL is returned for TLS versions that do not
match the original setsockopt call. All are unrecoverable.
strparser is used to parse TLS framing. Decryption is done directly
into userspace buffers if they are large enough to support it;
otherwise sk_cow_data is called (similar to ipsec), and buffers are
decrypted in place and copied. splice_read always decrypts in place,
since no buffers are provided to decrypt into.
sk_poll is overridden, and only returns POLLIN once a full TLS
message has been received. Otherwise we wait for strparser to finish
reading a full frame. Actual decryption is only done during recvmsg
or splice_read calls.
Signed-off-by: Dave Watson <davejwatson@fb.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-23 01:10:35 +08:00
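The framing error semantics described above can be sketched as a small check over the 5-byte TLS record header. This is illustrative only: TLS_HDR_LEN and MAX_PLAINTEXT are assumed constants, and the kernel's real bounds live in its strparser callback, not here.

```c
/* Sketch of per-record framing checks with the error codes named in the
 * commit message: EMSGSIZE for oversized framing, EBADMSG for framing
 * too small. Constants and the zero-length check are illustrative. */
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <stdint.h>

#define TLS_HDR_LEN   5		/* type(1) + version(2) + length(2) */
#define MAX_PLAINTEXT 16384	/* 2^14 per the TLS RFCs */

/* Returns the full record length (header + payload), 0 if more data is
 * needed, or a negative errno mirroring the semantics above. */
static int tls_frame_len(const uint8_t *hdr, size_t avail)
{
	size_t payload;

	if (avail < TLS_HDR_LEN)
		return 0;			/* need more bytes */
	payload = ((size_t)hdr[3] << 8) | hdr[4];
	if (payload > MAX_PLAINTEXT + 256)	/* larger than any legal record */
		return -EMSGSIZE;
	if (payload == 0)			/* too small to carry anything */
		return -EBADMSG;
	return (int)(TLS_HDR_LEN + payload);
}
```

A strparser-style reader would call this on each buffered chunk and either wait for more data (0), abort the stream (negative), or consume exactly the returned number of bytes.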
|
|
|
if (ctx->aead_recv) {
|
2018-07-24 19:24:27 +08:00
|
|
|
kfree_skb(ctx->recv_pkt);
|
|
|
|
ctx->recv_pkt = NULL;
|
2022-04-12 03:19:08 +08:00
|
|
|
__skb_queue_purge(&ctx->rx_list);
|
2018-03-23 01:10:35 +08:00
|
|
|
crypto_free_aead(ctx->aead_recv);
|
|
|
|
strp_stop(&ctx->strp);
|
2019-07-20 01:29:17 +08:00
|
|
|
/* If tls_sw_strparser_arm() was not called (cleanup paths)
|
|
|
|
* we still want to strp_stop(), but sk->sk_data_ready was
|
|
|
|
* never swapped.
|
|
|
|
*/
|
|
|
|
if (ctx->saved_data_ready) {
|
|
|
|
write_lock_bh(&sk->sk_callback_lock);
|
|
|
|
sk->sk_data_ready = ctx->saved_data_ready;
|
|
|
|
write_unlock_bh(&sk->sk_callback_lock);
|
|
|
|
}
|
2018-03-23 01:10:35 +08:00
|
|
|
}
|
2018-07-13 19:33:41 +08:00
|
|
|
}
|
|
|
|
|
2019-07-20 01:29:17 +08:00
|
|
|
void tls_sw_strparser_done(struct tls_context *tls_ctx)
|
2018-07-13 19:33:41 +08:00
|
|
|
{
|
|
|
|
struct tls_sw_context_rx *ctx = tls_sw_ctx_rx(tls_ctx);
|
|
|
|
|
2019-07-20 01:29:17 +08:00
|
|
|
strp_done(&ctx->strp);
|
|
|
|
}
|
|
|
|
|
|
|
|
void tls_sw_free_ctx_rx(struct tls_context *tls_ctx)
|
|
|
|
{
|
|
|
|
struct tls_sw_context_rx *ctx = tls_sw_ctx_rx(tls_ctx);
|
2017-06-15 02:37:39 +08:00
|
|
|
|
|
|
|
kfree(ctx);
|
|
|
|
}
|
|
|
|
|
2019-07-20 01:29:17 +08:00
|
|
|
void tls_sw_free_resources_rx(struct sock *sk)
|
|
|
|
{
|
|
|
|
struct tls_context *tls_ctx = tls_get_ctx(sk);
|
|
|
|
|
|
|
|
tls_sw_release_resources_rx(sk);
|
|
|
|
tls_sw_free_ctx_rx(tls_ctx);
|
|
|
|
}
|
|
|
|
|
2018-09-24 18:05:56 +08:00
|
|
|
/* The work handler to transmit the encrypted records in tx_list */
|
2018-09-21 12:16:13 +08:00
|
|
|
static void tx_work_handler(struct work_struct *work)
|
|
|
|
{
|
|
|
|
struct delayed_work *delayed_work = to_delayed_work(work);
|
|
|
|
struct tx_work *tx_work = container_of(delayed_work,
|
|
|
|
struct tx_work, work);
|
|
|
|
struct sock *sk = tx_work->sk;
|
|
|
|
struct tls_context *tls_ctx = tls_get_ctx(sk);
|
2019-07-20 01:29:16 +08:00
|
|
|
struct tls_sw_context_tx *ctx;
|
2018-09-21 12:16:13 +08:00
|
|
|
|
2019-07-20 01:29:16 +08:00
|
|
|
if (unlikely(!tls_ctx))
|
2018-09-21 12:16:13 +08:00
|
|
|
return;
|
|
|
|
|
2019-07-20 01:29:16 +08:00
|
|
|
ctx = tls_sw_ctx_tx(tls_ctx);
|
|
|
|
if (test_bit(BIT_TX_CLOSING, &ctx->tx_bitmask))
|
|
|
|
return;
|
|
|
|
|
|
|
|
if (!test_and_clear_bit(BIT_TX_SCHEDULED, &ctx->tx_bitmask))
|
|
|
|
return;
|
2019-11-06 06:24:35 +08:00
|
|
|
mutex_lock(&tls_ctx->tx_lock);
|
2018-09-21 12:16:13 +08:00
|
|
|
lock_sock(sk);
|
|
|
|
tls_tx_records(sk, -1);
|
|
|
|
release_sock(sk);
|
2019-11-06 06:24:35 +08:00
|
|
|
mutex_unlock(&tls_ctx->tx_lock);
|
2018-09-21 12:16:13 +08:00
|
|
|
}
|
|
|
|
|
2019-02-27 23:38:04 +08:00
|
|
|
void tls_sw_write_space(struct sock *sk, struct tls_context *ctx)
|
|
|
|
{
|
|
|
|
struct tls_sw_context_tx *tx_ctx = tls_sw_ctx_tx(ctx);
|
|
|
|
|
|
|
|
/* Schedule the transmission if tx list is ready */
|
2019-11-06 06:24:34 +08:00
|
|
|
if (is_tx_ready(tx_ctx) &&
|
|
|
|
!test_and_set_bit(BIT_TX_SCHEDULED, &tx_ctx->tx_bitmask))
|
|
|
|
schedule_delayed_work(&tx_ctx->tx_work.work, 0);
|
2019-02-27 23:38:04 +08:00
|
|
|
}
|
|
|
|
|
2019-07-20 01:29:14 +08:00
|
|
|
void tls_sw_strparser_arm(struct sock *sk, struct tls_context *tls_ctx)
|
|
|
|
{
|
|
|
|
struct tls_sw_context_rx *rx_ctx = tls_sw_ctx_rx(tls_ctx);
|
|
|
|
|
|
|
|
write_lock_bh(&sk->sk_callback_lock);
|
|
|
|
rx_ctx->saved_data_ready = sk->sk_data_ready;
|
|
|
|
sk->sk_data_ready = tls_data_ready;
|
|
|
|
write_unlock_bh(&sk->sk_callback_lock);
|
|
|
|
|
|
|
|
strp_check_rcv(&rx_ctx->strp);
|
|
|
|
}
|
|
|
|
|
2022-07-06 07:59:24 +08:00
|
|
|
void tls_update_rx_zc_capable(struct tls_context *tls_ctx)
|
|
|
|
{
|
|
|
|
struct tls_sw_context_rx *rx_ctx = tls_sw_ctx_rx(tls_ctx);
|
|
|
|
|
|
|
|
rx_ctx->zc_capable = tls_ctx->rx_no_pad ||
|
|
|
|
tls_ctx->prot_info.version != TLS_1_3_VERSION;
|
|
|
|
}
|
|
|
|
|
2018-03-23 01:10:35 +08:00
|
|
|
int tls_set_sw_offload(struct sock *sk, struct tls_context *ctx, int tx)
|
2017-06-15 02:37:39 +08:00
|
|
|
{
|
2019-02-14 15:11:35 +08:00
|
|
|
struct tls_context *tls_ctx = tls_get_ctx(sk);
|
|
|
|
struct tls_prot_info *prot = &tls_ctx->prot_info;
|
2017-06-15 02:37:39 +08:00
|
|
|
struct tls_crypto_info *crypto_info;
|
2018-04-30 15:16:15 +08:00
|
|
|
struct tls_sw_context_tx *sw_ctx_tx = NULL;
|
|
|
|
struct tls_sw_context_rx *sw_ctx_rx = NULL;
|
2018-03-23 01:10:35 +08:00
|
|
|
struct cipher_context *cctx;
|
|
|
|
struct crypto_aead **aead;
|
|
|
|
struct strp_callbacks cb;
|
2019-03-20 10:03:36 +08:00
|
|
|
u16 nonce_size, tag_size, iv_size, rec_seq_size, salt_size;
|
2019-01-16 18:40:16 +08:00
|
|
|
struct crypto_tfm *tfm;
|
2019-03-20 10:03:36 +08:00
|
|
|
char *iv, *rec_seq, *key, *salt, *cipher_name;
|
2019-01-31 05:58:05 +08:00
|
|
|
size_t keysize;
|
2017-06-15 02:37:39 +08:00
|
|
|
int rc = 0;
|
|
|
|
|
|
|
|
if (!ctx) {
|
|
|
|
rc = -EINVAL;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
2018-04-30 15:16:15 +08:00
|
|
|
if (tx) {
|
2018-07-13 19:33:42 +08:00
|
|
|
if (!ctx->priv_ctx_tx) {
|
|
|
|
sw_ctx_tx = kzalloc(sizeof(*sw_ctx_tx), GFP_KERNEL);
|
|
|
|
if (!sw_ctx_tx) {
|
|
|
|
rc = -ENOMEM;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
ctx->priv_ctx_tx = sw_ctx_tx;
|
|
|
|
} else {
|
|
|
|
sw_ctx_tx =
|
|
|
|
(struct tls_sw_context_tx *)ctx->priv_ctx_tx;
|
2018-03-23 01:10:35 +08:00
|
|
|
}
|
|
|
|
} else {
|
2018-07-13 19:33:42 +08:00
|
|
|
if (!ctx->priv_ctx_rx) {
|
|
|
|
sw_ctx_rx = kzalloc(sizeof(*sw_ctx_rx), GFP_KERNEL);
|
|
|
|
if (!sw_ctx_rx) {
|
|
|
|
rc = -ENOMEM;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
ctx->priv_ctx_rx = sw_ctx_rx;
|
|
|
|
} else {
|
|
|
|
sw_ctx_rx =
|
|
|
|
(struct tls_sw_context_rx *)ctx->priv_ctx_rx;
|
2018-04-30 15:16:15 +08:00
|
|
|
}
|
2017-06-15 02:37:39 +08:00
|
|
|
}
|
|
|
|
|
2018-03-23 01:10:35 +08:00
|
|
|
if (tx) {
|
2018-07-13 19:33:42 +08:00
|
|
|
crypto_init_wait(&sw_ctx_tx->async_wait);
|
2020-05-23 04:10:31 +08:00
|
|
|
spin_lock_init(&sw_ctx_tx->encrypt_compl_lock);
|
2018-09-12 23:44:42 +08:00
|
|
|
crypto_info = &ctx->crypto_send.info;
|
2018-03-23 01:10:35 +08:00
|
|
|
cctx = &ctx->tx;
|
2018-04-30 15:16:15 +08:00
|
|
|
aead = &sw_ctx_tx->aead_send;
|
2018-09-24 18:05:56 +08:00
|
|
|
INIT_LIST_HEAD(&sw_ctx_tx->tx_list);
|
2018-09-21 12:16:13 +08:00
|
|
|
INIT_DELAYED_WORK(&sw_ctx_tx->tx_work.work, tx_work_handler);
|
|
|
|
sw_ctx_tx->tx_work.sk = sk;
|
2018-03-23 01:10:35 +08:00
|
|
|
} else {
|
2018-07-13 19:33:42 +08:00
|
|
|
crypto_init_wait(&sw_ctx_rx->async_wait);
|
2020-05-23 04:10:31 +08:00
|
|
|
spin_lock_init(&sw_ctx_rx->decrypt_compl_lock);
|
2018-09-12 23:44:42 +08:00
|
|
|
crypto_info = &ctx->crypto_recv.info;
|
2018-03-23 01:10:35 +08:00
|
|
|
cctx = &ctx->rx;
|
2019-01-16 18:40:16 +08:00
|
|
|
skb_queue_head_init(&sw_ctx_rx->rx_list);
|
2018-04-30 15:16:15 +08:00
|
|
|
aead = &sw_ctx_rx->aead_recv;
|
2018-03-23 01:10:35 +08:00
|
|
|
}
|
|
|
|
|
2017-06-15 02:37:39 +08:00
|
|
|
switch (crypto_info->cipher_type) {
|
|
|
|
case TLS_CIPHER_AES_GCM_128: {
|
2021-11-29 19:10:14 +08:00
|
|
|
struct tls12_crypto_info_aes_gcm_128 *gcm_128_info;
|
|
|
|
|
|
|
|
gcm_128_info = (void *)crypto_info;
|
2017-06-15 02:37:39 +08:00
|
|
|
nonce_size = TLS_CIPHER_AES_GCM_128_IV_SIZE;
|
|
|
|
tag_size = TLS_CIPHER_AES_GCM_128_TAG_SIZE;
|
|
|
|
iv_size = TLS_CIPHER_AES_GCM_128_IV_SIZE;
|
2021-11-29 19:10:14 +08:00
|
|
|
iv = gcm_128_info->iv;
|
2017-06-15 02:37:39 +08:00
|
|
|
rec_seq_size = TLS_CIPHER_AES_GCM_128_REC_SEQ_SIZE;
|
2021-11-29 19:10:14 +08:00
|
|
|
rec_seq = gcm_128_info->rec_seq;
|
2019-01-31 05:58:05 +08:00
|
|
|
keysize = TLS_CIPHER_AES_GCM_128_KEY_SIZE;
|
|
|
|
key = gcm_128_info->key;
|
|
|
|
salt = gcm_128_info->salt;
|
2019-03-20 10:03:36 +08:00
|
|
|
salt_size = TLS_CIPHER_AES_GCM_128_SALT_SIZE;
|
|
|
|
cipher_name = "gcm(aes)";
|
2019-01-31 05:58:05 +08:00
|
|
|
break;
|
|
|
|
}
|
|
|
|
case TLS_CIPHER_AES_GCM_256: {
|
2021-11-29 19:10:14 +08:00
|
|
|
struct tls12_crypto_info_aes_gcm_256 *gcm_256_info;
|
|
|
|
|
|
|
|
gcm_256_info = (void *)crypto_info;
|
2019-01-31 05:58:05 +08:00
|
|
|
nonce_size = TLS_CIPHER_AES_GCM_256_IV_SIZE;
|
|
|
|
tag_size = TLS_CIPHER_AES_GCM_256_TAG_SIZE;
|
|
|
|
iv_size = TLS_CIPHER_AES_GCM_256_IV_SIZE;
|
2021-11-29 19:10:14 +08:00
|
|
|
iv = gcm_256_info->iv;
|
2019-01-31 05:58:05 +08:00
|
|
|
rec_seq_size = TLS_CIPHER_AES_GCM_256_REC_SEQ_SIZE;
|
2021-11-29 19:10:14 +08:00
|
|
|
rec_seq = gcm_256_info->rec_seq;
|
2019-01-31 05:58:05 +08:00
|
|
|
keysize = TLS_CIPHER_AES_GCM_256_KEY_SIZE;
|
|
|
|
key = gcm_256_info->key;
|
|
|
|
salt = gcm_256_info->salt;
|
2019-03-20 10:03:36 +08:00
|
|
|
salt_size = TLS_CIPHER_AES_GCM_256_SALT_SIZE;
|
|
|
|
cipher_name = "gcm(aes)";
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
case TLS_CIPHER_AES_CCM_128: {
|
2021-11-29 19:10:14 +08:00
|
|
|
struct tls12_crypto_info_aes_ccm_128 *ccm_128_info;
|
|
|
|
|
|
|
|
ccm_128_info = (void *)crypto_info;
|
2019-03-20 10:03:36 +08:00
|
|
|
nonce_size = TLS_CIPHER_AES_CCM_128_IV_SIZE;
|
|
|
|
tag_size = TLS_CIPHER_AES_CCM_128_TAG_SIZE;
|
|
|
|
iv_size = TLS_CIPHER_AES_CCM_128_IV_SIZE;
|
2021-11-29 19:10:14 +08:00
|
|
|
iv = ccm_128_info->iv;
|
2019-03-20 10:03:36 +08:00
|
|
|
rec_seq_size = TLS_CIPHER_AES_CCM_128_REC_SEQ_SIZE;
|
2021-11-29 19:10:14 +08:00
|
|
|
rec_seq = ccm_128_info->rec_seq;
|
2019-03-20 10:03:36 +08:00
|
|
|
keysize = TLS_CIPHER_AES_CCM_128_KEY_SIZE;
|
|
|
|
key = ccm_128_info->key;
|
|
|
|
salt = ccm_128_info->salt;
|
|
|
|
salt_size = TLS_CIPHER_AES_CCM_128_SALT_SIZE;
|
|
|
|
cipher_name = "ccm(aes)";
|
2017-06-15 02:37:39 +08:00
|
|
|
break;
|
|
|
|
}
|
2020-11-24 23:24:49 +08:00
|
|
|
case TLS_CIPHER_CHACHA20_POLY1305: {
|
2021-11-29 19:10:14 +08:00
|
|
|
struct tls12_crypto_info_chacha20_poly1305 *chacha20_poly1305_info;
|
|
|
|
|
2020-11-24 23:24:49 +08:00
|
|
|
chacha20_poly1305_info = (void *)crypto_info;
|
|
|
|
nonce_size = 0;
|
|
|
|
tag_size = TLS_CIPHER_CHACHA20_POLY1305_TAG_SIZE;
|
|
|
|
iv_size = TLS_CIPHER_CHACHA20_POLY1305_IV_SIZE;
|
|
|
|
iv = chacha20_poly1305_info->iv;
|
|
|
|
rec_seq_size = TLS_CIPHER_CHACHA20_POLY1305_REC_SEQ_SIZE;
|
|
|
|
rec_seq = chacha20_poly1305_info->rec_seq;
|
|
|
|
keysize = TLS_CIPHER_CHACHA20_POLY1305_KEY_SIZE;
|
|
|
|
key = chacha20_poly1305_info->key;
|
|
|
|
salt = chacha20_poly1305_info->salt;
|
|
|
|
salt_size = TLS_CIPHER_CHACHA20_POLY1305_SALT_SIZE;
|
|
|
|
cipher_name = "rfc7539(chacha20,poly1305)";
|
|
|
|
break;
|
|
|
|
}
|
2021-09-16 11:37:38 +08:00
|
|
|
case TLS_CIPHER_SM4_GCM: {
|
|
|
|
struct tls12_crypto_info_sm4_gcm *sm4_gcm_info;
|
|
|
|
|
|
|
|
sm4_gcm_info = (void *)crypto_info;
|
|
|
|
nonce_size = TLS_CIPHER_SM4_GCM_IV_SIZE;
|
|
|
|
tag_size = TLS_CIPHER_SM4_GCM_TAG_SIZE;
|
|
|
|
iv_size = TLS_CIPHER_SM4_GCM_IV_SIZE;
|
|
|
|
iv = sm4_gcm_info->iv;
|
|
|
|
rec_seq_size = TLS_CIPHER_SM4_GCM_REC_SEQ_SIZE;
|
|
|
|
		rec_seq = sm4_gcm_info->rec_seq;
		keysize = TLS_CIPHER_SM4_GCM_KEY_SIZE;
		key = sm4_gcm_info->key;
		salt = sm4_gcm_info->salt;
		salt_size = TLS_CIPHER_SM4_GCM_SALT_SIZE;
		cipher_name = "gcm(sm4)";
		break;
	}
	case TLS_CIPHER_SM4_CCM: {
		struct tls12_crypto_info_sm4_ccm *sm4_ccm_info;

		sm4_ccm_info = (void *)crypto_info;
		nonce_size = TLS_CIPHER_SM4_CCM_IV_SIZE;
		tag_size = TLS_CIPHER_SM4_CCM_TAG_SIZE;
		iv_size = TLS_CIPHER_SM4_CCM_IV_SIZE;
		iv = sm4_ccm_info->iv;
		rec_seq_size = TLS_CIPHER_SM4_CCM_REC_SEQ_SIZE;
		rec_seq = sm4_ccm_info->rec_seq;
		keysize = TLS_CIPHER_SM4_CCM_KEY_SIZE;
		key = sm4_ccm_info->key;
		salt = sm4_ccm_info->salt;
		salt_size = TLS_CIPHER_SM4_CCM_SALT_SIZE;
		cipher_name = "ccm(sm4)";
		break;
	}
	default:
		rc = -EINVAL;
		goto free_priv;
	}

	if (crypto_info->version == TLS_1_3_VERSION) {
		/* TLS 1.3 records carry no explicit nonce on the wire and
		 * use a one-byte tail for the inner content type.
		 */
		nonce_size = 0;
		prot->aad_size = TLS_HEADER_SIZE;
		prot->tail_size = 1;
	} else {
		prot->aad_size = TLS_AAD_SPACE_SIZE;
		prot->tail_size = 0;
	}

	/* Sanity-check the sizes for stack allocations. */
	if (iv_size > MAX_IV_SIZE || nonce_size > MAX_IV_SIZE ||
	    rec_seq_size > TLS_MAX_REC_SEQ_SIZE || tag_size != TLS_TAG_SIZE ||
	    prot->aad_size > TLS_MAX_AAD_SIZE) {
		rc = -EINVAL;
		goto free_priv;
	}

	prot->version = crypto_info->version;
	prot->cipher_type = crypto_info->cipher_type;
	prot->prepend_size = TLS_HEADER_SIZE + nonce_size;
	prot->tag_size = tag_size;
	prot->overhead_size = prot->prepend_size +
			      prot->tag_size + prot->tail_size;
	prot->iv_size = iv_size;
	prot->salt_size = salt_size;
	/* The AEAD nonce buffer holds the salt followed by the per-record IV. */
	cctx->iv = kmalloc(iv_size + salt_size, GFP_KERNEL);
	if (!cctx->iv) {
		rc = -ENOMEM;
		goto free_priv;
	}
	/* Note: 128 & 256 bit salt are the same size */
	prot->rec_seq_size = rec_seq_size;
	memcpy(cctx->iv, salt, salt_size);
	memcpy(cctx->iv + salt_size, iv, iv_size);
	cctx->rec_seq = kmemdup(rec_seq, rec_seq_size, GFP_KERNEL);
	if (!cctx->rec_seq) {
		rc = -ENOMEM;
		goto free_iv;
	}

	if (!*aead) {
		*aead = crypto_alloc_aead(cipher_name, 0, 0);
		if (IS_ERR(*aead)) {
			rc = PTR_ERR(*aead);
			*aead = NULL;
			goto free_rec_seq;
		}
	}

	ctx->push_pending_record = tls_sw_push_pending_record;

	rc = crypto_aead_setkey(*aead, key, keysize);

	if (rc)
		goto free_aead;

	rc = crypto_aead_setauthsize(*aead, prot->tag_size);
	if (rc)
		goto free_aead;

	if (sw_ctx_rx) {
		tfm = crypto_aead_tfm(sw_ctx_rx->aead_recv);

		tls_update_rx_zc_capable(ctx);
		sw_ctx_rx->async_capable =
			crypto_info->version != TLS_1_3_VERSION &&
			!!(tfm->__crt_alg->cra_flags & CRYPTO_ALG_ASYNC);

		/* Set up strparser */
		memset(&cb, 0, sizeof(cb));
		cb.rcv_msg = tls_queue;
		cb.parse_msg = tls_read_size;

		strp_init(&sw_ctx_rx->strp, sk, &cb);
	}

	goto out;

free_aead:
	crypto_free_aead(*aead);
	*aead = NULL;
free_rec_seq:
	kfree(cctx->rec_seq);
	cctx->rec_seq = NULL;
free_iv:
	kfree(cctx->iv);
	cctx->iv = NULL;
free_priv:
	if (tx) {
		kfree(ctx->priv_ctx_tx);
		ctx->priv_ctx_tx = NULL;
	} else {
		kfree(ctx->priv_ctx_rx);
		ctx->priv_ctx_rx = NULL;
	}
out:
	return rc;
}
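/*
 * For reference, the crypto_info this function parses arrives from userspace
 * via setsockopt(SOL_TLS, TLS_TX/TLS_RX) after the "tls" ULP is attached.
 * Below is a minimal, hypothetical userspace sketch (not part of this file)
 * enabling the TX path with AES-128-GCM; `sk` is assumed to be a connected
 * TCP socket on which the TLS handshake has already completed, and the key
 * material is assumed to come from that handshake.
 */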