When an akcipher test fails, we don't know which algorithm failed
because the name is not printed. This patch fixes this by printing
the algorithm name in the failure message.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Use the parameter 'gfp_flags' instead of 'flag' as the second argument
of dma_pool_alloc(). The parameter 'flag' describes the TDMA
descriptor; its contents are meaningless to the allocator.
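For reference, the allocator's prototype is:

  void *dma_pool_alloc(struct dma_pool *pool, gfp_t mem_flags,
                       dma_addr_t *handle);

so the second argument must be the GFP mask, not the descriptor flags.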
Fixes: bac8e805a3 ("crypto: marvell - Copy IV vectors by DMA...")
Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The props->ap[] array is defined like this:
struct alg_props ap[NX_MAX_FC][NX_MAX_MODE][3];
So if msc->fc is equal to NX_MAX_FC or msc->mode is equal to
NX_MAX_MODE, we index one element past the end of the array.
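A minimal sketch of the kind of bounds check implied (the surrounding
code and the label are illustrative, not the exact driver code):

  /* Valid indices must be strictly less than the array dimensions. */
  if (msc->fc >= NX_MAX_FC || msc->mode >= NX_MAX_MODE)
          goto next_entry;  /* hypothetical label: skip this entry */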
Fixes: ae0222b728 ('powerpc/crypto: nx driver code supporting nx encryption')
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
To allow for a child request context, the struct akcipher_request
child_req needs to be at the end of the structure.
Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch inlines the functions scatterwalk_start, scatterwalk_map
and scatterwalk_done as they're all tiny and mostly used by the block
cipher walker.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Nothing bad will happen even if sg->length is zero, so there is
no point in keeping this BUG_ON in scatterwalk_start.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The offset advance in scatterwalk_pagedone is not only unnecessary,
it was also buggy back when scatterwalk_copychunks relied on it.
As the latter was long ago fixed to call scatterwalk_advance
directly, we can remove this unnecessary offset adjustment.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When there is more data to be processed, the current test in
scatterwalk_done may prevent us from calling pagedone even when
we should.
In particular, if we're on an SG entry spanning multiple pages
where the last page is not a full page, we will incorrectly skip
calling pagedone on the second last page.
This patch fixes this by adding a separate test for whether we've
reached the end of a page.
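A sketch of the intended condition (not necessarily the exact code):
besides the no-more-data and end-of-entry cases, pagedone must also
run whenever the walk offset lands on a page boundary:

  if (!more || walk->offset >= walk->sg->offset + walk->sg->length ||
      !(walk->offset & (PAGE_SIZE - 1)))
          scatterwalk_pagedone(walk, out, more);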
Cc: stable@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When hard preemption is enabled there is no need to explicitly
call crypto_yield, since the scheduler can preempt us at any point.
This patch eliminates the explicit yield in that case.
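A hedged sketch of what this amounts to, assuming crypto_yield is the
usual cond_resched wrapper:

  static inline void crypto_yield(u32 flags)
  {
  #ifndef CONFIG_PREEMPT
          /* Without hard preemption we must volunteer the CPU ourselves. */
          if (flags & CRYPTO_TFM_REQ_MAY_SLEEP)
                  cond_resched();
  #endif
  }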
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The function ablkcipher_done_slow is pretty much identical to
scatterwalk_copychunks, except that it doesn't actually copy because
the processing hasn't been completed yet.
This patch allows scatterwalk_copychunks to be used in this case
by specifying out == 2.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch removes the now unused scatterwalk_bytes_sglen. Anyone
using this out-of-tree should switch over to sg_nents_for_len.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
We already have a generic function sg_nents_for_len which does
the same thing. This patch switches omap over to it and also
adds error handling in case the SG list is too short.
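For reference, the generic helper is used roughly like this (variable
names are illustrative):

  int nents = sg_nents_for_len(sg, total_len);

  if (nents < 0)
          return nents;   /* SG list too short, e.g. -EINVAL */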
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch removes the old crypto_grab_skcipher helper and replaces
it with crypto_grab_skcipher2.
As this is the final entry point into givcipher this patch also
removes all traces of the top-level givcipher interface, including
all implicit IV generators such as chainiv.
The bottom-level givcipher interface remains until the drivers
using it are converted.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As there are no more kernel users of built-in IV generators we
can remove the special lookup for skciphers.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch converts cts over to the skcipher interface. It also
optimises the implementation to use one CBC operation for all but
the last block, which is then processed separately.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds an skcipher null object alongside the existing
null blkcipher so that IV generators using it can switch over
to skcipher.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch converts chacha20poly1305 to use the new skcipher
interface as opposed to ablkcipher.
It also fixes a buglet where we may end up with an async poly1305
when the user asks for a sync algorithm. This shouldn't be a
problem yet as there aren't any async implementations of poly1305
out there.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch converts authencesn to use the new skcipher interface as
opposed to ablkcipher.
It also fixes a little bug where if a sync version of authencesn
is requested we may still end up using an async ahash. This should
have no effect as none of the authencesn users can request a
sync authencesn.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch converts authenc to use the new skcipher interface as
opposed to ablkcipher.
It also fixes a little bug where if a sync version of authenc
is requested we may still end up using an async ahash. This should
have no effect as none of the authenc users can request a
sync authenc.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds a chunk size parameter to aead algorithms, just
like the chunk size for skcipher algorithms.
However, unlike skcipher we do not currently export this to AEAD
users. It is only meant to be used by AEAD implementors for now.
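A minimal sketch of how an implementor might set it (field placement
per this patch, values illustrative):

  static struct aead_alg example_aead = {
          /* ... usual aead_alg fields ... */
          .chunksize = AES_BLOCK_SIZE,  /* underlying cipher block size */
  };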
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Currently the default null skcipher is actually a crypto_blkcipher.
This patch creates a synchronous crypto_skcipher version of the
null cipher which unfortunately has to settle for the name skcipher2.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch allows skcipher algorithms and instances to be created
and registered with the crypto API. They are accessible through
the top-level skcipher interface, along with ablkcipher/blkcipher
algorithms and instances.
This patch also introduces a new parameter called chunk size
which is meant for ciphers such as CTR and CTS which ostensibly
can handle arbitrary lengths, but still behave like block ciphers
in that you can only process a partial block at the very end.
For these ciphers the block size will continue to be set to 1
as it is now, while the chunk size will be set to the underlying
block size.
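A minimal sketch of how a CTR-style implementation might declare this
(values illustrative):

  static struct skcipher_alg example_ctr = {
          /* ... usual skcipher_alg fields ... */
          .base.cra_blocksize = 1,              /* arbitrary-length input */
          .chunksize          = AES_BLOCK_SIZE, /* underlying block size */
  };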
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In sha*_ctx_mgr_submit, we currently use the | operator instead of ||
for the condition comparison:

  ((ctx->partial_block_buffer_length) | (len < SHA1_BLOCK_SIZE))

Switch it to || and remove the extraneous parentheses to adhere to
the coding style. Also clean up the inconsistent multiline comment
style.
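After the change the test reads (surrounding code elided):

  if (ctx->partial_block_buffer_length || len < SHA1_BLOCK_SIZE)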
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
There is no need to drop leading zeros from the results of the RSA
output operations.
Signed-off-by: Salvatore Benedetto <salvatore.benedetto@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add DH support under kpp api. Drop struct qat_rsa_request and
introduce a more generic struct qat_asym_request and share it
between RSA and DH requests.
Signed-off-by: Salvatore Benedetto <salvatore.benedetto@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch fixes the doubled word "the the" in crypto-API.tmpl.
Signed-off-by: Masanari Iida <standby24x7@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Extend the qat driver to use RSA CRT mode when all CRT-related
components are present in the private key. Simplify the code in
qat_rsa_setkey by adding
qat_rsa_clear_ctx.
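For reference, textbook RSA-CRT decryption (not qat-specific) computes:

  m1 = c^dP mod p
  m2 = c^dQ mod q
  h  = qInv * (m1 - m2) mod p
  m  = m2 + h * q

which is typically about four times faster than a single full-length
exponentiation.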
Signed-off-by: Salvatore Benedetto <salvatore.benedetto@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The key was generated with openssl. It also contains all fields
required for testing CRT mode.
Signed-off-by: Salvatore Benedetto <salvatore.benedetto@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When parsing a private key, store all non-optional fields. These
are required for enabling CRT mode for decrypt and verify operations.
Signed-off-by: Salvatore Benedetto <salvatore.benedetto@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Different product families will use either FLR or SBR.
Virtual Function devices have no reset method.
Signed-off-by: Conor McLoughlin <conor.mcloughlin@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Remove unneeded error handling on the result of a call to
platform_get_resource when the value is passed to
devm_ioremap_resource, which performs its own validity check on the
resource and returns an error pointer.
The Coccinelle semantic patch that makes this change is as follows:
// <smpl>
@@
expression pdev,res,n,e,e1;
expression ret != 0;
identifier l;
@@
- res = platform_get_resource(pdev, IORESOURCE_MEM, n);
... when != res
- if (res == NULL) { ... \(goto l;\|return ret;\) }
... when != res
+ res = platform_get_resource(pdev, IORESOURCE_MEM, n);
e = devm_ioremap_resource(e1, res);
// </smpl>
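The resulting call pattern looks like this (names illustrative):

  res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
  base = devm_ioremap_resource(&pdev->dev, res);
  if (IS_ERR(base))
          return PTR_ERR(base);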
Signed-off-by: Amitoj Kaur Chawla <amitoj1606@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add RSA support to caam driver.
Initial author is Yashpal Dutta <yashpal.dutta@freescale.com>.
Signed-off-by: Tudor Ambarus <tudor-dan.ambarus@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Report the correct error in case of failure.
Signed-off-by: Salvatore Benedetto <salvatore.benedetto@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Drop all asn1-related code and use the new rsa_helper
functions rsa_parse_[pub|priv]_key for parsing the key.
Signed-off-by: Salvatore Benedetto <salvatore.benedetto@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Use the vector polynomial multiply-sum instructions in POWER8 to
speed up crc32c.
This is just over 41x faster than the slice-by-8 method that it
replaces. Measurements on a 4.1 GHz POWER8 show it sustaining
52 GiB/sec.
A simple btrfs write performance test:
dd if=/dev/zero of=/mnt/tmpfile bs=1M count=4096
sync
is over 3.7x faster.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
gcc provides FUNC_START/FUNC_END macros to help with creating
assembly functions. Mirror these in the kernel so we can more easily
share code between userspace and the kernel. FUNC_END is just a
stub since we don't currently annotate the end of kernel functions.
It might make sense to do a wholesale search and replace, but for
now just create a couple of defines.
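A minimal sketch of such defines for powerpc assembly (the exact
kernel form may differ; _GLOBAL is the usual powerpc global-entry
macro):

  #define FUNC_START(name)        _GLOBAL(name)
  #define FUNC_END(name)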
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As the software RSA implementation now produces fixed-length
output, we need to eliminate leading zeros in the calling code
instead.
This patch does just that for pkcs1pad signature verification.
Fixes: 9b45b7bba3 ("crypto: rsa - Generate fixed-length output")
Reported-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds HMAC-SHA3 test modes in tcrypt module
and related test vectors.
Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The arm-neon-sha implementations have cra_priority values of 150 to
300, so increase the omap-sham priority to 400 to ensure it ranks
above any software algorithm.
Signed-off-by: Bin Liu <b-liu@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The multibuffer hash speed test is incorrectly bailing because
of an EINPROGRESS return value. This patch fixes it by setting
ret to zero if it is equal to -EINPROGRESS.
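The fix amounts to something like the following sketch, where
-EINPROGRESS simply means the request was queued asynchronously:

  if (ret == -EINPROGRESS)
          ret = 0;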
Reported-by: Megha Dey <megha.dey@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In the vast majority of cases (all but a fraction of roughly 2^-32 on
32-bit and 2^-64 on 64-bit), the result from encryption/signing will
require no padding.
This patch makes these two operations write their output directly
to the final destination. Only in the exceedingly rare cases where a
fixup is needed do we copy it out and back to add the leading zeroes.
This patch also makes use of the akcipher_request_set_crypt API
instead of filling in the akcipher request directly.
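For reference, the helper fills in the request roughly as follows
(scatterlist and length names are illustrative):

  akcipher_request_set_crypt(req, src_sg, dst_sg, src_len, dst_len);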
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Rather than repeatedly checking the key size on each operation,
we should be checking it once when the key is set.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
We don't currently support using akcipher in atomic contexts,
so GFP_KERNEL should always be used.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>