skcipher_walk_done() assumes it's a bug if, after the "slow" path is
executed where the next chunk of data is processed via a bounce buffer,
the algorithm says it didn't process all bytes. Thus it WARNs on this.
However, this can happen legitimately when the message needs to be
evenly divisible into "blocks" but isn't, and the algorithm has a
'walksize' greater than the block size. For example, ecb-aes-neonbs
sets 'walksize' to 128 bytes and only supports messages evenly divisible
into 16-byte blocks. If, say, 17 message bytes remain but they straddle
scatterlist elements, the skcipher_walk code will take the "slow" path
and pass the algorithm all 17 bytes in the bounce buffer. But the
algorithm will only be able to process 16 bytes, triggering the WARN.
Fix this by just removing the WARN_ON(). Returning -EINVAL, as the code
already does, is the right behavior.
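For reference, the check in question looks like this (context abbreviated); the fix keeps the -EINVAL error path and drops only the WARN_ON() wrapper:
        if (WARN_ON(err)) {             /* becomes: if (err) { */
                err = -EINVAL;
                goto finish;
        }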
This bug was detected by my patches that improve testmgr to fuzz
algorithms against their generic implementation.
Fixes: b286d8b1a6 ("crypto: skcipher - Add skcipher walk interface")
Cc: <stable@vger.kernel.org> # v4.10+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The ->digest() method of crct10dif-pclmul reads the current CRC value
from the shash_desc context. But this value is uninitialized, causing
crypto_shash_digest() to compute the wrong result. Fix it.
Probably this wasn't noticed before because lib/crc-t10dif.c only uses
crypto_shash_update(), not crypto_shash_digest(). Likewise,
crypto_shash_digest() is not yet tested by the crypto self-tests because
those only test the ahash API which only uses shash init/update/final.
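The shape of the fix, sketched here with the generic C helper (the pclmul glue is analogous): ->digest() must start from the initial CRC of 0 rather than reading the uninitialized context:
        static int chksum_digest(struct shash_desc *desc, const u8 *data,
                                 unsigned int length, u8 *out)
        {
                /* start from CRC 0; do not read the uninitialized ctx->crc */
                *(__u16 *)out = crc_t10dif_generic(0, data, length);
                return 0;
        }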
Fixes: 0b95a7f857 ("crypto: crct10dif - Glue code to cast accelerated CRCT10DIF assembly as a crypto transform")
Cc: <stable@vger.kernel.org> # v3.11+
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The ->digest() method of crct10dif-generic reads the current CRC value
from the shash_desc context. But this value is uninitialized, causing
crypto_shash_digest() to compute the wrong result. Fix it.
Probably this wasn't noticed before because lib/crc-t10dif.c only uses
crypto_shash_update(), not crypto_shash_digest(). Likewise,
crypto_shash_digest() is not yet tested by the crypto self-tests because
those only test the ahash API which only uses shash init/update/final.
This bug was detected by my patches that improve testmgr to fuzz
algorithms against their generic implementation.
Fixes: 2d31e518a4 ("crypto: crct10dif - Wrap crc_t10dif function all to use crypto transform framework")
Cc: <stable@vger.kernel.org> # v3.11+
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Fixes gcc '-Wunused-but-set-variable' warning:
drivers/crypto/nx/nx-842.c: In function 'decompress':
drivers/crypto/nx/nx-842.c:356:25: warning: variable 'dpadding' set but not used [-Wunused-but-set-variable]
drivers/crypto/nx/nx-842-pseries.c: In function 'nx842_pseries_compress':
drivers/crypto/nx/nx-842-pseries.c:299:15: warning: variable 'max_sync_size' set but not used [-Wunused-but-set-variable]
drivers/crypto/nx/nx-842-pseries.c: In function 'nx842_pseries_decompress':
drivers/crypto/nx/nx-842-pseries.c:430:15: warning: variable 'max_sync_size' set but not used [-Wunused-but-set-variable]
They are not used any more and can be removed.
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Reviewed-by: Mukesh Ojha <mojha@codeaurora.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
'err' is set in the error path, but it is never returned to callers.
Don't always return -EINPROGRESS; return err instead.
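The shape of the fix, sketched with hypothetical surrounding code:
        err = queue_request(req);       /* hypothetical; error path sets 'err' */
        if (err)
                goto out_cleanup;
        return -EINPROGRESS;            /* request queued for async completion */
out_cleanup:
        cleanup_request(req);           /* hypothetical */
        return err;                     /* was: return -EINPROGRESS; */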
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Reviewed-by: Mukesh Ojha <mojha@codeaurora.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Fixes gcc '-Wunused-but-set-variable' warning:
drivers/crypto/marvell/hash.c: In function 'mv_cesa_ahash_pad_req':
drivers/crypto/marvell/hash.c:138:15: warning: variable 'index' set but not used [-Wunused-but-set-variable]
It's never used and can be removed.
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Reviewed-by: Mukesh Ojha <mojha@codeaurora.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Use kmemdup rather than duplicating its implementation.
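The pattern being replaced, sketched with hypothetical names:
        /* before: open-coded kmemdup() */
        p = kmalloc(len, GFP_KERNEL);
        if (!p)
                return -ENOMEM;
        memcpy(p, src, len);

        /* after: one call, same semantics */
        p = kmemdup(src, len, GFP_KERNEL);
        if (!p)
                return -ENOMEM;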
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Acked-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
cacheline_aligned places the object into a special section. The object
cannot be const at the same time because that section is not read-only,
so it gives no MMU protection.
Mark it ____cacheline_aligned instead, which does not place it into a
special section but simply aligns it within .rodata.
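The resulting declaration looks like this (hypothetical table name):
        /* stays const in .rodata, merely cacheline-aligned */
        static const u32 lookup_table[256] ____cacheline_aligned = {
                /* ... */
        };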
Cc: herbert@gondor.apana.org.au
Suggested-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Two per-CPU variables are allocated as pointers to per-CPU memory which
are then used as scratch buffers.
We could be smarter about this and instead use a per-CPU struct which
already contains the pointers; then only the scratch buffers themselves
need to be allocated.
Add a lock to the struct. By doing so we can avoid the get_cpu()
statement and gain lockdep coverage (if enabled) to ensure that the lock
is always acquired in the right context. On non-preemptible kernels the
lock vanishes.
It is okay to use raw_cpu_ptr() in order to get a pointer to the struct
since it is protected by the spinlock.
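A minimal sketch of the layout described above (field names are illustrative):
        struct scomp_scratch {
                spinlock_t      lock;
                void            *src;
                void            *dst;
        };
        static DEFINE_PER_CPU(struct scomp_scratch, scomp_scratch);

        /* usage: no get_cpu() needed, the lock pins us to this CPU's buffers */
        struct scomp_scratch *scratch = raw_cpu_ptr(&scomp_scratch);

        spin_lock(&scratch->lock);
        /* ... compress/decompress via scratch->src and scratch->dst ... */
        spin_unlock(&scratch->lock);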
The diffstat of this change is negative, and according to size(1) on scompress.o:
    text    data     bss     dec     hex  filename
    1847     160      24    2031     7ef  dbg_before.o
    1754     232       4    1990     7c6  dbg_after.o
    1799      64      24    1887     75f  no_dbg-before.o
    1703      88       4    1795     703  no_dbg-after.o
The overall size difference is also negative; the data section grows by
only four bytes without lockdep.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
If scomp_acomp_comp_decomp() fails to allocate memory for the
destination, then we never copy back the data we compressed.
It is probably best to return an error code instead of 0 in case of
failure.
I haven't found any user that calls acomp_request_set_params() without
the `dst' buffer, so there is probably no harm.
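The intended behavior, sketched with a hypothetical allocation helper:
        if (!req->dst) {
                req->dst = scomp_alloc_dst(req->dlen);  /* hypothetical */
                if (!req->dst) {
                        ret = -ENOMEM;  /* was: ret stayed 0 despite the failure */
                        goto out;
                }
        }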
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The current definition and implementation of the SEV_GET_ID command
does not provide the length of the unique ID returned by the firmware.
As per the firmware specification, the firmware may return an ID
length that is not restricted to 64 bytes as assumed by the SEV_GET_ID
command.
Introduce the SEV_GET_ID2 command to overcome the SEV_GET_ID
limitations. Deprecate SEV_GET_ID in favor of SEV_GET_ID2.
At the same time, update the SEV API web link.
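A sketch of the intended userspace calling convention, assuming the usual two-call length-query pattern (error handling omitted):
        struct sev_user_data_get_id2 id2 = { .address = 0, .length = 0 };
        struct sev_issue_cmd cmd = { .cmd = SEV_GET_ID2, .data = (__u64)&id2 };

        ioctl(fd, SEV_ISSUE_CMD, &cmd);  /* fails; firmware fills in id2.length */
        id2.address = (__u64)malloc(id2.length);
        ioctl(fd, SEV_ISSUE_CMD, &cmd);  /* fetches the unique ID */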
Cc: Janakarajan Natarajan <Janakarajan.Natarajan@amd.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Gary Hook <gary.hook@amd.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Nathaniel McCallum <npmccallum@redhat.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
create_caam_req_fq() doesn't return NULL pointers so there is no need to
check. The NULL checks are problematic because it's hard to say how a
NULL return should be handled, so removing the checks is a nice cleanup.
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Some i.MX6 devices (imx6D, imx6Q, imx6DL, imx6S, imx6DP and imx6DQ) have
an issue wherein AXI bus transactions may not occur in the correct order.
This isn't a problem when running single descriptors, but it can be when
multiple descriptors run concurrently. Reworking the CAAM driver to throttle
to single requests is impractical, so this patch limits the AXI pipeline
to a depth of one (from a default of 4) to preclude this situation from
occurring.
This patch applies to known affected platforms.
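The change amounts to an init-time clamp along these lines (register and macro names are illustrative, not verbatim):
        /* limit the AXI pipeline depth to 1 on affected SoCs */
        clrsetbits_32(&ctrl->mcr, MCFGR_AXIPIPE_MASK, MCFGR_AXIPIPE(1));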
Signed-off-by: Radu Solea <radu.solea@nxp.com>
Signed-off-by: Iuliana Prodan <iuliana.prodan@nxp.com>
Reviewed-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In caam_jr_enqueue(), a write barrier is needed to order stores to the
job ring slot before declaring the addition of a new job into the input
job ring. The register write is done using wr_reg32(), which internally
uses iowrite32() for the write operation. iowrite32() issues a write
barrier before performing the write. Therefore, the wmb() preceding
wr_reg32() can be safely removed.
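The pattern, sketched with illustrative names:
        jrp->inpring[head] = desc_dma;             /* fill the job ring slot */
        wmb();                                     /* removed: redundant */
        wr_reg32(&jrp->rregs->inpring_jobadd, 1);  /* iowrite32() barriers first */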
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Reviewed-by: Horia Geanta <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
For each job ring, the variable 'ringsize' is initialised but never
used. Similarly, the variables 'inp_ring_write_index' and 'head' always
track the same value, so caam_jr_enqueue() can use 'head' itself instead
of 'inp_ring_write_index'. Remove both variables.
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
For each job ring pair, the output ring is processed by exactly one CPU
at a time, in tasklet context (one tasklet per ring). Therefore, there
is no need to protect access to a job ring and its private data
structure with a lock, and the lock can be removed.
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Reviewed-by: Horia Geanta <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Fix sparse warnings:
drivers/crypto/vmx/vmx.c:44:12: warning:
symbol 'p8_init' was not declared. Should it be static?
drivers/crypto/vmx/vmx.c:70:13: warning:
symbol 'p8_exit' was not declared. Should it be static?
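The fix is simply to give the symbols internal linkage:
        static int __init p8_init(void)    /* was: int __init p8_init(void) */
        static void __exit p8_exit(void)   /* was: void __exit p8_exit(void) */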
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Fix sparse warning:
drivers/crypto/cavium/cpt/cptvf_main.c:644:6:
warning: symbol 'cptvf_device_init' was not declared. Should it be static?
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
It has never been used since its introduction in commit
9d12ba86f8 ("crypto: brcm - Add Broadcom SPU driver")
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Fix following sparse warnings:
drivers/crypto/cavium/zip/zip_crypto.c:72:5: warning: symbol 'zip_ctx_init' was not declared. Should it be static?
drivers/crypto/cavium/zip/zip_crypto.c:110:6: warning: symbol 'zip_ctx_exit' was not declared. Should it be static?
drivers/crypto/cavium/zip/zip_crypto.c:122:5: warning: symbol 'zip_compress' was not declared. Should it be static?
drivers/crypto/cavium/zip/zip_crypto.c:158:5: warning: symbol 'zip_decompress' was not declared. Should it be static?
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Fix sparse warning:
drivers/crypto/ccp/ccp-crypto-rsa.c:251:5:
warning: symbol 'ccp_register_rsa_alg' was not declared. Should it be static?
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Acked-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Fix sparse warnings:
drivers/crypto/cavium/cpt/cptvf_reqmanager.c:226:5: warning: symbol 'send_cpt_command' was not declared. Should it be static?
drivers/crypto/cavium/cpt/cptvf_reqmanager.c:273:6: warning: symbol 'do_request_cleanup' was not declared. Should it be static?
drivers/crypto/cavium/cpt/cptvf_reqmanager.c:319:6: warning: symbol 'do_post_process' was not declared. Should it be static?
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
cptvf_mbox_send_ack and cptvf_mbox_send_nack have never been used since
their introduction in commit c694b23329 ("crypto: cavium - Add the
Virtual Function driver for CPT")
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Spotted while reviewing patches from Eric Biggers.
Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In salsa20_docrypt(), use crypto_xor_cpy() instead of crypto_xor().
This avoids having to memcpy() the src buffer to the dst buffer.
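Sketch of the change (buffer names illustrative):
        /* before: two passes over dst */
        memcpy(dst, src, size);
        crypto_xor(dst, keystream, size);

        /* after: one combined pass */
        crypto_xor_cpy(dst, src, keystream, size);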
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In chacha_docrypt(), use crypto_xor_cpy() instead of crypto_xor().
This avoids having to memcpy() the src buffer to the dst buffer.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The original assembly imported from OpenSSL has two copy-paste
errors in handling CTR mode. When dealing with a 2 or 3 block tail,
the code branches to the CBC decryption exit path, rather than to
the CTR exit path.
This leads to corruption of the IV, which leads to subsequent blocks
being corrupted.
This can be detected with the libkcapi test suite, which is available at
https://github.com/smuellerDD/libkcapi
Reported-by: Ondrej Mosnáček <omosnacek@gmail.com>
Fixes: 5c380d623e ("crypto: vmx - Add support for VMS instructions by ASM")
Cc: stable@vger.kernel.org
Signed-off-by: Daniel Axtens <dja@axtens.net>
Tested-by: Michael Ellerman <mpe@ellerman.id.au>
Tested-by: Ondrej Mosnacek <omosnacek@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Building with clang for a 32-bit architecture runs over the stack
frame limit in the setkey function:
drivers/crypto/ccree/cc_cipher.c:318:12: error: stack frame size of 1152 bytes in function 'cc_cipher_setkey' [-Werror,-Wframe-larger-than=]
The problem is that there are two large variables: the temporary
'tmp' array and the SHASH_DESC_ON_STACK() declaration. Moving
the first into the block in which it is used reduces the
total frame size to 768 bytes, which seems more reasonable
and is under the warning limit.
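A sketch of the restructuring (surrounding logic elided, condition hypothetical):
        SHASH_DESC_ON_STACK(desc, hash_tfm);    /* stays at function scope */
        ...
        if (keylen == DES3_EDE_KEY_SIZE) {      /* hypothetical condition */
                u32 tmp[DES3_EDE_EXPKEY_WORDS]; /* moved into this block */
                /* 'tmp' is used only here, so its stack slot can be shared */
        }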
Fixes: 63ee04c8b4 ("crypto: ccree - add skcipher support")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-By: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
All crypto API algorithms are supposed to support the case where they
are called in a context where SIMD instructions are unusable, e.g. IRQ
context on some architectures. However, this isn't tested for by the
self-tests, causing bugs to go undetected.
Now that all algorithms have been converted to use crypto_simd_usable(),
update the self-tests to test the no-SIMD case. First, a bool
testvec_config::nosimd is added. When set, the crypto operation is
executed with preemption disabled and with crypto_simd_usable() mocked
out to return false on the current CPU.
A bool test_sg_division::nosimd is also added. For hash algorithms it's
honored by the corresponding ->update(). By setting just a subset of
these bools, the case where some ->update()s are done in SIMD context
and some are done in no-SIMD context is also tested.
These bools are then randomly set by generate_random_testvec_config().
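Roughly, the wrapping looks like this (helper names per this patch, lightly abridged):
        if (cfg->nosimd)
                crypto_disable_simd_for_test(); /* preempt_disable() + per-CPU bool */
        err = crypto_skcipher_encrypt(req);
        if (cfg->nosimd)
                crypto_reenable_simd_for_test();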
For now, all no-SIMD testing is limited to the extra crypto self-tests,
because it might be a bit too invasive for the regular self-tests.
But this could be changed later.
This has already found bugs in the arm64 AES-GCM and ChaCha algorithms.
This would have found some past bugs as well.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Replace all calls to may_use_simd() in the shared SIMD helpers with
crypto_simd_usable(), in order to allow testing the no-SIMD code paths.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Replace all calls to may_use_simd() in the arm64 crypto code with
crypto_simd_usable(), in order to allow testing the no-SIMD code paths.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Replace all calls to may_use_simd() in the arm crypto code with
crypto_simd_usable(), in order to allow testing the no-SIMD code paths.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Replace all calls to irq_fpu_usable() in the x86 crypto code with
crypto_simd_usable(), in order to allow testing the no-SIMD code paths.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
So that the no-SIMD fallback code can be tested by the crypto
self-tests, add a macro crypto_simd_usable() which wraps may_use_simd(),
but also returns false if the crypto self-tests have set a per-CPU bool
to disable SIMD in crypto code on the current CPU.
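Roughly (config guards elided):
        DECLARE_PER_CPU(bool, crypto_simd_disabled_for_test);
        #define crypto_simd_usable() \
                (may_use_simd() && !this_cpu_read(crypto_simd_disabled_for_test))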
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The arm64 gcm-aes-ce algorithm is failing the extra crypto self-tests
following my patches to test the !may_use_simd() code paths, which
previously were untested. The problem is that in the !may_use_simd()
case, an odd number of AES blocks can be processed within each step of
the skcipher_walk. However, the skcipher_walk is being done with a
"stride" of 2 blocks and is advanced by an even number of blocks after
each step. This causes the encryption to produce the wrong ciphertext
and authentication tag, and causes the decryption to incorrectly fail.
Fix it by only processing an even number of blocks per step.
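The shape of the fix, sketched (variable names illustrative):
        blocks = walk.nbytes / AES_BLOCK_SIZE;
        blocks = round_down(blocks, 2);  /* advance the walk by even counts only */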
Fixes: c2b24c36e0 ("crypto: arm64/aes-gcm-ce - fix scatterwalk API violation")
Fixes: 71e52c278c ("crypto: arm64/aes-ce-gcm - operate on two input blocks at a time")
Cc: <stable@vger.kernel.org> # v4.19+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The arm64 implementations of ChaCha and XChaCha are failing the extra
crypto self-tests following my patches to test the !may_use_simd() code
paths, which previously were untested. The problem is as follows:
When !may_use_simd(), the arm64 NEON implementations fall back to the
generic implementation, which uses the skcipher_walk API to iterate
through the src/dst scatterlists. Due to how the skcipher_walk API
works, walk.stride is set from the skcipher_alg actually being used,
which in this case is the arm64 NEON algorithm. Thus walk.stride is
5*CHACHA_BLOCK_SIZE, not CHACHA_BLOCK_SIZE.
This unnecessarily large stride shouldn't cause an actual problem.
However, the generic implementation computes round_down(nbytes,
walk.stride). round_down() assumes the round amount is a power of 2,
which 5*CHACHA_BLOCK_SIZE is not, so it gives the wrong result.
This causes the following case in skcipher_walk_done() to be hit,
causing a WARN() and failing the encryption operation:
        if (WARN_ON(err)) {
                /* unexpected case; didn't process all bytes */
                err = -EINVAL;
                goto finish;
        }
Fix it by rounding down to CHACHA_BLOCK_SIZE instead of walk.stride.
(Or we could replace round_down() with rounddown(), but that would add a
slow division operation every time, which I think we should avoid.)
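To make the failure concrete: for these operands round_down(x, y) expands to (x & ~(y - 1)), which is only meaningful when y is a power of 2. With walk.stride = 5 * CHACHA_BLOCK_SIZE = 320:
        round_down(448, 320) == (448 & ~319) == 192    /* wrong */
        rounddown(448, 320)  == 448 - (448 % 320) == 320
        round_down(448, 64)  == 448    /* CHACHA_BLOCK_SIZE is a power of 2: fine */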
Fixes: 2fe55987b2 ("crypto: arm64/chacha - use combined SIMD/ALU routine for more speed")
Cc: <stable@vger.kernel.org> # v5.0+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Newer combinations of glibc, the kernel and openssh can result in long
initial startup times on OMAP devices:
[ 6.671425] systemd-rc-once[102]: Creating ED25519 key; this may take some time ...
[ 142.652491] systemd-rc-once[102]: Creating ED25519 key; done.
due to the blocking getrandom(2) system call:
[ 142.610335] random: crng init done
Set the quality level for the omap hwrng driver, allowing the kernel to
use the hwrng as an entropy source at boot.
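The change boils down to one assignment (the value shown is illustrative, not the committed one):
        /* bits of entropy per 1024 bits of hwrng output */
        priv->rng.quality = 900;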
Signed-off-by: Rouven Czerwinski <r.czerwinski@pengutronix.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that all AEAD algorithms (that I have the hardware to test, at
least) have been fixed to not modify the user-provided aead_request,
remove the workaround from testmgr that reset aead_request::tfm after
each AEAD encryption/decryption.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Convert the x86 implementations of MORUS-1280 to use the AEAD SIMD
helpers, rather than hand-rolling the same functionality. This
simplifies the code and also fixes the bug where the user-provided
aead_request is modified.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Convert the x86 implementation of MORUS-640 to use the AEAD SIMD
helpers, rather than hand-rolling the same functionality. This
simplifies the code and also fixes the bug where the user-provided
aead_request is modified.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Convert the x86 implementation of AEGIS-256 to use the AEAD SIMD
helpers, rather than hand-rolling the same functionality. This
simplifies the code and also fixes the bug where the user-provided
aead_request is modified.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Convert the x86 implementation of AEGIS-128L to use the AEAD SIMD
helpers, rather than hand-rolling the same functionality. This
simplifies the code and also fixes the bug where the user-provided
aead_request is modified.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Convert the x86 implementation of AEGIS-128 to use the AEAD SIMD
helpers, rather than hand-rolling the same functionality. This
simplifies the code and also fixes the bug where the user-provided
aead_request is modified.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Convert the AES-NI implementations of "gcm(aes)" and "rfc4106(gcm(aes))"
to use the AEAD SIMD helpers, rather than hand-rolling the same
functionality. This simplifies the code and also fixes the bug where
the user-provided aead_request is modified.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Convert the AES-NI glue code to use simd_register_skciphers_compat() to
create SIMD wrappers for all the internal skcipher algorithms at once,
rather than wrapping each one individually. This simplifies the code.
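The resulting registration collapses to a single call (array names as in the AES-NI glue, abridged):
        err = simd_register_skciphers_compat(aesni_skciphers,
                                             ARRAY_SIZE(aesni_skciphers),
                                             aesni_simd_skciphers);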
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Update the crypto_simd module to support wrapping AEAD algorithms.
Previously it only supported skciphers. The code for each is similar.
I'll be converting the x86 implementations of AES-GCM, AEGIS, and MORUS
to use this. Currently they each independently implement the same
functionality. This will not only simplify the code, but it will also
fix the bug detected by the improved self-tests: the user-provided
aead_request is modified. This is because these algorithms currently
reuse the original request, whereas the crypto_simd helpers build a new
request in the original request's context.
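The new AEAD entry points mirror the existing skcipher ones; a usage sketch (names illustrative):
        static struct simd_aead_alg *simd_aeads[ARRAY_SIZE(aeads)];

        err = simd_register_aeads_compat(aeads, ARRAY_SIZE(aeads), simd_aeads);
        /* ... and on module exit ... */
        simd_unregister_aeads(aeads, ARRAY_SIZE(aeads), simd_aeads);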
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Instead of reading the job ring's occupancy registers for every request
enqueued and every response dequeued, we read these registers once and
store the values in memory. After each job enqueue/dequeue, we decrement
the stored value; when it reaches zero, we refresh the snapshot of the
job ring's occupancy registers. This eliminates the need for an
expensive device register read on every job enqueued and dequeued, and
hence makes caam_jr_enqueue() and caam_jr_dequeue() faster. The
performance of kernel ipsec improved by about 6% on ls1028 (for a frame
size of 408 bytes).
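A sketch of the enqueue side (field and register names illustrative):
        if (!jrp->inpring_avail)
                jrp->inpring_avail = rd_reg32(&jrp->rregs->inpring_avail);
        /* ... write the new job descriptor into the input ring ... */
        jrp->inpring_avail--;   /* re-read the register only when this hits 0 */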
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Reviewed-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>