Of the three fields in crt_u.cipher (struct cipher_tfm), ->cit_setkey()
is pointless because it always points to setkey() in crypto/cipher.c.
->cit_decrypt_one() and ->cit_encrypt_one() are slightly less pointless,
since if the algorithm doesn't have an alignmask, they are set directly
to ->cia_encrypt() and ->cia_decrypt(). However, this "optimization"
isn't worthwhile because:
- The "cipher" algorithm type is the only algorithm still using crt_u,
so it's bloating every struct crypto_tfm for every algorithm type.
- If the algorithm has an alignmask, this "optimization" actually makes
things slower, as it causes 2 indirect calls per block rather than 1.
- It adds extra code complexity.
- Some templates already call ->cia_encrypt()/->cia_decrypt() directly
instead of going through ->cit_encrypt_one()/->cit_decrypt_one().
- The "cipher" algorithm type never gives optimal performance anyway.
For that, a higher-level type such as skcipher needs to be used.
Therefore, just remove the extra indirection, and make
crypto_cipher_setkey(), crypto_cipher_encrypt_one(), and
crypto_cipher_decrypt_one() be direct calls into crypto/cipher.c.
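For illustration, the resulting usage from a caller's perspective is simply
(a minimal sketch with error handling omitted; "aes" is an arbitrary example):

    struct crypto_cipher *tfm = crypto_alloc_cipher("aes", 0, 0);

    crypto_cipher_setkey(tfm, key, keylen);
    crypto_cipher_encrypt_one(tfm, dst, src);    /* one block */
    crypto_cipher_decrypt_one(tfm, dst, dst);
    crypto_free_cipher(tfm);

Each of these helpers now goes straight into crypto/cipher.c with no
->crt_u indirection.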
Also remove the unused function crypto_cipher_cast().
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crt_u.compress (struct compress_tfm) is pointless because its two
fields, ->cot_compress() and ->cot_decompress(), always point to
crypto_compress() and crypto_decompress().
Remove this pointless indirection, and just make crypto_comp_compress()
and crypto_comp_decompress() be direct calls to what used to be
crypto_compress() and crypto_decompress().
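For illustration, a minimal usage sketch (error handling omitted; "deflate"
is an arbitrary example), where both helpers now call straight into
crypto/compress.c:

    struct crypto_comp *tfm = crypto_alloc_comp("deflate", 0, 0);
    unsigned int dlen = sizeof(dst), olen = sizeof(out);

    crypto_comp_compress(tfm, src, slen, dst, &dlen);
    crypto_comp_decompress(tfm, dst, dlen, out, &olen);
    crypto_free_comp(tfm);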
Also remove the unused function crypto_comp_cast().
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The whole point of using an AEAD over length-preserving encryption is
that the data is authenticated. However, currently the fuzz tests don't
test any inauthentic inputs to verify that the data is actually being
authenticated. And only two algorithms ("rfc4543(gcm(aes))" and
"ccm(aes)") even have any inauthentic test vectors at all.
Therefore, update the AEAD fuzz tests to sometimes generate inauthentic
test vectors, either by generating a (ciphertext, AAD) pair without
using the key, or by mutating an authentic pair that was generated.
To avoid flakiness, only assume this works reliably if the auth tag is
at least 8 bytes. Also account for the rfc4106, rfc4309, and rfc7539esp
algorithms intentionally ignoring the last 8 AAD bytes, and for some
algorithms doing extra checks that result in EINVAL rather than EBADMSG.
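The mutation approach is roughly the following (an illustrative sketch, not
the exact testmgr code; names are made up):

    /*
     * Flip one random bit in the (ciphertext, AAD) pair and expect
     * decryption to fail.  Only trusted to be reliable when the auth
     * tag is at least 8 bytes.
     */
    if (authsize >= 8) {
        size_t bitpos = prandom_u32() % ((aadlen + ctlen) * 8);

        buf[bitpos / 8] ^= 1 << (bitpos % 8);
        vec->novrfy = 1;    /* expect -EBADMSG (or -EINVAL) */
    }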
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In preparation for adding inauthentic input fuzz tests, which don't
require that a generic implementation of the algorithm be available,
refactor test_aead_vs_generic_impl() so that instead there's a
higher-level function test_aead_extra() which initializes a struct
aead_extra_tests_ctx and then calls test_aead_vs_generic_impl() with a
pointer to that struct.
As a bonus, this reduces stack usage.
Also switch from crypto_aead_alg(tfm)->maxauthsize to
crypto_aead_maxauthsize(), now that the latter is available in
<crypto/aead.h>.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The alignment bug in ghash_setkey() fixed by commit 5c6bc4dfa5
("crypto: ghash - fix unaligned memory access in ghash_setkey()")
wasn't reliably detected by the crypto self-tests on ARM because the
tests only set the keys directly from the test vectors.
To improve test coverage, update the tests to sometimes pass misaligned
keys to setkey(). This applies to shash, ahash, skcipher, and aead.
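The idea is simply to sometimes copy the key into an overlong buffer at an
odd offset first, e.g. (a sketch; MAX_ALGAPI_ALIGNMASK just bounds the
offset):

    u8 *buf = kmalloc(keylen + MAX_ALGAPI_ALIGNMASK, GFP_KERNEL);
    size_t offset = 1;    /* any offset that breaks natural alignment */

    memcpy(buf + offset, key, keylen);
    err = crypto_shash_setkey(tfm, buf + offset, keylen);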
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When checking two implementations of the same skcipher algorithm for
consistency, require that the minimum key size be the same, not just the
maximum key size. There's no good reason to allow different minimum key
sizes.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Currently if the comparison fuzz tests encounter an encryption error
when generating an skcipher or AEAD test vector, they will still test
the decryption side (passing it the uninitialized ciphertext buffer)
and expect it to fail with the same error.
This is sort of broken because it's not well-defined usage of the API to
pass an uninitialized buffer, and furthermore in the AEAD case it's
acceptable for the decryption error to be EBADMSG (meaning "inauthentic
input") even if the encryption error was something else like EINVAL.
Fix this for skcipher by explicitly initializing the ciphertext buffer
on error, and for AEAD by skipping the decryption test on error.
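Sketched, the skcipher side becomes (field names illustrative):

    err = crypto_skcipher_encrypt(req);
    if (err) {
        /* make the buffer well-defined for the decryption test */
        memset(vec->ctext, 0, vec->len);
        vec->crypt_error = err;
    }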
Reported-by: Pascal Van Leeuwen <pvanleeuwen@verimatrix.com>
Fixes: d435e10e67 ("crypto: testmgr - fuzz skciphers against their generic implementation")
Fixes: 40153b10d9 ("crypto: testmgr - fuzz AEADs against their generic implementation")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add a helper function crypto_skcipher_min_keysize() to mirror
crypto_skcipher_max_keysize().
This will be used by the self-tests.
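The helper is the obvious mirror image (shown as a sketch):

    static inline unsigned int crypto_skcipher_min_keysize(
        struct crypto_skcipher *tfm)
    {
        return crypto_skcipher_alg(tfm)->min_keysize;
    }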
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Move crypto_aead_maxauthsize() to <crypto/aead.h> so that it's available
to users of the API, not just AEAD implementations.
This will be used by the self-tests.
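The helper itself is trivial (sketch):

    static inline unsigned int crypto_aead_maxauthsize(struct crypto_aead *aead)
    {
        return crypto_aead_alg(aead)->maxauthsize;
    }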
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Under the omap crypto cleanup handling, both source and destination are
scatterlists that can contain multiple entries. The current code only
copies data from the first source scatterlist entry to the target
scatterlist, potentially omitting any sg entries following the first one.
Instead, implement a new routine that walks through both source and target
and copies the data over as it goes.
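A simplified sketch of such a walk (assuming lowmem sg entries so that
sg_virt() is valid; names illustrative):

    static void copy_sg_to_sg(struct scatterlist *src,
                              struct scatterlist *dst, unsigned int total)
    {
        unsigned int soff = 0, doff = 0;

        while (total && src && dst) {
            unsigned int len = min3(total, src->length - soff,
                                    dst->length - doff);

            memcpy(sg_virt(dst) + doff, sg_virt(src) + soff, len);
            total -= len;
            soff += len;
            doff += len;
            if (soff == src->length) {
                src = sg_next(src);
                soff = 0;
            }
            if (doff == dst->length) {
                dst = sg_next(dst);
                doff = 0;
            }
        }
    }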
Signed-off-by: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
If no data is provided for a DES request, just return immediately. No
processing is needed in this case.
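In other words, an early exit of the form (field naming assumed):

    if (!req->cryptlen)
        return 0;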
Signed-off-by: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Remove the error print in this case, and just return the error.
Signed-off-by: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Currently the omap-aes-gcm algorithms use a local implementation for the
crypto request queuing logic. Instead, implement this via the crypto
engine, which is already used for the rest of the omap-aes algorithms.
This also avoids sporadic conflicts / crashes that can occur when aes and
aes-gcm are used simultaneously.
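The queuing path then reduces to handing the request over to the engine,
roughly ("dd" is the assumed driver-data pointer):

    return crypto_transfer_aead_request_to_engine(dd->engine, req);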
Signed-off-by: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Currently the offset for unaligned sg lists is not handled properly,
leading to wrong results with certain testmgr self-tests. Fix the handling
to account for the proper offset within the current sg list.
Signed-off-by: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
If an omap-aes-gcm request only has assocdata, the driver currently just
completes it directly without passing it to the crypto HW. This produces
wrong results.
Fix by passing the request down to the crypto HW, and fix the DMA
support code to accept the case where no output data is expected.
In the case where only assocdata is provided, it just passes through
the accelerator and provides authentication results, without any
encrypted/decrypted buffer via DMA.
Signed-off-by: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The OMAP AES-GCM implementation uses a fallback ecb(aes) skcipher to
produce the keystream to encrypt the output tag. Let's use the new
AES library instead - this is much simpler, and shouldn't affect
performance given that it only involves a single block.
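Sketched, the tag masking becomes (context field names illustrative):

    struct crypto_aes_ctx aes;

    err = aes_expandkey(&aes, key, keylen);
    if (err)
        return err;
    aes_encrypt(&aes, tag_keystream, counter_block);
    memzero_explicit(&aes, sizeof(aes));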
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Tero Kristo <t-kristo@ti.com>
Tested-by: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
RFC4106 requires the associated data to be a certain size, so reject
inputs that are wrong. This also prevents crashes or other problems due
to assoclen becoming negative after subtracting 8 bytes.
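RFC 4106 appends the 8-byte IV to the 8- or 12-byte AAD, so the check
amounts to (a sketch):

    if (req->assoclen != 16 && req->assoclen != 20)
        return -EINVAL;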
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Tero Kristo <t-kristo@ti.com>
Tested-by: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
GCM only permits certain tag lengths, so populate the .setauthsize
hooks, which ensure that only permitted sizes are accepted by the
implementation.
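For the plain gcm case, the hook can simply defer to the generic GCM
helper, e.g. (a sketch):

    static int omap_aes_gcm_setauthsize(struct crypto_aead *tfm,
                                        unsigned int authsize)
    {
        return crypto_gcm_check_authsize(authsize);
    }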
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Tero Kristo <t-kristo@ti.com>
Tested-by: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The OMAP gcm(aes) driver invokes omap_crypto_align_sg() without
dealing with the errors it may return, resulting in a crash if
the routine fails in a __get_free_pages(GFP_ATOMIC) call. So
bail and return the error rather than limping on if one occurs.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Tero Kristo <t-kristo@ti.com>
Tested-by: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
CTR is a stream cipher mode of AES, so set the blocksize accordingly (to 1).
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Tero Kristo <t-kristo@ti.com>
Tested-by: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Block modes such as ECB and CBC only support input sizes that are
a whole multiple of the block size, so align with the generic code
which returns -EINVAL when encountering inputs that violate this
rule.
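That is, a check of the form (sketch):

    if (req->cryptlen & (AES_BLOCK_SIZE - 1))
        return -EINVAL;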
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Tero Kristo <t-kristo@ti.com>
Tested-by: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The aligned data cleanup uses the wrong pointers in the cleanup calls.
Most of the time these happen to be right, but they can cause mysterious
problems in some cases. Fix the code to use the same pointers that were
used in the align call.
Signed-off-by: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The updated crypto manager finds a couple of new bugs in the omap-sham
driver: the split update cases fail to calculate the amount of data to be
sent properly, leading to failed results and hangs with the HW accelerator.
To fix these, the buffer handling needs to be fixed; some cleanup is done
on the code at the same time, cutting away unnecessary code so that the
fix is easier to make.
Signed-off-by: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Fix a corner case where only authdata is generated, without any assocdata /
cryptdata provided. Passing the empty scatterlists to the OMAP AES core
driver in this case would confuse it and make it fail to map the DMAs.
Signed-off-by: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The current buffer handling logic fails when the buffer contains existing
data from a previous update whose size is divisible by the block size.
This leaves one block of data missing from the sg list going out to the
hw accelerator, stalling the crypto accelerator driver (the last request
never completes fully due to the missing data.)
Fix this by passing the total size of the data instead of the data size
of the current request, and also by parsing the buffer contents within
the prepare-request handling.
Signed-off-by: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Currently the omap-aes driver does not copy the end-result IV out at all.
This is made evident by the additional checks done by the crypto test
manager. Fix by copying the IV values out of the HW.
Signed-off-by: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Currently the omap-des driver does not copy the end-result IV out at all.
This is made evident by the additional checks done by the crypto test
manager. Fix by copying the IV values out of the HW.
Signed-off-by: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Driver removal should also clean up the created sysfs group. If it
doesn't, the subsequent probe fails because the files already exist.
Also drop a completely unnecessary pointer assignment from the removal
function at the same time.
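That is, the removal path gains a call along the lines of (attribute
group name illustrative):

    sysfs_remove_group(&dev->kobj, &driver_attr_group);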
Signed-off-by: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Driver removal should also clean up the created sysfs group. If it
doesn't, the subsequent probe fails because the files already exist.
Signed-off-by: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When using huge data amounts, allocating free pages fails because the
kernel isn't able to process get_free_page requests larger than
MAX_ORDER. Also, the DMA subsystem has an inherent limitation: data
sizes larger than roughly 2 MB can't be handled properly. In these
cases, split the data into smaller requests so that the kernel can
allocate the memory, and so that the DMA driver can handle the
separate sg elements.
Signed-off-by: Tero Kristo <t-kristo@ti.com>
Tested-by: Bin Liu <b-liu@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The essiv and hmac templates refuse to use any hash algorithm that has a
->setkey() function, which includes not just algorithms that always need
a key, but also algorithms that optionally take a key.
Previously the only optionally-keyed hash algorithms in the crypto API
were non-cryptographic algorithms like crc32, so this didn't really
matter. But that's changed with BLAKE2 support being added. BLAKE2
should work with essiv and hmac, just like any other cryptographic hash.
Fix this by allowing the use of both algorithms without a ->setkey()
function and algorithms that have the OPTIONAL_KEY flag set.
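Concretely, the check in both templates becomes roughly (names from the
hmac context assumed):

    /* only reject hashes that truly require a key */
    if (crypto_shash_alg_has_setkey(halg) &&
        !(halg->base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY))
        return -EINVAL;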
Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Due to the removal of the blkcipher and ablkcipher algorithm types,
crypto_skcipher_extsize() now simply calls crypto_alg_extsize(). So
remove it and just use crypto_alg_extsize().
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Due to the removal of the blkcipher and ablkcipher algorithm types,
crypto_skcipher::decrypt is now redundant since it always equals
crypto_skcipher_alg(tfm)->decrypt.
Remove it and update crypto_skcipher_decrypt() accordingly.
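The entry point then reads the function pointer from the alg, roughly
(the NEED_KEY check shown for context):

    int crypto_skcipher_decrypt(struct skcipher_request *req)
    {
        struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);

        if (crypto_skcipher_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
            return -ENOKEY;
        return crypto_skcipher_alg(tfm)->decrypt(req);
    }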
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Due to the removal of the blkcipher and ablkcipher algorithm types,
crypto_skcipher::encrypt is now redundant since it always equals
crypto_skcipher_alg(tfm)->encrypt.
Remove it and update crypto_skcipher_encrypt() accordingly.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Due to the removal of the blkcipher and ablkcipher algorithm types,
crypto_skcipher::setkey now always points to skcipher_setkey().
Simplify by removing this function pointer and instead just making
skcipher_setkey() be crypto_skcipher_setkey() directly.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Due to the removal of the blkcipher and ablkcipher algorithm types,
crypto_skcipher::keysize is now redundant since it always equals
crypto_skcipher_alg(tfm)->max_keysize.
Remove it and update crypto_skcipher_default_keysize() accordingly.
Also rename crypto_skcipher_default_keysize() to
crypto_skcipher_max_keysize() to clarify that it specifically returns
the maximum key size, not some unspecified "default".
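After the rename, the helper reads (sketch):

    static inline unsigned int crypto_skcipher_max_keysize(
        struct crypto_skcipher *tfm)
    {
        return crypto_skcipher_alg(tfm)->max_keysize;
    }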
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Due to the removal of the blkcipher and ablkcipher algorithm types,
crypto_skcipher::ivsize is now redundant since it always equals
crypto_skcipher_alg(tfm)->ivsize.
Remove it and update crypto_skcipher_ivsize() accordingly.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Update a comment to refer to crypto_alloc_skcipher() rather than
crypto_alloc_blkcipher() (the latter having been removed).
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Another instance of CRYPTO_BLKCIPHER made it in just after it was
renamed to CRYPTO_SKCIPHER. Fix it.
Fixes: 416d82204d ("crypto: hisilicon - add HiSilicon SEC V2 driver")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
We should not be modifying the original request's MAY_SLEEP flag
upon completion. It makes no sense to do so anyway.
Reported-by: Eric Biggers <ebiggers@kernel.org>
Fixes: 5068c7a883 ("crypto: pcrypt - Add pcrypt crypto...")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tested-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The SIMD based GHASH implementation for arm64 is typically much faster
than the generic one, and doesn't use any lookup tables, so it is
clearly preferred when available. Bump the priority to reflect that.
Fixes: 5a22b198cd ("crypto: arm64/ghash - register PMULL variants ...")
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Instead of casting pointers to callback functions, add C wrappers
to avoid type mismatch failures with Control-Flow Integrity (CFI)
checking.
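Illustrated with hypothetical names: instead of passing a cast function
pointer such as

    sha256_base_do_update(desc, data, len,
                          (sha256_block_fn *)sha256_asm_transform);

add a C wrapper whose prototype matches sha256_block_fn exactly:

    static void sha256_transform_wrapper(struct sha256_state *sst,
                                         u8 const *src, int blocks)
    {
        sha256_asm_transform(sst->state, src, blocks);
    }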
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
AMD Seattle incorporates a non-PCI version of the v3 CCP crypto
accelerator, and this version was left behind when the maximum
RSA modulus size was parameterized in order to support v5 hardware
which supports larger moduli than v3 hardware does. Due to this
oversight, RSA acceleration no longer works at all on these systems.
Fix this by setting the .rsamax property to the appropriate value
for v3 platform hardware.
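The fix amounts to populating the field in the v3 platform vdata, along
the lines of:

    static const struct ccp_vdata ccpv3_platform = {
        .version = CCP_VERSION(3, 0),
        /* ... */
        .rsamax = CCP_RSA_MAX_WIDTH,
    };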
Fixes: e28c190db6 ("csrypto: ccp - Expand RSA support for a v5 ccp")
Cc: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Fix a brown-paper-bag bug: the backlog list item buffer was not released
when the backlog was consumed, causing a memory leak whenever the backlog
is used.
Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Cc: stable@vger.kernel.org # v4.19+
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Fix some typos in error message text.
Signed-off-by: Hadar Gat <hadar.gat@arm.com>
Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Fix some typos in code comments.
Signed-off-by: Hadar Gat <hadar.gat@arm.com>
Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The CC_DEV_SHA_MAX define is no longer needed since the move to runtime
detection of capabilities. Remove it.
Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The crypto glue performed function prototype casting via macros to make
indirect calls to assembly routines. Instead of performing casts at the
call sites (which trips Control Flow Integrity prototype checking), switch
each prototype to a common standard set of arguments which allows the
removal of the existing macros. In order to keep pointer math unchanged,
internal casting between u128 pointers and u8 pointers is added.
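For one routine the change looks roughly like this (identifiers
hypothetical). Before, each call site cast the asm function to the common
glue type, which trips CFI; after, the prototype itself uses the common
argument set, and u128 pointer math is handled by internal casts:

    asmlinkage void twofish_enc_blk(const void *ctx, u8 *dst, const u8 *src);

    static inline void twofish_enc_blk_u128(const void *ctx, u128 *dst,
                                            const u128 *src)
    {
        twofish_enc_blk(ctx, (u8 *)dst, (const u8 *)src);
    }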
Co-developed-by: João Moreira <joao.moreira@intel.com>
Signed-off-by: João Moreira <joao.moreira@intel.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
If the pcrypt template is used multiple times in an algorithm, then a
deadlock occurs because all pcrypt instances share the same
padata_instance, which completes requests in the order submitted. That
is, the inner pcrypt request waits for the outer pcrypt request while
the outer request is already waiting for the inner.
This patch fixes this by allocating a set of queues for each pcrypt
instance instead of using two global queues. In order to maintain
the existing user-space interface, the pinst structure remains global
so any sysfs modifications will apply to every pcrypt instance.
Note that when an update occurs we have to allocate memory for
every pcrypt instance. Should one of the allocations fail we
will abort the update without rolling back changes already made.
The new per-instance data structure is called padata_shell and is
essentially a wrapper around parallel_data.
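It looks roughly like:

    struct padata_shell {
        struct padata_instance        *pinst;
        struct parallel_data __rcu    *pd;
        struct parallel_data          *opd;
        struct list_head              list;
    };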
Reproducer:

    #include <linux/if_alg.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main()
    {
        struct sockaddr_alg addr = {
            .salg_type = "aead",
            .salg_name = "pcrypt(pcrypt(rfc4106-gcm-aesni))"
        };
        int algfd, reqfd;
        char buf[32] = { 0 };

        algfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
        bind(algfd, (void *)&addr, sizeof(addr));
        setsockopt(algfd, SOL_ALG, ALG_SET_KEY, buf, 20);
        reqfd = accept(algfd, 0, 0);
        write(reqfd, buf, 32);
        read(reqfd, buf, 16);
    }
Reported-by: syzbot+56c7151cad94eec37c521f0e47d2eee53f9361c4@syzkaller.appspotmail.com
Fixes: 5068c7a883 ("crypto: pcrypt - Add pcrypt crypto parallelization wrapper")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tested-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>