When calling .import() on a cryptd ahash_request, the structure members
that describe the child transform in the shash_desc need to be initialized
the same way they are when calling .init().
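A minimal sketch of what that initialization looks like on the import
path (helper and field names follow cryptd.c but should be treated as
illustrative):

  static int cryptd_hash_import(struct ahash_request *req, const void *in)
  {
          struct cryptd_hash_ctx *ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req));
          struct shash_desc *desc = cryptd_shash_desc(req);

          /* Set up the child shash exactly as .init() does before importing. */
          desc->tfm = ctx->child;
          desc->flags = req->base.flags;

          return crypto_shash_import(desc, in);
  }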
Cc: stable@vger.kernel.org
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The AEAD code path incorrectly uses the child tfm to track the
cryptd refcnt, and then potentially frees the child tfm.
Fixes: 81760ea6a9 ("crypto: cryptd - Add helpers to check...")
Reported-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The optimised crc32c implementation depends on VMX (aka. Altivec)
instructions, so the kernel must be built with Altivec support in order
for the crc32c code to build.
Fixes: 6dd7a82cc5 ("crypto: powerpc - Add POWER8 optimised crc32c")
Acked-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
On 32-bit (e.g. with m68k-linux-gnu-gcc-4.1):
crypto/sha3_generic.c:27: warning: integer constant is too large for ‘long’ type
crypto/sha3_generic.c:28: warning: integer constant is too large for ‘long’ type
crypto/sha3_generic.c:29: warning: integer constant is too large for ‘long’ type
crypto/sha3_generic.c:29: warning: integer constant is too large for ‘long’ type
crypto/sha3_generic.c:31: warning: integer constant is too large for ‘long’ type
crypto/sha3_generic.c:31: warning: integer constant is too large for ‘long’ type
crypto/sha3_generic.c:32: warning: integer constant is too large for ‘long’ type
crypto/sha3_generic.c:32: warning: integer constant is too large for ‘long’ type
crypto/sha3_generic.c:32: warning: integer constant is too large for ‘long’ type
crypto/sha3_generic.c:33: warning: integer constant is too large for ‘long’ type
crypto/sha3_generic.c:33: warning: integer constant is too large for ‘long’ type
crypto/sha3_generic.c:34: warning: integer constant is too large for ‘long’ type
crypto/sha3_generic.c:34: warning: integer constant is too large for ‘long’ type
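The warnings point at the 64-bit Keccak round constants, which lack ULL
suffixes and therefore overflow "long" on 32-bit targets. A sketch of
the kind of change implied (the values shown are the standard Keccak
round constants; the array name is illustrative):

  #include <linux/types.h>

  /* 64-bit round constants need an explicit ULL suffix so that 32-bit
   * compilers do not parse them as (too narrow) "long" constants. */
  static const u64 keccakf_rndc[] = {
          0x0000000000000001ULL, 0x0000000000008082ULL,
          0x800000000000808aULL, 0x8000000080008000ULL,
          /* ... the remaining constants get the ULL suffix as well ... */
  };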
Fixes: 53964b9ee6 ("crypto: sha3 - Add SHA-3 hash algorithm")
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Merge tag 'random_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random
Pull random driver updates from Ted Ts'o:
"A number of improvements for the /dev/random driver; the most
important is the use of a ChaCha20-based CRNG for /dev/urandom, which
is faster, more efficient, and easier to make scalable for
silly/abusive userspace programs that want to read from /dev/urandom
in a tight loop on NUMA systems.
This set of patches also improves entropy gathering on VMs running on
Microsoft Azure, and will take advantage of a hw random number
generator (if present) to initialize the /dev/urandom pool"
(It turns out that the random tree hadn't been in linux-next this time
around, because it had been dropped earlier as being too quiet. Oh
well).
* tag 'random_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random:
random: strengthen input validation for RNDADDTOENTCNT
random: add backtracking protection to the CRNG
random: make /dev/urandom scalable for silly userspace programs
random: replace non-blocking pool with a Chacha20-based CRNG
random: properly align get_random_int_hash
random: add interrupt callback to VMBus IRQ handler
random: print a warning for the first ten uninitialized random users
random: initialize the non-blocking pool via add_hwgenerator_randomness()
Pull crypto updates from Herbert Xu:
"Here is the crypto update for 4.8:
API:
- first part of skcipher low-level conversions
- add KPP (Key-agreement Protocol Primitives) interface.
Algorithms:
- fix IPsec/cryptd reordering issues that affect aesni
- RSA no longer does explicit leading zero removal
- add SHA3
- add DH
- add ECDH
- improve DRBG performance by not doing CTR by hand
Drivers:
- add x86 AVX2 multibuffer SHA256/512
- add POWER8 optimised crc32c
- add xts support to vmx
- add DH support to qat
- add RSA support to caam
- add Layerscape support to caam
- add SEC1 AEAD support to talitos
- improve performance by chaining requests in marvell/cesa
- add support for Araneus Alea I USB RNG
- add support for Broadcom BCM5301 RNG
- add support for Amlogic Meson RNG
- add support for Broadcom NSP SoC RNG"
* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (180 commits)
crypto: vmx - Fix aes_p8_xts_decrypt build failure
crypto: vmx - Ignore generated files
crypto: vmx - Adding support for XTS
crypto: vmx - Adding asm subroutines for XTS
crypto: skcipher - add comment for skcipher_alg->base
crypto: testmgr - Print akcipher algorithm name
crypto: marvell - Fix wrong flag used for GFP in mv_cesa_dma_add_iv_op
crypto: nx - off by one bug in nx_of_update_msc()
crypto: rsa-pkcs1pad - fix rsa-pkcs1pad request struct
crypto: scatterwalk - Inline start/map/done
crypto: scatterwalk - Remove unnecessary BUG in scatterwalk_start
crypto: scatterwalk - Remove unnecessary advance in scatterwalk_pagedone
crypto: scatterwalk - Fix test in scatterwalk_done
crypto: api - Optimise away crypto_yield when hard preemption is on
crypto: scatterwalk - add no-copy support to copychunks
crypto: scatterwalk - Remove scatterwalk_bytes_sglen
crypto: omap - Stop using crypto scatterwalk_bytes_sglen
crypto: skcipher - Remove top-level givcipher interface
crypto: user - Remove crypto_lookup_skcipher call
crypto: cts - Convert to skcipher
...
Pull crypto fixes from Herbert Xu:
"This fixes a sporadic build failure in the qat driver as well as a
memory corruption bug in rsa-pkcs1pad"
* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
crypto: rsa-pkcs1pad - fix rsa-pkcs1pad request struct
crypto: qat - make qat_asym_algs.o depend on asn1 headers
To allow for child request context the struct akcipher_request child_req
needs to be at the end of the structure.
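A sketch of the resulting layout (member names follow rsa-pkcs1pad but
the field list is abridged): the child driver's request context is
allocated directly behind struct akcipher_request, so the embedded
request must be the last member or the child context would overwrite
the following fields.

  #include <crypto/akcipher.h>
  #include <linux/scatterlist.h>

  struct pkcs1pad_request {
          struct scatterlist in_sg[2], out_sg[1];
          u8 *in_buf, *out_buf;
          struct akcipher_request child_req;      /* must stay last: child ctx follows */
  };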
Cc: stable@vger.kernel.org
Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When an akcipher test fails, we don't know which algorithm failed
because the name is not printed. This patch fixes this.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
To allow for child request context the struct akcipher_request child_req
needs to be at the end of the structure.
Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch inlines the functions scatterwalk_start, scatterwalk_map
and scatterwalk_done as they're all tiny and mostly used by the block
cipher walker.
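For illustration, the start helper is small enough to become a static
inline in the header (a sketch of its shape, assuming the usual
scatter_walk fields):

  static inline void scatterwalk_start(struct scatter_walk *walk,
                                       struct scatterlist *sg)
  {
          walk->sg = sg;
          walk->offset = sg->offset;
  }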
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Nothing bad will happen even if sg->length is zero, so there is
no point in keeping this BUG_ON in scatterwalk_start.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The offset advance in scatterwalk_pagedone is not only unnecessary,
but was also buggy when it was needed by scatterwalk_copychunks.
As the latter has long ago been fixed to call scatterwalk_advance
directly, we can remove this unnecessary offset adjustment.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When there is more data to be processed, the current test in
scatterwalk_done may prevent us from calling pagedone even when
we should.
In particular, if we're on an SG entry spanning multiple pages
where the last page is not a full page, we will incorrectly skip
calling pagedone on the second last page.
This patch fixes this by adding a separate test for whether we've
reached the end of a page.
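The fixed test looks roughly like this (a sketch; the PAGE_SIZE
alignment check is what detects that a page boundary has just been
crossed, independently of whether the SG entry has ended):

  void scatterwalk_done(struct scatter_walk *walk, int out, int more)
  {
          if (!more || walk->offset >= walk->sg->offset + walk->sg->length ||
              !(walk->offset & (PAGE_SIZE - 1)))
                  scatterwalk_pagedone(walk, out, more);
  }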
Cc: stable@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The function ablkcipher_done_slow is pretty much identical to
scatterwalk_copychunks except that it doesn't actually copy as
the processing hasn't been completed yet.
This patch allows scatterwalk_copychunks to be used in this case
by specifying out == 2.
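Roughly, the walk loop then only skips the copy itself in that mode (a
fragment-level sketch of scatterwalk_copychunks, not the verbatim
implementation):

          /* out == 2: advance the walk as usual but skip the actual copy */
          if (out != 2) {
                  vaddr = scatterwalk_map(walk);
                  memcpy_dir(buf, vaddr, len_this_page, out);
                  scatterwalk_unmap(vaddr);
          }

          scatterwalk_advance(walk, len_this_page);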
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch removes the now unused scatterwalk_bytes_sglen. Anyone
using this out-of-tree should switch over to sg_nents_for_len.
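For out-of-tree users, the replacement looks roughly like this
(illustrative fragment):

          /* sg_nents_for_len() returns how many entries are needed to cover
           * 'len' bytes, or a negative errno if the list is too short. */
          int nents = sg_nents_for_len(sg, len);

          if (nents < 0)
                  return nents;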
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch removes the old crypto_grab_skcipher helper and replaces
it with crypto_grab_skcipher2.
As this is the final entry point into givcipher this patch also
removes all traces of the top-level givcipher interface, including
all implicit IV generators such as chainiv.
The bottom-level givcipher interface remains until the drivers
using it are converted.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As there are no more kernel users of built-in IV generators we
can remove the special lookup for skciphers.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch converts cts over to the skcipher interface. It also
optimises the implementation to use one CBC operation for all but
the last block, which is then processed separately.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds an skcipher null object alongside the existing
null blkcipher so that IV generators using it can switch over
to skcipher.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch converts chacha20poly1305 to use the new skcipher
interface as opposed to ablkcipher.
It also fixes a buglet where we may end up with an async poly1305
when the user asks for a sync algorithm. This shouldn't be a
problem yet as there aren't any async implementations of poly1305
out there.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch converts authencesn to use the new skcipher interface as
opposed to ablkcipher.
It also fixes a little bug where if a sync version of authencesn
is requested we may still end up using an async ahash. This should
have no effect as none of the authencesn users can request a
sync authencesn.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch converts authenc to use the new skcipher interface as
opposed to ablkcipher.
It also fixes a little bug where if a sync version of authenc
is requested we may still end up using an async ahash. This should
have no effect as none of the authenc users can request a
sync authenc.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds a chunk size parameter to aead algorithms, just
like the chunk size for skcipher algorithms.
However, unlike skcipher we do not currently export this to AEAD
users. It is only meant to be used by AEAD implementors for now.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Currently the default null skcipher is actually a crypto_blkcipher.
This patch creates a synchronous crypto_skcipher version of the
null cipher which unfortunately has to settle for the name skcipher2.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch allows skcipher algorithms and instances to be created
and registered with the crypto API. They are accessible through
the top-level skcipher interface, along with ablkcipher/blkcipher
algorithms and instances.
This patch also introduces a new parameter called chunk size
which is meant for ciphers such as CTR and CTS which ostensibly
can handle arbitrary lengths, but still behave like block ciphers
in that you can only process a partial block at the very end.
For these ciphers the block size will continue to be set to 1
as it is now while the chunk size will be set to the underlying
block size.
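For example, a CTR-mode registration might look roughly like this (a
sketch; the algorithm name, key sizes and callbacks are illustrative):

  #include <crypto/aes.h>
  #include <crypto/internal/skcipher.h>

  static struct skcipher_alg ctr_sketch = {
          .base.cra_name          = "ctr(aes)",
          .base.cra_blocksize     = 1,               /* accepts any input length */
          .min_keysize            = AES_MIN_KEY_SIZE,
          .max_keysize            = AES_MAX_KEY_SIZE,
          .ivsize                 = AES_BLOCK_SIZE,
          .chunksize              = AES_BLOCK_SIZE,  /* but walks in cipher-block units */
          /* .setkey/.encrypt/.decrypt callbacks omitted from this sketch */
  };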
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Arbitrary X.509 certificates without authority key identifiers (AKIs)
can be added to "trusted" keyrings, including IMA or EVM certs loaded
from the filesystem. Signature verification is currently bypassed for
certs without AKIs.
Trusted keys were recently refactored, and this bug is not present in
4.6.
restrict_link_by_signature should return -ENOKEY (no matching parent
certificate found) if the certificate being evaluated has no AKIs,
instead of bypassing signature checks and returning 0 (new certificate
accepted).
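A minimal sketch of the kind of check implied (field and helper names
follow the 4.7-era asymmetric-key code and should be treated as
illustrative):

          /* A cert carrying no authority key identifiers cannot name a
           * parent, so refuse to link it rather than silently trusting it. */
          if (!sig->auth_ids[0] && !sig->auth_ids[1])
                  return -ENOKEY;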
Reported-by: Petko Manolov <petkan@mip-labs.com>
Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: James Morris <james.l.morris@oracle.com>
Commit e68503bd68 forgot to set digest_len and thus causes the following
error reported by kexec when launching a crash kernel:
kexec_file_load failed: Bad message
Fixes: e68503bd68 (KEYS: Generalise system_verify_data() to provide access to internal content)
Signed-off-by: Lans Zhang <jia.zhang@windriver.com>
Tested-by: Dave Young <dyoung@redhat.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Cc: Baoquan He <bhe@redhat.com>
Cc: Vivek Goyal <vgoyal@redhat.com>
cc: kexec@lists.infradead.org
cc: linux-crypto@vger.kernel.org
Signed-off-by: James Morris <james.l.morris@oracle.com>
Key generated with openssl. It also contains all fields required
for testing CRT mode.
Signed-off-by: Salvatore Benedetto <salvatore.benedetto@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When parsing a private key, store all non-optional fields. These
are required for enabling CRT mode for decrypt and verify.
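For reference, the extra fields (p, q, dP = d mod (p-1), dQ = d mod (q-1),
qInv = q^-1 mod p) let the private-key operation be performed as two
half-size exponentiations plus a recombination:

  m1 = c^dP mod p
  m2 = c^dQ mod q
  h  = qInv * (m1 - m2) mod p
  m  = m2 + h * q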
Signed-off-by: Salvatore Benedetto <salvatore.benedetto@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Report correct error in case of failure
Signed-off-by: Salvatore Benedetto <salvatore.benedetto@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Use the vector polynomial multiply-sum instructions in POWER8 to
speed up crc32c.
This is just over 41x faster than the slice-by-8 method that it
replaces. Measurements on a 4.1 GHz POWER8 show it sustaining
52 GiB/sec.
A simple btrfs write performance test:
dd if=/dev/zero of=/mnt/tmpfile bs=1M count=4096
sync
is over 3.7x faster.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As the software RSA implementation now produces fixed-length
output, we need to eliminate leading zeros in the calling code
instead.
This patch does just that for pkcs1pad signature verification.
Fixes: 9b45b7bba3 ("crypto: rsa - Generate fixed-length output")
Reported-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds HMAC-SHA3 test modes in tcrypt module
and related test vectors.
Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The multibuffer hash speed test is incorrectly bailing because
of an EINPROGRESS return value. This patch fixes it by setting
ret to zero if it is equal to -EINPROGRESS.
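The pattern is simply to treat -EINPROGRESS as success for the purposes
of the speed test (fragment-level sketch):

          ret = crypto_ahash_digest(req);
          if (ret == -EINPROGRESS)
                  ret = 0;        /* queued asynchronously; not a failure */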
Reported-by: Megha Dey <megha.dey@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In the vast majority of cases (2^-32 on 32-bit and 2^-64 on 64-bit),
the result from encryption/signing will require no padding.
This patch makes these two operations write their output directly
to the final destination. Only in the exceedingly rare cases where
fixup is needed do we copy it out and back to add the leading zeroes.
This patch also makes use of the akcipher_request_set_crypt API
instead of writing the akcipher request directly.
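A typical call looks like this (buffer names follow rsa-pkcs1pad but are
illustrative):

          /* Point the child RSA request straight at the caller's buffers so
           * the common no-fixup case needs no extra copy. */
          akcipher_request_set_crypt(&req_ctx->child_req, req->src, req->dst,
                                     req->src_len, req->dst_len);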
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Rather than repeatedly checking the key size on each operation,
we should be checking it once when the key is set.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
We don't currently support using akcipher in atomic contexts,
so GFP_KERNEL should always be used.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The helper pkcs1pad_sg_set_buf tries to split a buffer that crosses
a page boundary into two SG entries. This is unnecessary. This
patch removes that.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The only user of rsa-pkcs1pad always uses the hash so there is
no reason to support the case of not having a hash.
This patch also changes the digest info lookup so that it is
only done once during template instantiation rather than on each
operation.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>