Added an mpi_read_buf() helper function to export an MPI to a buffer provided
by the user, and an mpi_get_size() helper that tells the user how big that
buffer needs to be.
Changed mpi_free() to use kzfree instead of kfree because it is used to free
crypto keys.
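A minimal usage sketch; the prototypes shown here are assumptions based on the
description above, not the exact signatures:

    /* assumed for illustration only:
     *   unsigned int mpi_get_size(MPI a);              - bytes needed for export
     *   int mpi_read_buf(MPI a, u8 *buf, unsigned int len);
     */
    #include <linux/mpi.h>
    #include <linux/slab.h>

    static int export_key_mpi(MPI m)
    {
        unsigned int len = mpi_get_size(m);
        u8 *buf = kmalloc(len, GFP_KERNEL);
        int ret;

        if (!buf)
            return -ENOMEM;

        ret = mpi_read_buf(m, buf, len);
        /* ... hand the exported big number to the caller ... */
        kzfree(buf);    /* key material, so zero before freeing */
        return ret;
    }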
Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The top-level CRYPTO_DEV_VMX option already depends on PPC64, so there is no
need for CRYPTO_DEV_VMX_ENCRYPT to depend on it again.
This patch also removes a redundant "default n".
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The '__init aesni_init()' function calls the '__exit crypto_fpu_exit()'
function directly. Since they are in different sections, this generates
a warning.
make CONFIG_DEBUG_SECTION_MISMATCH=y
...
WARNING: arch/x86/crypto/aesni-intel.o(.init.text+0x12b): Section
mismatch in reference from the function init_module() to the function
.exit.text:crypto_fpu_exit()
The function __init init_module() references
a function __exit crypto_fpu_exit().
This is often seen when error handling in the init function
uses functionality in the exit path.
The fix is often to remove the __exit annotation of
crypto_fpu_exit() so it may be used outside an exit section.
Fix the warning by removing the __exit annotation.
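The change amounts to dropping the section annotation; roughly (the template
name below is illustrative):

    /* before: callable only from .exit.text */
    static void __exit crypto_fpu_exit(void)
    {
        crypto_unregister_template(&crypto_fpu_tmpl);
    }

    /* after: a plain function, so the __init error path in aesni_init()
     * may call it without a section mismatch
     */
    static void crypto_fpu_exit(void)
    {
        crypto_unregister_template(&crypto_fpu_tmpl);
    }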
Signed-off-by: Jeremiah Mahler <jmmahler@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Replace the NX842_MEM_COMPRESS define with a function that returns the
specific platform driver's required working memory size.
The common nx-842.c driver refuses to load if there is no platform driver
present, so instead of defining an approximate working memory size that is
the maximum of both platform drivers' approximate size requirements, each
platform driver can directly provide its specific size requirement, i.e.
sizeof(struct nx842_workmem), which the 842-nx crypto compression driver
will use.
This saves memory both by reducing each driver's required size to the exact
sizeof() amount and by using the requirement of the specific platform driver
that is loaded, instead of the maximum of both.
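A sketch of the resulting pattern (struct and field names below are
illustrative, apart from struct nx842_workmem itself):

    #include <linux/slab.h>

    struct nx842_driver {
        /* ... compress/decompress ops ... */
        size_t workmem_size;    /* sizeof() the platform's own workmem */
    };

    /* platform driver side */
    static struct nx842_driver nx842_pseries_driver = {
        .workmem_size = sizeof(struct nx842_workmem),
        /* ... */
    };

    /* 842-nx crypto compression driver side */
    static void *nx842_alloc_workmem(struct nx842_driver *drv)
    {
        return kmalloc(drv->workmem_size, GFP_KERNEL);
    }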
Signed-off-by: Dan Streetman <ddstreet@ieee.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Move the contents of the include/linux/nx842.h header file into the
drivers/crypto/nx/nx-842.h header file. Remove the nx842.h header
file and its entry in the MAINTAINERS file.
The include/linux/nx842.h header originally was there because the
crypto/842.c driver needed it to communicate with the nx-842 hw
driver. However, that crypto compression driver was moved into
the drivers/crypto/nx/ directory, and now can directly include the
nx-842.h header. Nothing else needs the public include/linux/nx842.h header
file; all use of the nx-842 hardware driver will go through the "842-nx"
crypto compression driver, since the direct nx-842 API is very limited in
the buffer alignments and sizes it will accept, while the crypto compression
interface handles those limitations and accepts buffers of any alignment and
size.
Signed-off-by: Dan Streetman <ddstreet@ieee.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Currently the driver assumes that the SG list contains exactly
the number of bytes required. This assumption is incorrect.
Up until now this has been harmless. However with the new AEAD
interface this now breaks as the AD SG list contains more bytes
than just the AD.
This patch fixes this by always clamping the AD SG list by the
specified AD length.
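One way to express the clamp, as a sketch (the helper name is illustrative;
the driver's actual code may differ):

    #include <linux/kernel.h>
    #include <linux/scatterlist.h>

    /* Count only the SG entries needed to cover assoclen bytes, so trailing
     * entries in the AD list (which now carry non-AD data) are ignored.
     */
    static int clamp_assoc_sg(struct scatterlist *sg, unsigned int assoclen)
    {
        int ents = 0;

        while (sg && assoclen) {
            assoclen -= min(assoclen, sg->length);
            ents++;
            sg = sg_next(sg);
        }

        return assoclen ? -EINVAL : ents;
    }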
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch makes use of the new sg_nents_for_len helper to replace
the custom sg_count function.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This driver uses SZ_64K, so it should include linux/sizes.h itself rather
than relying on other headers to pull it in.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Be more verbose and also report ->backend_cra_name when crypto_alloc_shash()
or crypto_alloc_cipher() fails in drbg_init_hash_kernel() or
drbg_init_sym_kernel(), respectively.
Example:
DRBG: could not allocate digest TFM handle: hmac(sha256)
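The code emitting it is roughly of this shape (a sketch; the
->backend_cra_name field comes from the text above, the surrounding code is
illustrative):

    tfm = crypto_alloc_shash(drbg->core->backend_cra_name, 0, 0);
    if (IS_ERR(tfm)) {
        pr_info("DRBG: could not allocate digest TFM handle: %s\n",
                drbg->core->backend_cra_name);
        return PTR_ERR(tfm);
    }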
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As required by SP800-90A, the DRBG implements a reseeding threshold. This
threshold is 2**48 requests (on 64-bit systems) and 2**32 requests (on
32-bit systems), as implemented in drbg_max_requests.
With the recently introduced changes, the DRBG is now always used as a
stdrng which is initialized very early in the boot cycle. To ensure that
sufficient entropy is present, the Jitter RNG was added to provide entropy
even at early boot time.
However, the second seed source, the nonblocking pool, is usually degraded
at that time. Therefore, the DRBG is seeded with the Jitter RNG (which I
believe contains good entropy, although others have questioned that) and
with a degraded nonblocking pool. This seed is then used for essentially the
lifetime of the system (2**48 requests is a lot).
The patch now changes the reseed threshold as follows: up until the time the
DRBG obtains a seed from a fully initialized nonblocking pool, the reseeding
threshold is lowered so that the DRBG is forced to reseed itself reasonably
often. Once it obtains the seed from a fully initialized nonblocking pool,
the reseed threshold is set to the value required by SP800-90A.
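Schematically, the intended behaviour (the helper and the low threshold value
are illustrative; only drbg_max_requests is named in the text above):

    static void drbg_set_reseed_threshold(struct drbg_state *drbg,
                                          bool pool_fully_seeded)
    {
        /* reseed often during early boot, then fall back to the
         * SP800-90A limit once the nonblocking pool is fully initialized
         */
        drbg->reseed_threshold = pool_fully_seeded ?
                                 drbg_max_requests(drbg) : 50;
    }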
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch removes the kernel blocking API as it has been completely
replaced by the callback API.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The get_blocking_random_bytes API is broken because the wait can
be arbitrarily long (potentially forever) so there is no safe way
of calling it from within the kernel.
This patch replaces it with the new callback API which does not
have this problem.
The patch also removes the entropy buffer registered with the DRBG
handle in favor of stack variables to hold the seed data.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The get_blocking_random_bytes API is broken because the wait can
be arbitrarily long (potentially forever) so there is no safe way
of calling it from within the kernel.
This patch replaces it with a callback API instead. The callback
is invoked potentially from interrupt context so the user needs
to schedule their own work thread if necessary.
In addition to adding callbacks, they can also be removed; otherwise this
would open up a way for user-space to allocate unbounded kernel memory (by
opening algif_rng descriptors and then closing them).
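A usage sketch, assuming the callback API has roughly the register/unregister
shape of add_random_ready_callback()/del_random_ready_callback() (names and
details may differ):

    #include <linux/module.h>
    #include <linux/random.h>
    #include <linux/workqueue.h>

    static struct work_struct seed_work;    /* INIT_WORK() done elsewhere */

    /* may be invoked from interrupt context, so only kick a work item */
    static void seed_ready(struct random_ready_callback *rdy)
    {
        schedule_work(&seed_work);
    }

    static struct random_ready_callback seed_rdy = {
        .owner = THIS_MODULE,
        .func  = seed_ready,
    };

    static int register_seed_callback(void)
    {
        int err = add_random_ready_callback(&seed_rdy);

        /* -EALREADY: pool already fully seeded, seed synchronously instead */
        return err == -EALREADY ? 0 : err;
    }

    static void unregister_seed_callback(void)
    {
        /* removal is what prevents unbounded allocations via algif_rng */
        del_random_ready_callback(&seed_rdy);
    }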
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
nios2 is the only architecture that does not inline get_cycles
and does not export it. This breaks crypto as it uses get_cycles
in a number of modules.
Reported-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Replace the global -O0 compiler flag from the Makefile with GCC
pragmas to mark only the functions required to be compiled without
optimizations.
This patch also adds a comment describing the rationale for the
functions chosen to be compiled without optimizations.
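A sketch of the mechanism, assuming the standard GCC option pragmas (the
function shown is a placeholder for the real noise-source functions):

    #include <linux/types.h>

    #pragma GCC push_options
    #pragma GCC optimize ("O0")

    /* The measured execution time of code like this is the entropy source,
     * so the compiler must not restructure or elide it.
     */
    static __u64 jent_noise_placeholder(__u64 time)
    {
        __u64 fold = 0;
        int i;

        for (i = 0; i < 64; i++)
            fold ^= (time >> i) & 1;

        return fold;
    }

    #pragma GCC pop_options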
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Currently caam assumes that the SG list contains exactly the number
of bytes required. This assumption is incorrect.
Up until now this has been harmless. However with the new AEAD
interface this now breaks as the AD SG list contains more bytes
than just the AD.
This patch fixes this by always clamping the AD SG list by the
specified AD length.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch fixes an issue when building the internal AD representation.
We need to check assoclen and not just blindly loop over the assoc SGL.
Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The device doesn't support the default value and will change it to 256, which
will cause performance degradation for bigger packets.
Add an explicit write to set it to 1024.
Reported-by: Tianliang Wang <tianliang.wang@intel.com>
Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch fixes some typos found in crypto-API.xml.
Because the file is generated from comments in the sources, the typos had to
be fixed in the sources.
Signed-off-by: Masanari Iida <standby24x7@gmail.com>
Acked-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch fixes some spelling typos found in crypto-API.tmpl.
Signed-off-by: Masanari Iida <standby24x7@gmail.com>
Acked-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch creates a new invisible Kconfig option CRYPTO_RNG_DEFAULT
that simply selects the DRBG. This new option is then selected
by the IV generators.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds the stdrng module alias and increases the priority
to ensure that it is loaded in preference to other RNGs.
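Roughly what that looks like in the algorithm definition (the driver name and
priority value are illustrative):

    #include <crypto/rng.h>
    #include <linux/module.h>

    static struct rng_alg example_drbg_alg = {
        .base = {
            .cra_name        = "stdrng",
            .cra_driver_name = "drbg_nopr_hmac_sha256",
            .cra_priority    = 200,    /* above other stdrng providers */
        },
        /* .generate, .seed, ... */
    };

    MODULE_ALIAS_CRYPTO("stdrng");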
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
We currently do the IV seeding on the first givencrypt call in
order to conserve entropy. However, this does not work with
DRBG which cannot be called from interrupt context. In fact,
with DRBG we don't need to conserve entropy anyway. So this
patch moves the seeding into the init function.
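Schematically, the change has this shape (the context struct, salt size and
names are illustrative):

    #include <crypto/rng.h>
    #include <linux/crypto.h>

    struct example_giv_ctx {
        u8 salt[16];    /* illustrative salt/IV size */
    };

    static int example_giv_init(struct crypto_tfm *tfm)
    {
        struct example_giv_ctx *ctx = crypto_tfm_ctx(tfm);
        int err;

        /* seeding now happens here, in process context, rather than on
         * the first givencrypt call
         */
        err = crypto_get_default_rng();
        if (err)
            return err;

        err = crypto_rng_get_bytes(crypto_default_rng, ctx->salt,
                                   sizeof(ctx->salt));
        crypto_put_default_rng();
        return err;
    }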
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
We currently do the IV seeding on the first givencrypt call in
order to conserve entropy. However, this does not work with
DRBG which cannot be called from interrupt context. In fact,
with DRBG we don't need to conserve entropy anyway. So this
patch moves the seeding into the init function.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
We currently do the IV seeding on the first givencrypt call in
order to conserve entropy. However, this does not work with
DRBG which cannot be called from interrupt context. In fact,
with DRBG we don't need to conserve entropy anyway. So this
patch moves the seeding into the init function.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
We currently do the IV seeding on the first givencrypt call in
order to conserve entropy. However, this does not work with
DRBG which cannot be called from interrupt context. In fact,
with DRBG we don't need to conserve entropy anyway. So this
patch moves the seeding into the init function.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Reduce the nx-842 pSeries driver minimum buffer size from 128 to 8.
Also replace the single use of IO_BUFFER_ALIGN macro with the standard
and correct DDE_BUFFER_ALIGN.
The hw sometimes rejects buffers that contain padding past the end of the
8-byte aligned section where it sees the "end" marker. With the minimum
buffer size set too high, some highly compressed buffers were being padded
and the hw was incorrectly rejecting them; this sets the minimum correctly
so there will be no incorrect padding.
Signed-off-by: Dan Streetman <ddstreet@ieee.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
draft-ietf-ipsecme-chacha20-poly1305 defines the use of ChaCha20/Poly1305 in
ESP. It uses an additional four bytes of key material as a salt, which is
then combined with an 8-byte IV to form the ChaCha20 nonce as defined in
RFC7539.
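That is, the 12-byte nonce is the 4-byte salt followed by the 8-byte IV; as a
sketch (the helper name is illustrative):

    #include <linux/string.h>
    #include <linux/types.h>

    static void rfc7539esp_build_nonce(u8 nonce[12], const u8 salt[4],
                                       const u8 iv[8])
    {
        memcpy(nonce, salt, 4);      /* extra 4 bytes of keying material */
        memcpy(nonce + 4, iv, 8);    /* explicit per-packet IV */
    }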
Signed-off-by: Martin Willi <martin@strongswan.org>
Acked-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This AEAD uses a chacha20 ablkcipher and a poly1305 ahash to construct the
ChaCha20-Poly1305 AEAD as defined in RFC7539. It supports both synchronous
and asynchronous operations, even though we currently have no async chacha20
or poly1305 drivers.
Signed-off-by: Martin Willi <martin@strongswan.org>
Acked-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Poly1305 is a fast message authenticator designed by Daniel J. Bernstein.
It is further defined in RFC7539 as a building block for the ChaCha20-Poly1305
AEAD for use in IETF protocols.
This is a portable C implementation of the algorithm without architecture
specific optimizations, based on public domain code by Daniel J. Bernstein and
Andrew Moon.
Signed-off-by: Martin Willi <martin@strongswan.org>
Acked-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
We explicitly set the initial block counter by prepending it to the nonce in
little-endian form. The same test vector is used for both encryption and
decryption, since ChaCha20 is a stream cipher that XORs a keystream with the
data.
Signed-off-by: Martin Willi <martin@strongswan.org>
Acked-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
ChaCha20 is a high speed 256-bit key size stream cipher algorithm designed by
Daniel J. Bernstein. It is further specified in RFC7539 for use in IETF
protocols as a building block for the ChaCha20-Poly1305 AEAD.
This is a portable C implementation without any architecture specific
optimizations. It uses a 16-byte IV, consisting of the 4-byte initial block
counter followed by the 12-byte ChaCha20 nonce. Some algorithms, for example
the AEAD construction mentioned above, require an explicit counter value.
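As a sketch of the IV layout (the helper name is illustrative):

    #include <asm/unaligned.h>
    #include <linux/string.h>
    #include <linux/types.h>

    static void chacha20_build_iv(u8 iv[16], u32 initial_counter,
                                  const u8 nonce[12])
    {
        put_unaligned_le32(initial_counter, iv);    /* block counter, LE */
        memcpy(iv + 4, nonce, 12);                  /* 96-bit nonce */
    }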
Signed-off-by: Martin Willi <martin@strongswan.org>
Acked-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Scatter gather lists can be created with more available entries than are
actually used (e.g. using sg_init_table() to reserve a specific number
of sg entries, but in actuality using something less than that based on
the data length). The caller sometimes fails to mark the last entry
with sg_mark_end(). In these cases, sg_nents() will return the original
size of the sg list as opposed to the actual number of sg entries that
contain valid data.
On arm64, if the sg_nents() value is used in a call to dma_map_sg() in
this situation, then it causes a BUG_ON in lib/swiotlb.c because an
"empty" sg list entry results in dma_capable() returning false and
swiotlb trying to create a bounce buffer of size 0. This occurred in
the userspace crypto interface before being fixed by
0f477b655a ("crypto: algif - Mark sgl end at the end of data").
Protect against this by using the new sg_nents_for_len() function which
returns only the number of sg entries required to meet the desired
length and supplying that value to dma_map_sg().
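A usage sketch (the device and direction are placeholders):

    #include <linux/dma-mapping.h>
    #include <linux/scatterlist.h>

    static int map_for_dma(struct device *dev, struct scatterlist *sg,
                           unsigned int len, enum dma_data_direction dir)
    {
        int nents = sg_nents_for_len(sg, len);

        if (nents < 0)
            return nents;    /* list shorter than len */

        /* map only the entries that actually carry data */
        if (!dma_map_sg(dev, sg, nents, dir))
            return -ENOMEM;

        return nents;
    }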
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When performing a dma_map_sg() call, the number of sg entries to map is
required. Using sg_nents to retrieve the number of sg entries will
return the total number of entries in the sg list up to the entry marked
as the end. If there happen to be unused entries in the list, these will
still be counted. Some dma_map_sg() implementations will not handle the
unused entries correctly (lib/swiotlb.c) and will trigger a BUG_ON.
The sg_nents_for_len() function will traverse the sg list and return the
number of entries required to satisfy the supplied length argument. This
can then be supplied to the dma_map_sg() call to successfully map the
sg.
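Schematically, the helper walks the list and stops once the requested length
is covered (a sketch of the logic, not necessarily the exact implementation):

    int sg_nents_for_len(struct scatterlist *sg, u64 len)
    {
        int nents;
        u64 total;

        if (!len)
            return 0;

        for (nents = 0, total = 0; sg; sg = sg_next(sg)) {
            nents++;
            total += sg->length;
            if (total >= len)
                return nents;
        }

        /* the list does not contain len bytes */
        return -EINVAL;
    }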
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
On architectures where flush_dcache_page is not needed, we will
end up generating all the code up to the PageSlab call. This is
because PageSlab operates on a volatile pointer and thus cannot
be optimised away.
This patch works around this by checking whether flush_dcache_page is needed
before we call PageSlab, which then allows the PageSlab call to be compiled
away.
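The workaround has roughly this shape (simplified from the scatterwalk page
handling; layout details are illustrative):

    if (out) {
        struct page *page = sg_page(walk->sg) +
                            ((walk->offset - 1) >> PAGE_SHIFT);

        /* ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE is 0 where flush_dcache_page()
         * is a no-op, so the && lets the compiler drop the PageSlab() test,
         * which it otherwise cannot because PageSlab() reads through a
         * volatile pointer.
         */
        if (ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE && !PageSlab(page))
            flush_dcache_page(page);
    }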
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Change the nx-842 common driver to wait for loading of both platform
drivers, and fail loading if the platform driver pointer is not set.
Add an independent platform driver pointer, that the platform drivers
set if they find they are able to load (i.e. if they find their platform
devicetree node(s)).
The problem is that currently the main nx-842 driver will stay loaded even
if there is no platform driver, and thus no possible way for it to do any
compression or decompression. This allows the crypto 842-nx driver
to load even if it won't actually work. For crypto compression users
(e.g. zswap) that expect an available crypto compression driver to
actually work, this is bad. This patch fixes that, so the 842-nx crypto
compression driver won't load if it doesn't have the driver and hardware
available to perform the compression.
Signed-off-by: Dan Streetman <ddstreet@ieee.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch converts rfc4106-gcm-aesni to the new AEAD interface.
The low-level interface remains as is for now because we can't
touch it until cryptd itself is upgraded.
In the conversion I've also removed the duplicate copy of the
context in the top-level algorithm. Now all processing is carried
out in the low-level __driver-gcm-aes-aesni algorithm.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>