The local variable "ret" will be set to an appropriate value a bit later.
Thus omit the explicit initialisation at the beginning.
Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* Return a value at the end without storing it in an intermediate variable.
* Delete the local variable "ret" which became unnecessary with
this refactoring.
Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Adjust jump labels according to the current Linux coding style convention.
Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Adjust jump labels according to the current Linux coding style convention.
Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* A multiplication for the size determination of a memory allocation
indicated that an array data structure should be processed.
Thus use the corresponding function "kmalloc_array".
This issue was detected by using the Coccinelle software.
* Replace the specification of a data type by a pointer dereference
to make the corresponding size determination a bit safer according to
the Linux coding style convention.
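Illustratively (pointer and element names hypothetical), the
transformation looks like:

  /* before: open-coded multiplication, element type spelled out */
  sg = kmalloc(nents * sizeof(struct sec4_sg_entry), GFP_KERNEL);

  /* after: kmalloc_array() checks the multiplication for overflow,
   * and sizeof(*sg) stays correct if the type of "sg" ever changes
   */
  sg = kmalloc_array(nents, sizeof(*sg), GFP_KERNEL);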
Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add a helper to map the source scatterlist into the descriptor.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add a helper function to perform the descriptor allocation.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Strictly, dma_map_sg() may coalesce SG entries, but in practice on iMX
hardware, this will never happen. However, dma_map_sg() can fail, and
we completely fail to check its return value. So, fix this properly.
Arrange the code to map the scatterlist early, so we know how many
scatter table entries to allocate, and then fill them in. This allows
us to keep relatively simple error cleanup paths.
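Sketched (variable names illustrative), the resulting flow is:

  mapped_nents = dma_map_sg(jrdev, req->src, src_nents, DMA_TO_DEVICE);
  if (!mapped_nents) {
          dev_err(jrdev, "unable to DMA map source\n");
          return -ENOMEM;
  }
  /* allocate mapped_nents S/G table entries, then fill them in */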
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Ensure that we clean up allocations and DMA mappings after encountering
an error rather than just giving up and leaking memory and resources.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Since the extended descriptor includes the hardware descriptor, and the
sec4 scatterlist immediately follows this, we can declare it as an array
at the very end of the extended descriptor. This allows us to get rid
of an initialiser for every site where we allocate an extended
descriptor.
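A sketch of the resulting layout (other members elided):

  struct ahash_edesc {
          /* ... DMA addresses and entry counts ... */
          u32 hw_desc[DESC_JOB_IO_LEN / sizeof(u32)];
          struct sec4_sg_entry sec4_sg[];  /* hw S/G table at the very end */
  };

Call sites can then refer to edesc->sec4_sg directly instead of
initialising a pointer past the hardware descriptor at every
allocation site.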
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Mark the hardware descriptor as being cache line aligned; on DMA
incoherent architectures, the hardware descriptor should sit in a
separate cache line from the CPU accessed data to avoid polluting
the caches.
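In code this amounts to a one-line annotation using the kernel's
standard attribute (a sketch):

  u32 hw_desc[DESC_JOB_IO_LEN / sizeof(u32)] ____cacheline_aligned;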
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Rather than giving the descriptor as hw_desc[0], give its real size.
All places where we allocate an ahash_edesc incorporate DESC_JOB_IO_LEN
bytes of job descriptor.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
caamhash contains this weird code:
  src_nents = sg_count(req->src, req->nbytes);
  dma_map_sg(jrdev, req->src, src_nents ? : 1, DMA_TO_DEVICE);
  ...
  edesc->src_nents = src_nents;
sg_count() returns zero when sg_nents_for_len() returns zero or one.
This means we don't need to use a hardware scatterlist. However,
setting src_nents to zero causes problems when we unmap:
  if (edesc->src_nents)
          dma_unmap_sg_chained(dev, req->src, edesc->src_nents,
                               DMA_TO_DEVICE, edesc->chained);
as zero here means that we have no entries to unmap. This causes us
to leak DMA mappings, where we map one scatterlist entry and then
fail to unmap it.
This can be fixed in two ways: either by writing the number of entries
that were requested of dma_map_sg(), or by reworking the "no SG
required" case.
We adopt the re-work solution here - we replace sg_count() with
sg_nents_for_len(), so src_nents now contains the real number of
scatterlist entries, and we then change the test for using the
hardware scatterlist to src_nents > 1 rather than just non-zero.
This change passes my sshd, openssl tests hashing /bin and tcrypt
tests.
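Sketched against the fragment above:

  src_nents = sg_nents_for_len(req->src, req->nbytes);
  dma_map_sg(jrdev, req->src, src_nents ? : 1, DMA_TO_DEVICE);
  ...
  edesc->src_nents = src_nents;  /* real entry count, so unmap balances */
  ...
  if (src_nents > 1) {
          /* only now is a hardware scatterlist needed */
  }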
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Since 6de62f15b5 ("crypto: algif_hash - Require setkey before
accept(2)"), the AF_ALG interface requires userspace to provide a key
to any algorithm that has a setkey method. However, the non-HMAC
algorithms are not keyed, so setting a key is unnecessary.
Fix this by removing the setkey method from the non-keyed hash
algorithms.
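The shape of the fix, as a sketch (the "keyed" flag and halg pointer
are illustrative):

  /* wire up .setkey only for the keyed (HMAC) variants */
  if (keyed)
          halg->setkey = ahash_setkey;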
Fixes: 6de62f15b5 ("crypto: algif_hash - Require setkey before accept(2)")
Cc: <stable@vger.kernel.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
There are SoCs like LS1043A where CAAM endianness (BE) does not match
the default endianness of the core (LE).
Moreover, there are requirements for the driver to handle cases like
CPU_BIG_ENDIAN=y on ARM-based SoCs.
This requires a complete rewrite of the I/O accessors.
PPC-specific accessors - {in,out}_{le,be}XX - are replaced with
generic ones - io{read,write}[be]XX.
Endianness is detected dynamically (at runtime) to allow for
multiplatform kernels, e.g. running the same kernel image
on LS1043A (BE CAAM) and LS2080A (LE CAAM) armv8-based SoCs.
While here: debugfs entries need to take into consideration the
endianness of the core when displaying data. Add the necessary
glue code so the entries remain the same, but they are properly
read, regardless of the core and/or SEC endianness.
Note: pdb.h fixes only what is currently being used (IPsec).
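The core of the new accessors looks roughly like this (a sketch of
the approach; the flag is set once at probe time):

  extern bool caam_little_end;

  static inline void wr_reg32(void __iomem *reg, u32 data)
  {
          if (caam_little_end)
                  iowrite32(data, reg);
          else
                  iowrite32be(data, reg);
  }

  static inline u32 rd_reg32(void __iomem *reg)
  {
          if (caam_little_end)
                  return ioread32(reg);

          return ioread32be(reg);
  }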
Reviewed-by: Tudor Ambarus <tudor-dan.ambarus@nxp.com>
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Alex Porosanu <alexandru.porosanu@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When buffer 0 is used, we should use buflen_0 instead of buflen_1.
Fix it.
Signed-off-by: Fabio Estevam <fabio.estevam@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The sg_nents_for_len() function can fail; this patch adds a check for
its return value.
We do the same for sg_count(), since it uses sg_nents_for_len().
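The added check, sketched:

  src_nents = sg_nents_for_len(req->src, req->nbytes);
  if (src_nents < 0) {
          dev_err(jrdev, "Invalid number of src SG.\n");
          return src_nents;
  }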
Signed-off-by: LABBE Corentin <clabbe.montjoie@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The kernel's coding style suggests that closing braces for initialisers
should not be aligned to the open brace column. The CodingStyle doc
shows how this should be done. Remove the additional tab.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Avoid exporting lots of state by only exporting what we really require,
which is the buffer containing the set of pending bytes to be hashed,
number of pending bytes, the context buffer, and the function pointer
state. This reduces the exported state size from 576 bytes to 216
bytes.
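The exported state then reduces to something like (a sketch; exact
members and sizes per the driver's definitions):

  struct caam_export_state {
          u8 buf[CAAM_MAX_HASH_BLOCK_SIZE];
          u8 caam_ctx[MAX_CTX_LEN];
          int buflen;
          int (*update)(struct ahash_request *req);
          int (*final)(struct ahash_request *req);
          int (*finup)(struct ahash_request *req);
  };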
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
caam does not properly calculate the size of the retained state
when non-block aligned hashes are requested - it uses the wrong
buffer sizes, which results in errors such as:
caam_jr 2102000.jr1: 40000501: DECO: desc idx 5: SGT Length Error. The descriptor is trying to read more data than is contained in the SGT table.
We end up here with:
in_len 0x46 blocksize 0x40 last_bufsize 0x0 next_bufsize 0x6
to_hash 0x40 ctx_len 0x28 nbytes 0x20
which results in a job descriptor of:
jobdesc@889: ed03d918: b0861c08 3daa0080 f1400000 3d03d938
jobdesc@889: ed03d928: 00000068 f8400000 3cde2a40 00000028
where the word at 0xed03d928 is the expected data size (0x68), and a
scatterlist containing:
sg@892: ed03d938: 00000000 3cde2a40 00000028 00000000
sg@892: ed03d948: 00000000 3d03d100 00000006 00000000
sg@892: ed03d958: 00000000 7e8aa700 40000020 00000000
0x68 comes from 0x28 (the context size) plus the "in_len" rounded down
to a block size (0x40). in_len comes from 0x26 bytes of unhashed data
from the previous operation, plus the 0x20 bytes from the latest
operation.
The fixed version would create:
sg@892: ed03d938: 00000000 3cde2a40 00000028 00000000
sg@892: ed03d948: 00000000 3d03d100 00000026 00000000
sg@892: ed03d958: 00000000 7e8aa700 40000020 00000000
which replaces the 0x06 length with the correct 0x26 bytes of previously
unhashed data.
This fixes a previous commit which erroneously "fixed" this due to a
DMA-API bug report; that commit indicates that the bug was caused via a
test_ahash_pnum() function in the tcrypt module. No such function has
ever existed in the mainline kernel. Given that the change in this
commit has been tested with DMA API debug enabled and shows no issue,
I can only conclude that test_ahash_pnum() was triggering that bad
behaviour by CAAM.
Fixes: 7d5196aba3 ("crypto: caam - Correct DMA unmap size in ahash_update_ctx()")
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When exporting and importing the hash state, we will only export and
import into hashes which share the same struct crypto_ahash pointer.
(See hash_accept->af_alg_accept->hash_accept_parent.)
This means that saving the caam_hash_ctx structure on export, and
restoring it on import is a waste of resources. So, remove this code.
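Export then becomes a plain copy of the per-request state, sketched:

  static int ahash_export(struct ahash_request *req, void *out)
  {
          struct caam_hash_state *state = ahash_request_ctx(req);

          memcpy(out, state, sizeof(*state));
          return 0;
  }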
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Print the errno code when hash registration fails, so we know why the
failure occurred. This aids debugging.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The caam driver uses two dma_map_sg paths depending on whether the SG
list is chained or not.
Since dma_map_sg() can handle both cases, clean up the code by removing
all references to SG chaining, along with the now-unused
dma_map_sg_chained(), dma_unmap_sg_chained() and __sg_count()
functions.
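Illustratively:

  /* before: a special-cased helper that walked chained lists by hand */
  dma_map_sg_chained(jrdev, req->src, src_nents, DMA_TO_DEVICE, chained);

  /* after: dma_map_sg() already iterates chained scatterlists */
  dma_map_sg(jrdev, req->src, src_nents, DMA_TO_DEVICE);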
Signed-off-by: LABBE Corentin <clabbe.montjoie@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
"The preferred form for passing a size of a struct is the following:
p = kmalloc(sizeof(*p), ...);
....
The preferred form for allocating a zeroed array is the following:
p = kcalloc(n, sizeof(...), ...); "
so do as suggested.
Signed-off-by: Fabio Estevam <fabio.estevam@freescale.com>
Reviewed-by: Horia Geantă <horia.geanta@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When doing pointer arithmetic to access the HW S/G table, a value
representing the number of entries (and not the number of bytes)
must be used.
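Illustratively (index names hypothetical), pointer arithmetic on a
struct sec4_sg_entry * already scales by the entry size:

  struct sec4_sg_entry *sg = edesc->sec4_sg;

  sg += src_index;      /* correct: advances by entries */
  sg += sec4_sg_bytes;  /* wrong: a byte count, scaled again by entry size */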
Cc: <stable@vger.kernel.org> # 3.6+
Fixes: 045e36780f ("crypto: caam - ahash hmac support")
Signed-off-by: Horia Geantă <horia.geanta@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Register only algorithms supported by CAAM hardware, using the CHA
version and instantiation registers to identify hardware capabilities.
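A sketch of the idea (register and mask names roughly as in the
driver):

  u32 cha_inst = rd_reg32(&priv->ctrl->perfmon.cha_num_ls);

  /* skip registering AES-based algorithms if no AES unit is present */
  if (!(cha_inst & CHA_ID_LS_AES_MASK))
          continue;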
Signed-off-by: Victoria Milhoan <vicki.milhoan@freescale.com>
Tested-by: Horia Geantă <horia.geanta@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Since fields must be ORed in for the code to operate correctly
regardless of the order of operations, change the allocations of the
combined extended descriptor structs + hardware scatterlists to use
kzalloc() instead of kmalloc(), ensuring that residual data is not
ORed in with the correct data.
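Sketched:

  /* zeroed allocation: later "|=" updates start from 0, not from
   * whatever the allocator left behind
   */
  edesc = kzalloc(sizeof(*edesc) + sec4_sg_bytes, GFP_DMA | flags);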
Signed-off-by: Steve Cornelius <steve.cornelius@freescale.com>
Signed-off-by: Victoria Milhoan <vicki.milhoan@freescale.com>
Tested-by: Horia Geantă <horia.geanta@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Freescale i.MX6 ARM platforms do not support hardware cache coherency.
This patch adds cache coherency support to the CAAM driver.
Signed-off-by: Victoria Milhoan <vicki.milhoan@freescale.com>
Tested-by: Horia Geantă <horia.geanta@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Pull crypto update from Herbert Xu:
"Here is the crypto update for 4.2:
API:
- Convert RNG interface to new style.
- New AEAD interface with one SG list for AD and plain/cipher text.
All external AEAD users have been converted.
- New asymmetric key interface (akcipher).
Algorithms:
- Chacha20, Poly1305 and RFC7539 support.
- New RSA implementation.
- Jitter RNG.
- DRBG is now seeded with both /dev/random and Jitter RNG. If kernel
pool isn't ready then DRBG will be reseeded when it is.
- DRBG is now the default crypto API RNG, replacing krng.
- 842 compression (previously part of powerpc nx driver).
Drivers:
- Accelerated SHA-512 for arm64.
- New Marvell CESA driver that supports DMA and more algorithms.
- Updated powerpc nx 842 support.
- Added support for SEC1 hardware to talitos"
* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (292 commits)
crypto: marvell/cesa - remove COMPILE_TEST dependency
crypto: algif_aead - Temporarily disable all AEAD algorithms
crypto: af_alg - Forbid the use internal algorithms
crypto: echainiv - Only hold RNG during initialisation
crypto: seqiv - Add compatibility support without RNG
crypto: eseqiv - Offer normal cipher functionality without RNG
crypto: chainiv - Offer normal cipher functionality without RNG
crypto: user - Add CRYPTO_MSG_DELRNG
crypto: user - Move cryptouser.h to uapi
crypto: rng - Do not free default RNG when it becomes unused
crypto: skcipher - Allow givencrypt to be NULL
crypto: sahara - propagate the error on clk_disable_unprepare() failure
crypto: rsa - fix invalid select for AKCIPHER
crypto: picoxcell - Update to the current clk API
crypto: nx - Check for bogus firmware properties
crypto: marvell/cesa - add DT bindings documentation
crypto: marvell/cesa - add support for Kirkwood and Dove SoCs
crypto: marvell/cesa - add support for Orion SoCs
crypto: marvell/cesa - add allhwsupport module parameter
crypto: marvell/cesa - add support for all armada SoCs
...
The CAAM driver uses two data buffers to store data for a hashing operation,
with one buffer defined as active. This change forces switching of the
active buffer when executing a hashing operation to avoid a later DMA unmap
using the length of the opposite buffer.
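A sketch of the idea (current_buf is the caam_hash_state field that
selects the active buffer):

  /* always flip the active buffer, so the later unmap uses the
   * length that matches the buffer actually mapped here
   */
  state->current_buf = !state->current_buf;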
Signed-off-by: Victoria Milhoan <vicki.milhoan@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
buf_0 and buf_1 in caam_hash_state are not adjacent in memory.
Accessing buf_1 from &buf_0 with an offset of only sizeof(buf_0) is
therefore incorrect. The same issue also applies to buflen_0 and
buflen_1.
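The fix is to select the buffer and its length explicitly, sketched:

  u8 *buf = state->current_buf ? state->buf_1 : state->buf_0;
  int *buflen = state->current_buf ? &state->buflen_1 : &state->buflen_0;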
Cc: <stable@vger.kernel.org> # 3.13+
Signed-off-by: Cristian Stoica <cristian.stoica@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Use dma_mapping_error for every dma_map_single / dma_map_page.
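The pattern, sketched:

  dst_dma = dma_map_single(jrdev, buf, len, DMA_TO_DEVICE);
  if (dma_mapping_error(jrdev, dst_dma)) {
          dev_err(jrdev, "unable to map buffer\n");
          return -ENOMEM;
  }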
Signed-off-by: Horia Geanta <horia.geanta@freescale.com>
Acked-by: Kim Phillips <kim.phillips@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The layer which registers with the crypto API should check for the presence of
the CAAM device it is going to use. If the platform's device tree doesn't have
the required CAAM node, the layer should return an error and not register the
algorithms with crypto API layer.
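Sketched (the compatible string is the one used by CAAM device trees):

  dev_node = of_find_compatible_node(NULL, NULL, "fsl,sec-v4.0");
  if (!dev_node)
          return -ENODEV;
  of_node_put(dev_node);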
Signed-off-by: Ruchika Gupta <ruchika.gupta@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In a few places in caamhash and caamalg, the DMA-able buffer for the
sg table was being modified after it had been allocated and mapped.
Per the definition of DMA_TO_DEVICE, after mapping, the memory should
be treated as read-only by the driver. This patch moves the allocation
and mapping of the DMA-able buffer for the sg table to after the table
is populated by the driver, making it read-only as per the DMA API's
requirements.
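Sketched:

  /* populate the table first ... */
  sg_to_sec4_sg_last(req->src, src_nents, edesc->sec4_sg, 0);

  /* ... and only then hand it to the device; past this point the
   * driver must not write to the table
   */
  edesc->sec4_sg_dma = dma_map_single(jrdev, edesc->sec4_sg,
                                      sec4_sg_bytes, DMA_TO_DEVICE);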
Signed-off-by: Ruchika Gupta <ruchika.gupta@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The tentacles of this function were firmly attached to various
places in the CAAM code. Just cut them, or this cthulhu function
will sprout them anew.
Signed-off-by: Marek Vasut <marex@denx.de>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Horia Geanta <horia.geanta@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In case the hash key is bigger than the algorithm block size, it is hashed.
In this case, memory is allocated to keep this hash in hashed_key.
hashed_key has to be freed on the key_dma dma mapping error path.
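The error path, sketched:

  ctx->key_dma = dma_map_single(jrdev, ctx->key, digestsize,
                                DMA_TO_DEVICE);
  if (dma_mapping_error(jrdev, ctx->key_dma)) {
          dev_err(jrdev, "unable to map key i/o memory\n");
          kfree(hashed_key);
          return -ENOMEM;
  }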
Signed-off-by: Horia Geanta <horia.geanta@freescale.com>
Reviewed-by: Marek Vasut <marex@denx.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
- Earlier, the interface layers - caamalg, caamhash, caamrng - were
directly using the Controller driver's private structure to access
the Job Ring.
- Changed the above to use the alloc/free APIs provided by the Job
Ring Driver.
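Usage of the Job Ring API, sketched:

  struct device *jrdev = caam_jr_alloc();

  if (IS_ERR(jrdev)) {
          pr_err("Job Ring Device allocation for transform failed\n");
          return PTR_ERR(jrdev);
  }
  /* ... submit jobs on jrdev ... */
  caam_jr_free(jrdev);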
Signed-off-by: Ruchika Gupta <ruchika.gupta@freescale.com>
Reviewed-by: Garg Vakul-B16394 <vakul@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>