Many HMAC users use the 0x36/0x5c values directly. In crypto code it is
better to use a named constant than a bare magic number.
This patch simply adds HMAC_IPAD_VALUE/HMAC_OPAD_VALUE defines in a new
include file "crypto/hmac.h" and uses them in crypto/hmac.c.
Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto_gcm_setkey() was using wait_for_completion_interruptible() to
wait for completion of the async crypto op, but if a signal occurs it
may return before the DMA ops of the HW crypto provider finish, thus
corrupting the data buffer that is kfree'd in this case.
Resolve this by using wait_for_completion() instead.
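Roughly, the resulting wait looks like this (illustrative names, not
the exact code):

    err = crypto_skcipher_encrypt(req);
    if (err == -EINPROGRESS || err == -EBUSY) {
            /* Must not bail out on a signal: the HW provider's DMA may
             * still be writing into the buffer we are about to kfree. */
            wait_for_completion(&result.completion);
            err = result.err;
    }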
Reported-by: Eric Biggers <ebiggers3@gmail.com>
Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
CC: stable@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
drbg_kcapi_sym_ctr() was using wait_for_completion_interruptible() to
wait for completion of the async crypto op, but if a signal occurs it
may return before the DMA ops of the HW crypto provider finish, thus
corrupting the output buffer.
Resolve this by using wait_for_completion() instead.
Reported-by: Eric Biggers <ebiggers3@gmail.com>
Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
CC: stable@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
public_key_verify_signature() was passing the CRYPTO_TFM_REQ_MAY_BACKLOG
flag to akcipher_request_set_callback() but was not correctly handling
the case where a -EBUSY error could be returned from the call to
crypto_akcipher_verify() if the backlog was used, possibly causing
data corruption due to use-after-free of buffers.
Resolve this by handling -EBUSY correctly.
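Illustratively (struct and function names here are made up, not the
actual code), correct handling means waiting on -EBUSY just like
-EINPROGRESS, since with MAY_BACKLOG it means "queued on the backlog",
and ignoring the intermediate -EINPROGRESS notification in the callback:

    struct verify_result {
            struct completion completion;
            int err;
    };

    static void verify_done(struct crypto_async_request *req, int err)
    {
            struct verify_result *res = req->data;

            if (err == -EINPROGRESS)
                    return;         /* backlogged request has started; keep waiting */
            res->err = err;
            complete(&res->completion);
    }

    /* caller */
    err = crypto_akcipher_verify(req);
    if (err == -EINPROGRESS || err == -EBUSY) {
            wait_for_completion(&res.completion);
            err = res.err;
    }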
Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
CC: stable@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The tcrypt AEAD cycles speed test disables irqs during the test, which
has been broken at the very least since commit
1425d2d17f7309c6 ("crypto: tcrypt - Fix AEAD speed tests")
added a wait for completion as part of the test, and probably since
the switch to the new AEAD API.
While the result of taking a cycle count diff may not mean much on SMP
systems if the task migrates, it's good enough for tcrypt being the quick
& dirty dev tool it is. It's also what all the other (i.e. hash) cycle
speed tests do.
Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Reported-by: Ofir Drang <ofir.drang@arm.com>
Reviewed-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The API setkey checks for key sizes and alignment went AWOL during the
skcipher conversion. This patch restores them.
Cc: <stable@vger.kernel.org>
Fixes: 4e6c3df4d7 ("crypto: skcipher - Add low-level skcipher...")
Reported-by: Baozeng <sploving1@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The DMA_PREP_FENCE flag is to be used when preparing a Tx descriptor
whose output is to be used by the next/dependent Tx descriptor.
DMA_PREP_FENCE will not be set correctly in do_async_gen_syndrome()
when calling dma->device_prep_dma_pq() under the following conditions:
1. ASYNC_TX_FENCE not set in submit->flags
2. DMA_PREP_FENCE not set in dma_flags
3. src_cnt (= (disks - 2)) is greater than dma_maxpq(dma, dma_flags)
This patch fixes DMA_PREP_FENCE usage in do_async_gen_syndrome() taking
inspiration from do_async_xor() implementation.
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Reviewed-by: Ray Jui <ray.jui@broadcom.com>
Reviewed-by: Scott Branden <scott.branden@broadcom.com>
Acked-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
There are many code paths opencoding kvmalloc. Let's use the helper
instead. The main difference to kvmalloc is that those users are
usually not considering all the aspects of the memory allocator. E.g.
allocation requests <= 32kB (with 4kB pages) basically never fail
and invoke the OOM killer to satisfy the allocation. This sounds too
disruptive for something that has a reasonable fallback - vmalloc.
On the other hand those requests might fall back to vmalloc even when
the memory allocator would have succeeded after several more
reclaim/compaction attempts. There is no guarantee something like that
happens though.
This patch converts many of those places to kv[mz]alloc* helpers because
they are more conservative.
Link: http://lkml.kernel.org/r/20170306103327.2766-2-mhocko@kernel.org
Signed-off-by: Michal Hocko <mhocko@suse.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> # Xen bits
Acked-by: Kees Cook <keescook@chromium.org>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Andreas Dilger <andreas.dilger@intel.com> # Lustre
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com> # KVM/s390
Acked-by: Dan Williams <dan.j.williams@intel.com> # nvdim
Acked-by: David Sterba <dsterba@suse.com> # btrfs
Acked-by: Ilya Dryomov <idryomov@gmail.com> # Ceph
Acked-by: Tariq Toukan <tariqt@mellanox.com> # mlx4
Acked-by: Leon Romanovsky <leonro@mellanox.com> # mlx5
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Anton Vorontsov <anton@enomsg.org>
Cc: Colin Cross <ccross@android.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: Kent Overstreet <kent.overstreet@gmail.com>
Cc: Santosh Raspatur <santosh@chelsio.com>
Cc: Hariprasad S <hariprasad@chelsio.com>
Cc: Yishai Hadas <yishaih@mellanox.com>
Cc: Oleg Drokin <oleg.drokin@intel.com>
Cc: "Yan, Zheng" <zyan@redhat.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: David Miller <davem@davemloft.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Pull security subsystem updates from James Morris:
"Highlights:
IMA:
- provide ">" and "<" operators for fowner/uid/euid rules
KEYS:
- add a system blacklist keyring
- add KEYCTL_RESTRICT_KEYRING, exposes keyring link restriction
functionality to userland via keyctl()
LSM:
- harden LSM API with __ro_after_init
- add prlmit security hook, implement for SELinux
- revive security_task_alloc hook
TPM:
- implement contextual TPM command 'spaces'"
* 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security: (98 commits)
tpm: Fix reference count to main device
tpm_tis: convert to using locality callbacks
tpm: fix handling of the TPM 2.0 event logs
tpm_crb: remove a cruft constant
keys: select CONFIG_CRYPTO when selecting DH / KDF
apparmor: Make path_max parameter readonly
apparmor: fix parameters so that the permission test is bypassed at boot
apparmor: fix invalid reference to index variable of iterator line 836
apparmor: use SHASH_DESC_ON_STACK
security/apparmor/lsm.c: set debug messages
apparmor: fix boolreturn.cocci warnings
Smack: Use GFP_KERNEL for smk_netlbl_mls().
smack: fix double free in smack_parse_opts_str()
KEYS: add SP800-56A KDF support for DH
KEYS: Keyring asymmetric key restrict method with chaining
KEYS: Restrict asymmetric key linkage using a specific keychain
KEYS: Add a lookup_restriction function for the asymmetric key type
KEYS: Add KEYCTL_RESTRICT_KEYRING
KEYS: Consistent ordering for __key_link_begin and restrict check
KEYS: Add an optional lookup_restriction hook to key_type
...
Pull networking updates from David Miller:
"Here are some highlights from the 2065 networking commits that
happened this development cycle:
1) XDP support for IXGBE (John Fastabend) and thunderx (Sunil Kowuri)
2) Add a generic XDP driver, so that anyone can test XDP even if they
lack a networking device whose driver has explicit XDP support
(me).
3) Sparc64 now has an eBPF JIT too (me)
4) Add a BPF program testing framework via BPF_PROG_TEST_RUN (Alexei
Starovoitov)
5) Make netfilter network namespace teardown less expensive (Florian
Westphal)
6) Add symmetric hashing support to nft_hash (Laura Garcia Liebana)
7) Implement NAPI and GRO in netvsc driver (Stephen Hemminger)
8) Support TC flower offload statistics in mlxsw (Arkadi Sharshevsky)
9) Multiqueue support in stmmac driver (Joao Pinto)
10) Remove TCP timewait recycling, it never really could possibly work
well in the real world and timestamp randomization really zaps any
hint of usability this feature had (Soheil Hassas Yeganeh)
11) Support level3 vs level4 ECMP route hashing in ipv4 (Nikolay
Aleksandrov)
12) Add socket busy poll support to epoll (Sridhar Samudrala)
13) Netlink extended ACK support (Johannes Berg, Pablo Neira Ayuso,
and several others)
14) IPSEC hw offload infrastructure (Steffen Klassert)"
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next: (2065 commits)
tipc: refactor function tipc_sk_recv_stream()
tipc: refactor function tipc_sk_recvmsg()
net: thunderx: Optimize page recycling for XDP
net: thunderx: Support for XDP header adjustment
net: thunderx: Add support for XDP_TX
net: thunderx: Add support for XDP_DROP
net: thunderx: Add basic XDP support
net: thunderx: Cleanup receive buffer allocation
net: thunderx: Optimize CQE_TX handling
net: thunderx: Optimize RBDR descriptor handling
net: thunderx: Support for page recycling
ipx: call ipxitf_put() in ioctl error path
net: sched: add helpers to handle extended actions
qed*: Fix issues in the ptp filter config implementation.
qede: Fix concurrency issue in PTP Tx path processing.
stmmac: Add support for SIMATIC IOT2000 platform
net: hns: fix ethtool_get_strings overflow in hns driver
tcp: fix wraparound issue in tcp_lp
bpf, arm64: fix jit branch offset related to ldimm64
bpf, arm64: implement jiting of BPF_XADD
...
Pull crypto updates from Herbert Xu:
"Here is the crypto update for 4.12:
API:
- Add batch registration for acomp/scomp
- Change acomp testing to non-unique compressed result
- Extend algorithm name limit to 128 bytes
- Require setkey before accept(2) in algif_aead
Algorithms:
- Add support for deflate rfc1950 (zlib)
Drivers:
- Add accelerated crct10dif for powerpc
- Add crc32 in stm32
- Add sha384/sha512 in ccp
- Add 3des/gcm(aes) for v5 devices in ccp
- Add Queue Interface (QI) backend support in caam
- Add new Exynos RNG driver
- Add ThunderX ZIP driver
- Add driver for hardware random generator on MT7623 SoC"
* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (101 commits)
crypto: stm32 - Fix OF module alias information
crypto: algif_aead - Require setkey before accept(2)
crypto: scomp - add support for deflate rfc1950 (zlib)
crypto: scomp - allow registration of multiple scomps
crypto: ccp - Change ISR handler method for a v5 CCP
crypto: ccp - Change ISR handler method for a v3 CCP
crypto: crypto4xx - rename ce_ring_contol to ce_ring_control
crypto: testmgr - Allow ecb(cipher_null) in FIPS mode
Revert "crypto: arm64/sha - Add constant operand modifier to ASM_EXPORT"
crypto: ccp - Disable interrupts early on unload
crypto: ccp - Use only the relevant interrupt bits
hwrng: mtk - Add driver for hardware random generator on MT7623 SoC
dt-bindings: hwrng: Add Mediatek hardware random generator bindings
crypto: crct10dif-vpmsum - Fix missing preempt_disable()
crypto: testmgr - replace compression known answer test
crypto: acomp - allow registration of multiple acomps
hwrng: n2 - Use devm_kcalloc() in n2rng_probe()
crypto: chcr - Fix error handling related to 'chcr_alloc_shash'
padata: get_next is never NULL
crypto: exynos - Add new Exynos RNG driver
...
Some cipher implementations will crash if you try to use them
without calling setkey first. This patch adds a check so that
the accept(2) call will fail with -ENOKEY if setkey hasn't been
done on the socket yet.
Fixes: 400c40cf78 ("crypto: algif - add AEAD support")
Cc: <stable@vger.kernel.org>
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add scomp backend for zlib-deflate compression algorithm.
This backend outputs data using the format defined in rfc1950
(raw deflate surrounded by zlib header and footer).
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add crypto_register_scomps and crypto_unregister_scomps to allow
the registration of multiple implementations with one call.
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The cipher_null is not a real cipher, so FIPS mode should not restrict its use.
It is used for several tests (for example in the cryptsetup testsuite) and also
temporarily for re-encryption of a not-yet-encrypted device in the cryptsetup-reencrypt tool.
Problem is easily reproducible with
cryptsetup benchmark -c null
Signed-off-by: Milan Broz <gmazyland@gmail.com>
Acked-by: Stephan Müller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Compression implementations might return valid outputs that
do not match what is specified in the test vectors.
For this reason, the testmgr might report that a compression
implementation failed the test even if the data produced
by the compressor is correct.
This implements a decompress-and-verify test for acomp
compression tests rather than a known answer test.
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add crypto_register_acomps and crypto_unregister_acomps to allow
the registration of multiple implementations with one call.
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
A function in kernel/bpf/syscall.c which got a bug fix in 'net'
was moved to kernel/bpf/verifier.c in 'net-next'.
Signed-off-by: David S. Miller <davem@davemloft.net>
Pull crypto fixes from Herbert Xu:
"This fixes the following problems:
- regression in new XTS/LRW code when used with async crypto
- long-standing bug in ahash API when used with certain algos
- bogus memory dereference in async algif_aead with certain algos"
* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
crypto: algif_aead - Fix bogus request dereference in completion function
crypto: ahash - Fix EINPROGRESS notification callback
crypto: lrw - Fix use-after-free on EINPROGRESS
crypto: xts - Fix use-after-free on EINPROGRESS
Pass the new extended ACK reporting struct to all of the generic
netlink parsing functions. For now, pass NULL in almost all callers
(except for some in the core.)
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add the base infrastructure and UAPI for netlink extended ACK
reporting. All "manual" calls to netlink_ack() pass NULL for now and
thus don't get extended ACK reporting.
Big thanks goes to Pablo Neira Ayuso for not only bringing up the
whole topic at netconf (again) but also coming up with the nlattr
passing trick and various other ideas.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Reviewed-by: David Ahern <dsa@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The decompress function in the LZ4 library is supposed to return an error
code or a negative result, but it returns -1 when any error is detected.
Return an error code when the library returns a negative value.
Signed-off-by: Myungho Jung <mhjungk@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch removes the hard-coded 64-byte limit on the length
of the algorithm name through bind(2). The address length can
now exceed that. The user-space structure remains unchanged.
In order to use a longer name, simply extend the salg_name array
beyond its defined 64-byte length.
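For illustration only (hypothetical userspace snippet; fd is an existing
AF_ALG socket and long_name is a placeholder for an algorithm name longer
than 63 characters):

    struct sockaddr_alg *sa;
    size_t len = sizeof(*sa) + 64;          /* extra room past salg_name[64] */

    sa = calloc(1, len);
    sa->salg_family = AF_ALG;
    strcpy((char *)sa->salg_type, "skcipher");
    strcpy((char *)sa->salg_name, long_name);
    bind(fd, (struct sockaddr *)sa, len);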
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch hard-codes CRYPTO_MAX_NAME in the user-space API to
64, which is the current value of CRYPTO_MAX_ALG_NAME. This patch
also replaces all remaining occurrences of CRYPTO_MAX_ALG_NAME
in the user-space API with CRYPTO_MAX_NAME.
This way the user-space API will not be modified when we raise
the value of CRYPTO_MAX_ALG_NAME.
Furthermore, the code has been updated to handle names longer than
what the user-space API allows. They will be truncated.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Alexander Sverdlin <alexander.sverdlin@nokia.com>
Tested-by: Alexander Sverdlin <alexander.sverdlin@nokia.com>
The algif_aead completion function tries to deduce the aead_request
from the crypto_async_request argument. This is broken because
the API does not guarantee that the same request will be passed to
the completion function. Only the value of req->data can be used
in the completion function.
This patch fixes it by storing a pointer to sk in areq and using
that instead of passing in sk through req->data.
Fixes: 83094e5e9e ("crypto: af_alg - add async support to...")
Cc: <stable@vger.kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The ahash API modifies the request's callback function in order
to clean up after itself in some corner cases (unaligned final
and missing finup).
When the request is complete ahash will restore the original
callback and everything is fine. However, when the request gets
an EBUSY on a full queue, an EINPROGRESS callback is made while
the request is still ongoing.
In this case the ahash API will incorrectly call its own callback.
This patch fixes the problem by creating a temporary request
object on the stack which is used to relay EINPROGRESS back to
the original completion function.
This patch also adds code to preserve the original flags value.
Fixes: ab6bf4e5e5 ("crypto: hash - Fix the pointer voodoo in...")
Cc: <stable@vger.kernel.org>
Reported-by: Sabrina Dubroca <sd@queasysnail.net>
Tested-by: Sabrina Dubroca <sd@queasysnail.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When we get an EINPROGRESS completion in lrw, we will end up marking
the request as done and freeing it. This then blows up when the
request is really completed as we've already freed the memory.
Fixes: 700cb3f5fe ("crypto: lrw - Convert to skcipher")
Cc: <stable@vger.kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When we get an EINPROGRESS completion in xts, we will end up marking
the request as done and freeing it. This then blows up when the
request is really completed as we've already freed the memory.
Fixes: f1c131b454 ("crypto: xts - Convert to skcipher")
Cc: <stable@vger.kernel.org>
Reported-by: Nathan Royce <nroycea+kernel@gmail.com>
Reported-by: Krzysztof Kozlowski <krzk@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tested-by: Krzysztof Kozlowski <krzk@kernel.org>
Since the gf128mul_x_ble function used by xts.c is now defined inline
in the header file, the XTS module no longer depends on gf128mul.
Therefore, the 'select CRYPTO_GF128MUL' line can be safely removed.
Signed-off-by: Ondrej Mosnacek <omosnacek@gmail.com>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Currently, gf128mul_x_ble works with pointers to be128, even though it
actually interprets the words as little-endian. Consequently, it uses
cpu_to_le64/le64_to_cpu on fields of type __be64, which is incorrect.
This patch fixes that by changing the function to accept pointers to
le128 and updating all users accordingly.
Signed-off-by: Ondrej Mosnacek <omosnacek@gmail.com>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The gf128mul_x_ble function is currently defined in gf128mul.c, because
it depends on the gf128mul_table_be multiplication table.
However, since the function is very small and only uses two values from
the table, it is better for it to be defined as inline function in
gf128mul.h. That way, the function can be inlined by the compiler for
better performance.
For consistency, the other gf128mul_x_* functions are also moved to the
header file. In addition, the code is rewritten to be constant-time.
After this change, the speed of the generic 'xts(aes)' implementation
increased from ~225 MiB/s to ~235 MiB/s (measured using 'cryptsetup
benchmark -c aes-xts-plain64' on an Intel system with CRYPTO_AES_X86_64
and CRYPTO_AES_NI_INTEL disabled).
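The operation in question is a constant-time doubling in GF(2^128); a
standalone sketch of the idea (deliberately not the kernel's exact
helper or types):

    /* Multiply the 128-bit value (hi:lo) by x modulo x^128 + x^7 + x^2 + x + 1,
     * branch-free with respect to the carry bit. */
    static inline void xts_double(u64 *lo, u64 *hi)
    {
            u64 tt = (0ULL - (*hi >> 63)) & 0x87;   /* 0x87 iff bit 127 was set */

            *hi = (*hi << 1) | (*lo >> 63);
            *lo = (*lo << 1) ^ tt;
    }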
Signed-off-by: Ondrej Mosnacek <omosnacek@gmail.com>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add a restrict_link_by_key_or_keyring_chain link restriction that
searches for signing keys in the destination keyring in addition to the
signing key or keyring designated when the destination keyring was
created. Userspace enables this behavior by including the "chain" option
in the keyring restriction:
keyctl(KEYCTL_RESTRICT_KEYRING, keyring, "asymmetric",
"key_or_keyring:<signing key>:chain");
Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
Adds restrict_link_by_signature_keyring(), which uses the restrict_key
member of the provided destination_keyring data structure as the
key or keyring to search for signing keys.
Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
Look up asymmetric keyring restriction information using the key-type
lookup_restrict hook.
Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
The first argument to the restrict_link_func_t functions was a keyring
pointer. These functions are called by the key subsystem with this
argument set to the destination keyring, but restrict_link_by_signature
expects a pointer to the relevant trusted keyring.
Restrict functions may need something other than a single struct key
pointer to allow or reject key linkage, so the data used to make that
decision (such as the trust keyring) is moved to a new, fourth
argument. The first argument is now always the destination keyring.
Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
PKCS#7: Handle certificates that are blacklisted when verifying the chain
of trust on the signatures on a PKCS#7 message.
Signed-off-by: David Howells <dhowells@redhat.com>
Allow X.509 certs to be blacklisted based on their TBSCertificate hash.
This is convenient since we have to determine this anyway to be able to
check the signature on an X.509 certificate. This is also what UEFI uses
in its blacklist.
If a certificate built into the kernel is blacklisted, something like the
following might then be seen during boot:
X.509: Cert 123412341234c55c1dcc601ab8e172917706aa32fb5eaf826813547fdf02dd46 is blacklisted
Problem loading in-kernel X.509 certificate (-129)
where the hex string shown is the blacklisted hash.
Signed-off-by: David Howells <dhowells@redhat.com>
Pull crypto fixes from Herbert Xu:
"This fixes the following issues:
- memory corruption when kmalloc fails in xts/lrw
- mark some CCP DMA channels as private
- fix reordering race in padata
- regression in omap-rng DT description"
* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
crypto: xts,lrw - fix out-of-bounds write after kmalloc failure
crypto: ccp - Make some CCP DMA channels private
padata: avoid race in reordering
dt-bindings: rng: clocks property on omap_rng not always mandatory
An SGL is to be initialized only once, even when its buffers are written
to several times.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3DES is missing the fips_allowed flag for CTR mode.
Signed-off-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
Acked-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The md5_transform function is no longer used any where in the tree,
except for the crypto api's actual implementation of md5, so we can drop
the function from lib and put it as a static function of the crypto
file, where it belongs. There should be no new users of md5_transform,
anyway, since there are more modern ways of doing what it once achieved.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
vpmsum implementations often don't kick in for short test vectors.
This is a simple test module that does a configurable number of
random tests, each up to 64kB and each with random offsets.
Both CRC-T10DIF and CRC32C are tested.
Cc: Anton Blanchard <anton@samba.org>
Signed-off-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
T10DIF is a CRC16 used heavily in NVMe.
It turns out we can accelerate it with a CRC32 library and a few
little tricks.
Provide the accelerator based on the refactored CRC32 code.
Cc: Anton Blanchard <anton@samba.org>
Thanks-to: Hong Bo Peng <penghb@cn.ibm.com>
Signed-off-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In the generic XTS and LRW algorithms, for input data > 128 bytes, a
temporary buffer is allocated to hold the values to be XOR'ed with the
data before and after encryption or decryption. If the allocation
fails, the fixed-size buffer embedded in the request buffer is meant to
be used as a fallback --- resulting in more calls to the ECB algorithm,
but still producing the correct result. However, we weren't correctly
limiting subreq->cryptlen in this case, resulting in pre_crypt()
overrunning the embedded buffer. Fix this by setting subreq->cryptlen
correctly.
Fixes: f1c131b454 ("crypto: xts - Convert to skcipher")
Fixes: 700cb3f5fe ("crypto: lrw - Convert to skcipher")
Cc: stable@vger.kernel.org # v4.10+
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Lockdep issues a circular dependency warning when AFS issues an operation
through AF_RXRPC from a context in which the VFS/VM holds the mmap_sem.
The theory lockdep comes up with is as follows:
(1) If the pagefault handler decides it needs to read pages from AFS, it
calls AFS with mmap_sem held and AFS begins an AF_RXRPC call, but
creating a call requires the socket lock:
mmap_sem must be taken before sk_lock-AF_RXRPC
(2) afs_open_socket() opens an AF_RXRPC socket and binds it. rxrpc_bind()
binds the underlying UDP socket whilst holding its socket lock.
inet_bind() takes its own socket lock:
sk_lock-AF_RXRPC must be taken before sk_lock-AF_INET
(3) Reading from a TCP socket into a userspace buffer might cause a fault
and thus cause the kernel to take the mmap_sem, but the TCP socket is
locked whilst doing this:
sk_lock-AF_INET must be taken before mmap_sem
However, lockdep's theory is wrong in this instance because it deals only
with lock classes and not individual locks. The AF_INET lock in (2) isn't
really equivalent to the AF_INET lock in (3) as the former deals with a
socket entirely internal to the kernel that never sees userspace. This is
a limitation in the design of lockdep.
Fix the general case by:
(1) Double up all the locking keys used in sockets so that one set are
used if the socket is created by userspace and the other set is used
if the socket is created by the kernel.
(2) Store the kern parameter passed to sk_alloc() in a variable in the
sock struct (sk_kern_sock). This informs sock_lock_init(),
sock_init_data() and sk_clone_lock() as to the lock keys to be used.
Note that the child created by sk_clone_lock() inherits the parent's
kern setting.
(3) Add a 'kern' parameter to ->accept() that is analogous to the one
passed in to ->create() that distinguishes whether kernel_accept() or
sys_accept4() was the caller and can be passed to sk_alloc().
Note that a lot of accept functions merely dequeue an already
allocated socket. I haven't touched these as the new socket already
exists before we get the parameter.
Note also that there are a couple of places where I've made the accepted
socket unconditionally kernel-based:
irda_accept()
rds_tcp_accept_one()
tcp_accept_from_sock()
because they follow a sock_create_kern() and accept off of that.
Whilst creating this, I noticed that lustre and ocfs don't create sockets
through sock_create_kern() and thus they aren't marked as for-kernel,
though they appear to be internal. I wonder if these should do that so
that they use the new set of lock keys.
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When requesting a fallback algorithm, we should propagate the
NEED_FALLBACK bit when searching for the underlying algorithm.
This prevents drivers from allocating unnecessary fallbacks that
are never called. For instance, currently the vmx-crypto driver will use
the following chain of calls when calling the fallback implementation:
p8_aes_ctr -> ctr(p8_aes) -> aes-generic
However p8_aes will always delegate its calls to aes-generic. With this
patch, p8_aes_ctr will be able to use ctr(aes-generic) directly as its
fallback. The same applies to aes_s390.
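Schematically (simplified, not the literal diff, with algt and tb coming
from the template instance creation context), the idea is to fold the
caller's NEED_FALLBACK requirement into the lookup mask for the inner
algorithm:

    mask = CRYPTO_ALG_TYPE_MASK |
           crypto_requires_off(algt->type, algt->mask, CRYPTO_ALG_NEED_FALLBACK);
    alg = crypto_get_attr_alg(tb, CRYPTO_ALG_TYPE_CIPHER, mask);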
Signed-off-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When requesting a fallback algorithm, we should propagate the
NEED_FALLBACK bit when searching for the underlying algorithm.
This prevents drivers from allocating unnecessary fallbacks that
are never called. For instance, currently the vmx-crypto driver will use
the following chain of calls when calling the fallback implementation:
p8_aes_cbc -> cbc(p8_aes) -> aes-generic
However p8_aes will always delegate its calls to aes-generic. With this
patch, p8_aes_cbc will be able to use cbc(aes-generic) directly as its
fallback. The same applies to aes_s390.
Signed-off-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Cryptographic test vectors should never be modified, so constify them to
enforce this at both compile-time and run-time. This moves a significant
amount of data from .data to .rodata when the crypto tests are enabled.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Constify the buffer passed to crypto_kpp_set_secret() and
kpp_alg.set_secret, since it is never modified.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
To prevent unnecessary branching, mark the exit condition of the
primary loop as likely(), given that a carry in a 32-bit counter
occurs very rarely.
On arm64, the resulting code is emitted by GCC as
9a8: cmp w1, #0x3
9ac: add x3, x0, w1, uxtw
9b0: b.ls 9e0 <crypto_inc+0x38>
9b4: ldr w2, [x3,#-4]!
9b8: rev w2, w2
9bc: add w2, w2, #0x1
9c0: rev w4, w2
9c4: str w4, [x3]
9c8: cbz w2, 9d0 <crypto_inc+0x28>
9cc: ret
where the two remaining branch conditions (one for size < 4 and one for
the carry) are statically predicted as non-taken, resulting in optimal
execution in the vast majority of cases.
Also, replace the open coded alignment test with IS_ALIGNED().
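The loop in question is roughly the following (simplified; b is a
__be32 pointer just past the end of the counter and c a u32):

    for (; size >= 4; size -= 4) {
            c = be32_to_cpu(*--b) + 1;
            *b = cpu_to_be32(c);
            if (likely(c))          /* no carry out of this 32-bit word */
                    return;
    }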
Cc: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Constify the multiplication tables passed to the 4k and 64k
multiplication functions, as they are not modified by these functions.
Cc: Alex Cope <alexcope@google.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Though the GF(2^128) byte overflow tables were named the "lle" and "bbe"
tables, they are not actually tied to these element formats
specifically, but rather to a particular "bit endianness". For example,
the bbe table is actually used for both bbe and ble multiplication.
Therefore, rename the tables to "le" and "be" and update the comment to
explain this.
Cc: Alex Cope <alexcope@google.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The xx() macro serves no purpose and can be removed.
Cc: Alex Cope <alexcope@google.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Fix incorrect references to GF(128) instead of GF(2^128), as these are
two entirely different fields, and fix a few other incorrect comments.
Cc: Alex Cope <alexcope@google.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Pull crypto fixes from Herbert Xu:
- vmalloc stack regression in CCM
- Build problem in CRC32 on ARM
- Memory leak in cavium
- Missing Kconfig dependencies in atmel and mediatek
- XTS Regression on some platforms (s390 and ppc)
- Memory overrun in CCM test vector
* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
crypto: vmx - Use skcipher for xts fallback
crypto: vmx - Use skcipher for cbc fallback
crypto: testmgr - Pad aes_ccm_enc_tv_template vector
crypto: arm/crc32 - add build time test for CRC instruction support
crypto: arm/crc32 - fix build error with outdated binutils
crypto: ccm - move cbcmac input off the stack
crypto: xts - Propagate NEED_FALLBACK bit
crypto: api - Add crypto_requires_off helper
crypto: atmel - CRYPTO_DEV_MEDIATEK should depend on HAS_DMA
crypto: atmel - CRYPTO_DEV_ATMEL_TDES and CRYPTO_DEV_ATMEL_SHA should depend on HAS_DMA
crypto: cavium - fix leak on curr if curr->head fails to be allocated
crypto: cavium - Fix couple of static checker errors
We are going to split <linux/sched/stat.h> out of <linux/sched.h>, which
will have to be picked up from other headers and a couple of .c files.
Create a trivial placeholder <linux/sched/stat.h> file that just
maps to <linux/sched.h> to make this patch obviously correct and
bisectable.
Include the new header in the files that are going to need it.
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Fix up affected files that include this signal functionality via sched.h.
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
We are going to move scheduler ABI details to <uapi/linux/sched/types.h>,
which will be used from a number of .c files.
Create empty placeholder header that maps to <linux/types.h>.
Include the new header in the files that are going to need it.
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Running with KASAN and crypto tests currently gives
BUG: KASAN: global-out-of-bounds in __test_aead+0x9d9/0x2200 at addr ffffffff8212fca0
Read of size 16 by task cryptomgr_test/1107
Address belongs to variable 0xffffffff8212fca0
CPU: 0 PID: 1107 Comm: cryptomgr_test Not tainted 4.10.0+ #45
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.9.1-1.fc24 04/01/2014
Call Trace:
dump_stack+0x63/0x8a
kasan_report.part.1+0x4a7/0x4e0
? __test_aead+0x9d9/0x2200
? crypto_ccm_init_crypt+0x218/0x3c0 [ccm]
kasan_report+0x20/0x30
check_memory_region+0x13c/0x1a0
memcpy+0x23/0x50
__test_aead+0x9d9/0x2200
? kasan_unpoison_shadow+0x35/0x50
? alg_test_akcipher+0xf0/0xf0
? crypto_skcipher_init_tfm+0x2e3/0x310
? crypto_spawn_tfm2+0x37/0x60
? crypto_ccm_init_tfm+0xa9/0xd0 [ccm]
? crypto_aead_init_tfm+0x7b/0x90
? crypto_alloc_tfm+0xc4/0x190
test_aead+0x28/0xc0
alg_test_aead+0x54/0xd0
alg_test+0x1eb/0x3d0
? alg_find_test+0x90/0x90
? __sched_text_start+0x8/0x8
? __wake_up_common+0x70/0xb0
cryptomgr_test+0x4d/0x60
kthread+0x173/0x1c0
? crypto_acomp_scomp_free_ctx+0x60/0x60
? kthread_create_on_node+0xa0/0xa0
ret_from_fork+0x2c/0x40
Memory state around the buggy address:
ffffffff8212fb80: 00 00 00 00 01 fa fa fa fa fa fa fa 00 00 00 00
ffffffff8212fc00: 00 01 fa fa fa fa fa fa 00 00 00 00 01 fa fa fa
>ffffffff8212fc80: fa fa fa fa 00 05 fa fa fa fa fa fa 00 00 00 00
^
ffffffff8212fd00: 01 fa fa fa fa fa fa fa 00 00 00 00 01 fa fa fa
ffffffff8212fd80: fa fa fa fa 00 00 00 00 00 05 fa fa fa fa fa fa
This always happens on the same IV which is less than 16 bytes.
Per Ard,
"CCM IVs are 16 bytes, but due to the way they are constructed
internally, the final couple of bytes of input IV are dont-cares.
Apparently, we do read all 16 bytes, which triggers the KASAN errors."
Fix this by padding the IV with null bytes to be at least 16 bytes.
Cc: stable@vger.kernel.org
Fixes: 0bc5a6c5c7 ("crypto: testmgr - Disable rfc4309 test and convert test vectors")
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Commit f15f05b0a5 ("crypto: ccm - switch to separate cbcmac driver")
refactored the CCM driver to allow separate implementations of the
underlying MAC to be provided by a platform. However, in doing so, it
moved some data from the linear region to the stack, which violates the
SG constraints when the stack is virtually mapped.
So move idata/odata back to the request ctx struct, of which we can
reasonably expect that it has been allocated using kmalloc() et al.
Reported-by: Johannes Berg <johannes@sipsolutions.net>
Fixes: f15f05b0a5 ("crypto: ccm - switch to separate cbcmac driver")
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When we're used as a fallback algorithm, we should propagate
the NEED_FALLBACK bit when searching for the underlying ECB mode.
This just happens to fix a hang too because otherwise the search
may end up loading the same module that triggered this XTS creation.
Cc: stable@vger.kernel.org #4.10
Fixes: f1c131b454 ("crypto: xts - Convert to skcipher")
Reported-by: Harald Freudenberger <freude@linux.vnet.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Update the crypto modules using LZ4 compression as well as the test
cases in testmgr.h to work with the new LZ4 module version.
Link: http://lkml.kernel.org/r/1486321748-19085-4-git-send-email-4sschmid@informatik.uni-hamburg.de
Signed-off-by: Sven Schmidt <4sschmid@informatik.uni-hamburg.de>
Cc: Bongkyu Kim <bongkyu.kim@lge.com>
Cc: Rui Salvaterra <rsalvaterra@gmail.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: David S. Miller <davem@davemloft.net>
Cc: Anton Vorontsov <anton@enomsg.org>
Cc: Colin Cross <ccross@android.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Tony Luck <tony.luck@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Since commit f1c131b454 ("crypto: xts - Convert to skcipher"),
the XTS mode is based on ECB, so the mode must select ECB,
otherwise it can fail to initialize.
Signed-off-by: Milan Broz <gmazyland@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The CCM driver forces 32-bit alignment even if the underlying ciphers
don't care about alignment. This is because crypto_xor() used to require
this, but since this is no longer the case, drop the hardcoded minimum
of 32 bits.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The CCM driver was recently updated to defer the MAC part of the algorithm
to a dedicated crypto transform, and a template for instantiating such
transforms was added at the same time.
However, this new cbcmac template fails to take the alignmask of the
encapsulated cipher into account, which may result in buffer addresses
being passed down that are not sufficiently aligned.
So update the code to ensure that the digest buffer in the desc ctx
appears at a sufficiently aligned offset, and tweak the code so that all
calls to crypto_cipher_encrypt_one() operate on this buffer exclusively.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Instead of unconditionally forcing 4 byte alignment for all generic
chaining modes that rely on crypto_xor() or crypto_inc() (which may
result in unnecessary copying of data when the underlying hardware
can perform unaligned accesses efficiently), make those functions
deal with unaligned input explicitly, but only if the Kconfig symbol
HAVE_EFFICIENT_UNALIGNED_ACCESS is set. This will allow us to drop
the alignmasks from the CBC, CMAC, CTR, CTS, PCBC and SEQIV drivers.
For crypto_inc(), this simply involves making the 4-byte stride
conditional on HAVE_EFFICIENT_UNALIGNED_ACCESS being set, given that
it typically operates on 16 byte buffers.
For crypto_xor(), an algorithm is implemented that simply runs through
the input using the largest strides possible if unaligned accesses are
allowed. If they are not, an optimal sequence of memory accesses is
emitted that takes the relative alignment of the input buffers into
account, e.g., if the relative misalignment of dst and src is 4 bytes,
the entire xor operation will be completed using 4 byte loads and stores
(modulo unaligned bits at the start and end). Note that all expressions
involving misalign are simply eliminated by the compiler when
HAVE_EFFICIENT_UNALIGNED_ACCESS is defined.
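A rough sketch of the unaligned-access-allowed path only (the aligned,
relative-misalignment-aware path described above is omitted):

    static void xor_largest_stride(u8 *dst, const u8 *src, unsigned int len)
    {
            /* Only valid when CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS is set. */
            while (len >= sizeof(u64)) {
                    *(u64 *)dst ^= *(const u64 *)src;
                    dst += sizeof(u64);
                    src += sizeof(u64);
                    len -= sizeof(u64);
            }
            while (len--)
                    *dst++ ^= *src++;
    }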
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
An ancient gcc bug (first reported in 2003) has apparently resurfaced
on MIPS, where kernelci.org reports an overly large stack frame in the
whirlpool hash algorithm:
crypto/wp512.c:987:1: warning: the frame size of 1112 bytes is larger than 1024 bytes [-Wframe-larger-than=]
With some testing in different configurations, I'm seeing large
variations in stack frame size, up to 1500 bytes for what should need
around 300 bytes at most. I also checked the reference implementation,
which is essentially the same code but also comes with some test and
benchmarking infrastructure.
It seems that recent compiler versions on at least arm, arm64 and powerpc
have a partial fix for this problem when enabling "-fsched-pressure", but
even with that fix they suffer from the issue to a certain degree. Some
testing on arm64 shows that the time needed to hash a given amount of
data is roughly proportional to the stack frame size here, which makes
sense given that the wp512 implementation is doing lots of loads for
table lookups, and the problem with the overly large stack is a result
of doing a lot more loads and stores for spilled registers (as seen from
inspecting the object code).
Disabling -fschedule-insns consistently fixes the problem for wp512,
in my collection of cross-compilers, the results are consistently better
or identical when comparing the stack sizes in this function, though
some architectures (notably x86) have schedule-insns disabled by
default.
The four columns are:
default: -O2
press: -O2 -fsched-pressure
nopress: -O2 -fschedule-insns -fno-sched-pressure
nosched: -O2 -fno-schedule-insns (disables sched-pressure)
default press nopress nosched
alpha-linux-gcc-4.9.3 1136 848 1136 176
am33_2.0-linux-gcc-4.9.3 2100 2076 2100 2104
arm-linux-gnueabi-gcc-4.9.3 848 848 1048 352
cris-linux-gcc-4.9.3 272 272 272 272
frv-linux-gcc-4.9.3 1128 1000 1128 280
hppa64-linux-gcc-4.9.3 1128 336 1128 184
hppa-linux-gcc-4.9.3 644 308 644 276
i386-linux-gcc-4.9.3 352 352 352 352
m32r-linux-gcc-4.9.3 720 656 720 268
microblaze-linux-gcc-4.9.3 1108 604 1108 256
mips64-linux-gcc-4.9.3 1328 592 1328 208
mips-linux-gcc-4.9.3 1096 624 1096 240
powerpc64-linux-gcc-4.9.3 1088 432 1088 160
powerpc-linux-gcc-4.9.3 1080 584 1080 224
s390-linux-gcc-4.9.3 456 456 624 360
sh3-linux-gcc-4.9.3 292 292 292 292
sparc64-linux-gcc-4.9.3 992 240 992 208
sparc-linux-gcc-4.9.3 680 592 680 312
x86_64-linux-gcc-4.9.3 224 240 272 224
xtensa-linux-gcc-4.9.3 1152 704 1152 304
aarch64-linux-gcc-7.0.0 224 224 1104 208
arm-linux-gnueabi-gcc-7.0.1 824 824 1048 352
mips-linux-gcc-7.0.0 1120 648 1120 272
x86_64-linux-gcc-7.0.1 240 240 304 240
arm-linux-gnueabi-gcc-4.4.7 840 392
arm-linux-gnueabi-gcc-4.5.4 784 728 784 320
arm-linux-gnueabi-gcc-4.6.4 736 728 736 304
arm-linux-gnueabi-gcc-4.7.4 944 784 944 352
arm-linux-gnueabi-gcc-4.8.5 464 464 760 352
arm-linux-gnueabi-gcc-4.9.3 848 848 1048 352
arm-linux-gnueabi-gcc-5.3.1 824 824 1064 336
arm-linux-gnueabi-gcc-6.1.1 808 808 1056 344
arm-linux-gnueabi-gcc-7.0.1 824 824 1048 352
Trying the same test for serpent-generic, the picture is a bit different,
and while -fno-schedule-insns is generally better here than the default,
-fsched-pressure wins overall, so I picked that instead.
default press nopress nosched
alpha-linux-gcc-4.9.3 1392 864 1392 960
am33_2.0-linux-gcc-4.9.3 536 524 536 528
arm-linux-gnueabi-gcc-4.9.3 552 552 776 536
cris-linux-gcc-4.9.3 528 528 528 528
frv-linux-gcc-4.9.3 536 400 536 504
hppa64-linux-gcc-4.9.3 524 208 524 480
hppa-linux-gcc-4.9.3 768 472 768 508
i386-linux-gcc-4.9.3 564 564 564 564
m32r-linux-gcc-4.9.3 712 576 712 532
microblaze-linux-gcc-4.9.3 724 392 724 512
mips64-linux-gcc-4.9.3 720 384 720 496
mips-linux-gcc-4.9.3 728 384 728 496
powerpc64-linux-gcc-4.9.3 704 304 704 480
powerpc-linux-gcc-4.9.3 704 296 704 480
s390-linux-gcc-4.9.3 560 560 592 536
sh3-linux-gcc-4.9.3 540 540 540 540
sparc64-linux-gcc-4.9.3 544 352 544 496
sparc-linux-gcc-4.9.3 544 344 544 496
x86_64-linux-gcc-4.9.3 528 536 576 528
xtensa-linux-gcc-4.9.3 752 544 752 544
aarch64-linux-gcc-7.0.0 432 432 656 480
arm-linux-gnueabi-gcc-7.0.1 616 616 808 536
mips-linux-gcc-7.0.0 720 464 720 488
x86_64-linux-gcc-7.0.1 536 528 600 536
arm-linux-gnueabi-gcc-4.4.7 592 440
arm-linux-gnueabi-gcc-4.5.4 776 448 776 544
arm-linux-gnueabi-gcc-4.6.4 776 448 776 544
arm-linux-gnueabi-gcc-4.7.4 768 448 768 544
arm-linux-gnueabi-gcc-4.8.5 488 488 776 544
arm-linux-gnueabi-gcc-4.9.3 552 552 776 536
arm-linux-gnueabi-gcc-5.3.1 552 552 776 536
arm-linux-gnueabi-gcc-6.1.1 560 560 776 536
arm-linux-gnueabi-gcc-7.0.1 616 616 808 536
I did not do any runtime tests with serpent, so it is possible that stack
frame size does not directly correlate with runtime performance here and
it actually makes things worse, but it's more likely to help here, and
the reduced stack frame size is probably enough reason to apply the patch,
especially given that the crypto code is often used in deep call chains.
Link: https://kernelci.org/build/id/58797d7559b5149efdf6c3a9/logs/
Link: http://www.larc.usp.br/~pbarreto/WhirlpoolPage.html
Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=11488
Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=79149
Cc: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Update the generic CCM driver to defer CBC-MAC processing to a
dedicated CBC-MAC ahash transform rather than open coding this
transform (and much of the associated scatterwalk plumbing) in
the CCM driver itself.
This cleans up the code considerably, but more importantly, it allows
the use of alternative CBC-MAC implementations that don't suffer from
performance degradation due to significant setup time (e.g., the NEON
based AES code needs to enable/disable the NEON, and load the S-box
into 16 SIMD registers, which cannot be amortized over the entire input
when using the cipher interface).
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In preparation of splitting off the CBC-MAC transform in the CCM
driver into a separate algorithm, define some test cases for the
AES incarnation of cbcmac.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Lookup table based AES is sensitive to timing attacks, which is due to
the fact that such table lookups are data dependent, and the fact that
8 KB worth of tables covers a significant number of cachelines on any
architecture, resulting in an exploitable correlation between the key
and the processing time for known plaintexts.
For network facing algorithms such as CTR, CCM or GCM, this presents a
security risk, which is why arch specific AES ports are typically time
invariant, either through the use of special instructions, or by using
SIMD algorithms that don't rely on table lookups.
For generic code, this is difficult to achieve without losing too much
performance, but we can improve the situation significantly by switching
to an implementation that only needs 256 bytes of table data (the actual
S-box itself), which can be prefetched at the start of each block to
eliminate data dependent latencies.
This code encrypts at ~25 cycles per byte on ARM Cortex-A57 (while the
ordinary generic AES driver manages 18 cycles per byte on this
hardware). Decryption is substantially slower.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The generic AES code exposes a 32-bit align mask, which forces all
users of the code to use temporary buffers or take other measures to
ensure the alignment requirement is adhered to, even on architectures
that don't care about alignment for software algorithms such as this
one.
So drop the align mask, and fix the code to use get_unaligned_le32()
where appropriate, which will resolve to whatever is optimal for the
architecture.
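Illustratively (not the literal diff), the change amounts to:

    /* before: only correct if 'in' is 32-bit aligned */
    u32 w = le32_to_cpu(((const __le32 *)in)[0]);

    /* after: the architecture picks the best way to do an unaligned load */
    u32 w = get_unaligned_le32(in);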
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The kernel panics when a userspace program tries to access the AEAD
interface. Remove the node from the linked list before freeing its memory.
Cc: <stable@vger.kernel.org>
Signed-off-by: Harsh Jain <harsh@chelsio.com>
Reviewed-by: Stephan Müller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
tcrypt is very tight-lipped when it succeeds, but a bit more feedback
would be useful when developing or debugging crypto drivers, especially
since even a successful run ends with the module failing to insert. Add
a couple of debug prints, which can be enabled with dynamic debug:
Before:
# insmod tcrypt.ko mode=10
insmod: can't insert 'tcrypt.ko': Resource temporarily unavailable
After:
# insmod tcrypt.ko mode=10 dyndbg
tcrypt: testing ecb(aes)
tcrypt: testing cbc(aes)
tcrypt: testing lrw(aes)
tcrypt: testing xts(aes)
tcrypt: testing ctr(aes)
tcrypt: testing rfc3686(ctr(aes))
tcrypt: all tests passed
insmod: can't insert 'tcrypt.ko': Resource temporarily unavailable
Signed-off-by: Rabin Vincent <rabinv@axis.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Make sure the CRYPTO_ALG_DEAD bit is cleared before proceeding with
the algorithm registration. This fixes qat-dh registration when the
driver is restarted.
Cc: <stable@vger.kernel.org>
Signed-off-by: Salvatore Benedetto <salvatore.benedetto@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When working on AES in CCM mode for ARM, my code passed the internal
tcrypt test before I had even bothered to implement the AES-192 and
AES-256 code paths, which is strange because the tcrypt does contain
AES-192 and AES-256 test vectors for CCM.
As it turned out, the define AES_CCM_ENC_TEST_VECTORS was out of sync
with the actual number of test vectors, causing only the AES-128 ones
to be executed.
So get rid of the defines, and wrap the test vector references in a
macro that calculates the number of vectors automatically.
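Conceptually, the wrapper is just something like:

    #define __VECS(tv)      { .vecs = tv, .count = ARRAY_SIZE(tv) }

so the count can no longer drift out of sync with the array.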
The following test vector counts were out of sync with the respective
defines:
BF_CTR_ENC_TEST_VECTORS 2 -> 3
BF_CTR_DEC_TEST_VECTORS 2 -> 3
TF_CTR_ENC_TEST_VECTORS 2 -> 3
TF_CTR_DEC_TEST_VECTORS 2 -> 3
SERPENT_CTR_ENC_TEST_VECTORS 2 -> 3
SERPENT_CTR_DEC_TEST_VECTORS 2 -> 3
AES_CCM_ENC_TEST_VECTORS 8 -> 14
AES_CCM_DEC_TEST_VECTORS 7 -> 17
AES_CCM_4309_ENC_TEST_VECTORS 7 -> 23
AES_CCM_4309_DEC_TEST_VECTORS 10 -> 23
CAMELLIA_CTR_ENC_TEST_VECTORS 2 -> 3
CAMELLIA_CTR_DEC_TEST_VECTORS 2 -> 3
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
There are some hashes (e.g. sha224) that have some internal trickery
to make sure that only the correct number of output bytes are
generated. If something goes wrong, they could potentially overrun
the output buffer.
Make the test more robust by allocating only enough space for the
correct output size so that memory debugging will catch the error if
the output is overrun.
Tested by intentionally breaking sha224 to output all 256
internally-generated bits while running on KASAN.
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Continuing from this commit: 52f5684c8e
("kernel: use macros from compiler.h instead of __attribute__((...))")
I submitted 4 total patches. They are part of a task I've taken up to
increase compiler portability in the kernel. I've cleaned up the
subsystems under /kernel /mm /block and /security, this patch targets
/crypto.
There is <linux/compiler.h> which provides macros for various gcc specific
constructs. Eg: __weak for __attribute__((weak)). I've replaced all
instances of gcc-specific attributes with the right macros in the crypto
subsystem.
I had to make one additional change in compiler-gcc.h for the case when
one wants to use __attribute__((aligned)) without specifying an alignment
factor. From the gcc docs, this will result in the largest alignment for
that data type on the target machine so I've named the macro
__aligned_largest. Please advise if another name is more appropriate.
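For reference, the added macro is essentially:

    #define __aligned_largest       __attribute__((aligned))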
Signed-off-by: Gideon Israel Dsouza <gidisrael@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
It's recommended to use kmemdup instead of kmalloc followed by memcpy.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In some cases, SIMD algorithms can only perform optimally when
allowed to operate on multiple input blocks in parallel. This is
especially true for bit slicing algorithms, which typically take
the same amount of time processing a single block or 8 blocks in
parallel. However, other SIMD algorithms may benefit as well from
bigger strides.
So add a walksize attribute to the skcipher algorithm definition, and
wire it up to the skcipher walk API. To avoid confusion between the
skcipher and AEAD attributes, rename the skcipher_walk chunksize
attribute to 'stride', and set it from the walksize (in the skcipher
case) or from the chunksize (in the AEAD case).
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
With this reproducer:
struct sockaddr_alg alg = {
.salg_family = 0x26,
.salg_type = "hash",
.salg_feat = 0xf,
.salg_mask = 0x5,
.salg_name = "digest_null",
};
int sock, sock2;
sock = socket(AF_ALG, SOCK_SEQPACKET, 0);
bind(sock, (struct sockaddr *)&alg, sizeof(alg));
sock2 = accept(sock, NULL, NULL);
setsockopt(sock, SOL_ALG, ALG_SET_KEY, "\x9b\xca", 2);
accept(sock2, NULL, NULL);
==== 8< ======== 8< ======== 8< ======== 8< ====
one can immediately see an UBSAN warning:
UBSAN: Undefined behaviour in crypto/algif_hash.c:187:7
variable length array bound value 0 <= 0
CPU: 0 PID: 15949 Comm: syz-executor Tainted: G E 4.4.30-0-default #1
...
Call Trace:
...
[<ffffffff81d598fd>] ? __ubsan_handle_vla_bound_not_positive+0x13d/0x188
[<ffffffff81d597c0>] ? __ubsan_handle_out_of_bounds+0x1bc/0x1bc
[<ffffffffa0e2204d>] ? hash_accept+0x5bd/0x7d0 [algif_hash]
[<ffffffffa0e2293f>] ? hash_accept_nokey+0x3f/0x51 [algif_hash]
[<ffffffffa0e206b0>] ? hash_accept_parent_nokey+0x4a0/0x4a0 [algif_hash]
[<ffffffff8235c42b>] ? SyS_accept+0x2b/0x40
It is a correct warning, as the hash state size propagated to accept is
zero, but creating a zero-length variable-length array is not allowed in C.
Fix this as proposed by Herbert -- do "?: 1" on that site. No sizeof or
similar happens in the code there, so we just allocate one byte even
though we do not use the array.
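The fix then boils down to a bound of the form:

    char state[crypto_ahash_statesize(crypto_ahash_reqtfm(req)) ?: 1];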
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "David S. Miller" <davem@davemloft.net> (maintainer:CRYPTO API)
Reported-by: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This converts the ChaCha20 code from a blkcipher to a skcipher, which
is now the preferred way to implement symmetric block and stream ciphers.
This ports the generic and x86 versions at the same time because the
latter reuses routines of the former.
Note that the skcipher_walk() API guarantees that all presented blocks
except the final one are a multiple of the chunk size, so we can simplify
the encrypt() routine somewhat.
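A hedged sketch of the simplified loop; chacha20_docrypt() stands in for
the core permutation provided by the generic ChaCha20 code (elided here):
#include <crypto/internal/skcipher.h>

void chacha20_docrypt(u32 *state, u8 *dst, const u8 *src, unsigned int bytes);

static int chacha20_crypt_sketch(struct skcipher_request *req, u32 *state)
{
	struct skcipher_walk walk;
	int err;

	err = skcipher_walk_virt(&walk, req, true);
	while (walk.nbytes > 0) {
		/* all steps but the last are a multiple of the chunk size,
		 * and a stream cipher handles the partial tail anyway */
		chacha20_docrypt(state, walk.dst.virt.addr,
				 walk.src.virt.addr, walk.nbytes);
		err = skcipher_walk_done(&walk, 0);
	}
	return err;
}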
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Christopher Covington reported a crash on aarch64 on recent Fedora
kernels:
kernel BUG at ./include/linux/scatterlist.h:140!
Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
Modules linked in:
CPU: 2 PID: 752 Comm: cryptomgr_test Not tainted 4.9.0-11815-ge93b1cc #162
Hardware name: linux,dummy-virt (DT)
task: ffff80007c650080 task.stack: ffff800008910000
PC is at sg_init_one+0xa0/0xb8
LR is at sg_init_one+0x24/0xb8
...
[<ffff000008398db8>] sg_init_one+0xa0/0xb8
[<ffff000008350a44>] test_acomp+0x10c/0x438
[<ffff000008350e20>] alg_test_comp+0xb0/0x118
[<ffff00000834f28c>] alg_test+0x17c/0x2f0
[<ffff00000834c6a4>] cryptomgr_test+0x44/0x50
[<ffff0000080dac70>] kthread+0xf8/0x128
[<ffff000008082ec0>] ret_from_fork+0x10/0x50
The test vectors used for input are part of the kernel image. These
inputs are passed as a buffer to sg_init_one which eventually blows up
with BUG_ON(!virt_addr_valid(buf)). On arm64, virt_addr_valid returns
false for the kernel image since virt_to_page will not return the
correct page. Fix this by copying the input vectors to heap buffer
before setting up the scatterlist.
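A minimal sketch of the fix (error handling and the actual compression
request are trimmed; field names follow testmgr's acomp template and are
illustrative):
void *input_vec;
struct scatterlist src;

input_vec = kmemdup(ctemplate[i].input, ilen, GFP_KERNEL);
if (!input_vec)
	return -ENOMEM;
/* heap pointer, so virt_addr_valid()/sg_set_buf() are happy */
sg_init_one(&src, input_vec, ilen);
/* ... set up and run the acomp request ... */
kfree(input_vec);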
Reported-by: Christopher Covington <cov@codeaurora.org>
Fixes: d7db7a882d ("crypto: acomp - update testmgr with support for acomp")
Signed-off-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Pull crypto fixes from Herbert Xu:
"This fixes the following issues:
- a crash regression in the new skcipher walker
- incorrect return value in public_key_verify_signature
- fix for in-place signing in the sign-file utility"
* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
crypto: skcipher - fix crash in virtual walk
sign-file: Fix inplace signing when src and dst names are both specified
crypto: asymmetric_keys - set error code on failure
Pull crypto updates from Herbert Xu:
"Here is the crypto update for 4.10:
API:
- add skcipher walk interface
- add asynchronous compression (acomp) interface
- fix algif_aead AIO handling of zero buffer
Algorithms:
- fix unaligned access in poly1305
- fix DRBG output to large buffers
Drivers:
- add support for iMX6UL to caam
- fix givenc descriptors (used by IPsec) in caam
- accelerated SHA256/SHA512 for ARM64 from OpenSSL
- add SSE CRCT10DIF and CRC32 to ARM/ARM64
- add AEAD support to Chelsio chcr
- add Armada 8K support to omap-rng"
* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (148 commits)
crypto: testmgr - fix overlap in chunked tests again
crypto: arm/crc32 - accelerated support based on x86 SSE implementation
crypto: arm64/crc32 - accelerated support based on x86 SSE implementation
crypto: arm/crct10dif - port x86 SSE implementation to ARM
crypto: arm64/crct10dif - port x86 SSE implementation to arm64
crypto: testmgr - add/enhance test cases for CRC-T10DIF
crypto: testmgr - avoid overlap in chunked tests
crypto: chcr - checking for IS_ERR() instead of NULL
crypto: caam - check caam_emi_slow instead of re-lookup platform
crypto: algif_aead - fix AIO handling of zero buffer
crypto: aes-ce - Make aes_simd_algs static
crypto: algif_skcipher - set error code when kcalloc fails
crypto: caam - make aamalg_desc a proper module
crypto: caam - pass key buffers with typesafe pointers
crypto: arm64/aes-ce-ccm - Fix AEAD decryption length
MAINTAINERS: add crypto headers to crypto entry
crypt: doc - remove misleading mention of async API
crypto: doc - fix header file name
crypto: api - fix comment typo
crypto: skcipher - Add separate walker for AEAD decryption
...
The new skcipher walk API may crash in the following way. (Interestingly,
the tcrypt boot time tests seem unaffected, while an explicit test using
the module triggers it)
Unable to handle kernel NULL pointer dereference at virtual address 00000000
...
[<ffff000008431d84>] __memcpy+0x84/0x180
[<ffff0000083ec0d0>] skcipher_walk_done+0x328/0x340
[<ffff0000080c5c04>] ctr_encrypt+0x84/0x100
[<ffff000008406d60>] simd_skcipher_encrypt+0x88/0x98
[<ffff0000083fa05c>] crypto_rfc3686_crypt+0x8c/0x98
[<ffff0000009b0900>] test_skcipher_speed+0x518/0x820 [tcrypt]
[<ffff0000009b31c0>] do_test+0x1408/0x3b70 [tcrypt]
[<ffff0000009bd050>] tcrypt_mod_init+0x50/0x1000 [tcrypt]
[<ffff0000080838f4>] do_one_initcall+0x44/0x138
[<ffff0000081aee60>] do_init_module+0x68/0x1e0
[<ffff0000081524d0>] load_module+0x1fd0/0x2458
[<ffff000008152c38>] SyS_finit_module+0xe0/0xf0
[<ffff0000080836f0>] el0_svc_naked+0x24/0x28
This is due to the fact that skcipher_done_slow() may be entered with
walk->buffer unset. Since skcipher_walk_done() already deals with the
case where walk->buffer == walk->page, it appears to be the intention
that walk->buffer point to walk->page after skcipher_next_slow(), so
ensure that is the case.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The function public_key_verify_signature() returns the variable ret on
error paths. When the call to kmalloc() fails, the value of ret is 0,
and it is not set to an errno before returning. This patch fixes the
bug.
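The error path in question looks roughly like this after the fix (buffer
and label names are illustrative):
output = kmalloc(outlen, GFP_KERNEL);
if (!output) {
	ret = -ENOMEM;		/* previously ret was left at 0 here */
	goto error_free_req;	/* label name illustrative */
}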
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=188891
Signed-off-by: Pan Bian <bianpan2016@163.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The previous description was misleading and partially incorrect.
Reported-by: Harsh Jain <harshjain.prof@gmail.com>
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Jonathan Corbet <corbet@lwn.net>
Pull crypto fixes from Herbert Xu:
"This fixes the following issues:
- Fix pointer size when caam is used with AArch64 boot loader on
AArch32 kernel.
- Fix ahash state corruption in marvell driver.
- Fix buggy algif_aead tag handling.
- Prevent mcryptd from being used with incompatible algorithms which
can cause crashes"
* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
crypto: algif_aead - fix uninitialized variable warning
crypto: mcryptd - Check mcryptd algorithm compatibility
crypto: algif_aead - fix AEAD tag memory handling
crypto: caam - fix pointer size for AArch64 boot loader, AArch32 kernel
crypto: marvell - Don't corrupt state of an STD req for re-stepped ahash
crypto: marvell - Don't copy hash operation twice into the SRAM
Commit 7e4c7f17cd ("crypto: testmgr - avoid overlap in chunked tests")
attempted to address a problem in the crypto testmgr code where chunked
test cases are copied to memory in a way that results in overlap.
However, the fix recreated the exact same issue for other chunked tests,
by putting IDX3 within 492 bytes of IDX1, which causes overlap if the
first chunk exceeds 492 bytes, which is the case for at least one of
the xts(aes) test cases.
So increase IDX3 by another 1000 bytes.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In case the user provided insufficient data, the code may return
prematurely without performing any operation. In this case, the amount
of processed data indicated by outlen is zero.
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The existing test cases only exercise a small slice of the various
possible code paths through the x86 SSE/PCLMULQDQ implementation,
and the upcoming ports of it for arm64. So add one that exceeds 256
bytes in size, and convert another to a chunked test.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The IDXn offsets are chosen such that tap values (which may go up to
255) end up overlapping in the xbuf allocation. In particular, IDX1
and IDX3 are too close together, so update IDX3 to avoid this issue.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Algorithms not compatible with mcryptd could be spawned by mcryptd
with a direct crypto_alloc_tfm invocation using a "mcryptd(alg)" name
construct. This causes mcryptd to crash the kernel if an arbitrary
"alg" is incompatible and not intended to be used with mcryptd. It is
an issue if AF_ALG tries to spawn mcryptd(alg) to expose it externally.
But such algorithms must be used internally and not be exposed.
We added a check to enforce that only internal algorithms are allowed
with mcryptd at the time mcryptd is spawning an algorithm.
Link: http://marc.info/?l=linux-crypto-vger&m=148063683310477&w=2
Cc: stable@vger.kernel.org
Reported-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
For encryption, the AEAD ciphers require AAD || PT as input and generate
AAD || CT || Tag as output and vice versa for decryption. Prior to this
patch, the AF_ALG interface for AEAD ciphers required the tag buffer to
be present as input for encryption. Similarly, the output buffer for
decryption required the presence of the tag buffer too. This implies
that the kernel reads / writes data buffers from/to kernel space
even though this operation is not required.
This patch changes the AF_ALG AEAD interface to be consistent with the
in-kernel AEAD cipher requirements.
Due to this handling, the changes are transparent to user space with one
exception: the return code of recv indicates the amount of output data.
That output buffer has a different size compared to before the patch,
which implies that the return code of recv will also be different.
For example, for a decryption operation that uses 16 bytes of AAD, 16
bytes of CT and a 16-byte tag, the AF_ALG AEAD interface previously
showed a recv return code of 48 (bytes), whereas after this patch the
return code is 32 since the tag is not returned any more.
Reported-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Handle the case when the caller provided a zero buffer to
sendmsg/sendpage. Such scenario is legal for AEAD ciphers when no
plaintext / ciphertext and no AAD is provided and the caller only
requests the generation of the tag value.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Fix bug https://bugzilla.kernel.org/show_bug.cgi?id=188521. In function
skcipher_recvmsg_async(), the variable err takes the return value, and
its value should be negative on failures. Because err may be reassigned
and checked before calling kcalloc(), its value may be 0 (indicating no
error) even if kcalloc() fails. This patch fixes the bug by explicitly
assigning -ENOMEM to err when kcalloc() returns a NULL pointer.
Signed-off-by: Pan Bian <bianpan2016@163.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The AEAD decrypt interface includes the authentication tag in
req->cryptlen. Therefore we need to exclude that when doing
a walk over it.
This patch adds separate walker functions for AEAD encryption
and decryption.
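A hedged sketch of the difference between the two walkers (only the
length computation is shown; the shared walk setup is elided):
#include <crypto/aead.h>
#include <crypto/internal/skcipher.h>

static int skcipher_walk_aead_sketch(struct skcipher_walk *walk,
				     struct aead_request *req, bool decrypt)
{
	struct crypto_aead *tfm = crypto_aead_reqtfm(req);

	walk->total = req->cryptlen;
	if (decrypt)
		walk->total -= crypto_aead_authsize(tfm);	/* don't walk over the tag */
	/* ... the rest of the setup is common to both directions ... */
	return 0;
}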
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
The new skcipher_walk_aead() may crash in the following way due to
the walk flag SKCIPHER_WALK_PHYS not being cleared at the start of the
walk:
Unable to handle kernel NULL pointer dereference at virtual address 00000001
[..]
Internal error: Oops: 96000044 [#1] PREEMPT SMP
[..]
PC is at skcipher_walk_next+0x208/0x450
LR is at skcipher_walk_next+0x1e4/0x450
pc : [<ffff2b93b7104e20>] lr : [<ffff2b93b7104dfc>] pstate: 40000045
sp : ffffb925fa517940
[...]
[<ffff2b93b7104e20>] skcipher_walk_next+0x208/0x450
[<ffff2b93b710535c>] skcipher_walk_first+0x54/0x148
[<ffff2b93b7105664>] skcipher_walk_aead+0xd4/0x108
[<ffff2b93b6e77928>] ccm_encrypt+0x68/0x158
So clear the flag at the appropriate time.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Both asn1 headers are included by rsa_helper.c, so rsa_helper.o
should explicitly depend on them.
Signed-off-by: David Michael <david.michael@coreos.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When using SGs, only heap memory (memory that is valid as per
virt_addr_valid) is allowed to be referenced. The CTR DRBG used to
reference the caller-provided memory directly in an SG. In case the
caller provided stack memory pointers, the SG mapping is not considered
to be valid. In some cases, this would even cause a paging fault.
The change adds a new scratch buffer that is used unconditionally to
catch the cases where the caller-provided buffer is not suitable for
use in an SG. The crypto operation of the CTR DRBG produces its output
with that scratch buffer and finally copies the content of the
scratch buffer to the caller's buffer.
The scratch buffer is allocated during allocation time of the CTR DRBG
as its access is protected with the DRBG mutex.
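A heavily condensed sketch of the flow (the scratchpad field, its length
and the surrounding request handling are illustrative, not the exact
patch):
/* output always goes through the preallocated heap scratch buffer */
sg_init_one(&sg_out, drbg->scratchpad, scratchlen);
/* ... run the CTR AES operation into sg_out ... */
/* then copy to the caller's buffer, which may live on the stack */
memcpy(outbuf, drbg->scratchpad, outlen);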
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
With virtually-mapped stacks (CONFIG_VMAP_STACK=y), using the
scatterlist crypto API with stack buffers is not allowed, and with
appropriate debugging options will cause the
'BUG_ON(!virt_addr_valid(buf));' in sg_set_buf() to be triggered.
Use a heap buffer instead.
Fixes: d7db7a882d ("crypto: acomp - update testmgr with support for acomp")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch moves the core CBC implementation into a header file
so that it can be reused by drivers implementing CBC.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch converts cbc over to the skcipher interface. It also
rearranges the code to allow it to be reused by drivers.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch converts aesni (including fpu) over to the skcipher
interface. The LRW implementation has been removed as the generic
LRW code can now be used directly on top of the accelerated ECB
implementation.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Currently we manually filter out internal algorithms using a list
in testmgr. This is dangerous as internal algorithms cannot be
safely used even by testmgr. This patch ensures that they're never
processed by testmgr at all.
This patch also removes an obsolete bypass for nivciphers which
no longer exist.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds xts helpers that use the skcipher interface rather
than blkcipher. This will be used by aesni_intel.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds the simd skcipher helper which is meant to be
a replacement for ablk helper. It replaces the underlying blkcipher
interface with skcipher, and also presents the top-level algorithm
as an skcipher.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Currently all bits not set in mask are cleared in crypto_larval_lookup.
This is unnecessary as wherever the type bits are used it is always
masked anyway.
This patch removes the clearing so that we may use bits set in the
type but not in the mask for special purposes, e.g., picking up
internal algorithms.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch converts xts over to the skcipher interface. It also
optimises the implementation to be based on ECB instead of the
underlying cipher. For compatibility the existing naming scheme
of xts(aes) is maintained as opposed to the more obvious one of
xts(ecb(aes)).
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch converts lrw over to the skcipher interface. It also
optimises the implementation to be based on ECB instead of the
underlying cipher. For compatibility the existing naming scheme
of lrw(aes) is maintained as opposed to the more obvious one of
lrw(ecb(aes)).
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds the skcipher walk interface which replaces both
blkcipher walk and ablkcipher walk. Just like blkcipher walk it
can also be used for AEAD algorithms.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
For consistency with the other 246 kernel configuration options,
rename CRYPT_CRC32C_VPMSUM to CRYPTO_CRC32C_VPMSUM.
Signed-off-by: Jean Delvare <jdelvare@suse.de>
Cc: Anton Blanchard <anton@samba.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The udplite conflict is resolved by taking what 'net-next' did,
which removed the backlog receive method assignment, since
it is no longer necessary.
Two entries were added to the non-priv ethtool operations
switch statement, one in 'net' and one in 'net-next', so these are
simple overlapping changes.
Signed-off-by: David S. Miller <davem@davemloft.net>
All conflicts were simple overlapping changes except perhaps
for the Thunder driver.
That driver has a change_mtu method explicitly for sending
a message to the hardware. If that fails it returns an
error.
Normally a driver doesn't need an ndo_change_mtu method because those
are usually just range changes, which are now handled generically.
But since this extra operation is needed in the Thunder driver, it has
to stay.
However, if the message send fails we have to restore the original
MTU before the change because the entire call chain expects that if
an error is thrown by ndo_change_mtu then the MTU did not change.
Therefore code is added to nicvf_change_mtu to remember the original
MTU, and to restore it upon nicvf_update_hw_max_frs() failure.
Signed-off-by: David S. Miller <davem@davemloft.net>
The aliasing check in map_and_copy is no longer necessary because
the IPsec ESP code no longer provides an IV that points into the
actual request data. As this check is now triggering BUG checks
due to the vmalloced stack code, I'm removing it.
Reported-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Recently an init call was added to hash_recvmsg so as to reset
the hash state in case a sendmsg call was never made.
Unfortunately this ended up clobbering the result if the previous
sendmsg was done with a MSG_MORE flag. This patch fixes it by
excluding that case when we make the init call.
Fixes: a8348bca29 ("algif_hash - Fix NULL hash crash with shash")
Reported-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The CTR DRBG segments the number of random bytes to be generated into
128 byte blocks. The current code misses the advancement of the output
buffer pointer when the requestor asks for more than 128 bytes of data.
In this case, the next 128 byte block of random numbers is copied to
the beginning of the output buffer again. This implies that only the
first 128 bytes of the output buffer would ever be filled.
The patch adds the advancement of the buffer pointer to fill the entire
buffer.
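A sketch of the generate loop with the missing advance in place (names
and the 128-byte cap are illustrative):
while (outlen) {
	cryptlen = min_t(size_t, outlen, 128);
	/* ... generate cryptlen bytes of random data into outbuf ... */
	outbuf += cryptlen;	/* the previously missing advance */
	outlen -= cryptlen;
}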
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Recently algif_hash has been changed to allow null hashes. This
triggers a bug when used with an shash algorithm whereby it will
cause a crash during the digest operation.
This patch fixes it by avoiding the digest operation and instead
doing an init followed by a final which avoids the buggy code in
shash.
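A hedged sketch of the empty-message path (the async completion plumbing
is omitted):
/* instead of calling ->digest() with a zero-length input: */
err = crypto_ahash_init(&ctx->req);
if (!err)
	err = crypto_ahash_final(&ctx->req);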
This patch also ensures that the result buffer is freed after an
error so that it is not returned as a genuine hash result on the
next recv call.
The shash/ahash wrapper code will be fixed later to handle this
case correctly.
Fixes: 493b2ed3f7 ("crypto: algif_hash - Handle NULL hashes correctly")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tested-by: Laura Abbott <labbott@redhat.com>
GF(2^128) multiplication tables are typically used for secret
information, so it's a good idea to zero them on free.
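A minimal sketch of the change for the 64k tables (kzfree() was the
helper of that era; newer kernels spell it kfree_sensitive()):
void gf128mul_free_64k(struct gf128mul_64k *t)
{
	int i;

	for (i = 0; i < 16; i++)
		kzfree(t->t[i]);	/* was kfree(): zero the secret tables first */
	kzfree(t);
}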
Signed-off-by: Alex Cope <alexcope@google.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Drop duplicate header module.h from jitterentropy-kcapi.c.
Signed-off-by: Geliang Tang <geliangtang@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Similar to commit 14135f30e3 ("inet: fix sleeping inside inet_wait_for_connect()"),
sk_wait_event() needs to be fixed too: release_sock() is blocking, and it
changes the process state back to running after sleeping, which breaks
the previous prepare_to_wait().
Switch to the new wait API.
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This code is unlikely to be useful in the future because transforms
don't know how often keys will be changed, new algorithms are unlikely
to use lle representation, and tables should be replaced with
carryless multiplication instructions when available.
Signed-off-by: Alex Cope <alexcope@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Fix the single instance where a positive EINVAL was returned.
Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
By using the unaligned access helpers, we drastically improve
performance on small MIPS routers that have to go through the exception
fix-up handler for these unaligned accesses.
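A sketch of the kind of change involved (the old helper name is from the
pre-patch code; exact call sites differ):
#include <asm/unaligned.h>

static inline u32 load_le32(const u8 *p)
{
	/* was: le32_to_cpuvp(p), i.e. a potentially unaligned dereference */
	return get_unaligned_le32(p);
}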
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Acked-by: Martin Willi <martin@strongswan.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Remove the unused but set variable tfm in cryptd_enqueue_request to fix
the following warning when building with 'W=1':
crypto/cryptd.c:125:21: warning: variable 'tfm' set but not used [-Wunused-but-set-variable]
Signed-off-by: Tobias Klauser <tklauser@distanz.ch>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Since commit 3a01d0ee2b ("crypto: skcipher - Remove top-level
givcipher interface"), crypto_spawn_skcipher2() and
crypto_spawn_skcipher() are equivalent. So switch callers of
crypto_spawn_skcipher2() to crypto_spawn_skcipher() and remove it.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Since commit 3a01d0ee2b ("crypto: skcipher - Remove top-level
givcipher interface"), crypto_grab_skcipher2() and
crypto_grab_skcipher() are equivalent. So switch callers of
crypto_grab_skcipher2() to crypto_grab_skcipher() and remove it.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Fix the dependency between acomp and scomp that appears when acomp is
built as a module.
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add tests to the test manager for algorithms exposed through acomp.
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add scomp backend for deflate compression algorithm.
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add scomp backend for 842 compression algorithm.
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add scomp backend for lz4hc compression algorithm.
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add scomp backend for lz4 compression algorithm.
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add scomp backend for lzo compression algorithm.
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add a synchronous back-end (scomp) to acomp. This makes it easy to
expose the compression algorithms already present in the LKCF via acomp.
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add acomp, an asynchronous compression api that uses scatterlist
buffers.
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Use the new API to create and destroy the crypto engine kthread
worker. The API hides some implementation details.
In particular, kthread_create_worker() allocates and initializes
struct kthread_worker. It runs the kthread the right way
and stores the task_struct into the worker structure.
kthread_destroy_worker() flushes all pending works, stops
the kthread and frees the structure.
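A hedged sketch of the new calls as a crypto engine style user might
issue them (field and function names are illustrative):
engine->kworker = kthread_create_worker(0, "%s", engine->name);
if (IS_ERR(engine->kworker))
	return NULL;	/* creation failed */
kthread_init_work(&engine->pump_requests, pump_requests_fn);
/* ... on teardown: flushes pending work, stops and frees the kthread */
kthread_destroy_worker(engine->kworker);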
This patch does not change the existing behavior except for
dynamically allocating struct kthread_worker and storing
only a pointer to this structure.
It is compile-tested only because I did not find an easy
way to run the code; that said, it should be pretty safe
given the nature of the change.
Signed-off-by: Petr Mladek <pmladek@suse.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Fix to return error code -EINVAL from the invalid alg ivsize error
handling case instead of 0, as done elsewhere in this function.
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The per-transform 'consts' array is accessed as __be64 in
crypto_cmac_digest_setkey() but was only guaranteed to be aligned to
__alignof__(long). Fix this by aligning it to __alignof__(__be64).
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
cmac_create() previously returned 0 if a cipher with a block size other
than 8 or 16 bytes was specified. It should return -EINVAL instead.
Granted, this doesn't actually change any behavior because cryptomgr
currently ignores any return value other than -EAGAIN from template
->create() functions.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto_exit_cipher_ops() and crypto_exit_compress_ops() are no-ops and
have been for a long time, so remove them.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Currently FIPS depends on MODULE_SIG, even if MODULES is disabled.
This change allows the enabling of FIPS without support for modules.
If module loading support is enabled, only then does
FIPS require MODULE_SIG.
Signed-off-by: Alec Ari <neotheuser@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
A good practice is to prefix the names of functions by the name
of the subsystem.
The kthread worker API is a mix of classic kthreads and workqueues. Each
worker has a dedicated kthread. It runs a generic function that processes
queued works. It is implemented as part of the kthread subsystem.
This patch renames the existing kthread worker API to use
the corresponding name from the workqueues API prefixed by
kthread_:
__init_kthread_worker() -> __kthread_init_worker()
init_kthread_worker() -> kthread_init_worker()
init_kthread_work() -> kthread_init_work()
insert_kthread_work() -> kthread_insert_work()
queue_kthread_work() -> kthread_queue_work()
flush_kthread_work() -> kthread_flush_work()
flush_kthread_worker() -> kthread_flush_worker()
Note that the names of DEFINE_KTHREAD_WORK*() macros stay
as they are. It is common that the "DEFINE_" prefix has
precedence over the subsystem names.
Note that INIT() macros and init() functions use different
naming schemes. There is no good solution; there are several
reasons for this choice:
+ "init" in the function names stands for the verb "initialize"
aka "initialize worker". While "INIT" in the macro names
stands for the noun "INITIALIZER" aka "worker initializer".
+ INIT() macros are used only in DEFINE() macros
+ init() functions are used close to the other kthread()
functions. It looks much better if all the functions
use the same scheme.
+ There will be also kthread_destroy_worker() that will
be used close to kthread_cancel_work(). It is related
to the init() function. Again it looks better if all
functions use the same naming scheme.
+ there are several precedents for such init() function
names, e.g. amd_iommu_init_device(), free_area_init_node(),
jump_label_init_type(), regmap_init_mmio_clk(),
+ It is not an argument but it was inconsistent even before.
[arnd@arndb.de: fix linux-next merge conflict]
Link: http://lkml.kernel.org/r/20160908135724.1311726-1-arnd@arndb.de
Link: http://lkml.kernel.org/r/1470754545-17632-3-git-send-email-pmladek@suse.com
Suggested-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Petr Mladek <pmladek@suse.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Josh Triplett <josh@joshtriplett.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Borislav Petkov <bp@suse.de>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Pull crypto updates from Herbert Xu:
"Here is the crypto update for 4.9:
API:
- The crypto engine code now supports hashes.
Algorithms:
- Allow keys >= 2048 bits in FIPS mode for RSA.
Drivers:
- Memory overwrite fix for vmx ghash.
- Add support for building ARM sha1-neon in Thumb2 mode.
- Reenable ARM ghash-ce code by adding import/export.
- Reenable img-hash by adding import/export.
- Add support for multiple cores in omap-aes.
- Add little-endian support for sha1-powerpc.
- Add Cavium HWRNG driver for ThunderX SoC"
* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (137 commits)
crypto: caam - treat SGT address pointer as u64
crypto: ccp - Make syslog errors human-readable
crypto: ccp - clean up data structure
crypto: vmx - Ensure ghash-generic is enabled
crypto: testmgr - add guard to dst buffer for ahash_export
crypto: caam - Unmap region obtained by of_iomap
crypto: sha1-powerpc - little-endian support
crypto: gcm - Fix IV buffer size in crypto_gcm_setkey
crypto: vmx - Fix memory corruption caused by p8_ghash
crypto: ghash-generic - move common definitions to a new header file
crypto: caam - fix sg dump
hwrng: omap - Only fail if pm_runtime_get_sync returns < 0
crypto: omap-sham - shrink the internal buffer size
crypto: omap-sham - add support for export/import
crypto: omap-sham - convert driver logic to use sgs for data xmit
crypto: omap-sham - change the DMA threshold value to a define
crypto: omap-sham - add support functions for sg based data handling
crypto: omap-sham - rename sgl to sgl_tmp for deprecation
crypto: omap-sham - align algorithms on word offset
crypto: omap-sham - add context export/import stubs
...
Merge tag 'dmaengine-4.9-rc1' of git://git.infradead.org/users/vkoul/slave-dma
Pull dmaengine updates from Vinod Koul:
"This is bit large pile of code which bring in some nice additions:
- Error reporting: we have added a new mechanism for users of
dmaenegine to register a callback_result which tells them the
result of the dma transaction. Right now only one user (ntb) is
using it.
- As we discussed on KS mailing list and pointed out NO_IRQ has no
place in kernel, this also remove NO_IRQ from dmaengine subsystem
(both arm and ppc users)
- Support for IOMMU slave transfers and its implementation for arm.
- To get better build coverage, enable COMPILE_TEST for bunch of
driver, and fix the warning and sparse complaints on these.
- Apart from above, usual updates spread across drivers"
* tag 'dmaengine-4.9-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (169 commits)
async_pq_val: fix DMA memory leak
dmaengine: virt-dma: move function declarations
dmaengine: omap-dma: Enable burst and data pack for SG
DT: dmaengine: rcar-dmac: document R8A7743/5 support
dmaengine: fsldma: Unmap region obtained by of_iomap
dmaengine: jz4780: fix resource leaks on error exit return
dma-debug: fix ia64 build, use PHYS_PFN
dmaengine: coh901318: fix integer overflow when shifting more than 32 places
dmaengine: edma: avoid uninitialized variable use
dma-mapping: fix m32r build warning
dma-mapping: fix ia64 build, use PHYS_PFN
dmaengine: ti-dma-crossbar: enable COMPILE_TEST
dmaengine: omap-dma: enable COMPILE_TEST
dmaengine: edma: enable COMPILE_TEST
dmaengine: ti-dma-crossbar: Fix of_device_id data parameter usage
dmaengine: ti-dma-crossbar: Correct type for of_find_property() third parameter
dmaengine/ARM: omap-dma: Fix the DMAengine compile test on non OMAP configs
dmaengine: edma: Rename set_bits and remove unused clear_bits helper
dmaengine: edma: Use correct type for of_find_property() third parameter
dmaengine: edma: Fix of_device_id data parameter usage (legacy vs TPCC)
...
Add missing dmaengine_unmap_put(), so we don't OOM during RAID6 sync.
Fixes: 1786b943da ("async_pq_val: convert to dmaengine_unmap_data")
Signed-off-by: Justin Maggard <jmaggard@netgear.com>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Add a guard to the 'state' buffer and warn if its consistency changes
after the call to crypto_ahash_export(), so that any write that goes
beyond the advertised statesize (and thus causes potential memory
corruption [1]) is more visible.
[1] https://marc.info/?l=linux-crypto-vger&m=147467656516085
Signed-off-by: Jan Stancek <jstancek@redhat.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Marcelo Cerri <marcelo.cerri@canonical.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The cipher block size for GCM is 16 bytes, and thus the CTR transform
used in crypto_gcm_setkey() will also expect a 16-byte IV. However,
the code currently reserves only 8 bytes for the IV, causing
an out-of-bounds access in the CTR transform. This patch fixes
the issue by setting the size of the IV buffer to 16 bytes.
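The on-stack helper struct in crypto_gcm_setkey(), sketched (other
members abridged):
struct {
	be128 hash;
	u8 iv[16];	/* was u8 iv[8]: too small for the 16-byte CTR IV */
	/* ... completion/result, scatterlist and the CTR request follow ... */
} *data;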
Fixes: 84c9115230 ("[CRYPTO] gcm: Add support for async ciphers")
Signed-off-by: Ondrej Mosnacek <omosnacek@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Move common values and types used by ghash-generic to a new header file
so drivers can directly use ghash-generic as a fallback implementation.
Fixes: cc333cd68d ("crypto: vmx - Adding GHASH routines for VMX module")
Cc: stable@vger.kernel.org
Signed-off-by: Marcelo Cerri <marcelo.cerri@canonical.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As the software RSA implementation now produces fixed-length
output, we need to eliminate leading zeros in the calling code
instead.
This patch does just that for pkcs1pad decryption while signature
verification was fixed in an earlier patch.
Fixes: 9b45b7bba3 ("crypto: rsa - Generate fixed-length output")
Reported-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When we need to allocate a temporary blkcipher_walk_next and it
fails, the code is supposed to take the slow path of processing
the data block by block. However, due to an unrelated change
we instead end up dereferencing the NULL pointer.
This patch fixes it by moving the unrelated bsize setting out
of the way so that we enter the slow path as intended.
Fixes: 7607bd8ff0 ("[CRYPTO] blkcipher: Added blkcipher_walk_virt_block")
Cc: stable@vger.kernel.org
Reported-by: xiakaixu <xiakaixu@huawei.com>
Reported-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
The current implementation uses a global per-cpu array to store
data which are used to derive the next IV. This is insecure as
the attacker may change the stored data.
This patch removes all traces of chaining and replaces it with
multiplication of the salt and the sequence number.
Fixes: a10f554fa7 ("crypto: echainiv - Add encrypted chain IV...")
Cc: stable@vger.kernel.org
Reported-by: Mathias Krause <minipli@googlemail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Right now attempting to read an empty hash simply returns zeroed
bytes, this patch corrects this by calling the digest function
using an empty input.
Reported-by: Russell King - ARM Linux <linux@armlinux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The current crypto engine allows only ablkcipher_request to be enqueued,
thus denying any use of it for hardware that also handles hash
algorithms. This patch modifies the API to allow both ciphers and hashes
to be enqueued. Since omap-aes/omap-des are the only users, this patch
also converts them to the new crypto engine API.
Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch moves the whole crypto engine API to its own header,
crypto/engine.h.
Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When calling .import() on a cryptd ahash_request, the structure members
that describe the child transform in the shash_desc need to be initialized
like they are when calling .init().
Cc: stable@vger.kernel.org
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In FIPS mode, additional restrictions may apply. If these restrictions
are violated, the kernel will panic(). This patch allows test vectors
for symmetric ciphers to be marked as to be skipped in FIPS mode.
Together with this patch, the XTS test vectors where the AES key is
identical to the tweak key are disabled in FIPS mode. These test vectors
violate the FIPS requirement that the two keys must be different.
Reported-by: Tapas Sarangi <TSarangi@trustwave.com>
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch fixes an unused label warning triggered when the macro
XOR_SELECT_TEMPLATE is not set.
Fixes: 39457acda9 ("crypto: xor - skip speed test if the xor...")
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Suggested-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The AEAD code path incorrectly uses the child tfm to track the
cryptd refcnt, and then potentially frees the child tfm.
Fixes: 81760ea6a9 ("crypto: cryptd - Add helpers to check...")
Reported-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
With a public notification, NIST now allows the use of RSA keys with a
modulus >= 2048 bits. The new rule allows any modulus size >= 2048 bits,
provided that at least either 2048 or 3072 bits is supported, so that
the entire RSA implementation can be CAVS tested.
This patch fixes the inability to boot the kernel in FIPS mode, because
certs/x509.genkey defines a 4096 bit RSA key per default. This key causes
the RSA signature verification to fail in FIPS mode without the patch
below.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Fix to return a negative error code from the error handling
case instead of 0.
Signed-off-by: Wei Yongjun <weiyj.lk@gmail.com>
Acked-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
If the architecture selected the xor function with XOR_SELECT_TEMPLATE,
the speed result of the do_xor_speed benchmark is of limited value.
The speed measurement increases the bootup time a little, which can
make a difference for kernels used in container-like virtual machines.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When calling the DRBG health test in FIPS mode, the Jitter RNG is not
yet present in the kernel crypto API which will cause the instantiation
to fail and thus the health test to fail.
As the health tests cover the enforcement of various thresholds, invoke
the functions that are supposed to enforce the thresholds directly.
This patch also saves precious seed.
Reported-by: Tapas Sarangi <TSarangi@trustwave.com>
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The phrase 'Based on' is misspelled; respell it.
Signed-off-by: LABBE Corentin <clabbe.montjoie@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
"if (!ret == template[i].fail)" is confusing to compilers (gcc5):
crypto/testmgr.c: In function '__test_aead':
crypto/testmgr.c:531:12: warning: logical not is only applied to the
left hand side of comparison [-Wlogical-not-parentheses]
if (!ret == template[i].fail) {
^
Let there be 'if (template[i].fail == !ret) '.
Signed-off-by: Yanjiang Jin <yanjiang.jin@windriver.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The optimised crc32c implementation depends on VMX (aka. Altivec)
instructions, so the kernel must be built with Altivec support in order
for the crc32c code to build.
Fixes: 6dd7a82cc5 ("crypto: powerpc - Add POWER8 optimised crc32c")
Acked-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
On 32-bit (e.g. with m68k-linux-gnu-gcc-4.1):
crypto/sha3_generic.c:27: warning: integer constant is too large for ‘long’ type
crypto/sha3_generic.c:28: warning: integer constant is too large for ‘long’ type
crypto/sha3_generic.c:29: warning: integer constant is too large for ‘long’ type
crypto/sha3_generic.c:29: warning: integer constant is too large for ‘long’ type
crypto/sha3_generic.c:31: warning: integer constant is too large for ‘long’ type
crypto/sha3_generic.c:31: warning: integer constant is too large for ‘long’ type
crypto/sha3_generic.c:32: warning: integer constant is too large for ‘long’ type
crypto/sha3_generic.c:32: warning: integer constant is too large for ‘long’ type
crypto/sha3_generic.c:32: warning: integer constant is too large for ‘long’ type
crypto/sha3_generic.c:33: warning: integer constant is too large for ‘long’ type
crypto/sha3_generic.c:33: warning: integer constant is too large for ‘long’ type
crypto/sha3_generic.c:34: warning: integer constant is too large for ‘long’ type
crypto/sha3_generic.c:34: warning: integer constant is too large for ‘long’ type
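The fix, as the warnings suggest, is to give the 64-bit Keccak round
constants an explicit ULL suffix, e.g.:
static const u64 keccakf_rndc[24] = {
	0x0000000000000001ULL, 0x0000000000008082ULL,
	/* ... the remaining constants, all carrying the ULL suffix ... */
};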
Fixes: 53964b9ee6 ("crypto: sha3 - Add SHA-3 hash algorithm")
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Merge tag 'random_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random
Pull random driver updates from Ted Ts'o:
"A number of improvements for the /dev/random driver; the most
important is the use of a ChaCha20-based CRNG for /dev/urandom, which
is faster, more efficient, and easier to make scalable for
silly/abusive userspace programs that want to read from /dev/urandom
in a tight loop on NUMA systems.
This set of patches also improves entropy gathering on VM's running on
Microsoft Azure, and will take advantage of a hw random number
generator (if present) to initialize the /dev/urandom pool"
(It turns out that the random tree hadn't been in linux-next this time
around, because it had been dropped earlier as being too quiet. Oh
well).
* tag 'random_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random:
random: strengthen input validation for RNDADDTOENTCNT
random: add backtracking protection to the CRNG
random: make /dev/urandom scalable for silly userspace programs
random: replace non-blocking pool with a Chacha20-based CRNG
random: properly align get_random_int_hash
random: add interrupt callback to VMBus IRQ handler
random: print a warning for the first ten uninitialized random users
random: initialize the non-blocking pool via add_hwgenerator_randomness()
Pull crypto updates from Herbert Xu:
"Here is the crypto update for 4.8:
API:
- first part of skcipher low-level conversions
- add KPP (Key-agreement Protocol Primitives) interface.
Algorithms:
- fix IPsec/cryptd reordering issues that affects aesni
- RSA no longer does explicit leading zero removal
- add SHA3
- add DH
- add ECDH
- improve DRBG performance by not doing CTR by hand
Drivers:
- add x86 AVX2 multibuffer SHA256/512
- add POWER8 optimised crc32c
- add xts support to vmx
- add DH support to qat
- add RSA support to caam
- add Layerscape support to caam
- add SEC1 AEAD support to talitos
- improve performance by chaining requests in marvell/cesa
- add support for Araneus Alea I USB RNG
- add support for Broadcom BCM5301 RNG
- add support for Amlogic Meson RNG
- add support Broadcom NSP SoC RNG"
* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (180 commits)
crypto: vmx - Fix aes_p8_xts_decrypt build failure
crypto: vmx - Ignore generated files
crypto: vmx - Adding support for XTS
crypto: vmx - Adding asm subroutines for XTS
crypto: skcipher - add comment for skcipher_alg->base
crypto: testmgr - Print akcipher algorithm name
crypto: marvell - Fix wrong flag used for GFP in mv_cesa_dma_add_iv_op
crypto: nx - off by one bug in nx_of_update_msc()
crypto: rsa-pkcs1pad - fix rsa-pkcs1pad request struct
crypto: scatterwalk - Inline start/map/done
crypto: scatterwalk - Remove unnecessary BUG in scatterwalk_start
crypto: scatterwalk - Remove unnecessary advance in scatterwalk_pagedone
crypto: scatterwalk - Fix test in scatterwalk_done
crypto: api - Optimise away crypto_yield when hard preemption is on
crypto: scatterwalk - add no-copy support to copychunks
crypto: scatterwalk - Remove scatterwalk_bytes_sglen
crypto: omap - Stop using crypto scatterwalk_bytes_sglen
crypto: skcipher - Remove top-level givcipher interface
crypto: user - Remove crypto_lookup_skcipher call
crypto: cts - Convert to skcipher
...
Pull crypto fixes from Herbert Xu:
"This fixes a sporadic build failure in the qat driver as well as a
memory corruption bug in rsa-pkcs1pad"
* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
crypto: rsa-pkcs1pad - fix rsa-pkcs1pad request struct
crypto: qat - make qat_asym_algs.o depend on asn1 headers
To allow for child request context the struct akcipher_request child_req
needs to be at the end of the structure.
Cc: stable@vger.kernel.org
Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When an akcipher test fails, we don't know which algorithm failed
because the name is not printed. This patch fixes this.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
To allow for child request context the struct akcipher_request child_req
needs to be at the end of the structure.
Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch inlines the functions scatterwalk_start, scatterwalk_map
and scatterwalk_done as they're all tiny and mostly used by the block
cipher walker.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Nothing bad will happen even if sg->length is zero, so there is
no point in keeping this BUG_ON in scatterwalk_start.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The offset advance in scatterwalk_pagedone not only is unnecessary,
but it was also buggy when it was needed by scatterwalk_copychunks.
As the latter has long ago been fixed to call scatterwalk_advance
directly, we can remove this unnecessary offset adjustment.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When there is more data to be processed, the current test in
scatterwalk_done may prevent us from calling pagedone even when
we should.
In particular, if we're on an SG entry spanning multiple pages
where the last page is not a full page, we will incorrectly skip
calling pagedone on the second last page.
This patch fixes this by adding a separate test for whether we've
reached the end of a page.
Cc: stable@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The function ablkcipher_done_slow is pretty much identical to
scatterwalk_copychunks except that it doesn't actually copy as
the processing hasn't been completed yet.
This patch allows scatterwalk_copychunks to be used in this case
by specifying out == 2.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch removes the now unused scatterwalk_bytes_sglen. Anyone
using this out-of-tree should switch over to sg_nents_for_len.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch removes the old crypto_grab_skcipher helper and replaces
it with crypto_grab_skcipher2.
As this is the final entry point into givcipher this patch also
removes all traces of the top-level givcipher interface, including
all implicit IV generators such as chainiv.
The bottom-level givcipher interface remains until the drivers
using it are converted.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As there are no more kernel users of built-in IV generators we
can remove the special lookup for skciphers.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch converts cts over to the skcipher interface. It also
optimises the implementation to use one CBC operation for all but
the last block, which is then processed separately.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds an skcipher null object alongside the existing
null blkcipher so that IV generators using it can switch over
to skcipher.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch converts chacha20poly1305 to use the new skcipher
interface as opposed to ablkcipher.
It also fixes a buglet where we may end up with an async poly1305
when the user asks for a sync algorithm. This shouldn't be a
problem yet as there aren't any async implementations of poly1305
out there.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch converts authencesn to use the new skcipher interface as
opposed to ablkcipher.
It also fixes a little bug where if a sync version of authencesn
is requested we may still end up using an async ahash. This should
have no effect as none of the authencesn users can request a
sync authencesn.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch converts authenc to use the new skcipher interface as
opposed to ablkcipher.
It also fixes a little bug where if a sync version of authenc
is requested we may still end up using an async ahash. This should
have no effect as none of the authenc users can request a
sync authenc.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds a chunk size parameter to aead algorithms, just
like the chunk size for skcipher algorithms.
However, unlike skcipher we do not currently export this to AEAD
users. It is only meant to be used by AEAD implementors for now.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Currently the default null skcipher is actually a crypto_blkcipher.
This patch creates a synchronous crypto_skcipher version of the
null cipher which unfortunately has to settle for the name skcipher2.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch allows skcipher algorithms and instances to be created
and registered with the crypto API. They are accessible through
the top-level skcipher interface, along with ablkcipher/blkcipher
algorithms and instances.
This patch also introduces a new parameter called chunk size
which is meant for ciphers such as CTR and CTS which ostensibly
can handle arbitrary lengths, but still behave like block ciphers
in that you can only process a partial block at the very end.
For these ciphers the block size will continue to be set to 1
as it is now while the chunk size will be set to the underlying
block size.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Arbitrary X.509 certificates without authority key identifiers (AKIs)
can be added to "trusted" keyrings, including IMA or EVM certs loaded
from the filesystem. Signature verification is currently bypassed for
certs without AKIs.
Trusted keys were recently refactored, and this bug is not present in
4.6.
restrict_link_by_signature should return -ENOKEY (no matching parent
certificate found) if the certificate being evaluated has no AKIs,
instead of bypassing signature checks and returning 0 (new certificate
accepted).
Reported-by: Petko Manolov <petkan@mip-labs.com>
Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: James Morris <james.l.morris@oracle.com>
Commit e68503bd68 forgot to set digest_len and thus caused the following
error reported by kexec when launching a crash kernel:
kexec_file_load failed: Bad message
Fixes: e68503bd68 (KEYS: Generalise system_verify_data() to provide access to internal content)
Signed-off-by: Lans Zhang <jia.zhang@windriver.com>
Tested-by: Dave Young <dyoung@redhat.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Cc: Baoquan He <bhe@redhat.com>
Cc: Vivek Goyal <vgoyal@redhat.com>
cc: kexec@lists.infradead.org
cc: linux-crypto@vger.kernel.org
Signed-off-by: James Morris <james.l.morris@oracle.com>
The key was generated with OpenSSL. It also contains all fields required
for testing CRT mode.
Signed-off-by: Salvatore Benedetto <salvatore.benedetto@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When parsing a private key, store all non-optional fields. These
are required for enabling CRT mode for decrypt and verify.
Signed-off-by: Salvatore Benedetto <salvatore.benedetto@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Report correct error in case of failure
Signed-off-by: Salvatore Benedetto <salvatore.benedetto@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Use the vector polynomial multiply-sum instructions in POWER8 to
speed up crc32c.
This is just over 41x faster than the slice-by-8 method that it
replaces. Measurements on a 4.1 GHz POWER8 show it sustaining
52 GiB/sec.
A simple btrfs write performance test:
dd if=/dev/zero of=/mnt/tmpfile bs=1M count=4096
sync
is over 3.7x faster.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As the software RSA implementation now produces fixed-length
output, we need to eliminate leading zeros in the calling code
instead.
This patch does just that for pkcs1pad signature verification.
Fixes: 9b45b7bba3 ("crypto: rsa - Generate fixed-length output")
Reported-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds HMAC-SHA3 test modes in tcrypt module
and related test vectors.
Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The multibuffer hash speed test is incorrectly bailing because
of an EINPROGRESS return value. This patch fixes it by setting
ret to zero if it is equal to -EINPROGRESS.
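The shape of the fix, sketched (the surrounding speed-test plumbing is
assumed and omitted):

#include <crypto/hash.h>

/* Hedged sketch: -EINPROGRESS only means the request was queued for async
 * completion, so the multibuffer speed test must not treat it as an error. */
static int mb_digest_submit(struct ahash_request *req)
{
	int ret = crypto_ahash_digest(req);

	if (ret == -EINPROGRESS)
		ret = 0;           /* queued, not failed */
	return ret;
}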
Reported-by: Megha Dey <megha.dey@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In the vast majority of cases (all but 2^-32 on 32-bit and 2^-64 on
64-bit), the result from encryption/signing will require no padding.
This patch makes these two operations write their output directly
to the final destination. Only in the exceedingly rare cases where a
fixup is needed do we copy it out and back to add the leading zeroes.
This patch also makes use of the crypto_akcipher_set_crypt API
instead of writing the akcipher request directly.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Rather than repeatedly checking the key size on each operation,
we should be checking it once when the key is set.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
We don't currently support using akcipher in atomic contexts,
so GFP_KERNEL should always be used.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The helper pkcs1pad_sg_set_buf tries to split a buffer that crosses
a page boundary into two SG entries. This is unnecessary. This
patch removes that.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The only user of rsa-pkcs1pad always uses the hash so there is
no reason to support the case of not having a hash.
This patch also changes the digest info lookup so that it is
only done once during template instantiation rather than on each
operation.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Every implementation of RSA that we have naturally generates
output with leading zeroes. The one and only user of RSA,
pkcs1pad wants to have those leading zeroes in place, in fact
because they are currently absent it has to write those zeroes
itself.
So we shouldn't be stripping leading zeroes in the first place.
In fact this patch makes rsa-generic produce output with fixed
length so that pkcs1pad does not need to do any extra work.
This patch also changes DH to use the new interface.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch allows RSA implementations to produce output with
leading zeroes. testmgr will skip leading zeroes when comparing
the output.
This patch also tries to make the RSA test function generic enough
to potentially handle other akcipher algorithms.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds the helper crypto_inst_setname because the current
helper crypto_alloc_instance2 is no longer useful given that we
now look up the algorithm after we allocate the instance object.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch converts tcrypt to use the new skcipher interface as
opposed to ablkcipher/blkcipher.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The function crypto_ahash_extsize did not include padding when
computing the tfm context size. This patch fixes this by using
the generic crypto_alg_extsize helper.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As it is, if you get an async ahash with a sync skcipher you'll
end up with a sync authenc, which is wrong.
This patch fixes it by considering the ASYNC bit from ahash as
well.
It also fixes a little bug where if a sync version of authenc
is requested we may still end up using an async ahash.
Neither of them should have any effect as none of the authenc
users can request a sync authenc.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch resolves a number of issues with the mb speed test
function:
* The tfm is never freed.
* Memory is allocated even when we're not using mb.
* When an error occurs we don't wait for completion for other requests.
* When an error occurs during allocation we may leak memory.
* The test function ignores plen but still runs for plen != blen.
* The backlog flag is incorrectly used (may crash).
This patch tries to resolve all these issues as well as making
the code consistent with the existing hash speed testing function.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tested-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
The recently added test_mb_ahash_speed() clearly has serious coding
style issues. Try to fix some of them:
1. Don't mix pr_err() and printk();
2. Don't wrap strings;
3. Properly align goto statement in if() block;
4. Align wrapped arguments on new line;
5. Don't wrap functions on first argument;
Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add a new mode to calculate the speed of the sha512_mb algorithm
Signed-off-by: Megha Dey <megha.dey@linux.intel.com>
Reviewed-by: Fenghua Yu <fenghua.yu@intel.com>
Reviewed-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add the config CRYPTO_SHA512_MB which will enable the computation
using the SHA512 multi-buffer algorithm.
Signed-off-by: Megha Dey <megha.dey@linux.intel.com>
Reviewed-by: Fenghua Yu <fenghua.yu@intel.com>
Reviewed-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The existing test suite to calculate the speed of the SHA algorithms
assumes serial (single-buffer) computation of data. With the SHA
multibuffer algorithms, we work on 8 lanes of data in parallel. Hence,
the need to introduce a new test suite to calculate the speed for these
algorithms.
Signed-off-by: Megha Dey <megha.dey@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add the config CRYPTO_SHA256_MB which will enable the computation using the
SHA256 multi-buffer algorithm.
Signed-off-by: Megha Dey <megha.dey@linux.intel.com>
Reviewed-by: Fenghua Yu <fenghua.yu@intel.com>
Reviewed-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
There is another ecdh_shared_secret in net/bluetooth/ecc.c
Fixes: 3c4b23901a ("crypto: ecdh - Add ECDH software support")
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As part of the Y2038 development, __getnstimeofday is not supposed to be
used any more. It is now replaced with ktime_get_ns. The Jitter RNG uses
the time stamp to measure the execution time of a given code path and
tries to detect variations in the execution time. Therefore, the only
requirement the Jitter RNG has is a sufficiently high resolution to
detect these variations.
The change was tested on x86 to show behavior identical to RDTSC. The
used test code simply measures the execution time of the heart of the
RNG:
jent_get_nstime(&time);
jent_memaccess(ec, min);
jent_fold_time(NULL, time, &folded, min);
jent_get_nstime(&time2);
return ((time2 - time));
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* Implement ECDH under kpp API
* Provide ECC software support for curves P-192 and
P-256.
* Add kpp test for ECDH with data generated by OpenSSL
Signed-off-by: Salvatore Benedetto <salvatore.benedetto@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* Implement MPI based Diffie-Hellman under kpp API
* The test provided uses data generated by OpenSSL
Signed-off-by: Salvatore Benedetto <salvatore.benedetto@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add a key-agreement protocol primitives (kpp) API which allows
implementation of the primitives required by protocols such as DH and ECDH.
The API is composed mainly of the following functions:
* set_secret() - It allows the user to set his secret, also
referred to as his private key, along with the parameters
known to both parties involved in the key-agreement session.
* generate_public_key() - It generates the public key to be sent to
the other counterpart involved in the key-agreement session. The
function has to be called after set_params() and set_secret().
* generate_secret() - It generates the shared secret for the session.
Other functions such as init() and exit() are provided to allow
cryptographic hardware to be initialized properly before use.
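A hedged caller-side sketch of the flow (the crypto_kpp_* wrapper names
follow the usual conventions and should be treated as assumptions here; the
packed encoding of the secret is algorithm-specific and not shown, and async
completion handling is omitted):

#include <crypto/kpp.h>
#include <linux/err.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>

static int kpp_generate_public_sketch(void *packed_secret, unsigned int slen,
				      void *pub, unsigned int publen)
{
	struct crypto_kpp *tfm;
	struct kpp_request *req;
	struct scatterlist dst;
	int err;

	tfm = crypto_alloc_kpp("dh", 0, 0);         /* or "ecdh" */
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	/* Private key plus shared parameters, packed per the algorithm. */
	err = crypto_kpp_set_secret(tfm, packed_secret, slen);
	if (err)
		goto out_tfm;

	req = kpp_request_alloc(tfm, GFP_KERNEL);
	if (!req) {
		err = -ENOMEM;
		goto out_tfm;
	}

	sg_init_one(&dst, pub, publen);
	kpp_request_set_input(req, NULL, 0);        /* no input needed here */
	kpp_request_set_output(req, &dst, publen);
	err = crypto_kpp_generate_public_key(req);  /* backed by generate_public_key() */

	kpp_request_free(req);
out_tfm:
	crypto_free_kpp(tfm);
	return err;
}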
Signed-off-by: Salvatore Benedetto <salvatore.benedetto@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Herbert wants the sha1-mb algorithm to have an async implementation:
https://lkml.org/lkml/2016/4/5/286.
Currently, sha1-mb uses an async interface for the outer algorithm
and a sync interface for the inner algorithm. This patch introduces
an async interface for the inner algorithm as well.
Signed-off-by: Megha Dey <megha.dey@linux.intel.com>
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch fixes an old bug where requests can be reordered because
some are processed by cryptd while others are processed directly
in softirq context.
The fix is to always postpone to cryptd if there are currently
requests outstanding from the same tfm.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds helpers to check whether a given tfm is currently
queued. This is meant to be used by ablk_helper and similar
entities to ensure that no reordering is introduced because of
requests queued in cryptd with respect to requests being processed
in softirq context.
The per-cpu queue length limit is also increased to 1000 in line
with network limits.
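Purely as an illustration of the decision being described (every name below
is hypothetical):

#include <linux/types.h>

/* Hedged sketch: handle a request inline only when nothing from the same
 * tfm is already sitting in the cryptd queue; otherwise queue this request
 * too so it cannot overtake the earlier ones. */
static bool must_defer_to_cryptd(bool inline_path_usable,
				 unsigned int tfm_requests_queued)
{
	return !inline_path_usable || tfm_requests_queued > 0;
}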
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch reverts commit eed1e1afd8 as
it is only a workaround for the real bug and the proper fix has
now been applied as 055ddaace0
("crypto: user - re-add size check for CRYPTO_MSG_GETALG").
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Commit 9aa867e465 ("crypto: user - Add CRYPTO_MSG_DELRNG")
accidentally removed the minimum size check for CRYPTO_MSG_GETALG
netlink messages. This allows userland to send a truncated
CRYPTO_MSG_GETALG message as short as a netlink header only making
crypto_report() operate on uninitialized memory by accessing data
beyond the end of the netlink message.
Fix this by re-adding the minimum required size of CRYPTO_MSG_GETALG
messages to the crypto_msg_min[] array.
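Sketched, the restored entry looks roughly like this (the MSGSIZE() helper
is re-declared here for illustration and is an assumption about the local
macro used in crypto_user.c):

#include <linux/cryptouser.h>

#define MSGSIZE(type) sizeof(struct type)   /* assumed local helper */

/* Hedged sketch: declaring a minimum payload for GETALG makes the netlink
 * core reject truncated messages before crypto_report() ever runs. */
static const int crypto_msg_min[CRYPTO_NR_MSGTYPES] = {
	[CRYPTO_MSG_GETALG - CRYPTO_MSG_BASE] = MSGSIZE(crypto_user_alg),
	/* entries for the other message types omitted from this sketch */
};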
Fixes: 9aa867e465 ("crypto: user - Add CRYPTO_MSG_DELRNG")
Cc: stable@vger.kernel.org # v4.2
Signed-off-by: Mathias Krause <minipli@googlemail.com>
Cc: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
We accidentally return PTR_ERR(NULL) which is success but we should
return -ENOMEM.
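The general shape of the bug and its fix, sketched (the allocation shown is
illustrative, not the exact line from the DRBG code):

#include <linux/errno.h>
#include <linux/slab.h>

/* Hedged sketch: kmalloc-style allocators return NULL on failure, and
 * PTR_ERR(NULL) evaluates to 0, i.e. success - so the NULL case has to be
 * mapped to -ENOMEM explicitly. */
static void *alloc_or_enomem(size_t len, int *err)
{
	void *buf = kzalloc(len, GFP_KERNEL);

	*err = buf ? 0 : -ENOMEM;   /* not PTR_ERR(buf) */
	return buf;
}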
Fixes: 3559128521 ('crypto: drbg - use CTR AES instead of ECB AES')
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Acked-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Added support for SHA-3 algorithm tests
in the tcrypt module and related test vectors.
Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds a software implementation of the SHA-3 algorithm.
It is based on the original implementation posted in
https://lwn.net/Articles/518415/, with additional changes to match
the padding rules specified in the SHA-3 specification.
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As it is, if you ask for a sync gcm you may actually end up with
an async one because it does not filter out async implementations
of ghash.
This patch fixes this by adding the necessary filter when looking
for ghash.
Cc: stable@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Return the raw key with no other processing so that the caller
can copy it or MPI parse it, etc.
The goal is to have only one ASN.1 parser for all RSA
implementations.
Update the RSA software implementation so that it does
the MPI conversion on top.
Signed-off-by: Tudor Ambarus <tudor-dan.ambarus@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The TFM object maintains the key for the CTR DRBG.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The CTR DRBG update function performs a full CTR AES operation including
the XOR with "plaintext" data. Hence, remove the XOR from the code and
use the CTR mode to do the XOR.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Hardware cipher implementations may require aligned buffers. All buffers
that are potentially processed with a cipher are now aligned.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The CTR DRBG derives its random data from the counter that is encrypted
with AES.
This patch changes the CTR DRBG implementation such that the
CTR AES mode is employed. This allows the use of streamlined CTR AES
implementations such as ctr-aes-aesni.
Unfortunately there are the following subtle changes we need to apply
when using the CTR AES mode (a sketch follows the list below):
- the CTR mode increments the counter after the cipher operation, but
the CTR DRBG requires the increment before the cipher op. Hence, the
crypto_inc is applied to the counter (drbg->V) once it is
recalculated.
- the CTR mode wants to encrypt data, but the CTR DRBG is interested in
the encrypted counter only. The full CTR mode is the XOR of the
encrypted counter with the plaintext data. To access the encrypted
counter, the patch uses a NULL data vector as plaintext to be
"encrypted".
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The CTR DRBG code always sets the key for each sym cipher invocation even
though the key has not changed.
The patch ensures that the setkey is only invoked when a new key is
generated by the DRBG.
With this patch, the CTR DRBG performance increases by more than 150%.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The CRYPTO_MSG_GETALG netlink message type provides a buffer through
which the caller retrieves information from the kernel. The data buffer
does not provide any input and is not read. Hence nlmsg_parse() is not
applicable to this netlink message type.
This patch fixes the following kernel log message when using this
netlink interface:
netlink: 208 bytes leftover after parsing attributes in process `XXX'.
Patch successfully tested with libkcapi from [1] which uses
CRYPTO_MSG_GETALG to obtain cipher-specific information from the kernel.
[1] http://www.chronox.de/libkcapi.html
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Pull crypto fixes from Herbert Xu:
"This fixes the following issues:
- missing selection in public_key that may result in a build failure
- Potential crash in error path in omap-sham
- ccp AES XTS bug that affects requests larger than 4096"
* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
crypto: ccp - Fix AES XTS error for request sizes above 4096
crypto: public_key: select CRYPTO_AKCIPHER
crypto: omap-sham - potential Oops on error in probe
Pull security subsystem updates from James Morris:
"Highlights:
- A new LSM, "LoadPin", from Kees Cook is added, which allows forcing
of modules and firmware to be loaded from a specific device (this
is from ChromeOS, where the device as a whole is verified
cryptographically via dm-verity).
This is disabled by default but can be configured to be enabled by
default (don't do this if you don't know what you're doing).
- Keys: allow authentication data to be stored in an asymmetric key.
Lots of general fixes and updates.
- SELinux: add restrictions for loading of kernel modules via
finit_module(). Distinguish non-init user namespace capability
checks. Apply execstack check on thread stacks"
* 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security: (48 commits)
LSM: LoadPin: provide enablement CONFIG
Yama: use atomic allocations when reporting
seccomp: Fix comment typo
ima: add support for creating files using the mknodat syscall
ima: fix ima_inode_post_setattr
vfs: forbid write access when reading a file into memory
fs: fix over-zealous use of "const"
selinux: apply execstack check on thread stacks
selinux: distinguish non-init user namespace capability checks
LSM: LoadPin for kernel file loading restrictions
fs: define a string representation of the kernel_read_file_id enumeration
Yama: consolidate error reporting
string_helpers: add kstrdup_quotable_file
string_helpers: add kstrdup_quotable_cmdline
string_helpers: add kstrdup_quotable
selinux: check ss_initialized before revalidating an inode label
selinux: delay inode label lookup as long as possible
selinux: don't revalidate an inode's label when explicitly setting it
selinux: Change bool variable name to index.
KEYS: Add KEYCTL_DH_COMPUTE command
...
In some rare randconfig builds, we can end up with
ASYMMETRIC_PUBLIC_KEY_SUBTYPE enabled but CRYPTO_AKCIPHER disabled,
which fails to link because of the reference to crypto_alloc_akcipher:
crypto/built-in.o: In function `public_key_verify_signature':
:(.text+0x110e4): undefined reference to `crypto_alloc_akcipher'
This adds a Kconfig 'select' statement to ensure the dependency
is always there.
Cc: <stable@vger.kernel.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Pull crypto update from Herbert Xu:
"API:
- Crypto self tests can now be disabled at boot/run time.
- Add async support to algif_aead.
Algorithms:
- A large number of fixes to MPI from Nicolai Stange.
- Performance improvement for HMAC DRBG.
Drivers:
- Use generic crypto engine in omap-des.
- Merge ppc4xx-rng and crypto4xx drivers.
- Fix lockups in sun4i-ss driver by disabling IRQs.
- Add DMA engine support to ccp.
- Reenable talitos hash algorithms.
- Add support for Hisilicon SoC RNG.
- Add basic crypto driver for the MXC SCC.
Others:
- Do not allocate crypto hash tfm in NORECLAIM context in ecryptfs"
* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (77 commits)
crypto: qat - change the adf_ctl_stop_devices to void
crypto: caam - fix caam_jr_alloc() ret code
crypto: vmx - comply with ABIs that specify vrsave as reserved.
crypto: testmgr - Add a flag allowing the self-tests to be disabled at runtime.
crypto: ccp - constify ccp_actions structure
crypto: marvell/cesa - Use dma_pool_zalloc
crypto: qat - make adf_vf_isr.c dependant on IOV config
crypto: qat - Fix typo in comments
lib: asn1_decoder - add MODULE_LICENSE("GPL")
crypto: omap-sham - Use dma_request_chan() for requesting DMA channel
crypto: omap-des - Use dma_request_chan() for requesting DMA channel
crypto: omap-aes - Use dma_request_chan() for requesting DMA channel
crypto: omap-des - Integrate with the crypto engine framework
crypto: s5p-sss - fix incorrect usage of scatterlists api
crypto: s5p-sss - Fix missed interrupts when working with 8 kB blocks
crypto: s5p-sss - Use common BIT macro
crypto: mxc-scc - fix unwinding in mxc_scc_crypto_register()
crypto: mxc-scc - signedness bugs in mxc_scc_ablkcipher_req_init()
crypto: talitos - fix ahash algorithms registration
crypto: ccp - Ensure all dependencies are specified
...
The PKCS#7 test key type should use the secondary keyring instead of the
built-in keyring if available as the source of trustworthy keys.
Signed-off-by: David Howells <dhowells@redhat.com>
As akcipher uses an SG interface, you must not use vmalloc memory
as input for it. This patch fixes testmgr to copy the vmalloc
test vectors to kmalloc memory before running the test.
This patch also removes a superfluous sg_virt call in do_test_rsa.
Cc: <stable@vger.kernel.org>
Reported-by: Anatoly Pugachev <matorola@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Running self-tests for a short-lived KVM VM takes 28ms on my laptop.
This commit adds a flag 'cryptomgr.notests' which allows them to be
disabled.
However if fips=1 as well, we ignore this flag as FIPS mode mandates
that the self-tests are run.
Signed-off-by: Richard W.M. Jones <rjones@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The pkcs1pad template needs CRYPTO_MANAGER so it needs
to be explicitly selected by CRYPTO_RSA.
Reported-by: Jamie Heilman <jamie@audible.transient.net>
Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The crypto hash walk code is broken when supplied with an offset
greater than or equal to PAGE_SIZE. This patch fixes it by adjusting
walk->pg and walk->offset when this happens.
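A hedged sketch of the adjustment (the parameters mirror the walk fields
named above; the surrounding walk code is assumed):

#include <linux/mm.h>

/* Hedged sketch: when the scatterlist offset spans whole pages, advance the
 * page pointer and keep only the in-page remainder as the offset. */
static void hash_walk_fixup_offset(struct page **pg, unsigned int *offset)
{
	*pg += *offset >> PAGE_SHIFT;   /* skip the whole pages */
	*offset &= ~PAGE_MASK;          /* offset within the current page */
}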
Cc: <stable@vger.kernel.org>
Reported-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
__GFP_REPEAT has rather weak semantics, and since it was introduced
around 2.6.12 it has been ignored for low-order allocations.
lzo_init() uses __GFP_REPEAT to allocate LZO1X_MEM_COMPRESS (16K). This is
an order-3 allocation request, and __GFP_REPEAT is ignored for this size
as it is for all <= PAGE_ALLOC_COSTLY requests.
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: linux-crypto@vger.kernel.org
Signed-off-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The output buffer length has to be at least as big as the key_size.
It is then updated to the actual output size by the implementation.
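Sketched with illustrative parameters (the real code operates on the
akcipher request and context fields):

#include <linux/errno.h>

/* Hedged sketch: refuse a too-small output buffer up front and report how
 * much is needed; on success the length is later set to the actual output. */
static int check_dst_len(unsigned int *dst_len, unsigned int key_size)
{
	if (*dst_len < key_size) {
		*dst_len = key_size;   /* tell the caller the required size */
		return -EOVERFLOW;
	}
	return 0;
}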
Cc: <stable@vger.kernel.org>
Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Move the point at which a key is determined to be trustworthy to
__key_link() so that we use the contents of the keyring being linked in to
to determine whether the key being linked in is trusted or not.
What is 'trusted' then becomes a matter of what's in the keyring.
Currently, the test is done when the key is parsed, but given that at that
point we can only sensibly refer to the contents of the system trusted
keyring, we can only use that as the basis for working out the
trustworthiness of a new key.
With this change, a trusted keyring is a set of keys that once the
trusted-only flag is set cannot be added to except by verification through
one of the contained keys.
Further, adding a key into a trusted keyring, whilst it might grant
trustworthiness in the context of that keyring, does not automatically
grant trustworthiness in the context of a second keyring to which it could
be secondarily linked.
To accomplish this, the authentication data associated with the key source
must now be retained. For an X.509 cert, this means the contents of the
AuthorityKeyIdentifier and the signature data.
If system keyrings are disabled then restrict_link_by_builtin_trusted()
resolves to restrict_link_reject(). The integrity digital signature code
still works correctly with this as it was previously using
KEY_FLAG_TRUSTED_ONLY, which doesn't permit anything to be added if there
is no system keyring against which trust can be determined.
Signed-off-by: David Howells <dhowells@redhat.com>
Make the system trusted keyring depend on the asymmetric key type as
there's not a lot of point having it if you can't then load asymmetric keys
onto it.
This requires ASYMMETRIC_KEY_TYPE to be made a bool, not a tristate, as
the Kconfig language otherwise doesn't correctly force ASYMMETRIC_KEY_TYPE to
'y' rather than 'm' if SYSTEM_TRUSTED_KEYRING is 'y'.
Making SYSTEM_TRUSTED_KEYRING *select* ASYMMETRIC_KEY_TYPE instead doesn't
work as the Kconfig interpreter then wrongly complains about dependency
loops.
Signed-off-by: David Howells <dhowells@redhat.com>
We should call verify_signature() rather than directly calling
public_key_verify_signature() if we have a struct key to use as we
shouldn't be poking around in the private data of the key struct as that's
subtype dependent.
Signed-off-by: David Howells <dhowells@redhat.com>
Generalise x509_request_asymmetric_key(). It doesn't really have any
dependencies on X.509 features as it uses generalised IDs and the
public_key structs that contain data extracted from X.509.
Signed-off-by: David Howells <dhowells@redhat.com>
Make the determination of the trustworthiness of a key dependent on whether
a key that can verify it is present in the supplied ring of trusted keys
rather than whether or not the verifying key has KEY_FLAG_TRUSTED set.
verify_pkcs7_signature() will return -ENOKEY if the PKCS#7 message trust
chain cannot be verified.
Signed-off-by: David Howells <dhowells@redhat.com>
Generalise system_verify_data() to provide access to internal content
through a callback. This allows all the PKCS#7 stuff to be hidden inside
this function and removed from the PE file parser and the PKCS#7 test key.
If external content is not required, NULL should be passed as data to the
function. If the callback is not required, that can be set to NULL.
The function is now called verify_pkcs7_signature() to contrast with
verify_pefile_signature() and the definitions of both have been moved into
linux/verification.h along with the key_being_used_for enum.
Signed-off-by: David Howells <dhowells@redhat.com>
There's a bug in the code determining whether a certificate is self-signed
or not: if it has neither an AKID nor an SKID then we just assume that the
cert is self-signed, which may not be true.
Fix this by checking that the raw subject name matches the raw issuer name
and that the public key algorithm for the key and signature are both the
same in addition to requiring that the AKID bits match.
Signed-off-by: David Howells <dhowells@redhat.com>
Extract the signature digest for an X.509 certificate earlier, at the end
of x509_cert_parse() rather than leaving it to the callers thereof, since it
has to be done anyway.
Further, immediately after that, check the signature on self-signed
certificates, again rather than in the callers of x509_cert_parse().
We note in the x509_certificate struct the following bits of information:
(1) Whether the signature is self-signed (even if we can't check the
signature due to missing crypto).
(2) Whether the key held in the certificate needs unsupported crypto to be
used. We may get a PKCS#7 message with X.509 certs that we can't make
use of - we just ignore them and give ENOPKG at the end if we couldn't
verify anything and at least one of these unusable certs is in the
chain of trust.
(3) Whether the signature held in the certificate needs unsupported crypto
to be checked. We can still use the key held in this certificate,
even if we can't check the signature on it - if it is held in the
system trusted keyring, for instance. We just can't add it to a ring
of trusted keys or follow it further up the chain of trust.
Making these checks earlier allows x509_check_signature() to be removed and
replaced with direct calls to public_key_verify_signature().
Signed-off-by: David Howells <dhowells@redhat.com>
Point to the public_key_signature struct from the pkcs7_signed_info struct
rather than embedding it. This makes the code consistent with the X.509
signature handling and makes it possible to have a common cleanup function.
We also save a copy of the digest in the signature without sharing the
memory with the crypto layer metadata.
Signed-off-by: David Howells <dhowells@redhat.com>
Retain the key verification data (ie. the struct public_key_signature)
including the digest and the key identifiers.
Note that this means that we need to take a separate copy of the digest in
x509_get_sig_params() rather than lumping it in with the crypto layer data.
Signed-off-by: David Howells <dhowells@redhat.com>
Add key identifier pointers to public_key_signature struct so that they can
be used to retain the identifier of the key to be used to verify the
signature in both PKCS#7 and X.509.
Signed-off-by: David Howells <dhowells@redhat.com>
Allow authentication data to be stored in an asymmetric key in the 4th
element of the key payload and provide a way for it to be destroyed.
For the public key subtype, this will be a public_key_signature struct.
Signed-off-by: David Howells <dhowells@redhat.com>
The HMAC implementation allows setting the HMAC key independently from
the hashing operation. Therefore, the key only needs to be set when a
new key is generated.
This patch increases the speed of the HMAC DRBG by at least 35% depending
on the use case.
The patch is fully CAVS tested.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The OID_sha224 case is missing a break and it falls through
to the -ENOPKG error default. Since HASH_ALGO_SHA224 seems
to be supported, this looks like an unintentional missing break.
Fixes: 07f081fb50 ("PKCS#7: Add OIDs for sha224, sha284 and sha512 hash algos and use them")
Cc: <stable@vger.kernel.org> # 4.2+
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Following the async change for algif_skcipher
this patch adds similar async read to algif_aead.
changes in v3:
- add call to aead_reset_ctx directly from aead_put_sgl instead of calling
them separately one after the other
- remove wait from aead_sock_destruct function as it is not needed
when sock_hold is used
changes in v2:
- change internal data structures from fixed size arrays, limited to
RSGL_MAX_ENTRIES, to linked list model with no artificial limitation.
- use sock_kmalloc instead of kmalloc for memory allocation
- use sock_hold instead of separate atomic ctr to wait for outstanding
request
Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Pull crypto fix from Herbert Xu:
"This fixes a bug in pkcs7_validate_trust and its users where the
output value may in fact be taken from uninitialised memory"
* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
PKCS#7: pkcs7_validate_trust(): initialize the _trusted output argument
Despite what the DocBook comment to pkcs7_validate_trust() says, the
*_trusted argument is never set to false.
pkcs7_validate_trust() only positively sets *_trusted upon encountering
a trusted PKCS#7 SignedInfo block.
This is quite unfortunate since its callers, system_verify_data() for
example, depend on pkcs7_validate_trust() clearing *_trusted on non-trust.
Indeed, UBSAN splats when attempting to load the uninitialized local
variable 'trusted' from system_verify_data() in pkcs7_validate_trust():
UBSAN: Undefined behaviour in crypto/asymmetric_keys/pkcs7_trust.c:194:14
load of value 82 is not a valid value for type '_Bool'
[...]
Call Trace:
[<ffffffff818c4d35>] dump_stack+0xbc/0x117
[<ffffffff818c4c79>] ? _atomic_dec_and_lock+0x169/0x169
[<ffffffff8194113b>] ubsan_epilogue+0xd/0x4e
[<ffffffff819419fa>] __ubsan_handle_load_invalid_value+0x111/0x158
[<ffffffff819418e9>] ? val_to_string.constprop.12+0xcf/0xcf
[<ffffffff818334a4>] ? x509_request_asymmetric_key+0x114/0x370
[<ffffffff814b83f0>] ? kfree+0x220/0x370
[<ffffffff818312c2>] ? public_key_verify_signature_2+0x32/0x50
[<ffffffff81835e04>] pkcs7_validate_trust+0x524/0x5f0
[<ffffffff813c391a>] system_verify_data+0xca/0x170
[<ffffffff813c3850>] ? top_trace_array+0x9b/0x9b
[<ffffffff81510b29>] ? __vfs_read+0x279/0x3d0
[<ffffffff8129372f>] mod_verify_sig+0x1ff/0x290
[...]
The implication is that pkcs7_validate_trust() effectively grants trust
when it really shouldn't have.
Fix this by explicitly setting *_trusted to false at the very beginning
of pkcs7_validate_trust().
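The essence of the fix, sketched (the signature follows the description
above and the body is elided; treat both as illustrative):

#include <linux/errno.h>
#include <linux/types.h>

struct pkcs7_message;
struct key;

/* Hedged sketch: clear the output argument before anything else so callers
 * never read an uninitialized value on the non-trusted path. */
static int pkcs7_validate_trust_sketch(struct pkcs7_message *pkcs7,
				       struct key *trust_keyring,
				       bool *_trusted)
{
	*_trusted = false;

	/* ... walk each SignedInfo block; set *_trusted = true only when a
	 * block verifies against trust_keyring ... */

	return -ENOKEY;   /* placeholder for the elided verification logic */
}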
Cc: <stable@vger.kernel.org>
Signed-off-by: Nicolai Stange <nicstange@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Merge second patch-bomb from Andrew Morton:
- a couple of hotfixes
- the rest of MM
- a new timer slack control in procfs
- a couple of procfs fixes
- a few misc things
- some printk tweaks
- lib/ updates, notably to radix-tree.
- add my and Nick Piggin's old userspace radix-tree test harness to
tools/testing/radix-tree/. Matthew said it was a godsend during the
radix-tree work he did.
- a few code-size improvements, switching to __always_inline where gcc
screwed up.
- partially implement character sets in sscanf
* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (118 commits)
sscanf: implement basic character sets
lib/bug.c: use common WARN helper
param: convert some "on"/"off" users to strtobool
lib: add "on"/"off" support to kstrtobool
lib: update single-char callers of strtobool()
lib: move strtobool() to kstrtobool()
include/linux/unaligned: force inlining of byteswap operations
include/uapi/linux/byteorder, swab: force inlining of some byteswap operations
include/asm-generic/atomic-long.h: force inlining of some atomic_long operations
usb: common: convert to use match_string() helper
ide: hpt366: convert to use match_string() helper
ata: hpt366: convert to use match_string() helper
power: ab8500: convert to use match_string() helper
power: charger_manager: convert to use match_string() helper
drm/edid: convert to use match_string() helper
pinctrl: convert to use match_string() helper
device property: convert to use match_string() helper
lib/string: introduce match_string() helper
radix-tree tests: add test for radix_tree_iter_next
radix-tree tests: add regression3 test
...
CMA allocation should be guaranteed to succeed by definition, but,
unfortunately, it sometimes fails. It is hard to track down the
problem, because it is related to page reference manipulation and we
don't have any facility to analyze it.
This patch adds tracepoints to track page reference manipulation.
With them, we can find the exact reason for a failure and fix the problem.
The following is an example of tracepoint output. (Note: this example is
from a stale version that prints flags as a number; the current version
prints them as a human-readable string.)
<...>-9018 [004] 92.678375: page_ref_set: pfn=0x17ac9 flags=0x0 count=1 mapcount=0 mapping=(nil) mt=4 val=1
<...>-9018 [004] 92.678378: kernel_stack:
=> get_page_from_freelist (ffffffff81176659)
=> __alloc_pages_nodemask (ffffffff81176d22)
=> alloc_pages_vma (ffffffff811bf675)
=> handle_mm_fault (ffffffff8119e693)
=> __do_page_fault (ffffffff810631ea)
=> trace_do_page_fault (ffffffff81063543)
=> do_async_page_fault (ffffffff8105c40a)
=> async_page_fault (ffffffff817581d8)
[snip]
<...>-9018 [004] 92.678379: page_ref_mod: pfn=0x17ac9 flags=0x40048 count=2 mapcount=1 mapping=0xffff880015a78dc1 mt=4 val=1
[snip]
...
...
<...>-9131 [001] 93.174468: test_pages_isolated: start_pfn=0x17800 end_pfn=0x17c00 fin_pfn=0x17ac9 ret=fail
[snip]
<...>-9018 [004] 93.174843: page_ref_mod_and_test: pfn=0x17ac9 flags=0x40068 count=0 mapcount=0 mapping=0xffff880015a78dc1 mt=4 val=-1 ret=1
=> release_pages (ffffffff8117c9e4)
=> free_pages_and_swap_cache (ffffffff811b0697)
=> tlb_flush_mmu_free (ffffffff81199616)
=> tlb_finish_mmu (ffffffff8119a62c)
=> exit_mmap (ffffffff811a53f7)
=> mmput (ffffffff81073f47)
=> do_exit (ffffffff810794e9)
=> do_group_exit (ffffffff81079def)
=> SyS_exit_group (ffffffff81079e74)
=> entry_SYSCALL_64_fastpath (ffffffff817560b6)
This output shows that the problem comes from the exit path. In the exit
path, to improve performance, pages are not freed immediately. They are
gathered and processed in batches. During this process, migration is not
possible and the CMA allocation fails. This problem is hard to find
without this page reference tracepoint facility.
Enabling this feature bloats the kernel text by 30 KB in my configuration.
text data bss dec hex filename
12127327 2243616 1507328 15878271 f2487f vmlinux_disabled
12157208 2258880 1507328 15923416 f2f8d8 vmlinux_enabled
Note that, due to a header file dependency problem between mm.h and
tracepoint.h, this feature has to open-code the static key functions for
tracepoints. Proposed by Steven Rostedt in the following link:
https://lkml.org/lkml/2015/12/9/699
[arnd@arndb.de: crypto/async_pq: use __free_page() instead of put_page()]
[iamjoonsoo.kim@lge.com: fix build failure for xtensa]
[akpm@linux-foundation.org: tweak Kconfig text, per Vlastimil]
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Acked-by: Michal Nazarewicz <mina86@mina86.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>