This updates the generic SHA-512 implementation to use the
generic shared SHA-512 glue code.
It also implements a .finup hook, crypto_sha512_finup(), and exports
it to other modules. The import() and export() functions and the
.statesize member are dropped, since the default implementation
is perfectly suitable for this module.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This updates the generic SHA-256 implementation to use the
new shared SHA-256 glue code.
It also implements a .finup hook, crypto_sha256_finup(), and exports
it to other modules. The import() and export() functions and the
.statesize member are dropped, since the default implementation
is perfectly suitable for this module.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This updates the generic SHA-1 implementation to use the generic
shared SHA-1 glue code.
It also implements a .finup hook, crypto_sha1_finup(), and exports
it to other modules. The import() and export() functions and the
.statesize member are dropped, since the default implementation
is perfectly suitable for this module.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
To reduce the number of copies of boilerplate code throughout
the tree, this patch implements generic glue for the SHA-512
algorithm. This allows a specific arch or hardware implementation
to only implement the special handling that it needs.
The users need to supply an implementation of
void (sha512_block_fn)(struct sha512_state *sst, u8 const *src, int blocks)
and pass it to the SHA-512 base functions. For easy casting between the
prototype above and existing block functions that take a 'u64 state[]'
as their first argument, the 'state' member of struct sha512_state is
moved to the base of the struct.
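For illustration only (not part of the patch text), a driver might wire its
block function into the shared base helpers along these lines, assuming
helper names of the sha512_base_* form introduced by this series:

    /* arch/hardware specific compression of 'blocks' full input blocks */
    static void my_sha512_block(struct sha512_state *sst, u8 const *src,
                                int blocks)
    {
        /* ... */
    }

    static int my_sha512_update(struct shash_desc *desc, const u8 *data,
                                unsigned int len)
    {
        return sha512_base_do_update(desc, data, len, my_sha512_block);
    }

    static int my_sha512_finup(struct shash_desc *desc, const u8 *data,
                               unsigned int len, u8 *out)
    {
        sha512_base_do_update(desc, data, len, my_sha512_block);
        sha512_base_do_finalize(desc, my_sha512_block);
        return sha512_base_finish(desc, out);
    }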
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
To reduce the number of copies of boilerplate code throughout
the tree, this patch implements generic glue for the SHA-256
algorithm. This allows a specific arch or hardware implementation
to only implement the special handling that it needs.
The users need to supply an implementation of
void (sha256_block_fn)(struct sha256_state *sst, u8 const *src, int blocks)
and pass it to the SHA-256 base functions. For easy casting between the
prototype above and existing block functions that take a 'u32 state[]'
as their first argument, the 'state' member of struct sha256_state is
moved to the base of the struct.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
To reduce the number of copies of boilerplate code throughout
the tree, this patch implements generic glue for the SHA-1
algorithm. This allows a specific arch or hardware implementation
to only implement the special handling that it needs.
The users need to supply an implementation of
void (sha1_block_fn)(struct sha1_state *sst, u8 const *src, int blocks)
and pass it to the SHA-1 base functions. For easy casting between the
prototype above and existing block functions that take a 'u32 state[]'
as their first argument, the 'state' member of struct sha1_state is
moved to the base of the struct.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch makes crypto_unregister_instance take a crypto_instance
instead of a crypto_alg. This allows us to remove a duplicate
CRYPTO_ALG_INSTANCE check in crypto_unregister_instance.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Change the RNGs to always return 0 in the success case.
This patch ensures that seqiv.c works with RNGs other than krng. seqiv
expects that any return code other than 0 is an error. Without the
patch, rfc4106(gcm(aes)) will not work when using a DRBG or an ANSI
X9.31 RNG.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Pull crypto update from Herbert Xu:
"Here is the crypto update for 3.20:
- Added 192/256-bit key support to aesni GCM.
- Added MIPS OCTEON MD5 support.
- Fixed hwrng starvation and race conditions.
- Added note that memzero_explicit is not a substitute for memset.
- Added user-space interface for crypto_rng.
- Misc fixes"
* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (71 commits)
crypto: tcrypt - do not allocate iv on stack for aead speed tests
crypto: testmgr - limit IV copy length in aead tests
crypto: tcrypt - fix buflen remainder calculation
crypto: testmgr - mark rfc4106(gcm(aes)) as fips_allowed
crypto: caam - fix resource clean-up on error path for caam_jr_init
crypto: caam - pair irq map and dispose in the same function
crypto: ccp - terminate ccp_support array with empty element
crypto: caam - remove unused local variable
crypto: caam - remove dead code
crypto: caam - don't emit ICV check failures to dmesg
hwrng: virtio - drop extra empty line
crypto: replace scatterwalk_sg_next with sg_next
crypto: atmel - Free memory in error path
crypto: doc - remove colons in comments
crypto: seqiv - Ensure that IV size is at least 8 bytes
crypto: cts - Weed out non-CBC algorithms
MAINTAINERS: add linux-crypto to hw random
crypto: cts - Remove bogus use of seqiv
crypto: qat - don't need qat_auth_state struct
crypto: algif_rng - fix sparse non static symbol warning
...
With that, all ->sendmsg() instances are converted to iov_iter primitives
and are agnostic wrt the kind of iov_iter they are working with.
So's the last remaining ->recvmsg() instance that wasn't kind-agnostic yet.
All ->sendmsg() and ->recvmsg() advance ->msg_iter by the amount actually
copied and none of them modifies the underlying iovec, etc.
Cc: linux-crypto@vger.kernel.org
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Modify crypto drivers to use the generic SG helper since
both of them are equivalent and the one from crypto is redundant.
See also:
468577abe3 reverted in
b2ab4a57b0
Signed-off-by: Cristian Stoica <cristian.stoica@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Use setsockopt on the tfm FD to provide the authentication tag size for
an AEAD cipher. This is achieved by adding a callback function which is
intended to be used by the AEAD AF_ALG implementation.
The optlen argument of the setsockopt specifies the authentication tag
size to be used with the AEAD tfm.
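From user space this looks roughly as follows (a hedged sketch; socket setup
is omitted, and the tag size of 16 is just an example):

    /* request a 16-byte authentication tag on the AEAD tfm fd */
    setsockopt(tfmfd, SOL_ALG, ALG_SET_AEAD_AUTHSIZE, NULL, 16);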
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
AEAD requires the caller to specify the following information separate
from the data stream. This information allows the AEAD interface handler
to identify the AAD, ciphertext/plaintext and the authentication tag:
* Associated authentication data of arbitrary length, and the length
of that associated data
* Length of authentication tag for encryption
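As a hedged sketch, the associated data length might be passed as ancillary
data on sendmsg() like this (buffer and msghdr setup omitted):

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);

    cmsg->cmsg_level = SOL_ALG;
    cmsg->cmsg_type = ALG_SET_AEAD_ASSOCLEN;
    cmsg->cmsg_len = CMSG_LEN(sizeof(__u32));
    *(__u32 *)CMSG_DATA(cmsg) = assoclen; /* length of the AAD */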
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Fix documentation typo for shash_alg->descsize.
Add documentation for initially uncovered member variables.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The API function calls exported by the kernel crypto API for SHASHes
to be used by consumers are documented.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The API function calls exported by the kernel crypto API for AHASHes
to be used by consumers are documented.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The hash data structures that need to be filled in by cipher developers are
documented.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The API function calls exported by the kernel crypto API for RNGs to
be used by consumers are documented.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add a macro which replaces the use of a Variable Length Array In Struct (VLAIS)
with a C99-compliant equivalent. This macro instead allocates the
appropriate amount of memory using a char array.
The new code can be compiled with both gcc and clang.
struct shash_desc contains a flexible array member 'ctx' declared with
CRYPTO_MINALIGN_ATTR, so sizeof(struct shash_desc) aligns the beginning
of the array declared after struct shash_desc with long long.
No trailing padding is required because it is not a struct type that can
be used in an array.
The CRYPTO_MINALIGN_ATTR is required so that desc is aligned with long long
as would be the case for a struct containing a member with
CRYPTO_MINALIGN_ATTR.
If you want to get to the ctx at the end of the shash_desc as before, you
can do so using shash_desc_ctx(shash).
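The resulting macro looks along these lines (a sketch; cf. SHASH_DESC_ON_STACK
in <crypto/hash.h>):

    #define SHASH_DESC_ON_STACK(shash, ctx)                             \
        char __##shash##_desc[sizeof(struct shash_desc) +               \
                crypto_shash_descsize(ctx)] CRYPTO_MINALIGN_ATTR;       \
        struct shash_desc *shash = (struct shash_desc *)__##shash##_desc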
Signed-off-by: Behan Webster <behanw@converseincode.com>
Reviewed-by: Mark Charlebois <charlebm@gmail.com>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Michał Mirosław <mirqus@gmail.com>
Pull security subsystem updates from James Morris.
Mostly ima, selinux, smack and key handling updates.
* 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security: (65 commits)
integrity: do zero padding of the key id
KEYS: output last portion of fingerprint in /proc/keys
KEYS: strip 'id:' from ca_keyid
KEYS: use swapped SKID for performing partial matching
KEYS: Restore partial ID matching functionality for asymmetric keys
X.509: If available, use the raw subjKeyId to form the key description
KEYS: handle error code encoded in pointer
selinux: normalize audit log formatting
selinux: cleanup error reporting in selinux_nlmsg_perm()
KEYS: Check hex2bin()'s return when generating an asymmetric key ID
ima: detect violations for mmaped files
ima: fix race condition on ima_rdwr_violation_check and process_measurement
ima: added ima_policy_flag variable
ima: return an error code from ima_add_boot_aggregate()
ima: provide 'ima_appraise=log' kernel option
ima: move keyring initialization to ima_init()
PKCS#7: Handle PKCS#7 messages that contain no X.509 certs
PKCS#7: Better handling of unsupported crypto
KEYS: Overhaul key identification when searching for asymmetric keys
KEYS: Implement binary asymmetric key ID handling
...
Bring back the functionality whereby an asymmetric key can be matched with a
partial match on one of its IDs.
Whilst we're at it, allow for the possibility of having an increased number of
IDs.
Reported-by: Dmitry Kasatkin <d.kasatkin@samsung.com>
Signed-off-by: Dmitry Kasatkin <d.kasatkin@samsung.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Make use of the new match string preparsing to overhaul key identification
when searching for asymmetric keys. The following changes are made:
(1) Use the previously created asymmetric_key_id struct to hold the following
key IDs derived from the X.509 certificate or PKCS#7 message:
id: serial number + issuer
skid: subjKeyId + subject
authority: authKeyId + issuer
(2) Replace the hex fingerprint attached to key->type_data[1] with an
asymmetric_key_ids struct containing the id and the skid (if present).
(3) Make the asymmetric_type match data preparse select one of two searches:
(a) An iterative search for the key ID given if prefixed with "id:". The
prefix is expected to be followed by a hex string giving the ID to
search for. The criterion key ID is checked against all key IDs
recorded on the key.
(b) A direct search if the key ID is not prefixed with "id:". This will
look for an exact match on the key description.
(4) Make x509_request_asymmetric_key() take a key ID. This is then converted
into "id:<hex>" and passed into keyring_search() where match preparsing
will turn it back into a binary ID.
(5) X.509 certificate verification then takes the authority key ID and looks
up a key that matches it to find the public key for the certificate
signature.
(6) PKCS#7 certificate verification then takes the id key ID and looks up a
key that matches it to find the public key for the signed information
block signature.
Additional changes:
(1) Multiple subjKeyId and authKeyId values on an X.509 certificate cause the
cert to be rejected with -EBADMSG.
(2) The 'fingerprint' ID is gone. This was primarily intended to convey PGP
public key fingerprints. If PGP is supported in future, this should
generate a key ID that carries the fingerprint.
(3) The ca_keyid= kernel command line option is now converted to a key ID and
used to match the authority key ID. Possibly this should only match the
actual authKeyId part and not the issuer as well.
Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
The maximum values for the additional input string and for the number of
generated blocks are larger than 1<<32. To ensure a sensible value on
32-bit systems, return SIZE_MAX there. This value is lower than the
maximum allowed values defined in SP800-90A; the standard allows lower
maximum values, but not larger ones.
SIZE_MAX - 1 is used for drbg_max_addtl to allow
drbg_healthcheck_sanity to check the enforcement of the variable
without wrapping.
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Reported-by: kbuild test robot <fengguang.wu@intel.com>
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
SP800-90A mandates several hard-coded values. The old drbg_cores allowed
the setting of these values per DRBG implementation. However, due to the
hard requirement of SP800-90A, these values are now returned globally
for each DRBG.
The ability to set such values per DRBG is therefore removed.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch introduces the multi-buffer crypto daemon which is responsible
for submitting crypto jobs in a work queue to the responsible multi-buffer
crypto algorithm. The idea of the multi-buffer algorithm is to put
data streams from multiple jobs in a wide (AVX2) register and then
take advantage of SIMD instructions to do crypto computation on several
buffers simultaneously.
The multi-buffer crypto daemon is also responsible for flushing the
remaining buffers to complete the computation if no new buffers arrive
for a while.
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Pull security subsystem updates from James Morris:
"In this release:
- PKCS#7 parser for the key management subsystem from David Howells
- appoint Kees Cook as seccomp maintainer
- bugfixes and general maintenance across the subsystem"
* 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security: (94 commits)
X.509: Need to export x509_request_asymmetric_key()
netlabel: shorter names for the NetLabel catmap funcs/structs
netlabel: fix the catmap walking functions
netlabel: fix the horribly broken catmap functions
netlabel: fix a problem when setting bits below the previously lowest bit
PKCS#7: X.509 certificate issuer and subject are mandatory fields in the ASN.1
tpm: simplify code by using %*phN specifier
tpm: Provide a generic means to override the chip returned timeouts
tpm: missing tpm_chip_put in tpm_get_random()
tpm: Properly clean sysfs entries in error path
tpm: Add missing tpm_do_selftest to ST33 I2C driver
PKCS#7: Use x509_request_asymmetric_key()
Revert "selinux: fix the default socket labeling in sock_graft()"
X.509: x509_request_asymmetric_keys() doesn't need string length arguments
PKCS#7: fix sparse non static symbol warning
KEYS: revert encrypted key change
ima: add support for measuring and appraising firmware
firmware_class: perform new LSM checks
security: introduce kernel_fw_from_file hook
PKCS#7: Missing inclusion of linux/err.h
...
Change formal parameters to not clash with global names to
eliminate many W=2 warnings.
Signed-off-by: Mark Rustad <mark.d.rustad@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
pkcs7_request_asymmetric_key() and x509_request_asymmetric_key() do the same
thing, the latter being a copy of the former created by the IMA folks, so drop
the PKCS#7 version as the X.509 location is more general.
Whilst we're at it, rename the arguments of x509_request_asymmetric_key() to
better reflect what the values being passed in are intended to match on an
X.509 cert.
Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
The current locking approach of the DRBG tries to keep the protected
code paths very minimal. It is therefore possible that two threads query
one DRBG instance at the same time. When thread A requests random
numbers, a shadow copy of the DRBG state is created upon which the
request for A is processed. After finishing the state for A's request is
merged back into the DRBG state. If now thread B requests random numbers
from the same DRBG after the request for thread A is received, but
before A's shadow state is merged back, the random numbers for B will be
identical to the ones for A. Please note that the time window is very
small for this scenario.
To prevent that there is even a theoretical chance for thread A and B
having the same DRBG state, the current time stamp is provided as
additional information string for each new request.
The addition of the time stamp as additional information string implies
that now all generate functions must be capable of processing a linked
list with additional information strings instead of a scalar.
CC: Rafael Aquini <aquini@redhat.com>
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Find the intersection between the X.509 certificate chain contained in a PKCS#7
message and a set of keys that we already know and trust.
Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Find the appropriate key in the PKCS#7 key list and verify the signature with
it. There may be several keys in there forming a chain. Any link in that
chain or the root of that chain may be in our keyrings.
Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Implement a parser for a PKCS#7 signed-data message as described in part of
RFC 2315.
Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
The DRBG-style linked list to manage input data that is fed into the
cipher invocations is replaced with the kernel linked list
implementation.
The change is transparent to users of the interfaces offered by the
DRBG. Therefore, no changes to the testmgr code are needed.
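A hedged sketch of the resulting pattern, chaining an input block via the
kernel list API:

    struct drbg_string data;
    LIST_HEAD(addtllist);

    drbg_string_fill(&data, addtl_buf, addtl_len); /* fill buf/len members */
    list_add_tail(&data.list, &addtllist);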
Reported-by: kbuild test robot <fengguang.wu@intel.com>
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The header file includes the definition of:
* DRBG data structures with
- struct drbg_state as main structure
- struct drbg_core referencing the backend ciphers
- struct drbg_state_ops callback handlers for specific code
supporting the Hash, HMAC, CTR DRBG implementations
- struct drbg_conc defining a linked list for input data
- struct drbg_test_data holding the test "entropy" data for CAVS
testing and testmgr.c
- struct drbg_gen allowing test data, additional information
string and personalization string data to be funneled through
the kernel crypto API -- the DRBG requires additional
parameters when invoking the reset and random number
generation requests than intended by the kernel crypto API
* wrapper functions for the kernel crypto API functions, using struct
drbg_gen to pass through all data needed for DRBG
* wrapper functions to kernel crypto API functions usable for testing
code to inject test_data into the DRBG as needed by CAVS testing and
testmgr.c.
* DRBG flags required for the operation of the DRBG and for selecting
the particular DRBG type and backend cipher
* getter functions for data from struct drbg_core
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Use skcipher_givcrypt_cast(crypto_dequeue_request(queue)) instead, which
does the same thing in a much cleaner way. The skcipher_givcrypt_cast()
actually uses container_of() instead of messing around with offsetof()
too.
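For reference, the cast helper is essentially a container_of() wrapper,
roughly:

    static inline struct skcipher_givcrypt_request *skcipher_givcrypt_cast(
        struct crypto_async_request *req)
    {
        return container_of(ablkcipher_request_cast(req),
                            struct skcipher_givcrypt_request, creq);
    }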
Signed-off-by: Marek Vasut <marex@denx.de>
Reported-by: Arnd Bergmann <arnd@arndb.de>
Cc: Pantelis Antoniou <panto@antoniou-consulting.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
It makes no sense for crypto_yield() to be defined in scatterwalk.h;
move it into algapi.h, as it's a function internal to the crypto API.
Signed-off-by: Marek Vasut <marex@denx.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Although the existing hash walk interface has already been used
by a number of ahash crypto drivers, it turns out that none of
them were really asynchronous. They were all essentially polling
for completion.
That's why nobody has noticed until now that the walk interface
couldn't work with a real asynchronous driver since the memory
is mapped using kmap_atomic.
As we now have a use-case for a real ahash implementation on x86,
this patch creates a minimal ahash walk interface. Basically it
just calls kmap instead of kmap_atomic and does away with the
crypto_yield call. Real ahash crypto drivers don't need to yield
since by definition they won't be hogging the CPU.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
These defines might be needed by crypto drivers.
Signed-off-by: Horia Geanta <horia.geanta@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This adds the function blkcipher_aead_walk_virt_block, which allows the caller
to use the blkcipher walk API to handle the input and output scatterlists.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In order to allow other uses of the blkcipher walk API than the blkcipher
algos themselves, this patch copies some of the transform data members to the
walk struct so the transform is only accessed at walk init time.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that scatterwalk_sg_chain sets the chain pointer bit the sg_page
call in scatterwalk_sg_next hits a BUG_ON when CONFIG_DEBUG_SG is
enabled. Use sg_chain_ptr instead of sg_page on a chain entry.
Cc: stable@vger.kernel.org
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The scatterwalk_crypto_chain function invokes the scatterwalk_sg_chain
function to chain two scatterlists, but the chain pointer indication
bit is not set. When the resulting scatterlist is used, for example,
by sg_nents to count the number of scatterlist entries, a segfault occurs
because sg_nents does not follow the chain pointer to the chained scatterlist.
Update scatterwalk_sg_chain to set the chain pointer indication bit as is
done by the sg_chain function.
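A sketch of the fixed helper described above:

    static inline void scatterwalk_sg_chain(struct scatterlist *sg1, int num,
                                            struct scatterlist *sg2)
    {
        sg_set_page(&sg1[num - 1], (struct page *)sg2, 0, 0);
        sg1[num - 1].page_link &= ~0x02; /* clear the end marker */
        sg1[num - 1].page_link |= 0x01;  /* set the chain pointer bit */
    }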
Cc: stable@vger.kernel.org
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Pull crypto update from Herbert Xu:
- Made x86 ablk_helper generic for ARM
- Phase out chainiv in favour of eseqiv (affects IPsec)
- Fixed aes-cbc IV corruption on s390
- Added constant-time crypto_memneq which replaces memcmp
- Fixed aes-ctr in omap-aes
- Added OMAP3 ROM RNG support
- Add PRNG support for MSM SoC's
- Add and use Job Ring API in caam
- Misc fixes
[ NOTE! This pull request was sent within the merge window, but Herbert
has some questionable email sending setup that makes him public enemy
#1 as far as gmail is concerned. So most of his emails seem to be
trapped by gmail as spam, resulting in me not seeing them. - Linus ]
* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (49 commits)
crypto: s390 - Fix aes-cbc IV corruption
crypto: omap-aes - Fix CTR mode counter length
crypto: omap-sham - Add missing modalias
padata: make the sequence counter an atomic_t
crypto: caam - Modify the interface layers to use JR API's
crypto: caam - Add API's to allocate/free Job Rings
crypto: caam - Add Platform driver for Job Ring
hwrng: msm - Add PRNG support for MSM SoC's
ARM: DT: msm: Add Qualcomm's PRNG driver binding document
crypto: skcipher - Use eseqiv even on UP machines
crypto: talitos - Simplify key parsing
crypto: picoxcell - Simplify and harden key parsing
crypto: ixp4xx - Simplify and harden key parsing
crypto: authencesn - Simplify key parsing
crypto: authenc - Export key parsing helper function
crypto: mv_cesa: remove deprecated IRQF_DISABLED
hwrng: OMAP3 ROM Random Number Generator support
crypto: sha256_ssse3 - also test for BMI2
crypto: mv_cesa - Remove redundant of_match_ptr
crypto: sahara - Remove redundant of_match_ptr
...
This patch makes use of the newly defined common hash algorithm info,
replacing, for example, PKEY_HASH with HASH_ALGO.
Changelog:
- Lindent fixes - Mimi
CC: David Howells <dhowells@redhat.com>
Signed-off-by: Dmitry Kasatkin <d.kasatkin@samsung.com>
Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
This patch provides a single place for information about hash algorithms,
such as hash sizes and kernel driver names, which will be used by IMA
and the public key code.
Changelog:
- Fix sparse and checkpatch warnings
- Move hash algo enums to uapi for userspace signing functions.
Signed-off-by: Dmitry Kasatkin <d.kasatkin@samsung.com>
Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
AEAD key parsing is duplicated to multiple places in the kernel. Add a
common helper function to consolidate that functionality.
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "David S. Miller" <davem@davemloft.net>
Signed-off-by: Mathias Krause <mathias.krause@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When comparing MAC hashes, AEAD authentication tags, or other hash
values in the context of authentication or integrity checking, it
is important not to leak timing information to a potential attacker,
i.e. when communication happens over a network.
Bytewise memory comparisons (such as memcmp) are usually optimized so
that they return a nonzero value as soon as a mismatch is found. E.g.,
on x86_64/i5 for 512 bytes this can be ~50 cyc for a full mismatch
and up to ~850 cyc for a full match (cold). This early-return behavior
can leak timing information as a side channel, allowing an attacker to
iteratively guess the correct result.
This patch adds a new method crypto_memneq ("memory not equal to each
other") to the crypto API that compares memory areas of the same length
in roughly "constant time" (cache misses could change the timing, but
since they don't reveal information about the content of the strings
being compared, they are effectively benign). Iow, best and worst case
behaviour take the same amount of time to complete (in contrast to
memcmp).
Note that crypto_memneq (unlike memcmp) can only be used to test for
equality or inequality, NOT for lexicographical order. This, however,
is not an issue for its use-cases within the crypto API.
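The core idea can be sketched as follows (a simplified illustration, not the
kernel's actual implementation): accumulate the XOR of every byte pair so the
running time does not depend on where a mismatch occurs.

    static unsigned long memneq_sketch(const void *a, const void *b,
                                       size_t size)
    {
        const unsigned char *pa = a, *pb = b;
        unsigned long neq = 0;

        while (size-- > 0)
            neq |= *pa++ ^ *pb++; /* OR in differences, never return early */
        return neq; /* zero iff the regions are equal */
    }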
We tried to locate all of the places in the crypto API where memcmp was
being used for authentication or integrity checking, and convert them
over to crypto_memneq.
crypto_memneq is declared noinline, placed in its own source file,
and compiled with optimizations that might increase code size disabled
("Os") because a smart compiler (or LTO) might notice that the return
value is always compared against zero/nonzero, and might then
reintroduce the same early-return optimization that we are trying to
avoid.
Using #pragma or __attribute__ optimization annotations of the code
for disabling optimization was avoided as it seems to be considered
broken or unmaintained for a long time in GCC [1]. Therefore, we work
around that by specifying the compile flag for memneq.o directly in
the Makefile. We found that this seems to be most appropriate.
As we use ("Os"), this patch also provides a loop-free "fast-path" for
frequently used 16 byte digests. Similarly to kernel library string
functions, leave an option for future even further optimized architecture
specific assembler implementations.
This was a joint work of James Yonan and Daniel Borkmann. Also thanks
for feedback from Florian Weimer on this and earlier proposals [2].
[1] http://gcc.gnu.org/ml/gcc/2012-07/msg00211.html
[2] https://lkml.org/lkml/2013/2/10/131
Signed-off-by: James Yonan <james@openvpn.net>
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Cc: Florian Weimer <fw@deneb.enyo.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Store public key algorithm ID in public_key_signature struct for reference
purposes. This allows a public_key_signature struct to be embedded in
struct x509_certificate and other places more easily.
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Josh Boyer <jwboyer@redhat.com>
Store public key algo ID in public_key struct for reference purposes. This
allows it to be removed from the x509_certificate struct and used to find a
default in public_key_verify_signature().
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Josh Boyer <jwboyer@redhat.com>
Move the public-key algorithm pointer array from x509_public_key.c to
public_key.c as it isn't X.509 specific.
Note that to make this configure correctly, the public key part must be
dependent on the RSA module rather than the other way round. This needs a
further patch to make use of the crypto module loading stuff rather than using
a fixed table.
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Josh Boyer <jwboyer@redhat.com>
Rename the arrays of public key parameters (public key algorithm names, hash
algorithm names and ID type names) so that the array name ends in "_name".
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Josh Boyer <jwboyer@redhat.com>
Create a generic version of ablk_helper so it can be reused
by other architectures.
Acked-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The crypto layer only passes nbytes to encrypt, but in the omap-aes driver
we need to know the number of SG elements to pass to the dmaengine slave
API. Add a function for this to the scatterwalk library.
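Conceptually, the helper walks the scatterlist until nbytes is consumed; a
hedged sketch (names hypothetical):

    static int count_sg_sketch(struct scatterlist *sg, int nbytes)
    {
        int n;

        for (n = 0; nbytes > 0 && sg; sg = sg_next(sg), n++)
            nbytes -= sg->length;
        return n;
    }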
Signed-off-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Other SHA512 routines may need to use the generic routine when the
FPU is not available.
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Other SHA256 routines may need to use the generic routine when the
FPU is not available.
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
CAST5 and CAST6 both use the same lookup tables, which can be moved to a
shared module, 'cast_common'.
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The VMAC implementation, as it is, does not work with blocks that
are not multiples of 128 bytes. Furthermore, this is a problem
when using the implementation on scatterlists, even
when the complete plain text is 128-byte multiple, as the pieces
that get passed to vmac_update can be pretty much any size.
I also added test cases for unaligned blocks.
Signed-off-by: Salman Qazi <sqazi@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Pull module signing support from Rusty Russell:
"module signing is the highlight, but it's an all-over David Howells frenzy..."
Hmm "Magrathea: Glacier signing key". Somebody has been reading too much HHGTTG.
* 'modules-next' of git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux: (37 commits)
X.509: Fix indefinite length element skip error handling
X.509: Convert some printk calls to pr_devel
asymmetric keys: fix printk format warning
MODSIGN: Fix 32-bit overflow in X.509 certificate validity date checking
MODSIGN: Make mrproper should remove generated files.
MODSIGN: Use utf8 strings in signer's name in autogenerated X.509 certs
MODSIGN: Use the same digest for the autogen key sig as for the module sig
MODSIGN: Sign modules during the build process
MODSIGN: Provide a script for generating a key ID from an X.509 cert
MODSIGN: Implement module signature checking
MODSIGN: Provide module signing public keys to the kernel
MODSIGN: Automatically generate module signing keys if missing
MODSIGN: Provide Kconfig options
MODSIGN: Provide gitignore and make clean rules for extra files
MODSIGN: Add FIPS policy
module: signature checking hook
X.509: Add a crypto key parser for binary (DER) X.509 certificates
MPILIB: Provide a function to read raw data into an MPI
X.509: Add an ASN.1 decoder
X.509: Add simple ASN.1 grammar compiler
...
Provide signature verification using an asymmetric-type key to indicate the
public key to be used.
The API is a single function that can be found in crypto/public_key.h:
    int verify_signature(const struct key *key,
                         const struct public_key_signature *sig)
The first argument is the appropriate key to be used and the second argument
is the parsed signature data:
    struct public_key_signature {
        u8 *digest;
        u16 digest_size;
        enum pkey_hash_algo pkey_hash_algo : 8;
        union {
            MPI mpi[2];
            struct {
                MPI s;    /* m^d mod n */
            } rsa;
            struct {
                MPI r;
                MPI s;
            } dsa;
        };
    };
This should be filled in prior to calling the function. The hash algorithm
should already have been called and the hash finalised and the output should
be in a buffer pointed to by the 'digest' member.
Any extra data to be added to the hash by the hash format (eg. PGP) should
have been added by the caller prior to finalising the hash.
It is assumed that the signature is made up of a number of MPI values. If an
algorithm becomes available for which this is not the case, the above structure
will have to change.
It is also assumed that it will have been checked that the signature algorithm
matches the key algorithm.
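A hedged usage sketch (key lookup and digest computation assumed done; the
RSA case shown; the hash enum value is an assumption):

    struct public_key_signature sig = {
        .digest         = digest_buf,
        .digest_size    = digest_len,
        .pkey_hash_algo = PKEY_HASH_SHA256,
    };

    sig.rsa.s = sig_mpi; /* the signature value as an MPI */
    err = verify_signature(key, &sig);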
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Add a subtype for supporting asymmetric public-key encryption algorithms such
as DSA (FIPS-186) and RSA (PKCS#1 / RFC1337).
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Fix "symbol 'x' was not declared. Should it be static?" sparse warnings.
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Fix "symbol 'x' was not declared. Should it be static?" sparse warnings.
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Rename cast6 module to cast6_generic to allow autoloading of optimized
implementations. Generic functions and s-boxes are exported to be able to use
them within optimized implementations.
Signed-off-by: Johannes Goetzfried <Johannes.Goetzfried@informatik.stud.uni-erlangen.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Rename cast5 module to cast5_generic to allow autoloading of optimized
implementations. Generic functions and s-boxes are exported to be able to use
them within optimized implementations.
Signed-off-by: Johannes Goetzfried <Johannes.Goetzfried@informatik.stud.uni-erlangen.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add crypto_[un]register_shashes() to allow simplifying init/exit code of shash
crypto modules that register multiple algorithms.
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
We look up algorithms with crypto_alg_mod_lookup() when instantiating via
crypto_add_alg(). However, algorithms that are wrapped by an IV generator
(e.g. aead or genicv type algorithms) need special care. The userspace
process hangs until it gets a timeout when we use crypto_alg_mod_lookup()
to lookup these algorithms. So export the lookup functions for these
algorithms and use them in crypto_add_alg().
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
We leak the crypto instance when we unregister an instance with
crypto_del_alg(). Therefore we introduce crypto_unregister_instance()
to unlink the crypto instance from the template's instances list and
to free the resources of the instance properly.
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add xts_crypt() function that can be used by cipher implementations that can
benefit from parallelized cipher operations.
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Export gf128mul table initialization routines and add lrw_crypt() function
that can be used by cipher implementations that can benefit from parallelized
cipher operations.
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Serpent SSE2 assembler implementations only provide 4-way/8-way parallel
functions and need setkey and one-block encrypt/decrypt functions.
CC: Dag Arne Osvik <osvik@ii.uib.no>
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
We add a report function pointer to struct crypto_type. This function
pointer is used from the crypto userspace configuration API to report
crypto algorithms to userspace.
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch splits up the blowfish crypto routine into a common part (key
setup) which will be used by the blowfish crypto modules (x86_64 assembly
and generic C).
Also fixes errors/warnings reported by checkpatch.
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
On Tue, Aug 16, 2011 at 03:22:34PM +1000, Stephen Rothwell wrote:
>
> After merging the final tree, today's linux-next build (powerpc
> allyesconfig) produced this warning:
>
> In file included from security/integrity/ima/../integrity.h:16:0,
> from security/integrity/ima/ima.h:27,
> from security/integrity/ima/ima_policy.c:20:
> include/crypto/sha.h:86:10: warning: 'struct shash_desc' declared inside parameter list
> include/crypto/sha.h:86:10: warning: its scope is only this definition or declaration, which is probably not what you want
>
> Introduced by commit 7c390170b4 ("crypto: sha1 - export sha1_update for
> reuse"). I guess you need to include crypto/hash.h in crypto/sha.h.
This patch fixes this by providing a declaration for struct shash_desc.
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Export the update function as crypto_sha1_update() to not have the need
to reimplement the same algorithm for each SHA-1 implementation. This
way the generic SHA-1 implementation can be used as fallback for other
implementations that fail to run under certain circumstances, like the
need for an FPU context while executing in IRQ context.
Signed-off-by: Mathias Krause <minipli@googlemail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Remove linux/mm.h inclusion from netdevice.h -- it's unused (I've checked manually).
To prevent mm.h inclusion via other channels, also extract the "enum
dma_data_direction" definition into a separate header. This tiny piece is
what glues netdevice.h to mm.h via "netdevice.h => dmaengine.h =>
dma-mapping.h => scatterlist.h => mm.h".
Removal of mm.h from scatterlist.h was tried and found not feasible
on most archs, so the link was cut earlier in the chain.
Hope people are OK with the tiny include file.
Note, that mm_types.h is still dragged in, but it is a separate story.
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch moves padlock.h from drivers/crypto into include/crypto
so that it may be used by the via-rng driver.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
A lot of crypto algorithms implement their own chaining function.
So add a generic one that can be used from all the algorithms that
need scatterlist chaining.
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch creates the backbone of the user-space interface for
the Crypto API, through a new socket family AF_ALG.
Each session corresponds to one or more connections obtained from
that socket. The number depends on the number of inputs/outputs
of that particular type of operation. For most types there will
be a single connection/file descriptor that is used for both input
and output. AEAD is one of the few that require two inputs.
Each algorithm type will provide its own implementation that plugs
into af_alg. They're keyed using a string such as "skcipher" or
"hash".
IOW this patch only contains the boring bits that are required
to hold everything together.
Thanks to Miloslav Trmac for reviewing this and contributing
fixes and improvements.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: David S. Miller <davem@davemloft.net>
Tested-by: Martin Willi <martin@strongswan.org>
The patch below updates broken web addresses in the kernel.
Signed-off-by: Justin P. Mattock <justinmattock@gmail.com>
Cc: Maciej W. Rozycki <macro@linux-mips.org>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Finn Thain <fthain@telegraphics.com.au>
Cc: Randy Dunlap <rdunlap@xenotime.net>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Dimitry Torokhov <dmitry.torokhov@gmail.com>
Cc: Mike Frysinger <vapier.adi@gmail.com>
Acked-by: Ben Pfaff <blp@cs.stanford.edu>
Acked-by: Hans J. Koch <hjk@linutronix.de>
Reviewed-by: Finn Thain <fthain@telegraphics.com.au>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
This patch adds AEAD support into the cryptd framework. Having AEAD
support in cryptd enables crypto drivers that use the AEAD
interface type (such as the patch for AEAD based RFC4106 AES-GCM
implementation using Intel New Instructions) to leverage cryptd for
asynchronous processing.
Signed-off-by: Adrian Hoban <adrian.hoban@intel.com>
Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
Signed-off-by: Gabriele Paoloni <gabriele.paoloni@intel.com>
Signed-off-by: Aidan O'Mahony <aidan.o.mahony@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
These are akin to the blkcipher_walk helpers.
The main differences in the async variant are:
1) Only physical walking is supported. We can't hold on to
kmap mappings across the async operation to support virtual
ablkcipher_walk operations anyways.
2) Bounce buffers used for async mode need to be persistent and
freed at a later point in time when the async op completes.
Therefore we maintain a list of writeback buffers and require
that the ablkcipher_walk user call the 'complete' operation
so we can copy the bounce buffers out to the real buffers and
free up the bounce buffer chunks.
These interfaces will be used by the new Niagara2 crypto driver.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds export/import support to md5. The exported type is
defined by struct md5_state.
This is modeled after the equivalent change to sha1_generic.
Signed-off-by: Max Vozeler <max@hinterhof.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds a parallel crypto template that takes a crypto
algorithm and converts it to process the crypto transforms in
parallel. For the moment only aead algorithms are supported.
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
6941c3a0 disabled compilation of the legacy digest code but didn't
actually remove it. Rectify this. Also, remove the crypto_hash_type
extern declaration from algapi.h now that the struct is gone.
Signed-off-by: Benjamin Gilbert <bgilbert@cs.cmu.edu>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
PCLMULQDQ is used to accelerate the most time-consuming part of GHASH,
carry-less multiplication. More information about PCLMULQDQ can be
found at:
http://software.intel.com/en-us/articles/carry-less-multiplication-and-its-usage-for-computing-the-gcm-mode/
Because PCLMULQDQ changes XMM state, its usage must be enclosed in
kernel_fpu_begin/end, which can be used only in process context; the
acceleration is therefore implemented as a crypto_ahash. That is, requests
in soft IRQ context will be deferred to the cryptd kernel thread.
Signed-off-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (102 commits)
crypto: sha-s390 - Fix warnings in import function
crypto: vmac - New hash algorithm for intel_txt support
crypto: api - Do not displace newly registered algorithms
crypto: ansi_cprng - Fix module initialization
crypto: xcbc - Fix alignment calculation of xcbc_tfm_ctx
crypto: fips - Depend on ansi_cprng
crypto: blkcipher - Do not use eseqiv on stream ciphers
crypto: ctr - Use chainiv on raw counter mode
Revert crypto: fips - Select CPRNG
crypto: rng - Fix typo
crypto: talitos - add support for 36 bit addressing
crypto: talitos - align locks on cache lines
crypto: talitos - simplify hmac data size calculation
crypto: mv_cesa - Add support for Orion5X crypto engine
crypto: cryptd - Add support to access underlying shash
crypto: gcm - Use GHASH digest algorithm
crypto: ghash - Add GHASH digest algorithm for GCM
crypto: authenc - Convert to ahash
crypto: api - Fix aligned ctx helper
crypto: hmac - Prehash ipad/opad
...
This patch adds VMAC (a fast MAC) support into crypto framework.
Signed-off-by: Shane Wang <shane.wang@intel.com>
Signed-off-by: Joseph Cihula <joseph.cihula@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As struct skcipher_givcrypt_request includes struct crypto_request
at a non-zero offset, testing for NULL after converting the pointer
returned by crypto_dequeue_request does not work. This can result
in IPsec crashes when the queue is depleted.
This patch fixes it by doing the pointer conversion only when the
return value is non-NULL. In particular, we create a new function
__crypto_dequeue_request that does the pointer conversion.
Reported-by: Brad Bosch <bradbosch@comcast.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
cryptd_alloc_ahash() will allocate a cryptd-ed ahash for specified
algorithm name. The newly allocated one is guaranteed to be a cryptd-ed
ahash, so the underlying shash can be obtained via cryptd_ahash_child().
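A hedged usage sketch (the "ghash" algorithm name is just an example):

    struct cryptd_ahash *cryptd_tfm = cryptd_alloc_ahash("ghash", 0, 0);
    struct crypto_shash *child;

    if (IS_ERR(cryptd_tfm))
        return PTR_ERR(cryptd_tfm);
    child = cryptd_ahash_child(cryptd_tfm); /* the underlying shash */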
Signed-off-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The aligned ctx helper was using a bogus alignment value that was
one off the correct value. Fortunately the current users do not
require anything beyond the natural alignment of the platform so
this hasn't caused a problem.
This patch fixes that and also removes the unnecessary minimum
check since if the alignment is less than the natural alignment
then the subsequent ALIGN operation should be a noop.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch replaces the 32-bit counters in sha512_generic with
64-bit counters. It also switches the bit count to the simpler
byte count.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch renames struct sha512_ctx and exports it as struct
sha512_state so that other sha512 implementations can use it
as the reference structure for exporting their state.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When an shash algorithm is exported as ahash, ahash will access
its digest size through hash_alg_common. That's why the shash
layout needs to match hash_alg_common. This wasn't the case
because the alignments weren't identical.
This patch fixes the problem.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch exports the finup operation where available and adds
a default finup operation for ahash. The operations final, finup
and digest will now also deal with unaligned result pointers by
copying the result. Finally, the export/import operations will now be
exported too.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that all ahash implementations have been converted to the new
ahash type, we can remove old_ahash_alg and its associated support.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch changes crypto4xx to use the new style ahash type.
In particular, we now use ahash_alg to define ahash algorithms
instead of crypto_alg.
This is achieved by introducing a union that encapsulates the
new type and the existing crypto_alg structure. They're told
apart through a u32 field containing the type value.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch changes cryptd to use the template->create function
instead of alloc in anticipation for the switch to new style
ahash algorithms.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds the helpers crypto_drop_ahash and crypto_drop_shash
so that these spawns can be dropped without ugly casts.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch converts crypto_ahash to the new style. The old ahash
algorithm type is retained until the existing ahash implementations
are also converted. All ahash users will automatically get the
new crypto_ahash type.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As the extsize and init_tfm functions belong to the frontend, the
frontend argument is superfluous.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds the helper crypto_ahash_set_reqsize so that
implementations do not directly access the crypto_ahash structure.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch exports the async functions so that they can be reused
by cryptd when it switches over to using shash.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch changes descsize to a run-time attribute so that
implementations can change it in their init functions.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch changes the kfree call to kzfree for async requests.
As the request may contain sensitive data it needs to be zeroed
before it can be reallocated by others.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds export/import support to sha256_generic. The exported
type is defined by struct sha256_state, which is basically the entire
descriptor state of sha256_generic.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds export/import support to sha1_generic. The exported
type is defined by struct sha1_state, which is basically the entire
descriptor state of sha1_generic.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch replaces the full descriptor export with an export of
the partial hash state. This allows the use of a consistent export
format across all implementations of a given algorithm.
This is useful because a number of cases require the use of the
partial hash state, e.g., PadLock can use the SHA1 hash state
to get around the fact that it can only hash contiguous data
chunks.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds the helper shash_instance_ctx which is the shash
analogue of crypto_instance_ctx.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds __crypto_shash_cast which turns a crypto_tfm
into crypto_shash. It's analogous to the other __crypto_*_cast
functions.
It hasn't been needed until now since no existing shash algorithms
have had an init function.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds crypto_shash_ctx_aligned which will be needed
by hmac after its conversion to shash.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds shash_register_instance so that shash instances
can be registered without bypassing the shash checks applied to
normal algorithms.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds the helper shash_attr_alg2 which locates a shash
algorithm based on the information in the given attribute.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds the helper crypto_attr_alg2 which is similar to
crypto_attr_alg but takes an extra frontend argument. This is
intended to be used by new style algorithm types such as shash.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds the functions needed to create and use shash
spawns, i.e., to use shash algorithms in a template.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch modifies the spawn infrastructure to support new style
algorithms like shash. In particular, this means storing the
frontend type in the spawn and using crypto_create_tfm to allocate
the tfm.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds shash_instance and the associated alloc/free
functions. This is meant to be an instance with a shash
algorithm under it. Note that the instance itself doesn't have
to be shash.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds a new argument to crypto_alloc_instance which
sets aside some space before the instance for use by algorithms
such as shash that place type-specific data before crypto_alg.
For compatibility the function has been renamed so that existing
users aren't affected.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch introduces the template->create function intended
to replace the existing alloc function. The intention is for
create to handle the registration directly, whereas currently
the caller of alloc has to handle the registration.
This allows type-specific code to be run prior to registration.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The current "comp" crypto interface supports one-shot (de)compression only,
i.e. the whole data buffer to be (de)compressed must be passed at once, and
the whole (de)compressed data buffer will be received at once.
In several use-cases (e.g. compressed file systems that store files in big
compressed blocks), this workflow is not suitable.
Furthermore, the "comp" type doesn't provide for the configuration of
(de)compression parameters, and always allocates workspace memory for both
compression and decompression, which may waste memory.
To solve this, add a "pcomp" partial (de)compression interface that provides
the following operations:
- crypto_compress_{init,update,final}() for compression,
- crypto_decompress_{init,update,final}() for decompression,
- crypto_{,de}compress_setup(), to configure (de)compression parameters
(incl. allocating workspace memory).
The (de)compression methods take a struct comp_request, which was modeled
after the z_stream object in zlib, and contains buffer pointer and length
pairs for input and output.
The setup methods take an opaque parameter pointer and length pair. Parameters
are supposed to be encoded using netlink attributes, whose meanings depend on
the actual (name of the) (de)compression algorithm.
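A hedged usage sketch of the flow described above (field names of struct
comp_request modeled on z_stream; error handling elided):

    struct comp_request req = {
        .next_in   = src,
        .avail_in  = src_len,
        .next_out  = dst,
        .avail_out = dst_len,
    };

    crypto_compress_setup(tfm, params, paramsize); /* netlink-encoded params */
    crypto_compress_init(tfm);
    crypto_compress_update(tfm, &req);
    crypto_compress_final(tfm, &req);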
Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Use dedicated workqueue for crypto subsystem
A dedicated workqueue named kcrypto_wq is created for use by the crypto
subsystem. The system-shared keventd_wq is not suitable for
encryption/decryption because of a potential starvation problem.
Signed-off-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This function is needed by algorithms that don't know their own
block size, e.g., in s390 where the code is common between multiple
versions of SHA.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
cryptd_alloc_ablkcipher() will allocate a cryptd-ed ablkcipher for
specified algorithm name. The newly allocated one is guaranteed to be a
cryptd-ed ablkcipher, so the underlying blkcipher can be obtained via
cryptd_ablkcipher_child().
Signed-off-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The Intel AES-NI AES acceleration instructions need key_enc and key_dec
in struct crypto_aes_ctx to be 16-byte aligned. To make this easier, move
key_length so that it is the last member of the struct.
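The resulting layout would look roughly like this (a sketch, not the
verbatim header):

    struct crypto_aes_ctx {
        u32 key_enc[AES_MAX_KEYLENGTH_U32]; /* 16-byte aligned */
        u32 key_dec[AES_MAX_KEYLENGTH_U32]; /* 16-byte aligned */
        u32 key_length;                     /* now the last member */
    };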
Signed-off-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
We were freeing an offset into the slab object instead of the
start. This patch fixes it by calling crypto_destroy_tfm which
allows the correct address to be given.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The tables used by the various AES algorithms are currently
computed at run-time. This has created an init ordering problem
because some AES algorithms may be registered before the tables
have been initialised.
This patch gets around this whole thing by precomputing the tables.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch allows shash algorithms to be used through the old hash
interface. This is a transitional measure so we can convert the
underlying algorithms to shash before converting the users across.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
It is often useful to save the partial state of a hash function
so that it can be used as a base for two or more computations.
The most prominent example is HMAC where all hashes start from
a base determined by the key. Having an import/export interface
means that we only have to compute that base once rather than
for each message.
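A sketch of the intended usage, in terms of the shash flavour of the
interface (error handling omitted; the buffers and their sizes are the
caller's responsibility):

    /* Compute the keyed base state once... */
    crypto_shash_init(desc);
    crypto_shash_update(desc, key_block, block_size);
    crypto_shash_export(desc, saved_state);

    /* ...then start each message from the saved state: */
    crypto_shash_import(desc, saved_state);
    crypto_shash_update(desc, msg, msg_len);
    crypto_shash_final(desc, digest);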
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The shash interface replaces the current synchronous hash interface.
It improves over hash in two ways. Firstly shash is reentrant,
meaning that the same tfm may be used by two threads simultaneously
as all hashing state is stored in a local descriptor.
The other enhancement is that shash no longer takes scatter list
entries. This is because shash is specifically designed for
synchronous algorithms and as such scatter lists are unnecessary.
All existing hash users will be converted to shash once the
algorithms have been completely converted.
There is also a new finup function that combines update with final.
This will be extended to ahash once the algorithm conversion is
done.
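A minimal sketch of driving the new interface, including finup (the
function name demo_shash_digest is illustrative):

    static int demo_shash_digest(struct crypto_shash *tfm, const u8 *data,
                                 unsigned int len, u8 *out)
    {
        struct shash_desc *desc;
        int err;

        desc = kmalloc(sizeof(*desc) + crypto_shash_descsize(tfm),
                       GFP_KERNEL);
        if (!desc)
            return -ENOMEM;

        desc->tfm = tfm;  /* all hashing state lives in the descriptor */

        err = crypto_shash_init(desc);
        if (!err)
            /* finup: the last update and final in one call */
            err = crypto_shash_finup(desc, data, len, out);

        kfree(desc);
        return err;
    }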
This is also the first time that an algorithm type has its own
registration function. Existing algorithm types will be converted
to this way in due course.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch reintroduces a completely revamped crypto_alloc_tfm.
The biggest change is that we now take two crypto_type objects
when allocating a tfm, a frontend and a backend. In fact this
simply formalises what we've been doing behind the API's back.
For example, as it stands crypto_alloc_ahash may use an
actual ahash algorithm or a crypto_hash algorithm. Putting
this in the API allows us to do this much more cleanly.
The existing types will be converted across gradually.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The type exit function needs to undo any allocations done by the type
init function. However, the type init function may differ depending
on the upper-level type of the transform (e.g., a crypto_blkcipher
instantiated as a crypto_ablkcipher).
So we need to move the exit function out of the lower-level
structure and into crypto_tfm itself.
As it stands this is a no-op since nobody uses exit functions at
all. However, all cases where a lower-level type is instantiated
as a different upper-level type (such as blkcipher as ablkcipher)
will be converted such that they allocate the underlying transform
and use that instead of casting (e.g., crypto_ablkcipher casted
into crypto_blkcipher). That will need to use a different exit
function depending on the upper-level type.
This patch also allows the type init/exit functions to call (or not)
cra_init/cra_exit instead of always calling them from the top level.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds a random number generator interface as well as a
cryptographic pseudo-random number generator based on AES. It is
meant to be used in cases where a deterministic CPRNG is required.
One of the first applications will be as an input in the IPsec IV
generation process.
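Usage is expected to look roughly like this sketch, assuming a default
generator registered under the name "stdrng":

    struct crypto_rng *rng;
    u8 seed[32];   /* length must match what the algorithm expects */
    u8 iv[16];
    int err;

    rng = crypto_alloc_rng("stdrng", 0, 0);
    if (IS_ERR(rng))
        return PTR_ERR(rng);

    get_random_bytes(seed, sizeof(seed));
    err = crypto_rng_reset(rng, seed, sizeof(seed));   /* (re)seed */
    if (!err)
        err = crypto_rng_get_bytes(rng, iv, sizeof(iv));

    crypto_free_rng(rng);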
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch moves the default IV generators into their own modules
in order to break a dependency loop between cryptomgr, rng, and
blkcipher.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
All new crypto interfaces should go into individual files as much
as possible in order to ensure that crypto.h does not collapse under
its own weight.
This patch moves the public and internal ahash code into crypto/hash.h
and crypto/internal/hash.h respectively.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds the walking helpers for hash algorithms akin to
those of block ciphers. This is a necessary step before we can
reimplement existing hash algorithms using the new ahash interface.
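The intended walking idiom, sketched below. Here process_chunk() is a
stand-in for whatever the algorithm does with each linear chunk; its
return value (0 on success) is fed back into crypto_hash_walk_done(),
which in turn returns the size of the next chunk, or 0 once the walk
is finished:

    struct crypto_hash_walk walk;
    int nbytes;

    for (nbytes = crypto_hash_walk_first(req, &walk); nbytes > 0;
         nbytes = crypto_hash_walk_done(&walk, nbytes))
        nbytes = process_chunk(walk.data, nbytes);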
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When I backed out of using the generic sg chaining (as it isn't currently
portable) and introduced scatterwalk_sg_chain/scatterwalk_sg_next I left
out the sg_is_last check in the latter. This causes it to potentially
dereference beyond the end of the sg array.
As most uses of scatterwalk_sg_next are bound by an overall length, this
only affected the chaining code in authenc and eseqiv. Thanks to Patrick
McHardy for identifying this problem.
This patch also clears the "last" bit on the head of the chained list as
it's no longer last. This also went missing in scatterwalk_sg_chain and
is present in sg_chain.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The key expansion routine could be made a little more generic, given a
kernel-doc entry, and then exported.
Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Tested-by: Stefan Hellermann <stefan@the2masters.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The previous patch to move chainiv and eseqiv into blkcipher created
a section mismatch for the chainiv exit function which was also called
from __init. This patch removes the __exit marking on it.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
For compatibility with dm-crypt initramfs setups it is useful to merge
chainiv/seqiv into the crypto_blkcipher module. Since they're required
by most algorithms anyway this is an acceptable trade-off.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As Andrew Morton correctly points out, we need to explicitly include
sched.h as we use the function cond_resched in crypto/scatterwalk.h.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch makes chainiv avoid spinning by postponing requests on lock
contention if the user allows the use of asynchronous algorithms. If
a synchronous algorithm is requested then we behave as before.
This should improve IPsec performance on SMP when two CPUs attempt to
transmit over the same SA. Currently one of them will spin doing nothing
waiting for the other CPU to finish its encryption. This patch makes it
postpone the request and get on with other work.
If only one CPU is transmitting for a given SA, then we will process
the request synchronously as before.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds a null blkcipher algorithm called ecb(cipher_null) for
backwards compatibility. Previously the null algorithm when used by
IPsec copied the data byte by byte. This new algorithm optimises that
to a straight memcpy which lets us better measure inherent overheads in
our IPsec code.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch makes crypto_alloc_aead always return algorithms that are
capable of generating their own IVs through givencrypt and givdecrypt.
All existing AEAD algorithms already do. New ones must either supply
their own or specify a generic IV generator with the geniv field.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch creates the infrastructure to help the construction of IV
generator templates that wrap around AEAD algorithms by adding an IV
generator to them. This is useful for AEAD algorithms with no built-in
IV generator or to replace their built-in generator.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch finally makes the givencrypt/givdecrypt operations available
to users by adding crypto_aead_givencrypt and crypto_aead_givdecrypt.
A suite of helpers to allocate and fill in the request is also available.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds the underlying givcrypt operations for aead and associated
support elements. The rationale is identical to that of the skcipher
givcrypt operations, i.e., sometimes only the algorithm knows how the
IV should be generated.
A new request type aead_givcrypt_request is added which contains an
embedded aead_request structure with two new elements to support this
operation. The new elements are seq and giv. The seq field should
contain a strictly increasing 64-bit integer which may be used by
certain IV generators as an input value. The giv field will be used
to store the generated IV. It does not need to obey the alignment
requirements of the algorithm because it's not used during the operation.
The existing iv field must still be available as it will be used to store
intermediate IVs and the output IV if chaining is desired.
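Roughly, the new request type looks like:

    struct aead_givcrypt_request {
        u64 seq;                   /* input: sequence number */
        u8 *giv;                   /* output: the generated IV */
        struct aead_request areq;  /* embedded ordinary request */
    };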
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch finally makes the givencrypt/givdecrypt operations available
to users by adding crypto_skcipher_givencrypt and crypto_skcipher_givdecrypt.
A suite of helpers to allocate and fill in the request is also available.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that gcm and authenc have been converted to crypto_spawn_skcipher,
this patch removes the obsolete crypto_spawn_ablkcipher function.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch makes crypto_alloc_ablkcipher/crypto_grab_skcipher always
return algorithms that are capable of generating their own IVs through
givencrypt and givdecrypt. Each algorithm may specify its default IV
generator through the geniv field.
For algorithms that do not set the geniv field, the blkcipher layer will
pick a default. Currently it's chainiv for synchronous algorithms and
eseqiv for asynchronous algorithms. Note that if these wrappers do not
work on an algorithm then that algorithm must specify its own geniv or
it can't be used at all.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds the helper skcipher_givcrypt_complete which should be
called when an ablkcipher algorithm has completed a givcrypt request.
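The helper is essentially a one-liner along these lines (a sketch):

    static inline void skcipher_givcrypt_complete(
        struct skcipher_givcrypt_request *req, int err)
    {
        /* Complete the embedded ablkcipher request. */
        req->creq.base.complete(&req->creq.base, err);
    }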
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch creates the infrastructure to help the construction of givcipher
templates that wrap around existing blkcipher/ablkcipher algorithms by adding
an IV generator to them.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Different block cipher modes have different requirements for initialisation
vectors. For example, CBC can use a simple randomly generated IV while
modes such as CTR must use an IV generation mechanism that gives a stronger
guarantee on the lack of collisions. Furthermore, disk encryption modes
have their own IV generation algorithms.
Up until now IV generation has been left to the users of the symmetric
key cipher API. This is inconvenient as the number of block cipher modes
increase because the user needs to be aware of which mode is supposed to
be paired with which IV generation algorithm.
Therefore it makes sense to integrate the IV generation into the crypto
API. This patch takes the first step in that direction by creating two
new ablkcipher operations, givencrypt and givdecrypt, which generate an
IV before performing the actual encryption or decryption.
The operations are currently not exposed to the user. That will be done
once the underlying functionality has actually been implemented.
It also creates the underlying givcipher type. Algorithms that directly
generate IVs would use it instead of ablkcipher. All other algorithms
(including all existing ones) would generate a givcipher algorithm upon
registration. This givcipher algorithm will be constructed from the geniv
string that's stored in every algorithm. That string will locate a template
which is instantiated by the blkcipher/ablkcipher algorithm in question to
give a givcipher algorithm.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Note: From now on the collective of ablkcipher/blkcipher/givcipher will
be known as skcipher, i.e., symmetric key cipher. The name blkcipher has
always been a bit of a misnomer since it supports stream ciphers too.
This patch adds the function crypto_grab_skcipher as a new way of getting
an ablkcipher spawn. The problem is that previously we did this in two
steps, first getting the algorithm and then calling crypto_init_spawn.
This meant that each spawn user had to be aware of what type and mask to
use for these two steps. This is difficult and also presents a problem
when the type/mask changes as they're about to be for IV generators.
The new interface does both steps together just like crypto_alloc_ablkcipher.
As a side-effect this also allows us to be stronger on type enforcement
for spawns. For now this is only done for ablkcipher but it's trivial
to extend for other types.
This patch also moves the type/mask logic for skcipher into the helpers
crypto_skcipher_type and crypto_skcipher_mask.
Finally this patch introduces the function crypto_requires_sync to determine
whether the user is specifically requesting a sync algorithm.
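Inside a template's constructor the grab would then look roughly like
this sketch (spawn, name and algt as in typical template code):

    err = crypto_grab_skcipher(spawn, name, 0,
                               crypto_requires_sync(algt->type,
                                                    algt->mask));
    if (err)
        return ERR_PTR(err);

    alg = crypto_skcipher_spawn_alg(spawn);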
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As discussed previously, this patch moves the basic CTR functionality
into a chainable algorithm called ctr. The IPsec-specific variant of
it is now placed on top with the name rfc3686.
So ctr(aes) gives a chainable cipher with IV size 16 while the IPsec
variant will be called rfc3686(ctr(aes)). This patch also adjusts
gcm accordingly.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds a new helper crypto_attr_alg_name which is basically the
first half of crypto_attr_alg. That is, it returns an algorithm name
parameter as a string without looking it up. The caller can then look it
up immediately or defer it until later.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Unfortunately the generic chaining hasn't been ported to all architectures
yet, and notably not s390. So this patch restores the chaining that we've
been using previously, which does work everywhere.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The scatterwalk infrastructure is used by algorithms so it needs to
move out of crypto for future users that may live in drivers/crypto
or asm/*/crypto.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Having enckeylen as a template parameter makes it a pain for hardware
devices that implement ciphers with many key sizes since each one would
have to be registered separately.
Since the authenc algorithm is mainly used for legacy purposes where its
key is going to be constructed out of two separate keys, we can in fact
embed this value into the key itself.
This patch does this by prepending an rtnetlink header to the key that
contains the encryption key length.
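Roughly, the key blob then becomes (the parameter struct name here is
illustrative):

    /*
     *   struct rtattr             (header identifying the parameter)
     *   struct crypto_authenc_key_param
     *   authentication key bytes
     *   encryption key bytes      (the last enckeylen bytes)
     */
    struct crypto_authenc_key_param {
        __be32 enckeylen;
    };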
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
With the addition of more stream ciphers we need to curb the proliferation
of ad-hoc xor functions. This patch creates a generic pair of functions,
crypto_inc and crypto_xor, which do big-endian increment and exclusive or,
respectively.
For optimum performance, they both use u32 operations, so alignment must be
that of u32 even though the arguments are of type u8 *.
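The signatures, as sketched:

    void crypto_inc(u8 *a, unsigned int size);   /* a = a + 1, big endian */
    void crypto_xor(u8 *dst, const u8 *src, unsigned int size); /* dst ^= src */

so that, for example, a CTR implementation can simply do
crypto_inc(ctrblk, bsize) to advance its counter block.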
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Up until now ablkcipher algorithms have been identified as
type BLKCIPHER with the ASYNC bit set. This is suboptimal because
ablkcipher refers to two things. On the one hand it refers to the
top-level ablkcipher interface with requests. On the other hand it
refers to an algorithm type underneath.
As it is you cannot request a synchronous block cipher algorithm
with the ablkcipher interface on top. This is a problem because
we want to be able to eventually phase out the blkcipher top-level
interface.
This patch fixes this by making ABLKCIPHER its own type, just as
we have distinct types for HASH and DIGEST. The type is associated
with the algorithm implementation only.
Which top-level interface is used for synchronous block ciphers is
then determined by the mask that's used. If it's a specific mask
then the old blkcipher interface is given, otherwise we go with the
new ablkcipher interface.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Resubmitting this patch which extends sha256_generic.c to support SHA-224 as
described in FIPS 180-2 and RFC 3874. HMAC-SHA-224 as described in RFC4231
is then supported through the hmac interface.
Patch includes test vectors for SHA-224 and HMAC-SHA-224.
SHA-224 should be chosen as a hash algorithm when 112 bits of security
strength are required.
Patch generated against the 2.6.24-rc1 kernel and tested against
2.6.24-rc1-git14, which includes the fix for the scatter-gather
implementation of HMAC.
Signed-off-by: Jonathan Lynch <jonathan.lynch@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch exports four tables and the set_key() routine. These resources
can be shared by other AES implementations (aes-x86_64 for instance).
The decryption key has been turned around (deckey[0] is the first piece
of the key instead of deckey[keylen+20]). The encrypt/decrypt functions
now look identical (except that they use different tables and keys).
Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
These three defines are used by all AES-related hardware.
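Presumably these are the three in question (the values are fixed by the
AES specification):

    #define AES_MIN_KEY_SIZE  16
    #define AES_MAX_KEY_SIZE  32
    #define AES_BLOCK_SIZE    16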
Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch creates include/crypto/des.h for common macros shared between
DES implementations.
Signed-off-by: Evgeniy Polyakov <johnpol@2ka.mipt.ru>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
There are currently several SHA implementations that all define their own
initialization vectors and size values. Since these values are identical,
move them to a header file under include/crypto.
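A representative excerpt of what the shared header would carry (the
values are fixed by the SHA specifications):

    #define SHA1_DIGEST_SIZE    20
    #define SHA1_BLOCK_SIZE     64
    #define SHA256_DIGEST_SIZE  32
    #define SHA256_BLOCK_SIZE   64

    #define SHA1_H0  0x67452301UL
    #define SHA1_H1  0xefcdab89UL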
Signed-off-by: Jan Glauber <jang@de.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds the helper blkcipher_walk_virt_block which is similar to
blkcipher_walk_virt but uses a supplied block size instead of the block
size of the block cipher. This is useful for CTR where the block size is
1 but we still want to walk by the block size of the underlying cipher.
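A sketch of the intended CTR-style loop (details such as the partial
final block are elided):

    struct blkcipher_walk walk;
    int err;

    blkcipher_walk_init(&walk, dst, src, nbytes);
    /* Walk in multiples of the underlying cipher's block size: */
    err = blkcipher_walk_virt_block(desc, &walk, bsize);

    while (walk.nbytes >= bsize) {
        /* generate keystream and XOR over walk.src.virt.addr here */
        err = blkcipher_walk_done(desc, &walk, walk.nbytes % bsize);
    }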
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
XTS is currently considered by the IEEE 1619 workgroup to be the successor
of the LRW mode. LRW was discarded because it is not secure if the
encryption key itself is encrypted with LRW.
XTS does not have this problem. The implementation is pretty straightforward:
a new function was added to gf128mul to handle GF(128) elements in ble format.
Four test vectors from the specification
http://grouper.ieee.org/groups/1619/email/pdf00086.pdf
were added, and they verify on my system.
Signed-off-by: Rik Snel <rsnel@cube.dyndns.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds the authenc algorithm which constructs an AEAD algorithm
from an asynchronous block cipher and a hash. The construction is done
by concatenating the encrypted result from the cipher with the output
from the hash, as is used by the IPsec ESP protocol.
The authenc algorithm exists as a template with four parameters:
authenc(auth, authsize, enc, enckeylen).
The authentication algorithm, the authentication size (i.e., truncating
the output of the authentication algorithm), the encryption algorithm,
and the encryption key length. Both the size field and the key length
field are in bytes. For example, AES-128 with SHA1-HMAC would be
represented by
authenc(hmac(sha1), 12, cbc(aes), 16)
The key for the authenc algorithm is the concatenation of the keys for
the authentication algorithm with the encryption algorithm. For the
above example, if a key of length 36 bytes is given, then hmac(sha1)
would receive the first 20 bytes while the last 16 would be given to
cbc(aes).
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Since not everyone needs a queue pointer, and those who need it can
always get it from the context anyway, the queue pointer in the
common alg object is redundant.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds crypto_aead which is the interface for AEAD
(Authenticated Encryption with Associated Data) algorithms.
AEAD algorithms perform authentication and encryption in one
step. Traditionally users (such as IPsec) would use two
different crypto algorithms to perform these. With AEAD
this comes down to one algorithm and one operation.
Of course if traditional algorithms were used we'd still
be doing two operations underneath. However, real AEAD
algorithms may allow the underlying operations to be
optimised as well.
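Request-based usage then takes roughly this shape (a sketch under the
interface as described here; my_done and my_priv are illustrative):

    struct aead_request *req;
    int err;

    req = aead_request_alloc(tfm, GFP_KERNEL);
    if (!req)
        return -ENOMEM;

    aead_request_set_callback(req, 0, my_done, my_priv);
    aead_request_set_assoc(req, assoc_sg, assoclen); /* authenticated only */
    aead_request_set_crypt(req, src_sg, dst_sg, cryptlen, iv);

    err = crypto_aead_encrypt(req);   /* may return -EINPROGRESS */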
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This function does the same thing for ablkcipher that
crypto_blkcipher_ctx_aligned() does for blkcipher: it returns an aligned
address of the private ctx.
Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds the cryptd module which is a template that takes a
synchronous software crypto algorithm and converts it to an asynchronous
one by executing it in a kernel thread.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As it is whenever a new algorithm with the same name is registered
users of the old algorithm will be removed so that they can take
advantage of the new algorithm. This presents a problem when the
new algorithm is not equivalent to the old algorithm. In particular,
the new algorithm might only function on top of the existing one.
Hence we should not remove users unless they can make use of the
new algorithm.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds the mid-level interface for asynchronous block ciphers.
It also includes a generic queueing mechanism that can be used by other
asynchronous crypto operations in future.
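The queueing mechanism, sketched (do_one_request() is a stand-in for
the actual processing):

    static struct crypto_queue queue;
    struct crypto_async_request *async_req;
    int err;

    crypto_init_queue(&queue, 100);   /* allow up to 100 queued requests */

    /* producer side: */
    err = crypto_enqueue_request(&queue, &req->base);

    /* consumer side, e.g. from a worker: */
    async_req = crypto_dequeue_request(&queue);
    if (async_req)
        async_req->complete(async_req, do_one_request(async_req));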
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch passes the type/mask along when constructing instances of
templates. This is in preparation for templates that may support
multiple types of instances depending on what is requested. For example,
the planned software async crypto driver will use this construct.
For the moment this allows us to check whether the instance constructed
is of the correct type and avoid returning success if the type does not
match.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds support for multiple frontend types for each backend
algorithm by passing the type and mask through to the backend type
init function.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
A lot of cipher modes need multiplications in GF(2^128): LRW, ABL, GCM...
I use functions from this library in my LRW implementation and I will
also use them in my ABL (Arbitrary Block Length, an unencumbered (correct
me if I am wrong) wide block cipher mode).
Elements of GF(2^128) must be presented as u128 *, which encourages automatic
and proper alignment.
The library contains support for two different representations of GF(2^128);
see the comment in gf128mul.h. There are different levels of optimization
(memory/speed tradeoff).
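For orientation, the two in-place multiplication entry points look
roughly like:

    /* a = a * b in GF(2^128), in the given representation */
    void gf128mul_lle(be128 *a, const be128 *b);
    void gf128mul_bbe(be128 *a, const be128 *b);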
The code is based on work by Dr Brian Gladman. Notable changes:
- deletion of two optimization modes
- change from u32 to u64 for faster handling on 64bit machines
- support for 'bbe' representation in addition to the already implemented
'lle' representation.
- move 'inline void' functions from header to 'static void' in the
source file
- update to use the linux coding style conventions
The original can be found at:
http://fp.gladman.plus.com/AES/modes.vc8.19-06-06.zip
The copyright (and GPL statement) of the original author is preserved.
Signed-off-by: Rik Snel <rsnel@cube.dyndns.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
128 bits is a common block size in Linux kernel cryptography, so it helps to
centralize some common operations.
The code, while mostly trivial, is based on a header file mode_hdr.h in
http://fp.gladman.plus.com/AES/modes.vc8.19-06-06.zip
The copyright (and GPL statement) of the original author,
Dr Brian Gladman, is preserved.
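Roughly, the header provides the 128-bit types and trivial helpers such
as:

    typedef struct { u64 a, b; } u128;
    typedef struct { __be64 a, b; } be128;

    static inline void u128_xor(u128 *r, const u128 *p, const u128 *q)
    {
        r->a = p->a ^ q->a;
        r->b = p->b ^ q->b;
    }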
Signed-off-by: Rik Snel <rsnel@cube.dyndns.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The existing digest user interface is inadequate for supporting asynchronous
operations. For one, it doesn't return a value to indicate success or
failure, nor does it take a per-operation descriptor, which is essential
for issuing requests while other requests are still outstanding.
This patch is the first in a series of steps to remodel the interface
for asynchronous operations.
For the ease of transition the new interface will be known as "hash"
while the old one will remain as "digest".
This patch also changes sg_next to allow chaining.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds two block cipher algorithms, CBC and ECB. These
are implemented as templates on top of existing single-block cipher
algorithms. They invoke the single-block cipher through the new
encrypt_one/decrypt_one interface.
This also optimises the in-place encryption and decryption to remove
the cost of an IV copy each round.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds a new type of block cipher. Unlike current cipher
algorithms which operate on a single block at a time, block ciphers
operate on an arbitrarily long linear area of data. As it is block-based,
it will skip any data remaining at the end which cannot form a block.
The block cipher has one major difference when compared to the existing
block cipher implementation. The sg walking is now performed by the
algorithm rather than the cipher mid-layer. This is needed for drivers
that directly support sg lists. It also improves performance for all
algorithms as it reduces the total number of indirect calls by one.
In future the existing cipher algorithm will be converted to only have
a single-block interface. This will be done after all existing users
have switched over to the new block cipher type.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch prepares the scatterwalk code for use by the new block cipher
type.
Firstly it halves the size of scatter_walk on 32-bit platforms. This
is important as we allocate at least two of these objects on the stack
for each block cipher operation.
It also exports the symbols since the block cipher code can be built as
a module.
Finally there is a hack in scatterwalk_unmap that relies on progress
being made. Unfortunately, for hardware crypto we can't guarantee
progress to be made since the hardware can fail.
So this also gets rid of the hack by not advancing the address returned
by scatterwalk_map.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds two new operations for the simple cipher that encrypts or
decrypts a single block at a time. This will be the main interface after
the existing block operations have moved over to the new block ciphers.
It also adds the crypto_cipher type which is currently only used on the
new operations but will be extended to setkey as well once existing users
have been converted to use block ciphers where applicable.
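Callers would then encrypt or decrypt exactly one block at a time,
along these lines (a sketch using today's helper names):

    struct crypto_cipher *tfm;
    u8 in[16], out[16];   /* one block of a 16-byte block cipher */

    tfm = crypto_alloc_cipher("aes", 0, 0);
    if (IS_ERR(tfm))
        return PTR_ERR(tfm);

    crypto_cipher_setkey(tfm, key, keylen);
    crypto_cipher_encrypt_one(tfm, out, in);
    crypto_cipher_decrypt_one(tfm, in, out);

    crypto_free_cipher(tfm);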
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds the crypto_type structure which will be used for all new
crypto algorithm types, beginning with block ciphers.
The primary purpose of this abstraction is to allow different crypto_type
objects for crypto algorithms of the same type; in particular, there will
be different crypto_type objects for asynchronous algorithms.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds the helpers crypto_get_attr_alg and crypto_alloc_instance
which can be used by simple one-argument templates like hmac to process
input parameters and allocate instances.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that the tfm is passed directly to setkey instead of the ctx, we no
longer need to pass the &tfm->crt_flags pointer.
This patch also gets rid of a few unnecessary checks on the key length
for ciphers as the cipher layer guarantees that the key length is within
the bounds specified by the algorithm.
Rather than testing dia_setkey every time, this patch does it only once
during crypto_alloc_tfm. The redundant check from crypto_digest_setkey
is also removed.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Spawns lock a specific crypto algorithm in place. They can then be used
with crypto_spawn_tfm to allocate a tfm for that algorithm. When the base
algorithm of a spawn is deregistered, all its spawns will be automatically
removed.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
A crypto_template generates a crypto_alg object when given a set of
parameters. This patch adds the basic data structure for templates
and the code to handle their registration/deregistration.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
The crypto API is made up of the part facing users such as IPsec and the
low-level part which is used by cryptographic entities such as algorithms.
This patch splits out the latter so that the two APIs are more clearly
delineated. As a bonus the low-level API can now be modularised if all
algorithms are built as modules.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch splits up the twofish crypto routine into a common part (key
setup), which will be used by all twofish crypto modules (generic C, i586
assembler and x86_64 assembler), and a generic C part. It also creates a
new header file which will be used by all three modules.
This eliminates all code duplication.
Correctness was verified with the tcrypt module and automated test scripts.
Signed-off-by: Joachim Fritschi <jfritschi@freenet.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>