x86_64 zero-extends 32-bit operations, so for 64-bit operands,
XORL r32,r32 is functionally equal to XORQ r64,r64, but avoids
a REX prefix byte when legacy registers are used.
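Purely as an illustration (not part of this patch), the size difference
is visible in the encodings of the two forms; a compile-only C sketch
using inline asm:

/* Both statements clear all 64 bits of %rax, since the 32-bit operation
 * zero-extends into the upper half; only the 64-bit form pays for a
 * REX.W prefix byte.
 */
static inline void clear_rax_32bit(void)
{
        asm volatile("xorl %%eax, %%eax" ::: "rax", "cc");  /* 31 c0    (2 bytes) */
}

static inline void clear_rax_64bit(void)
{
        asm volatile("xorq %%rax, %%rax" ::: "rax", "cc");  /* 48 31 c0 (3 bytes) */
}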
Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "David S. Miller" <davem@davemloft.net>
Acked-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The header file algapi.h includes skbuff.h unnecessarily since
all we need is a forward declaration for struct sk_buff. This
patch removes that inclusion.
Unfortunately skbuff.h pulls in a lot of things, and over the years
drivers have come to rely on it, so this patch also adds the many
missing inclusions that removing it exposes.
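For context, a forward declaration suffices because algapi.h only deals
in pointers to struct sk_buff; a minimal sketch of the pattern
(foo_queue_skb is a made-up name, not something from the patch):

/* Forward declaration: enough for any interface that only passes the
 * structure around by pointer, so the full skbuff.h is not needed here.
 */
struct sk_buff;

int foo_queue_skb(struct sk_buff *skb);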
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The carry variables are assigned but never used, which upsets
the compiler. This patch removes them.
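A minimal sketch of the kind of pattern being removed (names are
hypothetical, not taken from the patch); with -Wunused-but-set-variable
the compiler warns because carry is written but never read:

#include <stdint.h>

static uint64_t add_low(uint64_t a, uint64_t b)
{
        uint64_t carry;

        carry = (a + b) < a;    /* assigned here ...            */
        return a + b;           /* ... but never consumed below */
}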
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Reviewed-by: Karthikeyan Bhargavan <karthik.bhargavan@gmail.com>
Acked-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This updates to the newer register selection proven by HACL*, which
leads to a more compact instruction encoding and saves around 100
cycles.
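As a hedged illustration of how register selection alone can change
encoding size (whether this is the precise effect exploited here is an
assumption on my part): a memory operand based on %r12 needs an extra
SIB byte that a legacy base register avoids.

static inline void encoding_size_demo(void)
{
        /* 48 8d 03      lea (%rbx), %rax - 3 bytes, legacy base register    */
        asm volatile("lea (%%rbx), %%rax" ::: "rax");
        /* 49 8d 04 24   lea (%r12), %rax - 4 bytes, %r12 base forces a SIB byte */
        asm volatile("lea (%%r12), %%rax" ::: "rax");
}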
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This comes from INRIA's HACL*/Vale. It implements the same algorithm and
implementation strategy as the code it replaces, only this code has been
formally verified. The exception is the base point multiplication, which
uses code similar to the prior implementation, only with the formally
verified field arithmetic alongside reproducible ladder generation steps.
This doesn't have a pure-BMI2 version, which means Haswell no longer
benefits, but the increased (doubled) code complexity is not worth it for
a single generation of chips that's already old.
Performance-wise, this is around 1% slower on older microarchitectures,
and slightly faster on newer microarchitectures, mainly 10nm ones or
backports of 10nm to 14nm. This implementation is "everest" below:
Xeon E5-2680 v4 (Broadwell)
  armfazh: 133340 cycles per call
  everest: 133436 cycles per call

Xeon Gold 5120 (Sky Lake Server)
  armfazh: 112636 cycles per call
  everest: 113906 cycles per call

Core i5-6300U (Sky Lake Client)
  armfazh: 116810 cycles per call
  everest: 117916 cycles per call

Core i7-7600U (Kaby Lake)
  armfazh: 119523 cycles per call
  everest: 119040 cycles per call

Core i7-8750H (Coffee Lake)
  armfazh: 113914 cycles per call
  everest: 113650 cycles per call

Core i9-9880H (Coffee Lake Refresh)
  armfazh: 112616 cycles per call
  everest: 114082 cycles per call

Core i3-8121U (Cannon Lake)
  armfazh: 113202 cycles per call
  everest: 111382 cycles per call

Core i7-8265U (Whiskey Lake)
  armfazh: 127307 cycles per call
  everest: 127697 cycles per call

Core i7-8550U (Kaby Lake Refresh)
  armfazh: 127522 cycles per call
  everest: 127083 cycles per call

Xeon Platinum 8275CL (Cascade Lake)
  armfazh: 114380 cycles per call
  everest: 114656 cycles per call
Achieving this kind of result with formally verified code is quite
remarkable, especially considering that performance is favorable on
newer chips.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
For glue code that's used by Zinc, the actual Crypto API functions might
not necessarily exist, and don't need to exist either. Before this
patch, there are valid build configurations that lead to an unbuildable
kernel. This patch fixes that by conditionalizing those symbols on the
existence of the proper config entry.
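A minimal sketch of the pattern, with made-up names (CONFIG_CRYPTO_FOO,
foo_alg) rather than the symbols actually touched by this patch: the
Crypto API registration only exists when the corresponding config entry
does, while the library glue builds unconditionally.

#include <linux/kconfig.h>
#include <linux/module.h>
#include <crypto/internal/skcipher.h>

#if IS_ENABLED(CONFIG_CRYPTO_FOO)
static struct skcipher_alg foo_alg = {
        .base.cra_name = "foo",
};
#endif

static int __init foo_glue_init(void)
{
#if IS_ENABLED(CONFIG_CRYPTO_FOO)
        return crypto_register_skcipher(&foo_alg);
#else
        return 0;       /* callers use the library interface only */
#endif
}
module_init(foo_glue_init);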
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This implementation is the fastest available x86_64 implementation, and
unlike Sandy2x, it doesn't require use of the floating point registers at
all. Instead it makes use of BMI2 and ADX, available on recent
microarchitectures. The implementation was written by Armando
Faz-Hernández with contributions (upstream) from Samuel Neves and me,
in addition to further changes in the kernel implementation from us.
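For readers unfamiliar with those extensions, here is a compile-only
sketch (made-up operands, not lifted from the implementation) of why
BMI2 and ADX matter: mulx multiplies without touching the flags, which
lets adcx and adox run two independent carry chains (CF and OF) through
the same code.

#include <stdint.h>

/* Requires a BMI2/ADX-capable CPU at runtime; purely illustrative. */
static inline void mul_accumulate(uint64_t a, uint64_t b,
                                  uint64_t *lo, uint64_t *hi)
{
        asm ("mulx  %3, %%r8, %%r9\n\t"  /* r9:r8 = a * b, flags untouched   */
             "xorl  %%eax, %%eax\n\t"    /* clear CF and OF                  */
             "adcx  %%r8, %0\n\t"        /* low word added on the CF chain   */
             "adox  %%r9, %1"            /* high word added on the OF chain  */
             : "+r" (*lo), "+r" (*hi)
             : "d" (a), "r" (b)
             : "rax", "r8", "r9", "cc");
}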
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Samuel Neves <sneves@dei.uc.pt>
Co-developed-by: Samuel Neves <sneves@dei.uc.pt>
[ardb: - move to arch/x86/crypto
- wire into lib/crypto framework
- implement crypto API KPP hooks ]
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>