linux-sg2042/arch/x86/crypto
Jussi Kivilinna f3f935a76a crypto: camellia - add AVX2/AES-NI/x86_64 assembler implementation of camellia cipher
Patch adds an AVX2/AES-NI/x86-64 implementation of the Camellia cipher, requiring
32 parallel blocks of input (32 × 16-byte blocks = 512 bytes). Compared to the AVX
implementation, this version is extended to use the 256-bit wide YMM registers.
For the AES-NI instructions, which operate on 128-bit registers only, the data is
split into two 128-bit halves and merged again afterwards. Even with this
additional handling, performance should be higher than with the AES-NI/AVX
implementation.

Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2013-04-25 21:09:07 +08:00
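The split-and-merge around AES-NI reads roughly as follows. This is a minimal
sketch in GAS/AT&T syntax, not the kernel's actual macro (the real code, in
camellia-aesni-avx2-asm_64.S, interleaves many such registers at once); the
register assignments, %ymm0/%xmm1 for data and %xmm15 holding pre-loaded key
material, are assumptions for illustration:

	/* Sketch: apply the 128-bit-only AESENCLAST to a 256-bit YMM value  */
	/* by splitting it into two XMM lanes and merging afterwards.        */
	/* Register choices are illustrative, not the kernel's.              */
	vextracti128 $1, %ymm0, %xmm1        /* save high 128-bit lane in %xmm1 */
	vaesenclast  %xmm15, %xmm0, %xmm0    /* AES-NI on low lane; %xmm0 is    */
	                                     /* the low half of %ymm0           */
	vaesenclast  %xmm15, %xmm1, %xmm1    /* AES-NI on high lane             */
	vinserti128  $1, %xmm1, %ymm0, %ymm0 /* merge high lane back into %ymm0 */

vextracti128 and vinserti128 are themselves AVX2 instructions, so the
combination needs nothing beyond what the cipher already requires (AVX2 plus
AES-NI). Note that the VEX-encoded 128-bit vaesenclast zeroes bits 255:128 of
%ymm0; that is harmless here because the high lane was saved to %xmm1 first
and re-inserted at the end.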
Makefile crypto: camellia - add AVX2/AES-NI/x86_64 assembler implementation of camellia cipher 2013-04-25 21:09:07 +08:00
ablk_helper.c crypto: aes_ni - change to use shared ablk_* functions 2012-06-27 14:42:01 +08:00
aes-i586-asm_32.S crypto: x86/aes - assembler clean-ups: use ENTRY/ENDPROC, localize jump targets 2013-01-20 10:16:47 +11:00
aes-x86_64-asm_64.S crypto: x86/aes - assembler clean-ups: use ENTRY/ENDPROC, localize jump targets 2013-01-20 10:16:47 +11:00
aes_glue.c crypto: arch/x86 - cleanup - remove unneeded crypto_alg.cra_list initializations 2012-08-01 17:47:27 +08:00
aesni-intel_asm.S crypto: aesni_intel - add more optimized XTS mode for x86-64 2013-04-25 21:01:53 +08:00
aesni-intel_glue.c crypto: aesni_intel - add more optimized XTS mode for x86-64 2013-04-25 21:01:53 +08:00
blowfish-avx2-asm_64.S crypto: blowfish - add AVX2/x86_64 implementation of blowfish cipher 2013-04-25 21:09:04 +08:00
blowfish-x86_64-asm_64.S crypto: blowfish-x86_64: use ENTRY()/ENDPROC() for assembler functions and localize jump targets 2013-01-20 10:16:48 +11:00
blowfish_avx2_glue.c crypto: blowfish - add AVX2/x86_64 implementation of blowfish cipher 2013-04-25 21:09:04 +08:00
blowfish_glue.c crypto: blowfish - add AVX2/x86_64 implementation of blowfish cipher 2013-04-25 21:09:04 +08:00
camellia-aesni-avx-asm_64.S crypto: x86/camellia-aesni-avx - add more optimized XTS code 2013-04-25 21:01:52 +08:00
camellia-aesni-avx2-asm_64.S crypto: camellia - add AVX2/AES-NI/x86_64 assembler implementation of camellia cipher 2013-04-25 21:09:07 +08:00
camellia-x86_64-asm_64.S crypto: camellia-x86_64/aes-ni: use ENTRY()/ENDPROC() for assembler functions and localize jump targets 2013-01-20 10:16:48 +11:00
camellia_aesni_avx2_glue.c crypto: camellia - add AVX2/AES-NI/x86_64 assembler implementation of camellia cipher 2013-04-25 21:09:07 +08:00
camellia_aesni_avx_glue.c crypto: camellia - add AVX2/AES-NI/x86_64 assembler implementation of camellia cipher 2013-04-25 21:09:07 +08:00
camellia_glue.c crypto: camellia-x86_64 - share common functions and move structures and function definitions to header file 2012-11-09 17:32:31 +08:00
cast5-avx-x86_64-asm_64.S crypto: cast5-avx: use ENTRY()/ENDPROC() for assembler functions and localize jump targets 2013-01-20 10:16:48 +11:00
cast5_avx_glue.c crypto: cast5/avx - avoid using temporary stack buffers 2012-10-24 21:10:55 +08:00
cast6-avx-x86_64-asm_64.S crypto: cast6-avx: use new optimized XTS code 2013-04-25 21:01:52 +08:00
cast6_avx_glue.c crypto: cast6-avx: use new optimized XTS code 2013-04-25 21:01:52 +08:00
crc32-pclmul_asm.S crypto: x86/crc32-pclmul - assembly clean-ups: use ENTRY/ENDPROC 2013-04-03 09:06:29 +08:00
crc32-pclmul_glue.c crypto: crc32 - add crc32 pclmulqdq implementation and wrappers for table implementation 2013-01-20 10:16:45 +11:00
crc32c-intel_glue.c crypto: crc32c - Optimize CRC32C calculation with PCLMULQDQ instruction 2012-10-15 22:18:24 +08:00
crc32c-pcl-intel-asm_64.S crypto: crc32-pclmul - Use gas macro for pclmulqdq 2013-04-25 21:01:44 +08:00
fpu.c crypto: aesni-intel - Merge with fpu.ko 2011-05-16 15:12:47 +10:00
ghash-clmulni-intel_asm.S crypto: x86/ghash - assembler clean-up: use ENDPROC at end of assembler functions 2013-01-20 10:16:49 +11:00
ghash-clmulni-intel_glue.c crypto: arch/x86 - cleanup - remove unneeded crypto_alg.cra_list initializations 2012-08-01 17:47:27 +08:00
glue_helper-asm-avx.S crypto: x86 - add more optimized XTS-mode for serpent-avx 2013-04-25 21:01:51 +08:00
glue_helper-asm-avx2.S crypto: twofish - add AVX2/x86_64 assembler implementation of twofish cipher 2013-04-25 21:09:05 +08:00
glue_helper.c crypto: x86 - add more optimized XTS-mode for serpent-avx 2013-04-25 21:01:51 +08:00
salsa20-i586-asm_32.S crypto: x86/salsa20 - assembler cleanup, use ENTRY/ENDPROC for assembler functions and rename ECRYPT_* to salsa20_* 2013-01-20 10:16:50 +11:00
salsa20-x86_64-asm_64.S crypto: x86/salsa20 - assembler cleanup, use ENTRY/ENDPROC for assembler functions and rename ECRYPT_* to salsa20_* 2013-01-20 10:16:50 +11:00
salsa20_glue.c crypto: x86/salsa20 - assembler cleanup, use ENTRY/ENDPROC for assembler functions and rename ECRYPT_* to salsa20_* 2013-01-20 10:16:50 +11:00
serpent-avx-x86_64-asm_64.S crypto: x86 - add more optimized XTS-mode for serpent-avx 2013-04-25 21:01:51 +08:00
serpent-avx2-asm_64.S crypto: serpent - add AVX2/x86_64 assembler implementation of serpent cipher 2013-04-25 21:09:07 +08:00
serpent-sse2-i586-asm_32.S crypto: x86/serpent - use ENTRY/ENDPROC for assembler functions and localize jump targets 2013-01-20 10:16:50 +11:00
serpent-sse2-x86_64-asm_64.S crypto: x86/serpent - use ENTRY/ENDPROC for assembler functions and localize jump targets 2013-01-20 10:16:50 +11:00
serpent_avx2_glue.c crypto: serpent - add AVX2/x86_64 assembler implementation of serpent cipher 2013-04-25 21:09:07 +08:00
serpent_avx_glue.c crypto: serpent - add AVX2/x86_64 assembler implementation of serpent cipher 2013-04-25 21:09:07 +08:00
serpent_sse2_glue.c crypto: x86/glue_helper - use le128 instead of u128 for CTR mode 2012-10-24 21:10:54 +08:00
sha1_ssse3_asm.S crypto: x86/sha1 - assembler clean-ups: use ENTRY/ENDPROC 2013-01-20 10:16:51 +11:00
sha1_ssse3_glue.c crypto: sha1 - use Kbuild supplied flags for AVX test 2012-06-12 16:37:16 +08:00
sha256-avx-asm.S crypto: sha256 - Optimized sha256 x86_64 assembly routine with AVX instructions. 2013-04-03 09:06:32 +08:00
sha256-avx2-asm.S crypto: sha256 - Optimized sha256 x86_64 routine using AVX2's RORX instructions 2013-04-03 09:06:32 +08:00
sha256-ssse3-asm.S crypto: sha256 - Optimized sha256 x86_64 assembly routine using Supplemental SSE3 instructions. 2013-04-03 09:06:31 +08:00
sha256_ssse3_glue.c crypto: sha256 - Create module providing optimized SHA256 routines using SSSE3, AVX or AVX2 instructions. 2013-04-25 21:00:57 +08:00
sha512-avx-asm.S crypto: sha512 - Optimized SHA512 x86_64 assembly routine using AVX instructions. 2013-04-25 21:00:58 +08:00
sha512-avx2-asm.S crypto: sha512 - Optimized SHA512 x86_64 assembly routine using AVX2 RORX instruction. 2013-04-25 21:00:58 +08:00
sha512-ssse3-asm.S crypto: sha512 - Optimized SHA512 x86_64 assembly routine using Supplemental SSE3 instructions. 2013-04-25 21:00:58 +08:00
sha512_ssse3_glue.c crypto: sha512 - Create module providing optimized SHA512 routines using SSSE3, AVX or AVX2 instructions. 2013-04-25 21:01:42 +08:00
twofish-avx-x86_64-asm_64.S crypto: x86/twofish-avx - use optimized XTS code 2013-04-25 21:01:51 +08:00
twofish-avx2-asm_64.S crypto: twofish - add AVX2/x86_64 assembler implementation of twofish cipher 2013-04-25 21:09:05 +08:00
twofish-i586-asm_32.S crypto: x86/twofish - assembler clean-ups: use ENTRY/ENDPROC, localize jump labels 2013-01-20 10:16:51 +11:00
twofish-x86_64-asm_64-3way.S crypto: x86/twofish - assembler clean-ups: use ENTRY/ENDPROC, localize jump labels 2013-01-20 10:16:51 +11:00
twofish-x86_64-asm_64.S crypto: x86/twofish - assembler clean-ups: use ENTRY/ENDPROC, localize jump labels 2013-01-20 10:16:51 +11:00
twofish_avx2_glue.c crypto: twofish - add AVX2/x86_64 assembler implementation of twofish cipher 2013-04-25 21:09:05 +08:00
twofish_avx_glue.c crypto: twofish - add AVX2/x86_64 assembler implementation of twofish cipher 2013-04-25 21:09:05 +08:00
twofish_glue.c crypto: arch/x86 - cleanup - remove unneeded crypto_alg.cra_list initializations 2012-08-01 17:47:27 +08:00
twofish_glue_3way.c crypto: x86/glue_helper - use le128 instead of u128 for CTR mode 2012-10-24 21:10:54 +08:00