This is another attempt at what Erich Keane tried to do in r355322.
This adds rolb, rolw, rold, and rolq and their ror equivalents as always_inline wrappers around __builtin_rotate*, which will lower to funnel shift intrinsics in IR.
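For illustration, a minimal sketch of one such wrapper, assuming the header's __rolb spelling and a literal attribute list (the real header may spell the attributes through a macro):

    static __inline__ unsigned char
    __attribute__((__always_inline__, __nodebug__))
    __rolb(unsigned char __X, int __C) {
      /* __builtin_rotateleft8 lowers to the llvm.fshl.i8 funnel shift. */
      return __builtin_rotateleft8(__X, __C);
    }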
Additionally, when _MSC_VER is not defined, we define _rotl, _lrotl, _rotr, and _lrotr as macros mapping to one of the always_inline intrinsics mentioned above, making sure that _lrotl/_lrotr use either the 32-bit or 64-bit rotate based on the size of long. These need to be macros because we have builtins with the same names for MS compatibility, but _MSC_VER isn't always defined when those builtins are enabled.
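A sketch of how the gating could look; the __LP64__ check is a stand-in for however the header actually detects the size of long:

    #ifndef _MSC_VER
    /* _lrotl/_lrotr pick the 32- or 64-bit rotate based on sizeof(long). */
    #if defined(__LP64__)
    #define _lrotl(a,b) __rolq((a), (b))
    #define _lrotr(a,b) __rorq((a), (b))
    #else
    #define _lrotl(a,b) __rold((a), (b))
    #define _lrotr(a,b) __rord((a), (b))
    #endif
    #define _rotl(a,b) __rold((a), (b))
    #define _rotr(a,b) __rord((a), (b))
    #endif /* _MSC_VER */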
We also define _rotwl and _rotwr as macros aliasing rolw/rorw, just as GCC does, to complete the set. These don't need to be gated on _MSC_VER because they aren't MS builtins.
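Roughly, that amounts to:

    /* Not MS builtins, so these are defined unconditionally,
       without an _MSC_VER gate. */
    #define _rotwl(a,b) __rolw((a), (b))
    #define _rotwr(a,b) __rorw((a), (b))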
I've added tests for both the non-MS and -ms-extensions modes, with and without _MSC_VER defined.
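For reference, the kind of lowering those tests verify, as a sketch (the function name and CHECK line are illustrative, not copied from the actual tests):

    unsigned int test_rotl(unsigned int __X) {
      return _rotl(__X, 5);
      // CHECK: call i32 @llvm.fshl.i32(i32 %{{.*}}, i32 %{{.*}}, i32 5)
    }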
Differential Revision: https://reviews.llvm.org/D59346
llvm-svn: 356423
From the earlier r355322 commit: the _rotl, _lrotl, _rotr, and _lrotr builtins above are currently implemented only for MSVC mode; however, GCC also implements them. This patch enables them for all platforms.
Additionally, this corrects the type for these builtins to always be
'long int' to match the specification in the Intel Intrinsics Guide.
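A sketch of the corrected shape, based on the Intel Intrinsics Guide entry (parameter names and the unsigned spelling here are assumptions):

    /* Takes and returns a long-sized value rather than an int. */
    unsigned long _lrotl(unsigned long __X, int __C);
    unsigned long _lrotr(unsigned long __X, int __C);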
Change-Id: Ida34be98078709584ef5136c8761783435ec02b1
llvm-svn: 355322