In PR41304:
https://bugs.llvm.org/show_bug.cgi?id=41304
...we have a case where we want to fold a binop of select-shuffle (blended) values.
Rather than try to match commuted variants of the pattern, we can canonicalize the
shuffles and check for mask equality with commuted operands.
We don't produce arbitrary shuffle masks in instcombine, but select-shuffles are a
special case that the backend is required to handle because we already canonicalize
vector select to this shuffle form.
So there should be no codegen difference from this change. It's possible that this
improves CSE in IR though.
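For illustration, here is a select shuffle (blend) written with clang's vector extensions; each result lane stays in place and is chosen from one of the two inputs:
```
// Mask element i is either i (take lane i of a) or i + 4 (lane i of b),
// so lanes are selected but never moved.
typedef int v4si __attribute__((vector_size(16)));

v4si blend(v4si a, v4si b) {
  return __builtin_shufflevector(a, b, 0, 5, 2, 7); // a, b, a, b
}
```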
Differential Revision: https://reviews.llvm.org/D60016
llvm-svn: 357366
Summary:
PowerPC64/PowerPC64le supports the builtin function __builtin_setrnd to set the floating point rounding mode. This function uses the least significant two bits of its integer argument to select the rounding mode.
double __builtin_setrnd(int mode);
The effective values for mode are:
0 - round to nearest
1 - round to zero
2 - round to +infinity
3 - round to -infinity
Note that the mode argument is taken modulo 4, so if the int argument is greater than 3, only its least significant two bits are used. For example, __builtin_setrnd(102) is equivalent to __builtin_setrnd(2).
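A minimal usage sketch (PowerPC64/PowerPC64le only, with the mode values listed above):
```
double rounding_demo(void) {
  __builtin_setrnd(1);      // round to zero
  double r = 1.0 / 3.0;     // computed under the new rounding mode
  __builtin_setrnd(102);    // 102 % 4 == 2: round to +infinity
  __builtin_setrnd(0);      // restore round to nearest
  return r;
}
```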
Reviewed By: jsji
Differential Revision: https://reviews.llvm.org/D59403
llvm-svn: 357242
Future versions of MSVC make these intrinsics available on x86 & x64,
according to:
http://lists.llvm.org/pipermail/cfe-dev/2019-March/061711.html
The purpose of these builtins is to emit plain, non-atomic, volatile
stores when /volatile:ms (-cc1 -fms-volatile) is enabled.
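A hedged sketch of the intended difference, assuming __iso_volatile_store32 is among the intrinsics in question (compile with -fms-extensions -fms-volatile):
```
void store_both(volatile int *p, int v) {
  *p = v;                       // under /volatile:ms: an atomic store
  __iso_volatile_store32(p, v); // plain, non-atomic volatile store
}
```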
llvm-svn: 357220
Summary:
These are all implemented by icc as well.
I made bit_scan_forward/reverse forward to __bsfd/__bsrd since we also have
__bsfq/__bsrq.
Note, when lzcnt is enabled the bsr intrinsics generate lzcnt+xor instead of bsr.
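A usage sketch of the 32-bit spellings:
```
#include <x86intrin.h>

int scan_demo(void) {
  int lo = _bit_scan_forward(0x48); // index of lowest set bit: 3
  int hi = _bit_scan_reverse(0x48); // index of highest set bit: 6
  return lo + hi;
}
```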
Reviewers: RKSimon, spatel
Subscribers: cfe-commits, llvm-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D59682
llvm-svn: 356848
This is the result of discussions on the list about how to deal with intrinsics
that codegen can only disambiguate via their integer/FP overloaded types.
It causes problems for GlobalISel as some of that information is lost during
translation, while with other operations like IR instructions the information is
encoded into the instruction opcode.
This patch changes clang to emit the new faddp intrinsic if the vector operands
to the builtin have FP element types. LLVM IR AutoUpgrade has been taught to
upgrade existing calls to aarch64.neon.addp with fp vector arguments, and
we remove the workarounds introduced for GlobalISel in r355865.
This is a more permanent solution to PR40968.
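For example (a sketch, assuming vpaddq_f64 is among the affected builtins):
```
#include <arm_neon.h>

// FP element types, so clang now emits the new faddp intrinsic
// instead of the shared aarch64.neon.addp.
float64x2_t pairwise_add(float64x2_t a, float64x2_t b) {
  return vpaddq_f64(a, b);
}
```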
Differential Revision: https://reviews.llvm.org/D59655
llvm-svn: 356722
Use the new cx8 feature flag that was added to the backend to represent support for cmpxchg8b. Use this flag to set the MaxAtomicInlineWidth.
This also assumes all the cmpxchg instructions are enabled for CK_Generic which is what cc1 defaults to when nothing is specified.
Differential Revision: https://reviews.llvm.org/D59566
llvm-svn: 356709
gcc and icc both implement popcntd and popcntq, which we did not. gcc doesn't seem to require a feature flag for the _popcnt32/_popcnt64 spelling and will use a libcall if it's not supported.
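A usage sketch:
```
#include <x86intrin.h>

int popcnt_demo(void) {
  int a = _popcnt32(0x0000F0F0);          // 8 set bits
  long long b = _popcnt64(0xFF00FF00ULL); // 16 set bits (x86-64 only)
  return a + (int)b;
}
```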
Differential Revision: https://reviews.llvm.org/D59567
llvm-svn: 356689
After https://reviews.llvm.org/rL355317 we noticed that quite a decent
amount of code redeclares builtins (memcpy in particular, I believe
reduced from an MSVC header) with a calling convention specified.
This gets particularly troublesome when the user specifies a new
'default' calling convention on the command line.
When looking to add a diagnostic for this case, it was noticed that we
had 3 other diagnostics that differed only slightly. This patch ALSO
unifies those under a 'select'. Unfortunately, the order of words in
ONE of these diagnostics was reversed ("'thiscall' calling convention"
vs "calling convention 'thiscall'"), so this patch also standardizes on
the former.
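The redeclaration pattern being diagnosed looks roughly like this (a reduced sketch; __cdecl requires MS extensions):
```
#include <stddef.h>

// A builtin redeclared with an explicit calling convention, which
// conflicts once the default calling convention is changed on the
// command line.
void *__cdecl memcpy(void *dst, const void *src, size_t n);
```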
Differential Revision: https://reviews.llvm.org/D59560
Change-Id: I79f99fe7c2301640755ffdd774b46eb44526bb22
llvm-svn: 356663
gcc has these intrinsics in ia32intrin.h as well. And icc implements them
though they aren't documented in the Intel Intrinsics Guide.
Differential Revision: https://reviews.llvm.org/D59533
llvm-svn: 356609
Summary: This test is failing after r356499 (verified with `ninja check-clang-codegen`). Update the register selection used in the test from x0 to x8.
Reviewers: arsenm, MatzeB, efriedma
Reviewed By: efriedma
Subscribers: efriedma, wdng, javed.absar, kristof.beyls, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D59557
llvm-svn: 356517
The attribute pass_dynamic_object_size(n) behaves exactly like
pass_object_size(n), but instead of evaluating __builtin_object_size on calls,
it evaluates __builtin_dynamic_object_size, which has the potential to produce
runtime code when the object size can't be determined statically.
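A declaration sketch (checked_copy is a hypothetical helper):
```
#include <stddef.h>

// Each call site evaluates __builtin_dynamic_object_size(buf, 0) and
// passes the result as the implicit size argument.
void checked_copy(char *buf __attribute__((pass_dynamic_object_size(0))),
                  const char *src, size_t n);
```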
Differential revision: https://reviews.llvm.org/D58757
llvm-svn: 356515
Summary:
`wasm.throw` builtin's first 'tag' argument should be an immediate index
into the event section.
Reviewers: dschuff, craig.topper
Subscribers: sbc100, jgravelle-google, sunfish, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D59448
llvm-svn: 356436
This is another attempt at what Erich Keane tried to do in r355322.
This adds rolb, rolw, rold, rolq and their ror equivalents as always_inline wrappers around __builtin_rotate*, which lower to funnel shift intrinsics in IR.
Additionally, when _MSC_VER is not defined we define _rotl, _lrotl, _rotr, and _lrotr as macros for one of the always_inline intrinsics mentioned above, making sure that _lrotl/_lrotr use either 32 or 64 bits based on the size of long. These need to be macros because we have builtins with the same names for MS compatibility, but _MSC_VER isn't always defined when those builtins are enabled.
We also define _rotwl and _rotwr as macros aliasing to rolw/rorw just like gcc to complete the set. These don't need to be gated with _MSC_VER because these aren't MS builtins.
I've added tests both for non-MS and -ms-extensions with and without _MSC_VER being defined.
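A usage sketch of the non-MS spellings described above:
```
#include <x86intrin.h>

unsigned int rotate_demo(void) {
  unsigned short w = _rotwl(0x8001, 1);   // 16-bit rotate: 0x0003
  unsigned int x = _rotl(0x80000001u, 4); // 32-bit rotate: 0x00000018
  unsigned long y = _lrotr(1UL, 1);       // 32 or 64 bits per sizeof(long)
  return w + x + (unsigned int)y;
}
```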
Differential Revision: https://reviews.llvm.org/D59346
llvm-svn: 356423
Summary:
Because in wasm we merge all catch clauses into one big catchpad, in
case none of the types in catch handlers matches after we test against
each of them, we should unwind to the next EH enclosing scope. For this,
we should NOT use a call to `__cxa_rethrow` but rather a call to our own
rethrow intrinsic, because what we're trying to do here is just to
transfer the control flow into the next enclosing EH pad (or the
caller). Calls to `__cxa_rethrow` should only be used after a call to
`__cxa_begin_catch`.
Reviewers: dschuff
Subscribers: sbc100, jgravelle-google, sunfish, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D59353
llvm-svn: 356317
The constraint "0" in the following asm did not consider the its
relationship with "=y" when try to replace the type of the operands.
asm ("nop" : "=y"(Mu8_1 ) : "0"(Mu8_0 ));
Patch by Xiang Zhang.
Differential Revision: https://reviews.llvm.org/D56990
llvm-svn: 356196
This reverts commit r353765. After talking with our c stdlib folks, we decided
to use the existing pass_object_size attribute to implement _FORTIFY_SOURCE
wrappers, like Bionic does (I didn't realize that pass_object_size could be used
for this purpose). Sorry for the flip/flop, and thanks to James Y. Knight for
pointing this out to me.
llvm-svn: 356103
Currently we have -Rpass for filtering the remarks that are displayed as
diagnostics, but when using -fsave-optimization-record, there is no way
to filter the remarks while generating them.
This adds support for filtering remarks by passes using a regex.
Ex: `clang -fsave-optimization-record -foptimization-record-passes=inline`
will only emit the remarks coming from the pass `inline`.
This adds:
* `-fsave-optimization-record` to the driver
* `-opt-record-passes` to cc1
* `-lto-pass-remarks-filter` to the LTOCodeGenerator
* `--opt-remarks-passes` to lld
* `-pass-remarks-filter` to llc, opt, llvm-lto, llvm-lto2
* `-opt-remarks-passes` to gold-plugin
Differential Revision: https://reviews.llvm.org/D59268
Original llvm-svn: 355964
llvm-svn: 355984
Currently we have -Rpass for filtering the remarks that are displayed as
diagnostics, but when using -fsave-optimization-record, there is no way
to filter the remarks while generating them.
This adds support for filtering remarks by passes using a regex.
Ex: `clang -fsave-optimization-record -foptimization-record-passes=inline`
will only emit the remarks coming from the pass `inline`.
This adds:
* `-fsave-optimization-record` to the driver
* `-opt-record-passes` to cc1
* `-lto-pass-remarks-filter` to the LTOCodeGenerator
* `--opt-remarks-passes` to lld
* `-pass-remarks-filter` to llc, opt, llvm-lto, llvm-lto2
* `-opt-remarks-passes` to gold-plugin
Differential Revision: https://reviews.llvm.org/D59268
llvm-svn: 355964
r355322 fixed this; however, it is being reverted due to concerns with
enabling it in other modes.
Change-Id: I6a939b7469b8fa196d5871a627eb2330dbd30f29
llvm-svn: 355698
Use this feature to fix a bug on ARM where 4-byte alignment is
incorrectly assumed.
Differential Revision: https://reviews.llvm.org/D57335
llvm-svn: 355685
Use this feature to fix a bug on ARM where 4-byte alignment is
incorrectly assumed.
Differential Revision: https://reviews.llvm.org/D57335
llvm-svn: 355585
Use this feature to fix a bug on ARM where 4-byte alignment is
incorrectly assumed.
Differential Revision: https://reviews.llvm.org/D57335
llvm-svn: 355522
Apparently GCC allows this, and there's code relying on it (see bug).
The idea is to allow expressions that would have been allowed if they
had been cast to int. So I based the code on how such a cast would be done
(the CK_PointerToIntegral case in IntExprEvaluator::VisitCastExpr()).
Differential Revision: https://reviews.llvm.org/D58821
llvm-svn: 355491
The above builtins are currently implemented for MSVC mode, however GCC
also implements these. This patch enables them for all platforms.
Additionally, this corrects the type for these builtins to always be
'long int' to match the specification in the Intel Intrinsics Guide.
Change-Id: Ida34be98078709584ef5136c8761783435ec02b1
llvm-svn: 355322
When we have an annotated local variable after a function returns, we
generate IR that fails verification with the error
> Instruction referencing instruction not embedded in a basic block!
This means that the bitcast referencing the alloca doesn't have a parent basic
block.
Fix by checking whether we are at an unreachable point and skipping the
annotations in that case. This approach is similar to the way we emit variable
initializers and debug info.
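A reduced reproducer (a sketch; the annotation string is illustrative):
```
void f(void) {
  return;
  // Unreachable annotated local: the annotation is now skipped rather
  // than emitted as a bitcast with no parent basic block.
  int x __attribute__((annotate("example")));
  (void)x;
}
```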
rdar://problem/46200420
Reviewers: rjmccall
Reviewed By: rjmccall
Subscribers: aprantl, jkorous, dexonsmith, cfe-commits
Differential Revision: https://reviews.llvm.org/D58147
llvm-svn: 355166
I think the author of the function assumed that `GetInsertBlock()`
wouldn't change from where `atomicPHI` was created, but this isn't
true when `-fsanitize=unsigned-integer-overflow` is enabled (we
generate an overflow/continuation label). Fix by keeping track of the
block we want to return to in order to complete the cmpxchg loop.
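A trigger sketch, assuming this pattern (compile with -fsanitize=unsigned-integer-overflow):
```
// The compound assignment on an _Atomic lvalue emits a cmpxchg loop,
// and the sanitizer's overflow/continuation blocks move the insert
// point away from where atomicPHI was created.
_Atomic unsigned counter;

void bump(unsigned v) {
  counter += v;
}
```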
rdar://48406558
Differential revision: https://reviews.llvm.org/D58744
llvm-svn: 355054
This patch enables the following
1) AMD family 17h "znver2" tune flag (-march, -mcpu).
2) ISAs that are enabled for "znver2" architecture.
3) For the time being, it uses the znver1 scheduler model.
4) Tests are updated.
5) This patch is the clang counterpart to D58343
Reviewers: craig.topper
Tags: #clang
Differential Revision: https://reviews.llvm.org/D58344
llvm-svn: 354899
SVN r339438 added support to deduplicate the helpers by using a consistent
naming scheme and using LinkOnceODR semantics. This works on ELF by means of
weak linking semantics, but it does not work at all on PE/COFF, where you end
up with multiply defined strong symbols, which is a hard error.
Assign the functions a COMDAT group so that they can be uniqued by the linker.
This fixes the use of blocks in CoreFoundation on Windows.
llvm-svn: 354678
These currently use _u32, but they should instead use _f16, the
types of the multiplication (matching the various integer vmlal
variants).
Differential Revision: https://reviews.llvm.org/D58306
llvm-svn: 354538
This test wasn't running due to a missing : after the RUN statement.
Enabling this test revealed that it's actually broken.
Differential Revision: https://reviews.llvm.org/D58429
llvm-svn: 354481
This is the second attempt to port ASan to the new PM after D52739. This takes
the initialization required by ASan from the Module by moving it into a
separate class with its own analysis that the new PM ASan can use.
Changes:
- Split AddressSanitizer into 2 passes: 1 for the instrumentation on the
function, and 1 for the pass itself, which creates an instance of the first
during its run. The same is done for AddressSanitizerModule.
- Add new PM AddressSanitizer and AddressSanitizerModule.
- Add legacy and new PM analyses for reading data needed to initialize ASan with.
- Removed DominatorTree dependency from ASan since it was unused.
- Move GlobalsMetadata and ShadowMapping out of anonymous namespace since the
new PM analysis holds these 2 classes and will need to expose them.
Differential Revision: https://reviews.llvm.org/D56470
llvm-svn: 353985
Argument evaluation order is different between gcc and clang, so pull out
the Builder calls to make the generated IR independent of the host compiler's
argument evaluation order. Thanks to rnk for reminding me of this clang/gcc
difference.
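The general issue, illustrated with hypothetical f/g/combine:
```
int f(void);
int g(void);
int combine(int, int);

int use(void) {
  // combine(f(), g()) may evaluate f and g in either order, so the
  // calls are hoisted into locals to pin the order.
  int a = f();
  int b = g();
  return combine(a, b);
}
```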
llvm-svn: 353969
r353878 fixed a bug in _mm_loadu_ps and added a command line to catch it. Adding additional command lines to prevent breaking other intrinsics in the future.
llvm-svn: 353887
Add secondary triple to existing SSE test for it. I audited other uses
of __attribute__((__packed__)) in the intrinsic headers, and this seemed
to be the only missing one.
llvm-svn: 353878
Relocatable code generation is meaningless on MSP430, as the platform is too small to use shared libraries.
Patch by Dmitry Mikushev!
Differential Revision: https://reviews.llvm.org/D56927
llvm-svn: 353877
This attribute applies to declarations of C stdlib functions
(sprintf, memcpy...) that have known fortified variants
(__sprintf_chk, __memcpy_chk, ...). When applied, clang will emit
calls to the fortified variant functions instead of calls to the
defaults.
In GCC, this is done by adding gnu_inline-style wrapper functions,
but that doesn't work for us for variadic functions because we don't
support __builtin_va_arg_pack (and have no intention to).
This attribute takes two arguments: the first is the 'type' argument
passed through to __builtin_object_size, and the second is a flag
argument that gets passed through to the variadic checking variants.
rdar://47905754
Differential revision: https://reviews.llvm.org/D57918
llvm-svn: 353765
Basically the same issue as string init, except it didn't really have
any visible consequences before I removed the implicit lvalue-to-rvalue
conversion from CodeGen.
While I'm here, a couple minor drive-by cleanups: IgnoreParens never
returns a ConstantExpr, and there was a potential crash with string init
involving a ChooseExpr.
The analyzer test change maybe indicates we could simplify the analyzer
code a little with this fix? Apparently a hack was added to support
lvalues in initializers in r315750, but I'm not really familiar with the
relevant code.
Fixes regression reported in the kernel build at
https://bugs.llvm.org/show_bug.cgi?id=40430#c6 .
Differential Revision: https://reviews.llvm.org/D58069
llvm-svn: 353762
We must only set the construction vtable visibility after we create the
vtable initializer; otherwise the global value will be treated as a
declaration rather than a definition and the visibility won't be set.
Differential Revision: https://reviews.llvm.org/D58010
llvm-svn: 353742
Summary:
With MSVC, #pragma pack is ignored when there is explicit alignment. This differs from gcc. Clang emulates this difference when compiling for Windows.
It appears that MSVC and its headers consider the __m128/__m128i/__m128d/etc. types to be explicitly aligned and ignore #pragma pack for them. Since we don't have explicit alignment on them in our headers, we don't match the MSVC behavior here.
This patch adds explicit alignment to match this behavior. I'm hoping this won't cause any problems when we're not emulating MSVC. But if someone knows of something that would be different, we can switch to conditionally adding the alignment based on _MSC_VER.
I had to add explicitly unaligned types as well so we could use them in the loadu/storeu intrinsics, which use __attribute__((__packed__)). Using the now explicitly aligned types wouldn't produce align-1 accesses when targeting Windows.
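A sketch of the emulated rule:
```
#include <xmmintrin.h>

#pragma pack(push, 1)
struct S {
  char c;
  __m128 v; // explicit alignment wins: stays 16-byte aligned despite pack(1)
};
#pragma pack(pop)
```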
Reviewers: rnk, erichkeane, spatel, RKSimon
Subscribers: cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D57961
llvm-svn: 353555
Some of these functions take some extraneous arguments, e.g. EltSize,
Offset, which are computable from the Type and DataLayout.
Add some asserts to ensure that the computed values are consistent
with the passed-in values, in preparation for eliminating the
extraneous arguments. This also asserts that the Type is an Array for
the calls named "Array" and a Struct for the calls named "Struct".
Then, correct a couple of errors:
1. Using CreateStructGEP on an array type. (this causes the majority
of the test differences, as struct GEPs are created with i32
indices, while array GEPs are created with i64 indices)
2. Passing the wrong Offset to CreateStructGEP in TargetInfo.cpp on
x86-64 NACL (which uses 32-bit pointers).
Differential Revision: https://reviews.llvm.org/D57766
llvm-svn: 353529
The patch in r350643 incorrectly based the COFF emission check on bits
instead of bytes. This patch converts the 32 bytes via CharUnits to bits so
that the comparison uses consistent units.
Change-Id: Icf38a16470ad5ae3531374969c033557ddb0d323
llvm-svn: 353411
This is suggested by section 3.3.9 of the MSP430 EABI document.
We do allow the user to manually enable the frame pointer; the GCC toolchain uses the same behavior.
Patch by Dmitry Mikushev!
Differential Revision: https://reviews.llvm.org/D56925
llvm-svn: 353212
Summary:
This is a follow-up for https://reviews.llvm.org/D57278. The previous
revision should have also included Kernel ASan.
rdar://problem/40723397
Subscribers: cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D57711
llvm-svn: 353120
Summary:
Currently, ASan inserts a call to `__asan_handle_no_return` before every
`noreturn` function call/invoke. This is unnecessary for calls to other
runtime functions. This patch changes ASan to skip instrumentation for
function calls marked with `!nosanitize` metadata.
Reviewers: TODO
Differential Revision: https://reviews.llvm.org/D57489
llvm-svn: 352948
This is similar to import_module, but sets the import field name
instead.
By default, the import field name is the same as the C/asm/.o symbol
name. However, there are situations where it's useful to have it be
different. For example, suppose I have a wasm API with a module named
"pwsix" and a field named "read". There's no risk of namespace
collisions with user code at the wasm level because the generic name
"read" is qualified by the module name "pwsix". However in the C/asm/.o
namespaces, the module name is not used, so if I have a global function
named "read", it is intruding on the user's namespace.
With the import_field attribute, I can declare my function (in libc) to be
"__read", and then set the wasm import module to be "pwsix" and the wasm
import field to be "read". So at the C/asm/.o levels, my symbol is
outside the user namespace.
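A declaration sketch, with the attribute spellings assumed from the description above:
```
// The symbol is __read at the C/asm/.o level, while the wasm import is
// module "pwsix", field "read".
__attribute__((import_module("pwsix"), import_field("read")))
int __read(int fd, void *buf, int nbyte);
```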
Differential Revision: https://reviews.llvm.org/D57602
llvm-svn: 352930
Summary:
UBSan wants to detect when unreachable code is actually reached, so it
adds instrumentation before every unreachable instruction. However, the
optimizer will remove code after calls to functions marked with
noreturn. To avoid this UBSan removes noreturn from both the call
instruction as well as from the function itself. Unfortunately, ASan
relies on this annotation to unpoison the stack by inserting calls to
_asan_handle_no_return before noreturn functions. This is important for
functions that do not return but access the stack memory, e.g.,
unwinder functions *like* longjmp (longjmp itself is actually
"double-proofed" via its interceptor). The result is that when ASan and
UBSan are combined, the noreturn attributes are missing and ASan cannot
unpoison the stack, so it has false positives when stack unwinding is
used.
Changes:
Clang-CodeGen now directly inserts calls to `__asan_handle_no_return`
when a call to a noreturn function is encountered and both
UBSan-unreachable and ASan are enabled. This allows UBSan to continue
removing the noreturn attribute from functions without any changes to
the ASan pass.
Previously generated code:
```
call void @longjmp
call void @__asan_handle_no_return
call void @__ubsan_handle_builtin_unreachable
```
Generated code (for now):
```
call void @__asan_handle_no_return
call void @longjmp
call void @__asan_handle_no_return
call void @__ubsan_handle_builtin_unreachable
```
rdar://problem/40723397
Reviewers: delcypher, eugenis, vsk
Differential Revision: https://reviews.llvm.org/D57278
> llvm-svn: 352690
llvm-svn: 352829
Summary:
UBSan wants to detect when unreachable code is actually reached, so it
adds instrumentation before every unreachable instruction. However, the
optimizer will remove code after calls to functions marked with
noreturn. To avoid this UBSan removes noreturn from both the call
instruction as well as from the function itself. Unfortunately, ASan
relies on this annotation to unpoison the stack by inserting calls to
_asan_handle_no_return before noreturn functions. This is important for
functions that do not return but access the stack memory, e.g.,
unwinder functions *like* longjmp (longjmp itself is actually
"double-proofed" via its interceptor). The result is that when ASan and
UBSan are combined, the noreturn attributes are missing and ASan cannot
unpoison the stack, so it has false positives when stack unwinding is
used.
Changes:
Clang-CodeGen now directly inserts calls to `__asan_handle_no_return`
when a call to a noreturn function is encountered and both
UBSan-unreachable and ASan are enabled. This allows UBSan to continue
removing the noreturn attribute from functions without any changes to
the ASan pass.
Previously generated code:
```
call void @longjmp
call void @__asan_handle_no_return
call void @__ubsan_handle_builtin_unreachable
```
Generated code (for now):
```
call void @__asan_handle_no_return
call void @longjmp
call void @__asan_handle_no_return
call void @__ubsan_handle_builtin_unreachable
```
rdar://problem/40723397
Reviewers: delcypher, eugenis, vsk
Differential Revision: https://reviews.llvm.org/D57278
llvm-svn: 352690
This builtin has the same interface as __builtin_object_size, but has the
potential to be evaluated dynamically. It is meant to be used as a
drop-in replacement for libraries that use __builtin_object_size when
a dynamic checking mode is enabled. For instance,
__builtin_object_size fails to provide any extra checking in the
following function:
```
void f(size_t alloc) {
  char* p = malloc(alloc);
  strcpy(p, "foobar"); // expands to __builtin___strcpy_chk(p, "foobar", __builtin_object_size(p, 0))
}
```
This is an overflow if alloc < 7, but because LLVM can't fold the
object size intrinsic statically, it folds __builtin_object_size to
-1. With __builtin_dynamic_object_size, alloc is passed through to
__builtin___strcpy_chk.
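By contrast, a sketch where the dynamic builtin yields a runtime value:
```
#include <stdlib.h>
#include <stddef.h>

size_t g(size_t alloc) {
  char *p = malloc(alloc);
  // Can fold to a runtime expression equal to 'alloc' rather than -1.
  size_t n = __builtin_dynamic_object_size(p, 0);
  free(p);
  return n;
}
```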
rdar://32212419
Differential revision: https://reviews.llvm.org/D56760
llvm-svn: 352665
This is meant to be used with clang's __builtin_dynamic_object_size.
When 'true' is passed to this parameter, the intrinsic has the
potential to be folded into instructions that will be evaluated
at run time. When 'false', the objectsize intrinsic behaviour is
unchanged.
rdar://32212419
Differential revision: https://reviews.llvm.org/D56761
llvm-svn: 352664
Introduce an option to request global visibility settings be applied to
declarations without a definition or an explicit visibility, rather than
the existing behavior of giving these default visibility. When the
visibility of all or most extern definitions is known, this allows for
the same optimisations -fvisibility permits without updating source code
to annotate all declarations.
Differential Revision: https://reviews.llvm.org/D56868
llvm-svn: 352391
Summary:
The 512-bit cvt(u)qqtops, cvt(u)qqtopd, and cvt(u)dqtops intrinsics all have the possibility of taking an explicit rounding mode argument. If the rounding mode is CUR_DIRECTION we'd like to emit a sitofp/uitofp instruction and a select like we do for 256-bit intrinsics.
For cvt(u)qqtopd and cvt(u)dqtops we do this when the form of the software intrinsics that doesn't take a rounding mode argument is used. This is done by using convertvector in the header with the select builtin. But if the explicit rounding mode form of the intrinsic is used and CUR_DIRECTION is passed, we don't do this. We shouldn't have this inconsistency.
For cvt(u)qqtops nothing is done because we can't use the select builtin in the header without avx512vl. So we need to use custom codegen for this.
Even when the rounding mode isn't CUR_DIRECTION we should also use select in IR for consistency. And it will remove another scalar integer mask from our intrinsics.
To accomplish all of these goals I've taken a slightly unusual approach. I've added two new X86-specific intrinsics for sitofp/uitofp with rounding. These intrinsics are overloaded on the input and output type, so we only need 2 instead of 6. This avoids the need for a switch to map them in CGBuiltin.cpp; we just need to check signed vs. unsigned. I believe other targets also use overloaded intrinsics like this.
So if the rounding mode is CUR_DIRECTION we'll use an sitofp/uitofp instruction. Otherwise we'll use one of the new intrinsics. After that we'll emit a select instruction if needed.
Reviewers: RKSimon, spatel
Reviewed By: RKSimon
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D56998
llvm-svn: 352267
Relocatable code generation is meaningless on MSP430, as the platform is too small to use shared libraries.
Patch by Dmitry Mikushev!
Differential Revision: https://reviews.llvm.org/D56927
llvm-svn: 352181
ACLE specifies that return type for rsr and rsr64 is uint32_t and
uint64_t respectively. D56852 changed the return type of rsr64 from
unsigned long to unsigned long long, which at least on Linux doesn't
match uint64_t. The test isn't strict enough to detect that because the
compiler implicitly converts unsigned long long to uint64_t, but it breaks
other uses such as printf with the PRIx64 format specifier.
This change makes the test stricter enforcing that the return type
of rsr and rsr64 builtins is what is actually specified in ACLE.
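The kind of use the stricter test protects (a sketch; the register name is illustrative):
```
#include <inttypes.h>
#include <stdio.h>

void dump_counter(void) {
  uint64_t v = __builtin_arm_rsr64("CNTVCT_EL0"); // ACLE: returns uint64_t
  printf("%" PRIx64 "\n", v);                     // PRIx64 now matches
}
```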
Differential Revision: https://reviews.llvm.org/D57210
llvm-svn: 352156
This reverts commit r351740: this broke on platforms where unsigned long
long isn't the same as uint64_t which is what ACLE specifies for the
return value of rsr64.
Differential Revision: https://reviews.llvm.org/D57209
llvm-svn: 352153
This adds a C/C++ attribute which corresponds to the LLVM IR wasm-import-module
attribute. It allows code to specify an explicit import module.
Differential Revision: https://reviews.llvm.org/D57160
llvm-svn: 352106
Generate DILabel metadata and call llvm.dbg.label after each label
statement to associate the metadata with the label.
After fixing PR37395.
After fixing problems in LiveDebugVariables.
After fixing NULL symbol problems in AddressPool when enabling
split-dwarf-file.
After fixing PR39094.
After landing D54199 and D54465 to fix Chromium build failed.
Differential Revision: https://reviews.llvm.org/D45045
llvm-svn: 352025
We can't use any other string, anyway, because its type wouldn't
match the type of the PredefinedExpr.
With this change, we don't compute a "nice" name for the __func__ global
when it's used in the initializer for a constant. This doesn't seem like
a great loss, and I'm not sure how to fix it without either storing more
information in the AST, or somehow threading through the information
from ExprConstant.cpp.
This could break some situations involving BlockDecl; currently,
CodeGenFunction::EmitPredefinedLValue has some logic to intentionally
emit a string different from what Sema computed. This code skips that
logic... but that logic can't work correctly in general anyway. (For
example, sizeof(__func__) returns the wrong result.) Hopefully this
doesn't affect practical code.
Fixes https://bugs.llvm.org/show_bug.cgi?id=40313 .
Differential Revision: https://reviews.llvm.org/D56821
llvm-svn: 351766
The ACLE states that 64-bit crc32, wsr, rsr and rbit operands are
uint64_t so we should have the clang builtin match this description
- which is what we already do for AArch32.
Differential Revision: https://reviews.llvm.org/D56852
llvm-svn: 351740
For some reason we were missing tests for several unmasked conversion intrinsics, but had their mask form.
Also use a non-default rounding mode on some tests to provide better coverage for a future patch.
llvm-svn: 351708
These intrinsics can always be replaced with generic integer comparisons without any regression in codegen, even for -O0/-fast-isel cases.
Noticed while cleaning up vector integer comparison costs for PR40376.
A future commit will remove/autoupgrade the existing VPCOM/VPCOMU llvm intrinsics.
llvm-svn: 351687
With commit r351627, LLVM gained the ability to apply (existing) IPO
optimizations on indirections through callbacks, or transitive calls.
The general idea is that we use an abstraction to hide the middle man
and represent the callback call in the context of the initial caller.
It is described in more detail in the commit message of the LLVM patch
r351627, the llvm::AbstractCallSite class description, and the
language reference section on callback-metadata.
This commit enables clang to emit !callback metadata that is
understood by LLVM. It does so in three different cases:
1) For known broker function declarations that are directly
generated, e.g., __kmpc_fork_call for the OpenMP 'parallel' pragma.
2) For known broker functions that are identified by their name and
source location through the builtin detection, e.g.,
pthread_create from the POSIX thread API.
3) For user annotated functions that carry the "callback(callee, ...)"
attribute. The attribute has to include the name, or index, of
the callback callee and how the passed arguments can be
identified (as many as the callback callee has). See the callback
attribute documentation for detailed information; a sketch follows below.
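A sketch of case 3:
```
// The first attribute argument names the callback callee (parameter 1,
// 'cb'); the remaining arguments describe what it is called with
// (parameter 2, 'payload').
__attribute__((callback(1, 2)))
void broker(void (*cb)(void *), void *payload);
```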
Differential Revision: https://reviews.llvm.org/D55483
llvm-svn: 351629
Summary:
This attribute will allow users to opt specific functions out of
speculative load hardening. This complements the Clang attribute
named speculative_load_hardening. When this attribute or the attribute
speculative_load_hardening is used in combination with the flags
-mno-speculative-load-hardening or -mspeculative-load-hardening,
the function level attribute will override the default during LLVM IR
generation. For example, in the case where the flag opposes the
function attribute, the function attribute will take precedence.
The sticky inlining behavior of the speculative_load_hardening attribute
may cause a function with the no_speculative_load_hardening attribute
to be tagged with the speculative_load_hardening tag in
subsequent compiler phases which is desired behavior since the
speculative_load_hardening LLVM attribute is designed to be maximally
conservative.
If both attributes are specified for a function, then an error will be
thrown.
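A usage sketch:
```
// Opt this function out while the rest of the TU is compiled with
// -mspeculative-load-hardening.
__attribute__((no_speculative_load_hardening))
int hot_path(int x);
```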
Reviewers: chandlerc, echristo, kristof.beyls, aaron.ballman
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D54909
llvm-svn: 351565
llvm.flt.rounds returns an i32, but the builtin expects the target's 'int' type.
On targets where 'int' is not 32 bits, clang tries to bitcast the result, causing an assertion failure.
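For reference, the affected builtin (a sketch):
```
int current_rounding_mode(void) {
  return __builtin_flt_rounds(); // e.g. 1 == round to nearest
}
```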
The patch enables newlib build for msp430.
Patch by Edward Jones!
Differential Revision: https://reviews.llvm.org/D24461
llvm-svn: 351449