Someone could write:
if (0) {
  __c11_atomic_load(ptr, memory_order_release);
}
or the equivalent, which is perfectly valid, so we shouldn't outright reject
invalid orderings on purely static grounds.
rdar://problem/16242991
llvm-svn: 203564
Before:
auto aaaaaaaa = [](int i, // break
                   int j)
    -> int {
  return fffffffffffffffffffffffffffffffffffffff(i * j);
};
After:
auto aaaaaaaa = [](int i, // break
                   int j) -> int {
  return fffffffffffffffffffffffffffffffffffffff(i * j);
};
llvm-svn: 203562
This is a conservative check: the ordering expression is allowed to be
non-constant, and in cases like that we simply don't know whether it's valid.
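A minimal sketch of such a case (the function name is made up): the ordering
is only known at run time, so there is nothing the static check could reject:
#include <stdatomic.h>

int load_with(_Atomic int *p, memory_order order) {
  /* 'order' is not a compile-time constant; the check cannot tell whether it
     is a valid load ordering, so it stays quiet. */
  return __c11_atomic_load(p, order);
}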
rdar://problem/16242991
llvm-svn: 203561
The syntax for "cmpxchg" should now look something like:
cmpxchg i32* %addr, i32 42, i32 3 acquire monotonic
where the second ordering argument gives the required semantics in the case
that no exchange takes place. It should be no stronger than the first ordering
constraint and cannot be either "release" or "acq_rel" (since no store will
have taken place).
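For reference, the same pair of orderings shows up at the C11 level as the
success and failure arguments of atomic_compare_exchange_strong_explicit; a
small sketch (the function name is illustrative):
#include <stdatomic.h>
#include <stdbool.h>

bool try_claim(_Atomic int *flag) {
  int expected = 0;
  /* Success ordering acquire, failure ordering relaxed ("monotonic" in the
     IR); the failure ordering may not be release or acq_rel because a failed
     exchange performs no store. */
  return atomic_compare_exchange_strong_explicit(
      flag, &expected, 1, memory_order_acquire, memory_order_relaxed);
}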
rdar://problem/15996804
llvm-svn: 203559
This header is generally not available on mingw and can cause build errors.
The Windows host code already provides a timespec definition that can be used
in the mingw case.
llvm-svn: 203555
When an overflow intrinsic is followed by a non-overflow instruction,
replace the latter with an extract. For example:
%sadd = tail call { i32, i1 } @llvm.sadd.with.overflow.i32(i32 %a, i32 %b)
%sadd3 = add i32 %a, %b
Here the add instruction will be replaced by an extract of %sadd.
When an overflow intrinsic follows a non-overflow instruction, a clone
of the intrinsic is inserted before the normal instruction, which makes
it the same as the previous case. Subsequent runs of GVN can then clean
up the duplicate instructions and insert the extract.
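At the source level this pattern typically arises from code along the lines of
the following sketch (the function name is made up):
#include <stdbool.h>

int add_and_check(int a, int b, bool *overflowed) {
  int sum;
  /* The intrinsic call below and the plain 'a + b' compute the same value;
     with this change GVN turns the plain add into an extract of the
     intrinsic's result. */
  *overflowed = __builtin_sadd_overflow(a, b, &sum);
  return a + b;
}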
This fixes PR8817.
llvm-svn: 203553
Some of this data had gotten out of date, so we weren't quite testing
what we thought we were. This also moves the outdated data test to its
own file to simplify regenerating the test data.
llvm-svn: 203546
When we are at the innermost band, we try to prepare for vectorization. This
means we look for the innermost parallel loop and strip mine it to the
innermost level, using a strip-mine factor corresponding to the number of
vector iterations.
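As a plain-C sketch of what the strip mining does (a factor of 4 is chosen
purely for illustration):
void add_arrays(int n, float *restrict a,
                const float *restrict b, const float *restrict c) {
  /* Original innermost parallel loop:
       for (int i = 0; i < n; i++) a[i] = b[i] + c[i];
     After strip mining by 4, the new innermost loop runs exactly the number
     of iterations intended for a single vector. */
  for (int ii = 0; ii < n; ii += 4)
    for (int i = ii; i < ii + 4 && i < n; i++)
      a[i] = b[i] + c[i];
}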
For whatever reason, the code that implemented this feature was broken. We
have now added a comment, a test case and, of course, the correct code.
llvm-svn: 203544
These tests are logically related, but they're spread about several
different CodeGen directories. Consolidate them in one place to make
them easier to manage.
llvm-svn: 203541
Checking if the host arch is in the triple isn't quite correct. Change
the feature test to match llvm's, which made the same change in r193459.
llvm-svn: 203540
Check whether the compiler actually supports the flags that are being added.
Previously, the flags would be passed to the compiler without any such check.
On Darwin in particular, an option would be passed to the compiler even if the
compiler did not support it.
llvm-svn: 203532
The official specification states that the name is ARMNT (per the Microsoft
Portable Executable and Common Object Format Specification v8.3).
llvm-svn: 203530
to absolute paths when building the includes file for the module. Without this,
the module build would fail, because the relative paths we were using are not
necessarily relative to a directory in our include path.
llvm-svn: 203528
When the MOVBE instructions are available, use them for 16-bit endian
swapping as well as for 32- and 64-bit.
The patterns were already present on the instructions, but weren't being
matched because the operation was unconditionally marked as 'Expand'.
Change that to be conditional on whether the MOVBE instructions are
available. Use 'rolw' to implement the in-register version (32- and 64-bit
have the dedicated 'bswap' instruction for that).
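A sketch of the two cases at the C level (illustrative only; which
instructions are selected is decided by the patterns, not by this source):
#include <stdint.h>

/* The swapped value comes straight from memory: with MOVBE available this
   can become a single 16-bit movbe load. */
uint16_t load_swapped(const uint16_t *p) {
  return __builtin_bswap16(*p);
}

/* The value is already in a register: this is done with 'rolw' by 8, since
   there is no 16-bit 'bswap'. */
uint16_t swap_in_register(uint16_t x) {
  return __builtin_bswap16(x);
}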
Patch by Louis Gerbarg <lgg@apple.com>.
rdar://15479984
llvm-svn: 203524