Before this patch the Unix code for creating hard links was unused. The code
for creating symbolic links was implemented in lib/Support/LockFileManager.cpp
and the code for creating hard links in lib/Support/*/Path.inc.
The only use we have for these is in LockFileManager.cpp, and it can use
either soft or hard links. Just have a create_link function that creates one
or the other depending on the platform.
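A rough sketch of the idea (hypothetical code, not the actual lib/Support
implementation; the Windows branch assumes CreateHardLinkA from windows.h):

  #include <cerrno>
  #include <string>
  #include <system_error>
  #ifdef _WIN32
  #include <windows.h>
  #else
  #include <unistd.h>
  #endif

  // Sketch of create_link: a symbolic link on Unix, a hard link on
  // Windows, so callers such as LockFileManager need not care which.
  std::error_code create_link(const std::string &To, const std::string &From) {
  #ifdef _WIN32
    if (!::CreateHardLinkA(From.c_str(), To.c_str(), nullptr))
      return std::error_code(::GetLastError(), std::system_category());
  #else
    if (::symlink(To.c_str(), From.c_str()) == -1)
      return std::error_code(errno, std::generic_category());
  #endif
    return std::error_code();
  }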
llvm-svn: 203596
GuardMalloc can print info to stderr, causing these tests to fail.
Since FileCheck errors on empty inputs, just add a bit of dummy
data to make it happy.
llvm-svn: 203595
PGO counters are 64-bit and branch weights are 32-bit. Scale them down
when necessary, instead of just taking the lower 32 bits.
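A sketch of the scaling (illustrative only, not the exact code that landed):
derive a common divisor from the largest counter so every scaled weight fits
in 32 bits.

  #include <cstdint>
  #include <limits>

  // Scale a 64-bit PGO counter down to a 32-bit branch weight. Instead
  // of truncating to the low 32 bits, divide all counters by a common
  // scale derived from the largest counter seen.
  uint32_t scaleBranchWeight(uint64_t Count, uint64_t MaxCount) {
    const uint64_t Max32 = std::numeric_limits<uint32_t>::max();
    uint64_t Scale = MaxCount <= Max32 ? 1 : MaxCount / Max32 + 1;
    return static_cast<uint32_t>(Count / Scale);
  }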
<rdar://problem/16276448>
llvm-svn: 203592
Like the binary operator check of r201702, this actually checks the
condition of every if in a chain against every other condition, an
O(N^2) operation. In most cases N should be small enough to make this
practical, and checking all cases like this makes it much more likely
to catch a copy-paste error within the same series of branches.
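For instance (a made-up example), the checker would now flag a chain like
this, where the third condition repeats the first:

  int classify(int x) {
    if (x > 100)
      return 2;
    else if (x > 10)
      return 1;
    else if (x > 100) // identical to the first condition, so this
      return 3;       // branch is dead: almost certainly a copy-paste error
    return 0;
  }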
Part of IdenticalExprChecker; patch by Daniel Fahlgren!
llvm-svn: 203585
This fixes the bug where we would bitcast the 64-bit floating point result
of cmpneqsd to a 64-bit integer even on 32-bit targets.
Differential Revision: http://llvm-reviews.chandlerc.com/D3009
llvm-svn: 203581
When resolving a function call to an external routine, the dynamic
loader must patch the "nop" after the branch instruction to a load
that restores the TOC register.
Current code does that, but only with the *first* instance of a call
to any particular external routine, i.e. at the point where it also
allocates the call stub. With subsequent calls to the same routine,
current code neglects to patch in the TOC restore code. This is a
bug, and leads to corrupt TOC pointers in those cases.
Fixed by patching in restore code every time.
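As a simplified sketch (not the actual resolver code; the 40-byte TOC save
slot is what the 64-bit ELF v1 ABI uses):

  #include <cstdint>
  #include <cstring>

  // After a "bl" to an external call stub, the following "nop" must be
  // rewritten into the TOC restore "ld r2, 40(r1)", which encodes as
  // 0xE8410028. PPC instruction words are stored big-endian here.
  void patchTOCRestore(uint8_t *CallInstr) {
    const uint32_t LdTOC = 0xE8410028; // ld r2, 40(r1)
    const uint8_t Word[4] = {uint8_t(LdTOC >> 24), uint8_t(LdTOC >> 16),
                             uint8_t(LdTOC >> 8), uint8_t(LdTOC)};
    std::memcpy(CallInstr + 4, Word, 4); // overwrite the nop after "bl"
  }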
llvm-svn: 203580
Use the options in ARMISelLowering to control whether tail calls are
optimised or not. Previously, this option was entirely ignored on the ARM
target and only honoured on x86.
This option is mostly useful in profiling scenarios. The default remains that
tail call optimisations will be applied.
llvm-svn: 203577
This option is from 2010, designed to work around a linker issue on Darwin for
ARM. According to grosbach this is no longer an issue and this option can
safely be removed.
llvm-svn: 203576
Tail call optimisation was previously disabled on all targets other than
iOS5.0+. This enables the tail call optimisation on all Thumb 2 capable
platforms.
The test adjustments remove the IR hint "tail" from function invocations.
The tests were written assuming that tail call optimisations would not kick
in, which no longer holds true.
llvm-svn: 203575
After r203553, overflow intrinsics and their non-intrinsic (normal)
counterparts hash to the same value. This patch prevents PRE from moving an
instruction into a predecessor block and trying to add a phi node whose
incoming values have two different types (the intrinsic's struct result and
the plain instruction's result), which tripped an assert.
llvm-svn: 203574
ATOMIC_STORE operations always get here as a lowered ATOMIC_SWAP, so there's no
need for any code to handle them specially.
There should be no functionality change, so no tests.
llvm-svn: 203567
Someone could write:
  if (0) {
    __c11_atomic_load(ptr, memory_order_release);
  }
or the equivalent, which is perfectly valid, so we shouldn't outright reject
invalid orderings on purely static grounds.
rdar://problem/16242991
llvm-svn: 203564
Before:
  auto aaaaaaaa = [](int i, // break
                     int j)
      -> int {
    return fffffffffffffffffffffffffffffffffffffff(i * j);
  };
After:
  auto aaaaaaaa = [](int i, // break
                     int j) -> int {
    return fffffffffffffffffffffffffffffffffffffff(i * j);
  };
llvm-svn: 203562
This is a conservative check: it is legal for the ordering expression to be
non-constant, and in that case we simply cannot tell whether it is valid.
rdar://problem/16242991
llvm-svn: 203561
The syntax for "cmpxchg" should now look something like:
cmpxchg i32* %addr, i32 42, i32 3 acquire monotonic
where the second ordering argument gives the required semantics in the case
that no exchange takes place. It should be no stronger than the first ordering
constraint and cannot be either "release" or "acq_rel" (since no store will
have taken place).
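The same two-ordering scheme exists in the C++11 atomics API, which may help
as a point of comparison (standard-library code, not the new IR syntax):

  #include <atomic>

  int main() {
    std::atomic<int> Addr{42};
    int Expected = 42;
    // Success ordering acquire, failure ordering relaxed ("monotonic"
    // in IR terms). The failure ordering may be no stronger than the
    // success ordering and cannot be release or acq_rel, since no
    // store happens when the comparison fails.
    Addr.compare_exchange_strong(Expected, 3, std::memory_order_acquire,
                                 std::memory_order_relaxed);
    return 0;
  }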
rdar://problem/15996804
llvm-svn: 203559
This header is generally not available on mingw and can cause build errors.
The Windows host code already provides a timespec definition that can be used
in the mingw case.
llvm-svn: 203555
When an overflow intrinsic is followed by a non-overflow instruction with the
same operands, replace the latter with an extract. For example:
  %sadd = tail call { i32, i1 } @llvm.sadd.with.overflow.i32(i32 %a, i32 %b)
  %sadd3 = add i32 %a, %b
Here the add is replaced by an extract of the intrinsic's result:
  %sadd3 = extractvalue { i32, i1 } %sadd, 0
When an overflow intrinsic follows a non-overflow instruction, a clone
of the intrinsic is inserted before the normal instruction, which makes
it the same as the previous case. Subsequent runs of GVN can then clean
up the duplicate instructions and insert the extract.
This fixes PR8817.
llvm-svn: 203553
Some of this data had gotten out of date, so we weren't quite testing
what we thought we were. This also moves the outdated data test to its
own file to simplify regenerating the test data.
llvm-svn: 203546
When we are at the innermost band, we try to prepare for vectorization. This
means we look for the innermost parallel loop and strip-mine it to the
innermost level, using a strip-mine factor corresponding to the number of
vector iterations.
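As a rough illustration (hand-written, not Polly's actual output), strip
mining with a factor of 4 vector iterations turns a parallel loop into a
loop over strips plus a short fixed-width loop that maps onto vector lanes:

  #include <algorithm>

  // Original loop: for (int i = 0; i < N; i++) A[i] += B[i];
  void addStripMined(float *A, const float *B, int N) {
    for (int ii = 0; ii < N; ii += 4)                // loop over strips
      for (int i = ii; i < std::min(ii + 4, N); ++i) // one 4-wide strip
        A[i] += B[i];
  }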
For whatever reason, the code that implemented this feature was broken. We
have now added a comment, a test case and, obviously, the right code.
llvm-svn: 203544