runOnCountable() allowed the caller to invoke it on a loop without a
predictable backedge-taken count. Change the code so that this function
can only be called on loops with a computable backedge-taken count, in
order to catch abuses.
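A minimal sketch of the kind of guard this implies, using the standard
ScalarEvolution API (the helper name is illustrative, not the actual
runOnCountable() change):

  #include "llvm/Analysis/LoopInfo.h"
  #include "llvm/Analysis/ScalarEvolution.h"
  #include "llvm/Support/Casting.h"
  using namespace llvm;

  // Illustrative only: a loop is "countable" when ScalarEvolution can
  // compute its backedge-taken count.
  static bool hasComputableBackedgeTakenCount(const Loop *L,
                                              ScalarEvolution &SE) {
    const SCEV *BTC = SE.getBackedgeTakenCount(L);
    return !isa<SCEVCouldNotCompute>(BTC);
  }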
llvm-svn: 237044
Summary:
In the RewriteStatepointsForGC pass, we create a gc_relocate intrinsic for
each relocated pointer, and the gc_relocate has the same type as the
pointer. During the creation of the gc_relocate intrinsic, LLVM needs to
mangle its type. However, LLVM does not support mangling of all possible
types. RewriteStatepointsForGC will hit an assertion failure when it
tries to create a gc_relocate for a pointer to a vector of pointers,
because mangling for vectors of pointers is not supported.
This patch changes the way the RewriteStatepointsForGC pass creates
gc_relocate. For each relocated pointer, we erase the pointer's type and
create a unified gc_relocate of type i8 addrspace(1)*. A bitcast is then
inserted to convert the gc_relocate to the correct type. In this way,
gc_relocate does not need to deal with different pointer types, and the
unsupported type mangling is no longer a problem. This change will also
ease the future transition when LLVM erases pointer types and introduces
a unified pointer type.
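For illustration, a minimal sketch of the shape this takes (hedged: the
IRBuilder helper, the operand indices, and the function name are
placeholders, not the pass's actual bookkeeping):

  #include "llvm/IR/DerivedTypes.h"
  #include "llvm/IR/IRBuilder.h"
  using namespace llvm;

  // Every relocate is created with the single erased type i8 addrspace(1)*,
  // so no per-type intrinsic mangling is needed; a bitcast then restores the
  // pointer's original (addrspace(1)) type for its users.
  static Value *createRelocateWithErasedType(IRBuilder<> &B,
                                             Instruction *StatepointToken,
                                             int BaseIdx, int DerivedIdx,
                                             Type *OriginalPtrTy) {
    LLVMContext &Ctx = B.getContext();
    Type *ErasedTy = PointerType::get(Type::getInt8Ty(Ctx), /*AddrSpace=*/1);
    Value *Reloc =
        B.CreateGCRelocate(StatepointToken, BaseIdx, DerivedIdx, ErasedTy);
    return B.CreateBitCast(Reloc, OriginalPtrTy);
  }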
Some minor changes are also made to the gc_relocate-related parts of
InstCombineCalls, CodeGenPrepare, and Verifier accordingly.
Patch by Chen Li!
Reviewers: reames, AndyAyers, sanjoy
Reviewed By: sanjoy
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D9592
llvm-svn: 237009
Summary:
Since we don't yet have remote Windows debugging, it should be safe to
assume that the remote target uses Unix path separators.
Reviewers: ovyalov, zturner, clayborg, vharron
Reviewed By: vharron
Subscribers: lldb-commits
Differential Revision: http://reviews.llvm.org/D9633
llvm-svn: 237006
Summary:
r235215 added support for f16 to be considered as a load/store type and
for f16 operations to be promoted to f32.
This patch contains miscellaneous fixes for the X86 backend so that all
f16 operations are handled:
1. Set the load-extend action for f16 vectors to Expand.
2. Handle FP_EXTEND in a switch statement when handling v2f32.
3. Do not fold (FP_TO_SINT (load f16)) into FP_TO_INT*_IN_MEM or
(store (SINT_TO_FP)) to a FILD.
Tests included.
Reviewers: ab, srhines, delena
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D9092
llvm-svn: 237004
After mailing list discussion on 11-13 March, we would prefer to stick to
a single spelling of the long option.
This reverts commit 30035fe1a7c759c89ee62eb46efce6b3790fcc08.
llvm-svn: 237003
Include <algorithm> early, as otherwise you get a number of particularly
unhelpful messages about failed static assertions. This fixes compilation
on Linux with gcc.
llvm-svn: 237002
When there are two breakpoints on two consecutive instructions, the
second breakpoint is ignored by lldb. This test checks for the correct
behaviour in this scenario.
Differential revision: http://reviews.llvm.org/D9661
llvm-svn: 236997
For some reason this happens only when running the full test suite
(e.g., via ninja check-lldb), but not when running the
TestStaticVariables.py tests in isolation. XFAIL for now while
investigating, in an attempt to bring the bot to green and reduce noise.
llvm.org/pr20550
llvm-svn: 236993
Specifically, calculate the deviation between the shortest and longest
element (which is used to prevent excessive whitespace) per column, not
overall. This automatically handles the corner cases of a single column
and a single row so that the actual implementation becomes simpler.
Before:
vector<int> x = {1,
                 aaaaaaaaaaaaaaaaaaaaaa,
                 2,
                 bbbbbbbbbbbbbbbbbbbbbb,
                 3,
                 cccccccccccccccccccccc};
After:
vector<int> x = {1, aaaaaaaaaaaaaaaaaaaaaa,
                 2, bbbbbbbbbbbbbbbbbbbbbb,
                 3, cccccccccccccccccccccc};
llvm-svn: 236992
Summary:
Now that all thread events are processed synchronously, there is no need to have separate records
of whether a thread is running. This changes the (ever-dwindling) remains of the TSC to use
NativeThreadLinux as the authoritative source of the state of threads. The rest of the
ThreadContext we need has been moved to a member of NTL.
Test Plan: ninja check-lldb continues to pass
Reviewers: chaoren, ovyalov
Subscribers: lldb-commits
Differential Revision: http://reviews.llvm.org/D9562
llvm-svn: 236983
Summary:
Now it's much easier to follow what's happening in this test.
Also removed some unused metadata entries.
Reviewers: hfinkel
Reviewed By: hfinkel
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D9601
llvm-svn: 236981
Summary:
As far as I understand, the entire point of this example is to show that
if the noalias list is not a superset of (or equal to) the alias.scope
list in a scope domain, then the load could reference locations that the
store is not known not to alias, i.e. they may alias.
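As a hedged illustration of that point using the MDBuilder scoped-alias
helpers (the function and scope names are made up, not taken from the
test):

  #include "llvm/ADT/SmallVector.h"
  #include "llvm/IR/Instructions.h"
  #include "llvm/IR/LLVMContext.h"
  #include "llvm/IR/MDBuilder.h"
  #include "llvm/IR/Metadata.h"
  using namespace llvm;

  // The store's alias.scope names scope A, but the load's noalias list only
  // names scope B; the noalias list therefore does not cover the store's
  // scope, so no "no-alias" conclusion can be drawn -- they may alias.
  static void tagMayAliasPair(LoadInst *Load, StoreInst *Store,
                              LLVMContext &Ctx) {
    MDBuilder MDB(Ctx);
    MDNode *Domain = MDB.createAnonymousAliasScopeDomain("dom");
    MDNode *ScopeA = MDB.createAnonymousAliasScope(Domain, "A");
    MDNode *ScopeB = MDB.createAnonymousAliasScope(Domain, "B");

    SmallVector<Metadata *, 1> StoreScopes{ScopeA};
    SmallVector<Metadata *, 1> LoadNoAlias{ScopeB};
    Store->setMetadata(LLVMContext::MD_alias_scope,
                       MDNode::get(Ctx, StoreScopes));
    Load->setMetadata(LLVMContext::MD_noalias, MDNode::get(Ctx, LoadNoAlias));
  }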
Reviewers: hfinkel
Reviewed By: hfinkel
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D9598
llvm-svn: 236977
Pull various parts of the UnwrappedLineFormatter into their own
abstractions. NFC.
There are two things left for subsequent changes (to keep this
reasonably small):
- the UnwrappedLineFormatter now has a bad name
- the UnwrappedLineFormatter::format function is still too large
llvm-svn: 236974
The QPX single-precision load/store intrinsics have an implied
truncation/extension between the declared value type of <4 x double> and
the memory type of <4 x float>. When we can prove the alignment of the
pointer argument, and thus replace the intrinsic with a regular load or
store, we need to load or store the correct data type (<4 x float>)
instead of <4 x double>.
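A rough sketch of the load side of that replacement, using present-day
IRBuilder names (the helper name and the 16-byte alignment are
illustrative, not the actual PowerPC code):

  #include "llvm/IR/DerivedTypes.h"
  #include "llvm/IR/IRBuilder.h"
  #include "llvm/Support/Alignment.h"
  using namespace llvm;

  // Load the memory type <4 x float> and extend to the declared value type
  // <4 x double>, instead of loading <4 x double> directly.
  static Value *expandAlignedQPXLoad(IRBuilder<> &B, Value *Ptr) {
    LLVMContext &Ctx = B.getContext();
    Type *MemTy = FixedVectorType::get(Type::getFloatTy(Ctx), 4);
    Type *ValTy = FixedVectorType::get(Type::getDoubleTy(Ctx), 4);
    Value *Loaded = B.CreateAlignedLoad(MemTy, Ptr, Align(16));
    return B.CreateFPExt(Loaded, ValTy);
  }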
llvm-svn: 236973