`ninja lldb` used to always run "echo -n", which on OS X results in literally
echoing "-n" to the screen. Just remove the command from add_custom_target,
then it only adds an alias and `ninja lldb` now reports "no work to do".
Other than that, no intended behavior change.
llvm-svn: 226233
This produces comdats for vtables, typeinfo, typeinfo names, and VTTs.
When combined with LLVM not producing implicit comdats, not doing this would
cause code bloat on ELF and link errors on COFF.
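For illustration (a hypothetical class, not taken from the patch), consider a header-only class with inline virtual functions; its vtable and RTTI are emitted in every translation unit that uses it, and comdats let the linker keep a single copy:
// widget.h -- hypothetical header included from many translation units
struct Widget {
  virtual void draw() {}   // only inline virtual functions, so no key function
  virtual ~Widget() {}
};
// Clang now emits the vtable (_ZTV6Widget), typeinfo (_ZTI6Widget), and
// typeinfo name (_ZTS6Widget) in comdat groups, so the linker can deduplicate
// them instead of bloating the image (ELF) or failing the link (COFF).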
llvm-svn: 226227
This hooks up the changes necessary to set the trap flag on the
CPU and properly manage the process and thread's resume state
and private state so that the ThreadPlan does its thing.
Stepping still doesn't work as of this change because of some issues with
stack frames: the thread's frame list is not updated correctly when it stops
inside a function. I will try to fix that separately.
llvm-svn: 226221
On Windows, opening a file with "w" opens it in text mode instead of binary mode.
This causes translation of newline characters, so that "\n" turns
into "\r\n", which in turn leads to git detecting that the file
has changed and wanting to commit it.
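A minimal sketch of the difference (illustrative only; not the actual call site in this patch):
#include <stdio.h>

// On Windows, "w" opens the stream in text mode, so every '\n' written is
// translated to "\r\n" on disk; "wb" opens it in binary mode and writes the
// bytes through unchanged.
void write_unix_newlines(const char *path) {
  FILE *f = fopen(path, "wb"); /* with "w", git would see the file change */
  if (f) {
    fputs("line one\nline two\n", f);
    fclose(f);
  }
}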
llvm-svn: 226220
Summary:
Shift an older "invalid file" test to get consistent naming for these tests.
Bugs found by afl-fuzz
Reviewers: rafael
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D6945
llvm-svn: 226219
be exported from a dylib if their containing object file were linked into one.
No test case: No command line tools query this flag, and there are no Object
unit tests.
llvm-svn: 226217
The change used C++11 features not supported by MSVC 2012. I will fix
the change to use things supported by MSVC 2012 and recommit shortly.
llvm-svn: 226216
The PPC backend will now assume that PPC64 ELFv1 function descriptors are
invariant. This must be true for well-defined C/C++ code, but I'm providing an
option to disable this assumption in case someone's JIT engine needs it.
llvm-svn: 226209
Clang would previously become confused and crash here.
It does not make a lot of sense to export these, so a warning seems appropriate.
MSVC will export some member functions for this kind of specialization, whereas
MinGW ignores the dllexport. The latter behaviour seems better.
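One plausible shape of the construct in question (a purely hypothetical example; the exact pattern the warning targets may differ):
template <typename T> struct S { void f() {} };
// dllexport applied to an explicit specialization of the class template:
template <> struct __declspec(dllexport) S<int> { void f() {} };
// Per the message above, MSVC exports some member functions of such a
// specialization, MinGW simply ignores the attribute, and Clang now warns
// instead of crashing.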
Differential Revision: http://reviews.llvm.org/D6984
llvm-svn: 226208
Function pointers under PPC64 ELFv1 (which is used on PPC64/Linux on the
POWER7, A2 and earlier cores) are really pointers to a function descriptor, a
structure with three pointers: the actual pointer to the code to which to jump,
the pointer to the TOC needed by the callee, and an environment pointer. We
used to chain these loads, and make them opaque to the rest of the optimizer,
so that they'd always occur directly before the call. This is not necessary
and is, in fact, highly suboptimal on embedded cores. Once the function pointer is
known, the loads can be performed ahead of time; in fact, they can be hoisted
out of loops.
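Conceptually (an illustrative sketch, not an actual ABI header), an ELFv1 function descriptor looks like this:
// PPC64 ELFv1 function descriptor layout (illustrative)
struct FunctionDescriptor {
  void *entry; // address of the code to jump to
  void *toc;   // TOC base the callee expects in r2
  void *env;   // environment pointer (used by some languages)
};
// A "function pointer" under this ABI points at one of these structures, so an
// indirect call must first load entry and toc (and possibly env) from memory.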
Now these function descriptors are almost always generated by the linker, and
thus the contents of the descriptors are invariant. As a result, by default,
we'll mark the associated loads as invariant (allowing them to be hoisted out
of loops). I've added a target feature to turn this off, however, just in case
someone needs that option (constructing an on-stack descriptor, casting it to a
function pointer, and then calling it cannot be well-defined C/C++ code, but I
can imagine some JIT-compilation system doing so).
Consider this simple test:
$ cat call.c
typedef void (*fp)();
void bar(fp x) {
  for (int i = 0; i < 1600000000; ++i)
    x();
}
$ cat main.c
typedef void (*fp)();
void bar(fp x);
void foo() {}
int main() {
  bar(foo);
}
On the PPC A2 (the BG/Q supercomputer), marking the function-descriptor loads
as invariant brings the execution time down to ~8 seconds from ~32 seconds with
the loads in the loop.
The difference on the POWER7 is smaller. Compiling with:
gcc -std=c99 -O3 -mcpu=native call.c main.c : ~6 seconds [this is 4.8.2]
clang -O3 -mcpu=native call.c main.c : ~5.3 seconds
clang -O3 -mcpu=native call.c main.c -mno-invariant-function-descriptors : ~4 seconds
(As a first guess, it looks like we'd benefit from additional loop unrolling
here, because this is faster with the extra loads.)
The -mno-invariant-function-descriptors option will be added to Clang shortly.
llvm-svn: 226207
This test casts 0x4 to a function pointer and calls it. Unfortunately, the
faulting address may not be exactly 0x4 on PPC64 ELFv1 systems. The LLVM PPC
backend used to always generate the loads "in order", so we'd fault at 0x4
anyway. However, an upcoming change will loosen that ordering, and we'll pick a
different order on some targets. As a result, as explained in the comment, we
need to allow for certain nearby addresses as well.
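The pattern the test exercises is roughly the following (a sketch, not the exact test source):
typedef void (*fp)();
int main() {
  fp f = (fp)0x4;
  f(); // expected to crash; the test checks the reported faulting address,
       // which on PPC64 ELFv1 may be a small offset away from 0x4
  return 0;
}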
llvm-svn: 226202
IRCE eliminates range checks of the form
0 <= A * I + B < Length
by splitting a loop's iteration space into three segments in a way
that the check is completely redundant in the middle segment. As an
example, IRCE will convert
len = < known positive >
for (i = 0; i < n; i++) {
  if (0 <= i && i < len) {
    do_something();
  } else {
    throw_out_of_bounds();
  }
}
to
len = < known positive >
limit = smin(n, len)
// no first segment
for (i = 0; i < limit; i++) {
  if (0 <= i && i < len) { // this check is fully redundant
    do_something();
  } else {
    throw_out_of_bounds();
  }
}
for (i = limit; i < n; i++) {
  if (0 <= i && i < len) {
    do_something();
  } else {
    throw_out_of_bounds();
  }
}
IRCE can deal with multiple range checks in the same loop (it takes
the intersection of the ranges that will make each of them redundant
individually).
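For instance (a sketch in the same style as the example above), with two checks against len1 and len2 in the same loop, the middle segment's limit would be the intersection of both safe ranges:
len1 = < known positive >
len2 = < known positive >
limit = smin(n, smin(len1, len2))
for (i = 0; i < limit; i++) {
  if (0 <= i && i < len1 && 0 <= i && i < len2) { // both checks redundant here
    do_something();
  } else {
    throw_out_of_bounds();
  }
}
// ... followed by a trailing loop from limit to n with the checks left intact.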
Currently IRCE does not do any profitability analysis. That is a
TODO.
Please note that the status of this pass is *experimental*, and it is
not part of any default pass pipeline. Having said that, I would love
to get feedback and general input from people interested in trying
this out.
Differential Revision: http://reviews.llvm.org/D6693
llvm-svn: 226201
Reapply r226071 with fixes. Two fixes:
1. We need to manually remove the old and create the new 'dead defs'
associated with physical register definitions when we move the definition of
the physical register from the copy point to the point of the original vreg def.
This problem was picked up by the MachineInstr verifier, and could trigger a
verification failure on test/CodeGen/X86/2009-02-12-DebugInfoVLA.ll, so I've
turned on the verifier in the tests.
2. When moving the def point of the phys reg up, we need to make sure that it
is neither defined nor read in between the two instructions. We don't, however,
extend the live ranges of phys reg defs to cover uses, so just checking for
live-range overlap between the pair interval and the phys reg aliases won't
pick up reads. As a result, we manually iterate over the range and check for
reads.
A test soon to be committed to the PowerPC backend will test this change.
Original commit message:
[RegisterCoalescer] Remove copies to reserved registers
This allows the RegisterCoalescer to join "non-flipped" range pairs with a
physical destination register -- which allows the RegisterCoalescer to remove
copies like this:
<vreg> = something (maybe a load, for example)
... (things that don't use PHYSREG)
PHYSREG = COPY <vreg>
(with all of the restrictions normally applied by the RegisterCoalescer: having
compatible register classes, etc.)
Previously, the RegisterCoalescer handled only the opposite case (copying
*from* a physical register). I don't handle the problem fully here, but try to
get the common case where there is only one use of <vreg> (the COPY).
An upcoming commit to the PowerPC backend will make this pattern much more
common on PPC64/ELF systems.
llvm-svn: 226200
The refactor was motivated by some comments that Greg made in
http://reviews.llvm.org/D6918
and also by the need to break a dependency cascade that caused functions linking
in string->int conversion functions to pull in most of lldb.
llvm-svn: 226199
Use static functions for helpers rather than static member functions. a) This changes the linkage (minor at best), and b) it makes it obvious that no object state is involved.
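A minimal sketch of the pattern (hypothetical names, not the actual code):
#include <string>

// A file-local static helper has internal linkage and clearly cannot touch any
// object state, unlike a static member function declared in the class header.
static bool isValidName(const std::string &name) {
  return !name.empty();
}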
llvm-svn: 226198
This is preparation for an update to http://reviews.llvm.org/D6811. GCStrategy.cpp will hopefully be moving into IR/, whereas the lowering logic needs to stay in CodeGen/.
llvm-svn: 226195
This removes some duplicated classes and definitions.
These instructions are defined:
_e32 // pseudo
_e32_si
_e64 // pseudo
_e64_si
_e64_vi
llvm-svn: 226191
v2: modify hasVALU32BitEncoding instead
v3: - add pseudoToMCOpcode helper to AMDGPUInstInfo, which is used by both
hasVALU32BitEncoding and AMDGPUMCInstLower::lower
- report an error if a pseudo can't be lowered
llvm-svn: 226188