The return value of every assignment operator should be Type&, not only
that of the copy and move assignment operators.
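For example (a minimal sketch; the class name is made up), the check now
expects every assignment operator to look like:

  struct X {
    X &operator=(const X &);  // copy assignment: was already checked
    X &operator=(X &&);       // move assignment: was already checked
    X &operator+=(int);       // compound assignment: now also expected to return X&
  };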
Patch by Adam Balogh!
Reviewers: hokein, alexfh
Differential Revision: http://reviews.llvm.org/D18264
llvm-svn: 264251
We used to only allow SCEVAddRecExpr for pointer expressions in order to
be able to compute the bounds. However, this is also trivially possible
for loop-invariant addresses (scUnknown), since then the bounds are the
address itself.
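As a hedged illustration (not taken from the patch), a loop of this shape
can now be handled, since the runtime-check bounds for p collapse to the
single address accessed:

  // 'p' is loop-invariant (scUnknown to SCEV), so its access bounds are
  // just the address itself.
  void f(float *a, float *p, int n) {
    for (int i = 0; i < n; ++i)
      a[i] = *p + i;
  }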
Interestingly, we used to allow this for the special case when the
loop-invariant address happens to also be an SCEVAddRecExpr (in an outer
loop).
There are a couple more loops that are vectorized in SPEC after this.
My guess is that the main reason we don't see more is that, for example, a
loop-invariant load is vectorized into a splat vector with several
vector-inserts, which is likely to make the vectorization unprofitable.
That is, we don't notice that a later LICM will move all of this out of the
loop, so the cost estimate should really be 0.
llvm-svn: 264243
The implicit inclusion of altivec.h has come and gone.
Rationale: This causes modules, rewrite-includes, etc to be sad and
people should just include altivec.h in their source.
llvm-svn: 264235
The stack-size.yaml test had an empty atom content array. This is
legal, but asking a BumpPtrAllocator for 0-sized data may not be.
Instead, avoid requesting any data at all when we can just return an
empty ArrayRef.
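A sketch of the idea, with an illustrative helper name:

  #include "llvm/ADT/ArrayRef.h"
  #include "llvm/Support/Allocator.h"
  #include <cstring>
  // 'copyContent' is an illustrative helper, not lld's actual function.
  llvm::ArrayRef<uint8_t> copyContent(llvm::BumpPtrAllocator &alloc,
                                      llvm::ArrayRef<uint8_t> data) {
    if (data.empty())
      return llvm::ArrayRef<uint8_t>();  // never ask the allocator for 0 bytes
    uint8_t *buf = alloc.Allocate<uint8_t>(data.size());
    std::memcpy(buf, data.data(), data.size());
    return llvm::ArrayRef<uint8_t>(buf, data.size());
  }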
llvm-svn: 264234
It's possible for a file to have no entry atom, which means that there
is no atom to check for being a thumb function. Instead just skip
the thumb check and set the entry address to 0, which matches the
current behaviour of getting a default initialised int from a map.
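Roughly, as a sketch (all names below are illustrative stand-ins, not
lld's actual API):

  #include <cstdint>
  struct Atom;
  const Atom *findEntryAtom();
  uint64_t addressOf(const Atom &);
  bool isThumbFunction(const Atom &);

  uint64_t computeEntryAddress() {
    uint64_t entryAddress = 0;   // the value the old map lookup defaulted to
    if (const Atom *entry = findEntryAtom()) {
      entryAddress = addressOf(*entry);
      if (isThumbFunction(*entry))
        entryAddress |= 1;       // Thumb entry points set the low bit
    }
    return entryAddress;
  }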
llvm-svn: 264233
On a 32-bit output, we may write LC_MAIN (which contains a uint64_t) to
an unaligned address. This changes it to use a memcpy instead, which
avoids the undefined behaviour.
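The pattern, as a small self-contained sketch:

  #include <cstdint>
  #include <cstring>
  void write64(uint8_t *p, uint64_t value) {
    // *reinterpret_cast<uint64_t *>(p) = value;  // UB when p is misaligned
    std::memcpy(p, &value, sizeof(value));        // well-defined at any alignment
  }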
llvm-svn: 264232
We were casting a potentially unaligned pointer to uint32_t and
dereferencing. As the pointer ultimately comes from the object file,
there's no way to guarantee alignment, so use the little32_t read instead.
Also, little32_t knows about endianness, so in theory the old code may
also have been broken on big endian machines.
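A sketch of the safe read, using the unaligned endian-aware type from
llvm/Support/Endian.h:

  #include "llvm/Support/Endian.h"
  #include <cstdint>
  int32_t read32(const uint8_t *p) {
    // little32_t is unaligned-safe and explicitly little-endian, so this is
    // correct regardless of host alignment rules or byte order.
    return *reinterpret_cast<const llvm::support::little32_t *>(p);
  }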
llvm-svn: 264231
The .o path always makes sure to store a power of 2 value in the
Section alignment. However, the YAML code didn't verify this.
Added verification and updated all the tests which had a 3 but meant
to have 2^3.
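The added check is essentially this (sketch; the function name is
illustrative):

  #include "llvm/Support/MathExtras.h"
  #include <cstdint>
  // e.g. reject a literal 3 where the author meant 2^3 == 8.
  bool isValidSectionAlignment(uint32_t align) {
    return llvm::isPowerOf2_32(align);
  }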
llvm-svn: 264228
We need the "return address" of a noreturn call to be within the
bounds of the calling function; TrapUnreachable turns 'unreachable'
into a 'ud2' instruction, which has that desired effect.
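A hedged illustration of the situation being fixed:

  __attribute__((noreturn)) void g();
  void f() {
    g();  // the return address for this call is the next instruction address
  }
  // With TrapUnreachable, the trailing 'unreachable' becomes a 'ud2' after
  // the call, so that return address still lies within f's bounds.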
Differential Revision: http://reviews.llvm.org/D18414
llvm-svn: 264224
If not for lazy linking of linkonce GVs, comdats would be just a
preprocessing step before symbol resolution.
Lazy linking complicates it since when we pick a visible member of
comdat, we have to make sure the rest of it passes symbol resolution
too.
llvm-svn: 264223
This is a temporary crutch to enable code that currently uses std::error_code
to be incrementally moved over to Error. Requiring all Error instances be
convertible enables clients to call errorToErrorCode on any error (not just
ECErrors created by conversion *from* an error_code).
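A usage sketch (the wrapper function here is illustrative):

  #include "llvm/Support/Error.h"
  #include <system_error>
  std::error_code toErrorCode(llvm::Error err) {
    // Now valid for any Error, not just ECErrors that wrapped an error_code.
    return llvm::errorToErrorCode(std::move(err));
  }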
This patch also moves code for Error from ErrorHandling.cpp into a new
Error.cpp file.
llvm-svn: 264221
Summary:
Though r264012 was fancy enough to make reading the jit entry struct
work with templates, the packing and alignment attributes do not work on
Windows. So, this change makes it plain and simple with manual reading
of the jit entry struct.
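Roughly, the manual reading looks like this (a sketch assuming a 64-bit
target; the struct and helper are illustrative, not LLDB's actual code):

  #include <cstdint>
  #include <cstring>
  // Per the GDB JIT interface, jit_code_entry holds two link pointers,
  // a symfile address, and a symfile size.
  struct JITEntry { uint64_t next, prev, symfile_addr, symfile_size; };
  JITEntry readEntry(const uint8_t *data) {
    JITEntry e;  // read field by field instead of overlaying a packed struct
    std::memcpy(&e.next,         data,      sizeof(uint64_t));
    std::memcpy(&e.prev,         data + 8,  sizeof(uint64_t));
    std::memcpy(&e.symfile_addr, data + 16, sizeof(uint64_t));
    std::memcpy(&e.symfile_size, data + 24, sizeof(uint64_t));
    return e;
  }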
Reviewers: clayborg
Subscribers: lldb-commits
Differential Revision: http://reviews.llvm.org/D18379
llvm-svn: 264217
Patch by Michal Cierniak!
This patch increments the "Debug Info Version" from 2 to 3.
This is a nop if you just want to generate binaries. I verified
that with and without this patch, when I run llgo -g on a Go
source file, I get exactly the same binary. The purpose of the
patch is to make it possible to run the llvm-dis tool. Without
the patch, it is impossible to disassemble files generated with
llgo:
$ llgo -c -g -emit-llvm src/hello.go
$ llvm-dis hello.o
llvm-dis: warning: ignoring debug info with an invalid version (2) in hello.o
Differential Revision: http://reviews.llvm.org/D18355
llvm-svn: 264212
Remove tests that have neither a triple nor an explicit -fmsc-version flag,
since in the absence of an -fmsc-version flag, the implicit value of the flag
is 17 (MSVC2013) with MSVC triples but 0 (not set) for other triples, and
the default triple is platform dependent.
This relands r263974 with a test fix.
llvm-svn: 264210
Summary:
Previously we were using the codegen test to ensure that we choose the
right overload. But we can do this within Sema, with a bit of
cleverness.
I left the constructor/destructor checks in CodeGen, because these
overloads (particularly on the destructors) are hard to check in Sema.
Reviewers: tra
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D18386
llvm-svn: 264207
Summary:
Principally, don't hardcode the line numbers of various notes. This
lets us make changes to the test without recomputing linenos everywhere.
Instead, just tell -verify that we may get 0 or more notes pointing to
the relevant function definitions. Checking that we get exactly the
right note isn't so important (and anyway is checked elsewhere).
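For example, instead of pinning a note to a hardcoded line, a directive
of this shape (sketch) annotates the definition itself and accepts any
number of notes:

  __device__ void f(int);  // expected-note 0+ {{candidate function}}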
Reviewers: tra
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D18385
llvm-svn: 264206
Summary:
We decided this makes life too difficult for code authors. For example,
people may want to detect NVCC and disable variadic templates, which
NVCC does not support, but which we do.
Since people are going to have to change compiler flags *anyway* in
order to compile with clang, if they really want the old behavior, they
can pass -D__NVCC__.
Tested with tensorflow and thrust, no apparent problems.
Reviewers: tra
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D18417
llvm-svn: 264205
The size of a section can be zero, even when it contains atoms, so
long as all of the atoms are also size 0. In this case we were
allocating space for a 0 sized buffer.
Changed this to only allocate when we need the space, but also cleaned
up all the code to use MutableArrayRef instead of uint8_t*, so it's much
safer as we get bounds checking on all of our section creation logic.
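A sketch of the pattern:

  #include "llvm/ADT/ArrayRef.h"
  #include "llvm/Support/Allocator.h"
  #include <cstdint>
  llvm::MutableArrayRef<uint8_t> makeContents(llvm::BumpPtrAllocator &alloc,
                                              size_t size) {
    if (size == 0)
      return llvm::MutableArrayRef<uint8_t>();  // empty section: allocate nothing
    // Unlike a raw uint8_t*, the MutableArrayRef carries its length, so
    // out-of-bounds element access asserts in debug builds.
    return llvm::MutableArrayRef<uint8_t>(alloc.Allocate<uint8_t>(size), size);
  }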
llvm-svn: 264204
On a 32-bit output, we may write LC_SOURCE_VERSION (which contains a
uint64_t) to an unaligned address. This changes it to use a memcpy
instead, which avoids the undefined behaviour.
llvm-svn: 264202
The BumpPtrAllocator currently doesn't handle zero length allocations well.
The discussion for how to fix that is ongoing. However, there's no need
for StringRef::copy to actually allocate anything here anyway, so just
return StringRef() when we get a zero length copy.
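A sketch of the shape of the fix, written as a free function here:

  #include "llvm/ADT/StringRef.h"
  #include "llvm/Support/Allocator.h"
  #include <algorithm>
  llvm::StringRef copyString(llvm::StringRef s, llvm::BumpPtrAllocator &a) {
    if (s.empty())
      return llvm::StringRef();  // skip the allocator entirely for 0 bytes
    char *buf = a.Allocate<char>(s.size());
    std::copy(s.begin(), s.end(), buf);
    return llvm::StringRef(buf, s.size());
  }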
Reviewed by David Blaikie
llvm-svn: 264201
Strengthen tests of storing frame indices.
Right now this just creates irrelevant scheduling changes.
We don't want to have multiple frame index operands
on an instruction. There seem to be various assumptions
that at least the same frame index will not appear twice
in the LocalStackSlotAllocation pass.
There's no reason to have this happen, and it just
makes it easy to introduce bugs where the immediate
offset is applied to the storing instruction when it should
really be applied to the value being stored as a separate
add.
This might not be sufficient. It might still be problematic
to have an add fi, fi situation, but that's even less likely
to happen in real code.
llvm-svn: 264200
Currently, AnalyzeBranch() fails on non-equality comparisons between floating points
on X86 (see https://llvm.org/bugs/show_bug.cgi?id=23875). This is because this
function can modify the branch by reversing the conditional jump and removing
unconditional jump if there is a proper fall-through. However, in the case of
non-equality comparison between floating points, this can turn the branch
"unanalyzable". Consider the following case:
  jne .BB1
  jp  .BB1
  jmp .BB2
.BB1:
  ...
.BB2:
  ...
AnalyzeBranch() will reverse "jp .BB1" to "jnp .BB2" and then "jmp .BB2" will be
removed:
  jne .BB1
  jnp .BB2
.BB1:
  ...
.BB2:
  ...
However, AnalyzeBranch() cannot analyze this branch anymore as there are two
conditional jumps with different targets. This may disable some optimizations
like block-placement: in this case the fall-through behavior is enforced even if
the fall-through block is very cold, which is suboptimal.
Actually this optimization is also done in the block-placement pass, which
means we can remove it from AnalyzeBranch(). However, currently
X86::COND_NE_OR_P and X86::COND_NP_OR_E are not reversible: there are no
defined negation conditions for them.
In order to reverse them, this patch defines two new CondCodes, X86::COND_E_AND_NP
and X86::COND_P_AND_NE. It also defines how to synthesize instructions for them.
Here only the second conditional jump is reversed. This is valid as we only need
the new CondCodes to do this "unconditional jump removal" optimization.
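For reference, a hedged example of the kind of source that produces the
jne/jp pair above (PF is set when the operands are unordered, i.e. NaN):

  void g();
  void f(float a, float b) {
    // Lowers to ucomiss + jne/jp: the branch must be taken when the operands
    // compare not-equal *or* unordered, hence the two conditional jumps.
    if (a != b)
      g();
  }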
Differential Revision: http://reviews.llvm.org/D11393
llvm-svn: 264199
The goal is to enhance this script to be used with opt and clang:
Group all of the regexes together, so it's easier to see what's going on.
This will make it easier to break main() up into pieces too.
Also, note that some of the regexes are for x86-specific asm.
llvm-svn: 264197