Commit Graph

11194 Commits

Author SHA1 Message Date
Chandler Carruth 219b89b987 [Modules] Move CallSite into the IR library where it belongs. It is
abstracting between a CallInst and an InvokeInst, both of which are IR
concepts.

llvm-svn: 202816
2014-03-04 11:01:28 +00:00
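For readers unfamiliar with CallSite: it lets code handle direct calls and
invokes uniformly. A minimal sketch of era-appropriate usage (the class has
since been replaced by CallBase in modern LLVM):

    #include "llvm/IR/CallSite.h"
    #include "llvm/IR/Function.h"
    #include "llvm/IR/Instruction.h"
    using namespace llvm;

    // Sketch: treat a CallInst or InvokeInst uniformly. CallSite is
    // null when constructed from an instruction that is neither.
    static Function *getDirectCallee(Instruction *I) {
      CallSite CS(I);
      if (!CS.getInstruction())
        return nullptr;              // Not a call or invoke.
      return CS.getCalledFunction(); // Null for indirect calls.
    }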
Chandler Carruth 03eb0de93d [Modules] Move GetElementPtrTypeIterator into the IR library. As its
name might indicate, it is an iterator over the types in an instruction
in the IR.... You see where this is going.

Another step of modularizing the support library.

llvm-svn: 202815
2014-03-04 10:40:04 +00:00
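A sketch of the iterator in use against the 2014-era API (the accessors have
since been reshuffled): it walks a GEP's indices paired with the type each
index steps through.

    #include "llvm/IR/GetElementPtrTypeIterator.h"
    #include "llvm/IR/Instructions.h"
    using namespace llvm;

    // Sketch (2014-era API): visit each (type, index) step of a GEP.
    static void walkGEP(GetElementPtrInst *GEP) {
      for (gep_type_iterator GTI = gep_type_begin(GEP), E = gep_type_end(GEP);
           GTI != E; ++GTI) {
        Type *SteppedType = *GTI;        // Type being indexed at this step.
        Value *Index = GTI.getOperand(); // The corresponding index operand.
        (void)SteppedType;
        (void)Index;
      }
    }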
Chandler Carruth 8394857f43 [Modules] Move InstIterator out of the Support library, where it had no
business.

This header includes Function and BasicBlock and directly uses the
interfaces of both classes. It has to do with the IR; it even has that
in the name. =] Put it in the library it belongs to.

This is one step toward making LLVM's Support library survive a C++
modules bootstrap.

llvm-svn: 202814
2014-03-04 10:30:26 +00:00
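What the header provides, in one small sketch: a flat walk over every
instruction in a function, with no nested loops over blocks.

    #include "llvm/IR/Function.h"
    #include "llvm/IR/InstIterator.h"
    using namespace llvm;

    // Sketch: count instructions without writing the usual
    // for-each-block / for-each-instruction nesting by hand.
    static unsigned countInsts(Function &F) {
      unsigned N = 0;
      for (inst_iterator I = inst_begin(F), E = inst_end(F); I != E; ++I)
        ++N;
      return N;
    }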
Chandler Carruth 442f784814 [cleanup] Re-sort all the includes with utils/sort_includes.py.
llvm-svn: 202811
2014-03-04 10:07:28 +00:00
Diego Novillo f5041ce558 Pass to emit DWARF path discriminators.
DWARF discriminators are used to distinguish multiple control flow paths
on the same source location. When this happens, instructions across
basic block boundaries will share the same debug location.

This pass detects this situation and creates a new lexical scope for one
of the two instructions. This lexical scope is a child scope of the
original and contains a new discriminator value. This discriminator is
then picked up by MCObjectStreamer::EmitDwarfLocDirective and written
into the object file.

This fixes http://llvm.org/bugs/show_bug.cgi?id=18270.

llvm-svn: 202752
2014-03-03 20:06:11 +00:00
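To see why discriminators are needed, consider a line that expands to several
basic blocks; without them, every block carries the same file:line pair and a
sample profile cannot tell the paths apart. A hypothetical example:

    // Illustration: this single source line expands to several basic
    // blocks (the test, the taken path, the fall-through path), all of
    // which would otherwise share one file:line debug location.
    int clamp(int x) { if (x < 0) return 0; else return x; }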
Benjamin Kramer b2f034b85e [C++11] Use std::tie to simplify compare operators.
No functionality change.

llvm-svn: 202751
2014-03-03 19:58:30 +00:00
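The pattern, as a minimal sketch with illustrative field names:

    #include <tuple>

    // Sketch: lexicographic operator< written with std::tie instead of
    // hand-chained comparisons (which are easy to get subtly wrong).
    struct SliceRange {
      unsigned BeginOffset, EndOffset;
    };

    inline bool operator<(const SliceRange &L, const SliceRange &R) {
      return std::tie(L.BeginOffset, L.EndOffset) <
             std::tie(R.BeginOffset, R.EndOffset);
    }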
Benjamin Kramer 9c794c7a7c [C++11] Remove a leftover std::function instance.
It's not needed anymore.

llvm-svn: 202748
2014-03-03 19:49:02 +00:00
Chandler Carruth d031fe9fcf [C++11] Remove the completely unnecessary requirement on SetVector's
remove_if that its predicate is adaptable. We don't actually need this;
we can write a generic adapter for any predicate.

This lets us remove some very wrong std::function usages. We should
never be using std::function for predicates to algorithms. This incurs
an *indirect* call overhead for every evaluation of the predicate, and
makes it very hard to inline through.

llvm-svn: 202742
2014-03-03 19:28:52 +00:00
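A sketch of the adapter idea (names are illustrative, not the actual
SetVector internals): template the wrapper on the callable type so that plain
functors and lambdas work, calls stay direct, and no argument_type typedef is
required.

    #include <algorithm>
    #include <set>
    #include <vector>

    // Wraps *any* predicate; erases matching values from the set side
    // while std::remove_if compacts the vector side.
    template <typename Pred, typename T> struct TestAndEraseFromSet {
      Pred P;
      std::set<T> &Set;
      bool operator()(const T &V) {
        if (P(V)) {
          Set.erase(V);
          return true;
        }
        return false;
      }
    };

    template <typename Pred, typename T>
    void removeIf(std::vector<T> &Vec, std::set<T> &Set, Pred P) {
      TestAndEraseFromSet<Pred, T> Adapter{P, Set};
      Vec.erase(std::remove_if(Vec.begin(), Vec.end(), Adapter), Vec.end());
    }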
Evgeniy Stepanov 77be532f71 [msan] Handle X86 SIMD bitshift intrinsics.
llvm-svn: 202712
2014-03-03 13:47:42 +00:00
Tobias Grosser 4abf9d3a54 [C++11] Add a basic block range view for RegionInfo
This also switches the users in LLVM to ensure this functionality is tested.

llvm-svn: 202705
2014-03-03 13:00:39 +00:00
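In use, the range view reduces the usual block_begin()/block_end() loop to a
range-based for (a sketch):

    #include "llvm/Analysis/RegionInfo.h"
    using namespace llvm;

    // Sketch: iterate a region's basic blocks with a range-based for.
    static unsigned countRegionBlocks(Region *R) {
      unsigned N = 0;
      for (BasicBlock *BB : R->blocks()) {
        (void)BB;
        ++N;
      }
      return N;
    }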
Chandler Carruth 1583e99c23 [C++11] Add two range adaptor views to User: operands and
operand_values. The first provides a range view over operand Use
objects, and the second provides a range view over the Value*s being
used by those operands.

The naming is "STL-style" rather than "LLVM-style" because we have
historically named iterator methods STL-style, and range methods seem to
have far more in common with their iterator counterparts than with
"normal" APIs. Feel free to bikeshed on this one if you want, I'm happy
to change these around if people feel strongly.

I've switched code in SROA and LCG to exercise these mostly to ensure
they work correctly -- we don't really have an easy way to unittest this
and they're trivial.

llvm-svn: 202687
2014-03-03 10:42:58 +00:00
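The two views side by side, sketched:

    #include "llvm/IR/Instruction.h"
    #include "llvm/IR/Use.h"
    #include "llvm/IR/Value.h"
    using namespace llvm;

    // operands() yields the Use objects (operand slot plus bookkeeping);
    // operand_values() yields just the Value*s those slots refer to.
    static void inspect(Instruction &I) {
      for (Use &U : I.operands())
        (void)U;
      for (Value *V : I.operand_values())
        (void)V;
    }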
Benjamin Kramer d6f1f84f51 [C++11] Replace llvm::tie with std::tie.
The old implementation is no longer needed in C++11.

llvm-svn: 202644
2014-03-02 13:30:33 +00:00
Benjamin Kramer b6d0bd48bd [C++11] Replace llvm::next and llvm::prior with std::next and std::prev.
Remove the old functions.

llvm-svn: 202636
2014-03-02 12:27:27 +00:00
Craig Topper 73156025e0 Switch all uses of LLVM_OVERRIDE to just use 'override' directly.
llvm-svn: 202621
2014-03-02 09:09:27 +00:00
Chandler Carruth 002da5db29 [C++11] Switch all uses of the llvm_move macro to use std::move
directly, and remove the macro.

llvm-svn: 202612
2014-03-02 04:08:41 +00:00
Benjamin Kramer 3a377bce4e Now that we have C++11, turn simple functors into lambdas and remove a ton of boilerplate.
No intended functionality change.

llvm-svn: 202588
2014-03-01 11:47:00 +00:00
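The shape of the change, sketched on a toy predicate:

    #include <algorithm>
    #include <vector>

    // Pre-C++11 boilerplate: a named functor holding one comparison.
    struct IsNegative {
      bool operator()(int V) const { return V < 0; }
    };

    static void dropNegatives(std::vector<int> &Vals) {
      // Before: std::remove_if(Vals.begin(), Vals.end(), IsNegative())
      // After: the predicate is written inline as a lambda.
      Vals.erase(std::remove_if(Vals.begin(), Vals.end(),
                                [](int V) { return V < 0; }),
                 Vals.end());
    }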
Reid Kleckner e6ff5c51e6 Reflow isProfitableToMakeFastCC
llvm-svn: 202555
2014-02-28 22:50:08 +00:00
Kostya Serebryany cb3f6e164b [asan] fix a pair of silly typos
llvm-svn: 202391
2014-02-27 13:13:59 +00:00
Kostya Serebryany ec34665de9 [asan] disable asan-detect-invalid-pointer-pair (was enabled by mistake)
llvm-svn: 202390
2014-02-27 12:56:20 +00:00
Kostya Serebryany 796f6557bf [asan] *experimental* implementation of invalid-pointer-pair detector (finds when two unrelated pointers are compared or subtracted). This implementation has both false positives and false negatives and is not tuned for performance. A bug report for a proper implementation will follow.
llvm-svn: 202389
2014-02-27 12:45:36 +00:00
Reid Kleckner 22869378d9 GlobalOpt: Apply fastcc to internal x86_thiscallcc functions
We should apply fastcc whenever profitable.  We can expand this list,
but there are lots of conventions with performance implications that we
don't want to change.

Differential Revision: http://llvm-reviews.chandlerc.com/D2705

llvm-svn: 202293
2014-02-26 19:57:30 +00:00
Andrew Trick 429e9edd08 Fix PR18165: LSR must avoid scaling factors that exceed the limit on truncated use.
Patch by Michael Zolotukhin!

llvm-svn: 202273
2014-02-26 16:31:56 +00:00
Chandler Carruth dfb2efd0da [SROA] Use the correct index integer size in GEPs through non-default
address spaces.

This isn't really a correctness issue (the values are truncated), but it's
much cleaner.

Patch by Matt Arsenault!

llvm-svn: 202252
2014-02-26 10:08:16 +00:00
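The gist, sketched (assuming the usual DataLayout API): ask for the integer
width of the pointer's own address space rather than the default one.

    #include "llvm/IR/DataLayout.h"
    #include "llvm/IR/Type.h"
    #include "llvm/IR/Value.h"
    using namespace llvm;

    // Sketch: getIntPtrType(Ty) inspects Ty's address space, so GEP
    // indices get the width of that space, not of address space 0.
    static Type *gepIndexTypeFor(const DataLayout &DL, Value *Ptr) {
      return DL.getIntPtrType(Ptr->getType());
    }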
Chandler Carruth 286d87ed38 [SROA] Teach SROA how to handle pointers from address spaces other than
the default.

Based on the patch by Matt Arsenault, D1764!

I switched one place to use the more direct pointer type to compute the
desired address space, and I reworked the memcpy rewriting section to
reflect significant refactorings that this patch helped inspire.

Thanks to several of the folks who helped review and improve the patch
as well.

llvm-svn: 202247
2014-02-26 08:25:02 +00:00
Chandler Carruth aa72b93ae7 [SROA] Split the alignment computation completely for the memcpy rewriting
to work independently for the slice side and the other side.

This allows us to only compute the minimum of the two when we actually
rewrite to a memcpy that needs to take the minimum, and preserve higher
alignment for one side or the other when rewriting to loads and stores.

This fix was inspired by seeing the result of some refactoring that
makes addrspace handling better.

llvm-svn: 202242
2014-02-26 07:29:54 +00:00
Chandler Carruth 181ed05b6a [SROA] The original refactoring inspired by the addrspace patch in
D1764, which in turn set off the other refactorings to make
'getSliceAlign()' a sensible thing.

There are two possible inputs to the required alignment of a memory
transfer intrinsic: the alignment constraints of the source and the
destination. If we are *only* introducing a (potentially new) offset
onto one side of the transfer, we don't need to consider the alignment
constraints of the other side. Use this to simplify the logic feeding
into alignment computation for unsplit transfers.

Also, hoist the clamp of the magical zero alignment for these intrinsics
to the more customary one alignment early. This lets several other
conditions melt away.

No functionality changed. There is a further improvement this exposes
which *will* change functionality, but that's arriving in a separate
patch.

llvm-svn: 202232
2014-02-26 05:33:36 +00:00
Chandler Carruth 47954c80ed [SROA] Yet another slight refactoring that simplifies an API in the
rewriting logic: don't pass custom offsets for the adjusted pointer to
the new alloca.

We always passed NewBeginOffset here. Sometimes we spelled it
BeginOffset, but only when they were in fact equal. What's worse, the API
is set up so that you can't reasonably call it with anything else -- it
assumes that you're passing it an offset relative to the *original*
alloca that happens to fall within the new one. That's the whole point
of NewBeginOffset: it's the clamped beginning offset.

No functionality changed.

llvm-svn: 202231
2014-02-26 05:12:43 +00:00
Chandler Carruth 2659e503c3 [SROA] Simplify the computing of alignment: we only ever need the
alignment of the slice being rewritten, not any arbitrary offset.

Every caller is really just trying to compute the alignment for the
whole slice, never for some arbitrary alignment. They are also just
passing a type when they have one to see if we can skip an explicit
alignment in the IR by using the type's alignment. This makes for a much
simpler interface.

Another refactoring inspired by the addrspace patch for SROA, although
only loosely related.

llvm-svn: 202230
2014-02-26 05:02:19 +00:00
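A sketch of the simplified interface; the parameters stand in for the
rewriter's members, and the names are illustrative:

    #include "llvm/IR/DataLayout.h"
    #include "llvm/IR/Type.h"
    #include "llvm/Support/MathExtras.h"
    using namespace llvm;

    // Alignment of the slice being rewritten. If a type is supplied and
    // its ABI alignment already matches, return 0 so the IR can rely on
    // the type's natural alignment instead of an explicit one.
    static unsigned getSliceAlign(const DataLayout &DL, unsigned AllocaAlign,
                                  uint64_t SliceOffset, Type *Ty = nullptr) {
      unsigned Align = (unsigned)MinAlign(AllocaAlign, SliceOffset);
      if (Ty && Align == DL.getABITypeAlignment(Ty))
        return 0;
      return Align;
    }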
Chandler Carruth 735d5bee48 [SROA] Use NewOffsetBegin in the unsplit case for memset merely for
consistency with memcpy rewriting, and fix a latent bug in the alignment
management for memset.

The alignment issue is that getAdjustedAllocaPtr computes the
*relative* offset into the new alloca, but the alignment wasn't being
set from that relative offset; it was using the absolute offset into
the old alloca.

I don't think it's possible to write a test case that actually reaches
this code where the resulting alignment would be observably different,
but the intent was clearly to use the relative offset within the new
alloca.

llvm-svn: 202229
2014-02-26 04:45:24 +00:00
Chandler Carruth ea27cf08d8 [SROA] Use the members for New{Begin,End}Offset in the rewrite helpers
rather than passing them as arguments.

While I generally prefer actual arguments, in this case the readability
loss is substantial. By using members we avoid repeatedly calculating
the offsets, and once we're using members it is useful to ensure that
those names *always* refer to the original-alloca-relative new offset
for a rewritten slice.

No functionality changed. Follow-up refactoring, all toward getting the
address space patch merged.

llvm-svn: 202228
2014-02-26 04:25:04 +00:00
Chandler Carruth c46b6eb302 [SROA] Compute the New{Begin,End}Offset values once for each alloca
slice being rewritten.

We had the same code scattered across most of the visits. Instead,
compute the new offsets and the slice size once when we start to visit
a particular slice, and use the member variables from then on. This
reduces quite a bit of code duplication.

No functionality changed. Refactoring inspired to make it easier to
apply the address space patch to SROA.

llvm-svn: 202227
2014-02-26 04:20:00 +00:00
Chandler Carruth 6aedc106ba [SROA] Fix PR18615 with some long overdue simplifications to the bounds
checking in SROA.

The primary change is to just rely on uge for checking that the offset
is within the allocation size. This removes the explicit checks against
isNegative which were terribly error prone (including the reversed logic
that led to PR18615) and prevented us from supporting stack allocations
larger than half the address space.... Ok, so maybe the latter isn't
*common* but it's a silly restriction to have.

Also, we used to try to support a PHI node which loaded from before the
start of the allocation if any of the loaded bytes were within the
allocation. This doesn't make any sense, we have never really supported
loading or storing *before* the allocation starts. The simplified logic
just doesn't care.

We continue to allow loading past the end of the allocation in part to
support cases where there is a PHI and some loads are larger than others
and the larger ones reach past the end of the allocation. We could solve
this a different and more conservative way, but I'm still somewhat
paranoid about this.

llvm-svn: 202224
2014-02-26 03:14:14 +00:00
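The core of the simplification, sketched with APInt: one unsigned comparison
subsumes both the negative-offset case and the past-the-end case.

    #include "llvm/ADT/APInt.h"
    using namespace llvm;

    // A "negative" offset is a huge unsigned value, so uge rejects it
    // for free -- no separate (and easy to invert) isNegative() test.
    static bool offsetInBounds(const APInt &Offset, uint64_t AllocSize) {
      return !Offset.uge(AllocSize);
    }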
Chandler Carruth 7b8e112407 [reassociate] Switch two std::sort calls into std::stable_sort calls as
their inputs come from std::stable_sort and they are not total orders.

I'm not a huge fan of this, but the really bad std::stable_sort is right
at the beginning of Reassociate. After we commit to stable-sort based
consistent respect of source order, the downstream sorts shouldn't undo
that unless they have a total order or they are used in an
order-insensitive way. Neither appears to be true for these cases.
I don't have particularly good test cases, but this jumped out by
inspection when looking for output instability in this pass due to
changes in the ordering of std::sort.

llvm-svn: 202196
2014-02-25 21:54:50 +00:00
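Why the distinction matters here, sketched: when the comparator is not a total
order, std::sort may place "equal" elements in any order, and that order can
differ between STL implementations; std::stable_sort keeps their incoming
order.

    #include <algorithm>
    #include <vector>

    struct OpEntry {
      int Rank;     // Compared by the sort.
      int SourceId; // Not compared: relative order must be preserved.
    };

    static void sortOps(std::vector<OpEntry> &Ops) {
      // Comparing only Rank is not a total order on OpEntry, so a plain
      // std::sort could reshuffle equal-Rank entries unpredictably.
      std::stable_sort(Ops.begin(), Ops.end(),
                       [](const OpEntry &L, const OpEntry &R) {
                         return L.Rank < R.Rank;
                       });
    }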
Chandler Carruth 3b79b2ab4e [SROA] Add an off-by-default *strict* inbounds check to SROA. I had SROA
implemented this way a long time ago and due to the overwhelming bugs
that surfaced, moved to a much more relaxed variant. Richard Smith would
like to understand the magnitude of this problem and it seems fairly
harmless to keep some flag-controlled logic to get the extremely strict
behavior here. I'll remove it if it doesn't prove useful.

llvm-svn: 202193
2014-02-25 21:24:45 +00:00
Rafael Espindola 935125126c Make DataLayout a plain object, not a pass.
Instead, have a DataLayoutPass that holds one. This will allow parts of LLVM
that don't handle passes to also use DataLayout.

llvm-svn: 202168
2014-02-25 17:30:31 +00:00
Rafael Espindola 6d6e87be9f Factor out calls to AA.getDataLayout().
llvm-svn: 202157
2014-02-25 15:52:19 +00:00
Rafael Espindola 43b5a51e7c Make a few more DataLayout variables const.
llvm-svn: 202155
2014-02-25 14:24:11 +00:00
Chandler Carruth 25adb7b00c [SROA] Use the original load name with the SROA-prefixed IRB rather than
just "load". This helps avoid pointless de-duping with order-sensitive
numbers as we already have unique names from the original load. It also
makes the resulting IR quite a bit easier to read.

llvm-svn: 202140
2014-02-25 11:21:48 +00:00
Chandler Carruth cb93cd2dc9 [SROA] Thread the ability to add a pointer-specific name prefix through
the pointer adjustment code. This is the primary code path that creates
totally new instructions in SROA and being able to lump them based on
the pointer value's name for which they were created causes
*significantly* fewer name collisions and general noise in the debug
output. This is particularly significant because it is making it much
harder to track down instability in the output of SROA, as name
de-duplication is a totally harmless form of instability that gets in
the way of seeing real problems.

The new fancy naming scheme tries to dig out the root "pre-SROA" name
for pointer values and associate that all the way through the pointer
formation instructions. Digging out the root is important to prevent the
multiple iterative rounds of SROA from just layering too much cruft on
top of cruft here. We already track the layers of SROA's iteration in the
alloca name prefix. We don't need to duplicate it here.

Should have no functionality change, and shouldn't have any really
measurable impact on NDEBUG builds, as most of the complex logic is
debug-only.

llvm-svn: 202139
2014-02-25 11:19:56 +00:00
Chandler Carruth 5117553301 [SROA] Rather than copying the logic for building a name prefix into the
PHI-pointer builder, just copy the builder and clobber the obvious
fields.

llvm-svn: 202136
2014-02-25 11:12:04 +00:00
Chandler Carruth 8183a50f9b [SROA] Simplify some of the logic to dig out the old pointer value by
using OldPtr more heavily. Lots of this code was written before the
rewriter had an OldPtr member setup ahead of time. There are already
asserts in place that should ensure this doesn't change any
functionality.

llvm-svn: 202135
2014-02-25 11:08:02 +00:00
Chandler Carruth 7625c54eb4 [SROA] Adjust to new clang-format style.
llvm-svn: 202134
2014-02-25 11:07:58 +00:00
Chandler Carruth a8c4cc68f5 [SROA] Fix a *glaring* bug in r202091: you have to actually *write*
the break statement, not just think it to yourself....

No idea how this worked at all, much less survived most bots, my
bootstrap, and some bot bootstraps!

The Polly one didn't survive, and this was filed as PR18959. I don't
have a reduced test case and honestly I'm not seeing the need. What we
probably need here are better asserts / debug-build behavior in
SmallPtrSet so that this madness doesn't make it so far.

llvm-svn: 202129
2014-02-25 09:45:27 +00:00
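The classic shape of the bug, for the record (illustrative, not the actual
SROA code):

    enum Kind { KindA, KindB };

    static int classify(Kind K) {
      int Result = 0;
      switch (K) {
      case KindA:
        Result = 1;
        break; // <-- the statement you have to *write*; without it,
               // KindA silently falls through into KindB's handling.
      case KindB:
        Result = 2;
        break;
      }
      return Result;
    }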
Alexey Samsonov 26af6f7f1b Silence GCC warning
llvm-svn: 202119
2014-02-25 07:56:00 +00:00
Alp Toker 70b36995e4 Fix typos
llvm-svn: 202107
2014-02-25 04:21:15 +00:00
Chandler Carruth 83cee7722d [SROA] Add a debugging tool which shuffles the slices sequence prior to
sorting it. This helps uncover latent reliance on the original ordering,
which isn't guaranteed to be preserved by std::sort (but often is), and
which is based on the use-def chain orderings, which also aren't
(technically) guaranteed.

Only available in C++11 debug builds, and behind a flag to prevent noise
at the moment, but this is generally useful so figured I'd put it in the
tree rather than keeping it out-of-tree.

llvm-svn: 202106
2014-02-25 03:59:29 +00:00
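The trick, sketched (the flag and function names here are hypothetical):
shuffle before sorting so code that silently depends on std::sort's
tie-breaking fails loudly instead of passing by luck.

    #include <algorithm>
    #include <random>
    #include <vector>

    static bool ShuffleSlices = false; // imagine a -shuffle-slices flag

    template <typename T, typename Compare>
    void sortSlices(std::vector<T> &Slices, Compare C) {
      if (ShuffleSlices) {
        std::mt19937 G(42); // fixed seed keeps failures reproducible
        std::shuffle(Slices.begin(), Slices.end(), G);
      }
      std::sort(Slices.begin(), Slices.end(), C);
    }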
Chandler Carruth bb2a93241d [SROA] Use a more direct way of determining whether we are processing
the destination operand or source operand of a memmove.

It so happens that it was impossible for SROA to try to rewrite
self-memmove where the operands are *identical*, because either such
a thing is volatile (and we don't rewrite) or it is non-volatile, and we
don't even register it as a use of the alloca.

However, making the 'IsDest' test *rely* on this subtle fact is... Very
confusing for the reader. We should use the direct and readily available
test of the Use* which gives us concrete information about which operand
is being rewritten.

No functionality changed, I hope! ;]

llvm-svn: 202103
2014-02-25 03:50:14 +00:00
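The direct test, sketched: a memory-transfer intrinsic's destination is
operand 0 and its source operand 1, so the Use being rewritten says which side
we are on.

    #include "llvm/IR/IntrinsicInst.h"
    #include <cassert>
    using namespace llvm;

    // Sketch: decide dest vs. source from the Use itself rather than
    // from pointer-equality reasoning about the two operands.
    static bool isDestUse(const Use &U, const MemTransferInst &II) {
      assert(U.getUser() == &II && "Use must belong to this intrinsic");
      return U.getOperandNo() == 0; // 0 = raw dest, 1 = raw source
    }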
Chandler Carruth 3bf18ed5e3 [SROA] Fix another instability in SROA with respect to the slice
ordering.

The fundamental problem that we're hitting here is that the use-def
chain ordering is *itself* not a stable thing to be relying on in the
rewriting for SROA. Further, we use a non-stable sort over the slices to
arrange them based on the section of the alloca they're operating on.
With a debugging STL implementation (or different implementations in
stage2 and stage3) this can cause stage2 != stage3.

The specific aspect of this problem fixed in this commit deals with the
rewriting and load-speculation around PHIs and Selects. This, like many
other aspects of the use-rewriting in SROA, is really part of the
"strong SSA-formation" that is doen by SROA where it works very hard to
canonicalize loads and stores in *just* the right way to satisfy the
needs of mem2reg[1]. When we have a select (or a PHI) with 2 uses of the
same alloca, we test twice whether loads downstream of the select are
speculatable around it. If only one of the operands to the select
needs to be rewritten, then if we get lucky we rewrite that one first
and the select is immediately speculatable. This can cause the order of
operand visitation, and thus the order of slices to be rewritten, to
change an alloca from promotable to non-promotable and vice versa.

The fix is to defer all of the speculation until *after* the rewrite
phase is done. Once we've rewritten everything, we can accurately test
for whether speculation will work (once, instead of twice!) and the
order ceases to matter.

This also happens to simplify the other subtlety of speculation -- we
need to *not* speculate anything unless the result of speculating will
make the alloca fully promotable by mem2reg. I had a previous attempt at
simplifying this, but it was still pretty horrible.

There is actually already a *really* nice test case for this in
basictest.ll, but on multiple STL implementations and inputs, we just
got "lucky". Fortunately, the test case is very small and we can
essentially build it in exactly the opposite way to get reasonable
coverage in both directions even from normal STL implementations.

llvm-svn: 202092
2014-02-25 00:07:09 +00:00
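The two-phase shape this gives the pass, sketched (illustrative names, not the
actual SROA members):

    #include "llvm/IR/Instructions.h"
    #include <vector>
    using namespace llvm;

    // During rewriting we only *record* PHIs and selects whose loads
    // might be speculated; the speculation test runs once, after every
    // slice is rewritten, so operand visitation order can no longer
    // flip an alloca between promotable and non-promotable.
    struct SpeculationWorklist {
      std::vector<PHINode *> PHIs;
      std::vector<SelectInst *> Selects;

      void enqueue(PHINode *PN) { PHIs.push_back(PN); }
      void enqueue(SelectInst *SI) { Selects.push_back(SI); }
    };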
Rafael Espindola aeff8a9c05 Make some DataLayout pointers const.
No functionality change. Just reduces the noise of an upcoming patch.

llvm-svn: 202087
2014-02-24 23:12:18 +00:00
Arnold Schwaighofer 9611d23d63 SLPVectorizer: Try vectorizing 'splat' stores
Vectorize sequential stores of a broadcasted value.
A 5% improvement on the eon benchmark.

radar://16124699

llvm-svn: 202067
2014-02-24 19:52:29 +00:00
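The kind of input this targets, sketched in C++:

    // Four sequential scalar stores of one value: the SLP vectorizer
    // can now emit a single <4 x float> store with X broadcast
    // ("splatted") into every lane.
    void splat4(float *A, float X) {
      A[0] = X;
      A[1] = X;
      A[2] = X;
      A[3] = X;
    }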