Commit Graph

10863 Commits

Matt Arsenault 0be1cb1c7b Don't use runtime bounds check between address spaces.
Don't vectorize with a runtime check if it requires a
comparison between pointers with different address spaces.
The values can't be assumed to be directly comparable.
Previously it would create an illegal bitcast.
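For illustration, a minimal sketch of the kind of loop affected (a made-up
example, not from the commit's tests); a run-time no-alias check here would
have to compare the addrspace(1) pointer %src against the addrspace(0)
pointer %dst:

define void @copy(i32 addrspace(1)* %src, i32* %dst, i64 %n) {
entry:
  br label %loop

loop:
  ; proving %s and %d never overlap would require comparing pointers that
  ; live in different address spaces
  %i = phi i64 [ 0, %entry ], [ %i.next, %loop ]
  %s = getelementptr i32 addrspace(1)* %src, i64 %i
  %d = getelementptr i32* %dst, i64 %i
  %v = load i32 addrspace(1)* %s
  store i32 %v, i32* %d
  %i.next = add i64 %i, 1
  %done = icmp eq i64 %i.next, %n
  br i1 %done, label %exit, label %loop

exit:
  ret void
}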

llvm-svn: 191862
2013-10-02 22:38:17 +00:00
Yi Jiang 8fd1a806d5 Apply SLP vectorization on fully-vectorizable tree of height 2
llvm-svn: 191852
2013-10-02 20:20:39 +00:00
Matt Arsenault 39d592fe48 Fix debug printing spacing.
Fix missing newlines, missing and extra spaces in printed messages.

llvm-svn: 191851
2013-10-02 20:04:29 +00:00
Matt Arsenault cccbe16785 Fix comment grammar and capitalization.
llvm-svn: 191850
2013-10-02 20:04:26 +00:00
Benjamin Kramer b9add84ef6 SLPVectorizer: Make store chain finding more aggressive with GetUnderlyingObject.
This recursively strips all GEPs like the existing code. It also handles bitcasts and
other operations that do not change the pointer value.

llvm-svn: 191847
2013-10-02 19:06:06 +00:00
Tom Stellard d3e916eb6a StructurizeCFG: Add dependency on LowerSwitch pass
Switch instructions were crashing the StructurizeCFG pass, and it's
probably easier anyway if we don't need to handle them in this pass.

Reviewed-by: Christian König <christian.koenig@amd.com>
llvm-svn: 191841
2013-10-02 17:04:59 +00:00
Chandler Carruth ea56494625 Remove the very substantial, largely unmaintained legacy PGO
infrastructure.

This was essentially work toward PGO based on a design that had several
flaws, partly dating from a time when LLVM had a different architecture, and
the effort to modernize it was abandoned without being completed. Since then,
it has bitrotted for several years further. The
result is nearly unusable, and isn't helping any of the modern PGO
efforts. Instead, it is getting in the way, adding confusion about PGO
in LLVM and distracting everyone with maintenance on essentially dead
code. Removing it paves the way for modern efforts around PGO.

Among other effects, this removes the last of the runtime libraries from
LLVM. Those are being developed in the separate 'compiler-rt' project
now, with somewhat different licensing, specifically more appropriate for
runtimes.

llvm-svn: 191835
2013-10-02 15:42:23 +00:00
Alexey Samsonov 31540172d0 Remove "localize global" optimization
Summary:
As discussed in http://llvm-reviews.chandlerc.com/D1754,
this optimization isn't really valid for C, and fires too rarely anyway.

Reviewers: rafael, nicholas

Reviewed By: nicholas

CC: rnk, llvm-commits, nicholas

Differential Revision: http://llvm-reviews.chandlerc.com/D1769

llvm-svn: 191834
2013-10-02 15:31:34 +00:00
Matt Arsenault 517d84e268 Don't merge tiny functions.
It's silly to merge functions like these:

define void @foo(i32 %x) {
  ret void
}

define void @bar(i32 %x) {
  ret void
}

to get

define void @bar(i32) {
  tail call void @foo(i32 %0)
  ret void
}

llvm-svn: 191786
2013-10-01 18:05:30 +00:00
Rafael Espindola 44fee4e0eb Remove several unused variables.
Patch by Alp Toker.

llvm-svn: 191757
2013-10-01 13:32:03 +00:00
Matt Arsenault 5ea37f8d89 Fix code duplication
llvm-svn: 191716
2013-10-01 00:01:14 +00:00
Matt Arsenault 8468062c6e Use right address space size in InstCombineCompares
The test's output doesn't change, but this ensures
this is actually hit with a different address space.

llvm-svn: 191701
2013-09-30 21:11:01 +00:00
Matt Arsenault 06adecabe7 Constant fold ptrtoint + compare with address spaces
llvm-svn: 191699
2013-09-30 21:06:18 +00:00
Benjamin Kramer f00472908a BoundsChecking: Fix refacto.
llvm-svn: 191676
2013-09-30 15:52:50 +00:00
Benjamin Kramer 6e931528fe Convert manual insert point restores to the new RAII object.
llvm-svn: 191675
2013-09-30 15:40:17 +00:00
Benjamin Kramer 6748576a0d InstCombine: Replace manual fast math flag copying with the new IRBuilder RAII helper.
Defines away the issue where cast<Instruction> would fail because constant
folding happened. Also slightly cleaner.

llvm-svn: 191674
2013-09-30 15:39:59 +00:00
Benjamin Kramer d36f1abefd IRBuilder: Add RAII objects to reset insertion points or fast math flags.
Inspired by the object from the SLPVectorizer. This found a minor bug in the
debug loc restoration in the vectorizer where the location of a following
instruction was attached instead of the location from the original instruction.

llvm-svn: 191673
2013-09-30 15:39:48 +00:00
Joey Gouly d51a35c6a0 Fix a bug in InstCombine where it attempted to cast a Value* to an Instruction*
when it was actually a Constant*.

There are quite a few other casts to Instruction that might have the same problem,
but this is the only one I have a test case for.

llvm-svn: 191668
2013-09-30 14:18:35 +00:00
Robert Wilhelm 2788d3ec99 Even more spelling fixes for "instruction".
llvm-svn: 191611
2013-09-28 13:42:22 +00:00
Robert Wilhelm f0cfb83bb4 Fix spelling intruction -> instruction.
llvm-svn: 191610
2013-09-28 11:46:15 +00:00
Matt Arsenault 31cfc78f81 Use right pointer type in DebugIR
llvm-svn: 191576
2013-09-27 22:26:25 +00:00
Matt Arsenault fa25272db9 Use type helper functions
llvm-svn: 191574
2013-09-27 22:18:51 +00:00
Matt Arsenault 29f31735a2 Fix SLPVectorizer using wrong address space for load/store
llvm-svn: 191564
2013-09-27 21:24:57 +00:00
Justin Bogner 4a9ac8cd75 InstCombine: Only foldSelectICmpAndOr for integer types
Currently foldSelectICmpAndOr asserts if the "or" involves a vector
containing several of the same power of two. We can easily avoid this by
only performing the fold on integer types, like foldSelectICmpAnd does.

Fixes <rdar://problem/15012516>

llvm-svn: 191552
2013-09-27 20:35:39 +00:00
Justin Bogner ca9bd8fac1 Transforms: Use getFirstNonPHI to set the insertion point for PHIs
We were previously using getFirstInsertionPt to insert PHI
instructions when vectorizing, but getFirstInsertionPt also skips past
landingpads, causing this to generate invalid IR.

We can avoid this issue by using getFirstNonPHI instead.

llvm-svn: 191526
2013-09-27 15:30:25 +00:00
Puyan Lotfi 74e38de492 First check in. Modified a comment.
llvm-svn: 191491
2013-09-27 07:36:10 +00:00
Arnold Schwaighofer 07520324f5 SLPVectorize: Put horizontal reductions feeding a store under separate flag
Put them under a separate flag for experimentation. They are more likely to
interfere with loop vectorization which happens later in the pass pipeline.

llvm-svn: 191371
2013-09-25 14:02:32 +00:00
Evgeniy Stepanov 32be0340f5 [msan] Fix -Wreturn-type warnings in non-self-hosted build.
llvm-svn: 191361
2013-09-25 08:56:00 +00:00
Yi Jiang edf2d9179e set the cost of tiny trees to INT_MAX in SLP vectorizer to disable vectorization on them
llvm-svn: 191314
2013-09-24 17:26:43 +00:00
Benjamin Kramer 30d249a1b3 Push analysis passes to InstSimplify when they're around anyways.
llvm-svn: 191309
2013-09-24 16:37:40 +00:00
Evgeniy Stepanov 5522a70674 [msan] Handling of atomic load/store, atomic rmw, cmpxchg.
llvm-svn: 191287
2013-09-24 11:20:27 +00:00
Arnold Schwaighofer 22639407d7 Revert "LoopVectorizer: Only allow vectorization of intrinsics."
Revert 191122 - with extra checks we are allowed to vectorize math library
function calls.

Standard library identifiers are reserved names, so functions with external
linkage must not override them. However, functions with internal linkage can.

Therefore, we can vectorize calls to math library functions with a check for
external linkage and matching signature. This matches what we do during
SelectionDAG building.
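As an illustrative sketch of the linkage distinction (declarations made up for
this example, not from the commit):

; external linkage and a matching signature: the reserved name can only refer
; to libm's fabsf, so a call to it may be vectorized
declare float @fabsf(float)

; internal linkage: a function that merely happens to be named roundf may do
; anything, so calls to it cannot be treated as the libm round
define internal float @roundf(float %x) {
  ret float %x
}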

llvm-svn: 191206
2013-09-23 14:54:39 +00:00
Benjamin Kramer 8817cca5ce Provide basic type safety for array_pod_sort comparators.
This makes using array_pod_sort significantly safer. The implementation relies
on function pointer casting but that should be safe as we're dealing with void*
here.

llvm-svn: 191175
2013-09-22 14:09:50 +00:00
Benjamin Kramer 5626259506 Drop spurious handle in comment.
llvm-svn: 191172
2013-09-22 11:24:58 +00:00
Benjamin Kramer 90901a35ce SROA: Handle casts involving vectors of pointers and integer scalars.
SROA wants to convert any types of equivalent widths but it's not possible to
convert vectors of pointers to an integer scalar with a single cast. As a
workaround we add a bitcast to the corresponding int ptr type first. This type
of cast used to be an edge case but has become common with SLP vectorization.
Fixes PR17271.

llvm-svn: 191143
2013-09-21 20:36:04 +00:00
Arnold Schwaighofer d743feef81 SLPVectorizer: Fix multiline comment warning
llvm-svn: 191135
2013-09-21 05:37:30 +00:00
Arnold Schwaighofer 500242d4fe Reapply "SLPVectorizer: Handle more horizontal reductions (disabled)"
Reapply r191108 with a fix for a memory corruption error I introduced.  Of
course, we can't reference the scalars that we replace by vectorizing and then
call their eraseFromParent method. I only 'needed' the scalars to get the
DebugLoc. Just store the DebugLoc before actually vectorizing instead. As a nice
side effect, this also simplifies the interface between BoUpSLP and the
HorizontalReduction class to returning a value pointer (the vectorized tree
root).

radar://14607682

llvm-svn: 191123
2013-09-21 01:06:00 +00:00
Nadav Rotem 3371172a67 LoopVectorizer: Only allow vectorization of intrinsics. We can't know for sure that the functions 'abs' or 'round' are the functions from libm.
rdar://15012650

llvm-svn: 191122
2013-09-21 00:27:05 +00:00
Arnold Schwaighofer f1dfbfdde1 Revert "SLPVectorizer: Handle more horizontal reductions (disabled)"
This reverts commit r191108.

The horizontal.ll test case fails under libgmalloc. Thanks Shuxin for pointing
this out to me.

llvm-svn: 191121
2013-09-21 00:06:20 +00:00
Shuxin Yang 6e35094bbf Resurrect r191017 "GVN proceeds in the presence of dead code" plus a fix to PR17307 & PR17308.
The problem with r191017 is that when GVN fabricates a val-number for a dead instruction (in order
to make the following expr-PRE happy), it forgets to fabricate a leader-table entry for it as well.

llvm-svn: 191118
2013-09-20 23:12:57 +00:00
Benjamin Kramer 0e2d162d1e InstCombine: Remove unused argument. No functionality change.
llvm-svn: 191112
2013-09-20 22:12:42 +00:00
Arnold Schwaighofer 4724963112 SLPVectorizer: Handle more horizontal reductions (disabled)
Match reductions starting at a binary operation feeding into a phi. The code
handles trees like

 r += v1 + v2 + v3 ...

and

 r += v1
 r += v2
 ...

and

 r *= v1 + v2 + ...

We currently only handle associative operations (add, fadd fast).

The code can now also handle reductions feeding into stores.

 a[i] = v1 + v2 + v3 + ...

The code is currently disabled behind the flag "-slp-vectorize-hor".  The cost
model for most architectures is not there yet.

I found one opportunity for a horizontal reduction feeding a phi in TSVC
(LoopRerolling-flt) and there are several opportunities where reductions feed
into stores.

radar://14607682

llvm-svn: 191108
2013-09-20 21:18:20 +00:00
Joerg Sonnenberger 1fbe323649 Revert r191017, it results in segmentation faults in Qt.
llvm-svn: 191104
2013-09-20 20:33:57 +00:00
Benjamin Kramer e6461e3053 InstCombine: Canonicalize (gep i8* X, -(ptrtoint Y)) to (sub (ptrtoint X), (ptrtoint Y))
The GEP pattern is what the SCEV expander emits for "ugly geps". The latter is
what you get for pointer subtraction in C code. The rest of InstCombine already
knows how to deal with that form, so just canonicalize on it.
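An illustrative before/after sketch (made-up function; it assumes the gep
feeds a ptrtoint, as it does for C pointer subtraction):

; before: the "ugly gep" form the SCEV expander produces
define i64 @ptr_diff(i8* %x, i8* %y) {
  %y.int = ptrtoint i8* %y to i64
  %neg = sub i64 0, %y.int
  %diff.ptr = getelementptr i8* %x, i64 %neg
  %diff = ptrtoint i8* %diff.ptr to i64
  ret i64 %diff
}

; after canonicalization: a plain subtraction of the two pointer values
define i64 @ptr_diff.canonical(i8* %x, i8* %y) {
  %x.int = ptrtoint i8* %x to i64
  %y.int = ptrtoint i8* %y to i64
  %diff = sub i64 %x.int, %y.int
  ret i64 %diff
}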

llvm-svn: 191090
2013-09-20 14:38:44 +00:00
Shuxin Yang 3a7ca6ec87 [Fast-math] Disable "(C1/X)*C2 => (C1*C2)/X" if C1/X has multiple uses.
If "C1/X" were having multiple uses, the only benefit of this
transformation is to potentially shorten critical path. But it is at the
cost of instroducing additional div.

  The additional div may or may not incur cost depending on how div is
implemented. If it is implemented using Newton–Raphson iteration, it dosen't
seem to incur any cost (FIXME). However, if the div blocks the entire
pipeline, that sounds to be pretty expensive. Let CodeGen to take care 
this transformation.

  This patch sees 6% on a benchmark.
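A minimal sketch of the multiple-use situation being excluded (illustrative
only):

; %recip has two uses; rewriting %scaled as (1.0 * 4.0) / %x would keep the
; original fdiv alive and introduce a second one
define float @multi_use(float %x) {
  %recip = fdiv fast float 1.000000e+00, %x
  %scaled = fmul fast float %recip, 4.000000e+00
  %sum = fadd fast float %recip, %scaled
  ret float %sum
}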

rdar://15032743

llvm-svn: 191037
2013-09-19 21:13:46 +00:00
Benjamin Kramer 0b37cdf9af InstCombine: Don't allow turning vector-of-pointer loads into vector-of-integer.
The code below can't handle any pointers. PR17293.

llvm-svn: 191036
2013-09-19 20:59:04 +00:00
Shuxin Yang 74c9a170b8 GVN proceeds in the presence of dead code.
This is how it ignores the dead code:
1) When a dead branch target, say block B, is identified, all the
    blocks dominated by B are dead as well.

2) The PHIs of those blocks in dominance-frontier(B) are updated such
   that the operands corresponding to dead predecessors are replaced
   by "UndefVal".

   In lattice jargon, the "UndefVal" is essentially the "Top" element.
   A phi node like "phi(v1 bb1, undef xx)" will be optimized into
   "v1" if v1 is a constant, or if v1 is an instruction that dominates
   this PHI node.

3) When analyzing the availability of a load L, all dead mem-ops that
   L depends on are treated as loads that evaluate to exactly the same
   value as L.

4) The dead mem-ops will be materialized as "UndefVal" during code motion.

llvm-svn: 191017
2013-09-19 17:22:51 +00:00
Evgeniy Stepanov 37b8645480 [msan] Wrap indirect functions.
Adds a flag to the MemorySanitizer pass that enables runtime rewriting of
indirect calls. This is part of the MSanDR implementation and is needed to return
control to the DynamoRIO-based helper tool on transitions between instrumented
and non-instrumented modules. Disabled by default.

llvm-svn: 191006
2013-09-19 15:22:35 +00:00
Kostya Serebryany f322382e22 [asan] call __asan_stack_malloc_N only if use-after-return detection is enabled with the run-time option
llvm-svn: 190939
2013-09-18 14:07:14 +00:00
Robert Lytton f637e2cb23 Prevent LoopVectorizer and SLPVectorizer from running if the target has no vector registers.
XCore target: Add XCoreTargetTransformInfo
This is where getNumberOfRegisters() resides, which in turn returns the
number of vector registers (=0).

llvm-svn: 190936
2013-09-18 12:43:35 +00:00
Craig Topper be3e01e61f Revert accidental commit I had to make to get the test case in PR17268 to still work correctly.
llvm-svn: 190917
2013-09-18 04:10:17 +00:00
Craig Topper 98064b9f4d Lift alignment restrictions for load/store folding on VINSERTF128/VEXTRACTF128. Fixes PR17268.
llvm-svn: 190916
2013-09-18 03:55:53 +00:00
David Blaikie eacc287b49 ifndef NDEBUG-out an asserts-only constant committed in r190863
llvm-svn: 190905
2013-09-18 00:11:27 +00:00
Quentin Colombet 870b662779 Revert the load slicing done in r190870.
To avoid regressions with bitfield optimizations, this slicing should take place
later, e.g., at ISel time.

llvm-svn: 190891
2013-09-17 22:01:26 +00:00
Matt Arsenault e6952f28ca Cleanup handling of constant function casts.
Some of this code is no longer necessary since int<->ptr casts no longer
occur as of r187444.

This also fixes handling vectors of pointers, and adds a bunch of new
testcases for vectors and address spaces.

llvm-svn: 190885
2013-09-17 21:10:14 +00:00
Arnold Schwaighofer 4a3dcaa193 SLPVectorizer: Don't vectorize phi nodes that use invoke values
We can't insert an insertelement after an invoke. We would have to split a
critical edge. So when we see a phi node that uses an invoke we just give up.

radar://14990770

llvm-svn: 190871
2013-09-17 17:03:29 +00:00
Quentin Colombet b8d672ef5b [InstCombiner] Slice a big load into two loads when the elements are next to
each other in memory.

The motivation was to get rid of truncate and shift-right instructions that get
in the way of paired loads or floating point loads.
E.g.,
Consider the following example:
struct Complex {
  float real;
  float imm;
};

When accessing a Complex, LLVM was generating a 64-bit load, and the imm field
was obtained by a trunc(lshr) sequence, resulting in poor code generation, at
least for x86.

The idea is to declare that two load instructions are the canonical form for
loading two arithmetic types that are next to each other in memory.

Two scalar loads at a constant offset from each other are pretty
easy to detect for the sorts of passes that like to mess with loads. 
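A rough before/after sketch in IR, assuming a little-endian target
(illustrative, not taken from the patch's tests):

; before: one 64-bit load, with the imm field extracted by lshr+trunc
define float @load_imm(i64* %c) {
  %wide = load i64* %c, align 4
  %hi = lshr i64 %wide, 32
  %hi.trunc = trunc i64 %hi to i32
  %imm = bitcast i32 %hi.trunc to float
  ret float %imm
}

; after slicing: a plain 32-bit load of just the imm field
define float @load_imm.sliced(i64* %c) {
  %base = bitcast i64* %c to float*
  %imm.ptr = getelementptr float* %base, i64 1
  %imm = load float* %imm.ptr, align 4
  ret float %imm
}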

<rdar://problem/14477220>

llvm-svn: 190870
2013-09-17 16:57:34 +00:00
Kostya Serebryany bc86efb89d [asan] inline the calls to __asan_stack_free_* with small sizes. Yet another 10%-20% speedup for use-after-return
llvm-svn: 190863
2013-09-17 12:14:50 +00:00
Stepan Dyatkovskiy dc2c4b4462 Bugfix for PR17099:
Wrong cast operation: MergeFunctions emitted a bitcast instead of a
pointer-to-integer cast. The patch fixes the MergeFunctions::writeThunk
function, replacing unconditional bitcast creation with a "Value* createCast(...)"
method that checks the operand types and selects the proper cast instruction.
See the unit test for an example.
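An illustrative sketch (made-up functions, not the PR's test case) of why an
unconditional bitcast is the wrong cast here: a thunk between functions that
differ only in pointer vs. same-sized integer types needs inttoptr/ptrtoint,
because bitcast between pointers and integers is not valid IR.

define i8* @identity_ptr(i8* %p) {
  ret i8* %p
}

; thunk with an integer signature forwarding to @identity_ptr: the argument
; and return value must be converted with inttoptr/ptrtoint, not bitcast
define i64 @identity_int(i64 %x) {
  %p = inttoptr i64 %x to i8*
  %r = tail call i8* @identity_ptr(i8* %p)
  %r.int = ptrtoint i8* %r to i64
  ret i64 %r.int
}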

llvm-svn: 190859
2013-09-17 09:36:11 +00:00
Matt Arsenault 899f7d2b00 MemCpyOptimizer: Use max legal int size instead of pointer size
If there are no legal integers, assume 1 byte.

This makes more sense than using the pointer size as
a guess for the maximum GPR width.

It is conceivable to want to use some 64-bit pointers
on a target where 64-bit integers aren't legal.

llvm-svn: 190817
2013-09-16 22:43:16 +00:00
Arnold Schwaighofer 53e622cef4 Don't vectorize if there are outside-loop users of the induction variable.
We would have to compute the pre-increment value, either by computing it on
every loop iteration or by splitting the edge out of the loop and inserting a
computation for it there.

For now, just give up vectorizing such loops.

Fixes PR17179.
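For illustration, a minimal loop with such an outside user of the
pre-increment induction value (made-up example):

define i32 @last_index(i32* %a, i32 %n) {
entry:
  br label %loop

loop:
  %i = phi i32 [ 0, %entry ], [ %i.next, %loop ]
  %p = getelementptr i32* %a, i32 %i
  store i32 %i, i32* %p
  %i.next = add i32 %i, 1
  %cmp = icmp slt i32 %i.next, %n
  br i1 %cmp, label %loop, label %exit

exit:
  ; %i is the pre-increment value; using it here blocks vectorization
  ret i32 %i
}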

llvm-svn: 190790
2013-09-16 16:17:24 +00:00
Evgeniy Stepanov 604293fbb4 [msan] Check return value of main().
llvm-svn: 190782
2013-09-16 13:24:32 +00:00
Peter Collingbourne 3fa50f9b05 Implement function prefix data as an IR feature.
Previous discussion:
http://lists.cs.uiuc.edu/pipermail/llvmdev/2013-July/063909.html

Differential Revision: http://llvm-reviews.chandlerc.com/D1191

llvm-svn: 190773
2013-09-16 01:08:15 +00:00
Benjamin Kramer 7d6052687e Replace some unnecessary vector copies with references.
llvm-svn: 190770
2013-09-15 22:04:42 +00:00
Robert Wilhelm 042f10ce41 Fix spelling.
llvm-svn: 190750
2013-09-14 09:34:59 +00:00
Chandler Carruth ebeac5cb89 Remove the long, long defunct IR block placement pass.
This pass was based on the previous (essentially unused) profiling
infrastructure and the assumption that by ordering the basic blocks at
the IR level in a particular way, the correct layout would happen in the
end. This sometimes worked, and mostly didn't. It also was a really
naive implementation of the classical paper that dates from when branch
predictors were primarily directional and when loop structure wasn't
commonly available. It also didn't factor into the equation
non-fallthrough branches and other machine level details.

Anyways, for all of these reasons and more, I wrote
MachineBlockPlacement, which completely supersedes this pass. It both
uses modern profile information infrastructure, and actually works. =]

llvm-svn: 190748
2013-09-14 09:28:14 +00:00
Evgeniy Stepanov 0435ecd18f [msan] Add source file:line to stack origin reports.
Compiler part.

llvm-svn: 190689
2013-09-13 12:54:49 +00:00
Duncan Sands c9e95ad0db Avoid a compiler warning about Found not being used when assertions are
disabled.

llvm-svn: 190668
2013-09-13 08:16:06 +00:00
Hal Finkel 8f2e700522 Add getUnrollingPreferences to TTI
Allow targets to customize the default behavior of the generic loop unrolling
transformation. This will be used by the PowerPC backend when targeting the A2
core (which is in-order with a deep pipeline), and using more aggressive
defaults is important.

llvm-svn: 190542
2013-09-11 19:25:43 +00:00
Benjamin Kramer 079b96e6f7 Revert "Give internal classes hidden visibility."
It works with clang, but GCC has different rules so we can't make all of those
hidden. This reverts commit r190534.

llvm-svn: 190536
2013-09-11 18:05:11 +00:00
Benjamin Kramer 6a44af3629 Give internal classes hidden visibility.
Worth 100k on a linux/x86_64 Release+Asserts clang.

llvm-svn: 190534
2013-09-11 17:42:27 +00:00
Matt Arsenault d3471e9ea8 Use type form of getIntPtrType
This doesn't change anything since malloc always returns
address space 0.

llvm-svn: 190498
2013-09-11 07:29:40 +00:00
Matt Arsenault 009faed1be Teach loop-idiom about address space pointer sizes
llvm-svn: 190491
2013-09-11 05:09:42 +00:00
Matt Arsenault 5df49bd703 Add braces
llvm-svn: 190490
2013-09-11 05:09:35 +00:00
Eli Friedman 77d7fbb924 Get rid of unused isPodLike definitions.
llvm-svn: 190461
2013-09-11 00:36:54 +00:00
Eli Friedman 05906faa4d Don't assert on invalid loop vectorization hint.
llvm-svn: 190450
2013-09-10 23:45:25 +00:00
Eli Friedman c1f1f852d7 Fix mistake in r190442.
llvm-svn: 190446
2013-09-10 23:09:24 +00:00
Eli Friedman 1891f69323 Remove unused functions.
llvm-svn: 190442
2013-09-10 22:42:31 +00:00
Matt Arsenault a90a18e0ea Teach ScalarEvolution about pointer address spaces
llvm-svn: 190425
2013-09-10 19:55:24 +00:00
Benjamin Kramer 934f6f39f4 LoopVectorize: PHI nodes are always at the beginning of a block, no need to scan the whole block.
llvm-svn: 190422
2013-09-10 18:46:15 +00:00
Kostya Serebryany 6805de5467 [asan] refactor the use-after-return API so that the size class is computed at compile time instead of at run-time. llvm part
llvm-svn: 190407
2013-09-10 13:16:56 +00:00
Matt Arsenault f631f8c640 Use StringRef::npos for StringRef instead of std::string one
llvm-svn: 190375
2013-09-10 00:41:53 +00:00
Eli Friedman 33d3700716 Don't shrink atomic ops to bool in GlobalOpt.
LLVM IR doesn't currently allow atomic bool load/store operations, and the
transformation is dubious anyway because it isn't profitable on all platforms.

PR17163.
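A sketch of the kind of global involved (illustrative, not the PR's test
case):

; @flag only ever holds 0 or 1, but shrinking it to i1 would require atomic
; i1 loads and stores, which LLVM IR does not allow
@flag = internal global i32 0

define void @set_flag() {
  store atomic i32 1, i32* @flag seq_cst, align 4
  ret void
}

define i32 @read_flag() {
  %v = load atomic i32* @flag seq_cst, align 4
  ret i32 %v
}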

llvm-svn: 190357
2013-09-09 22:00:13 +00:00
Quentin Colombet 5ab555532b [InstCombiner] Expose opportunities to merge subtract and comparison.
Several architectures use the same instruction to perform both a comparison and
a subtract. The instruction selection framework does not allow considering
different basic blocks to expose such fusion opportunities.

Therefore, these instructions are “merged” by CSE at the MI IR level.

To increase the likelihood that CSE applies in such situations, we reorder the
operands of the comparison, when they have the same complexity, so that they
match the order of the most frequent subtract.
E.g.,

icmp A, B
...
sub B, A

<rdar://problem/14514580>

llvm-svn: 190352
2013-09-09 20:56:48 +00:00
Bob Wilson e407736a06 Revert patches to add case-range support for PR1255.
The work on this project was left in an unfinished and inconsistent state.
Hopefully someone will eventually get a chance to implement this feature, but
in the meantime, it is better to put things back the way they were.  I have
left support in the bitcode reader to handle the case-range bitcode format,
so that we do not lose bitcode compatibility with the llvm 3.3 release.

This reverts the following commits: 155464, 156374, 156377, 156613, 156704,
156757, 156804 156808, 156985, 157046, 157112, 157183, 157315, 157384, 157575,
157576, 157586, 157612, 157810, 157814, 157815, 157880, 157881, 157882, 157884,
157887, 157901, 158979, 157987, 157989, 158986, 158997, 159076, 159101, 159100,
159200, 159201, 159207, 159527, 159532, 159540, 159583, 159618, 159658, 159659,
159660, 159661, 159703, 159704, 160076, 167356, 172025, 186736

llvm-svn: 190328
2013-09-09 19:14:35 +00:00
Manman Ren d8c68b1852 TBAA: add isTBAAVtableAccess to MDNode so clients can call the function
instead of having their own implementations.

The implementation of isTBAAVtableAccess is in TypeBasedAliasAnalysis.cpp
since it is related to the format of TBAA metadata.

The path for struct-path tbaa will be exercised by
test/Instrumentation/ThreadSanitizer/read_from_global.ll, vptr_read.ll, and
vptr_update.ll when struct-path tbaa is on by default.

llvm-svn: 190216
2013-09-06 22:47:05 +00:00
Matt Arsenault 8227b9f69c Use type helper functions.
llvm-svn: 190113
2013-09-06 00:37:24 +00:00
Matt Arsenault 37d42ecaff Teach CodeGenPrepare about address spaces
llvm-svn: 190112
2013-09-06 00:18:43 +00:00
Matt Arsenault e6db76071c Consistently use dbgs() in debug printing
llvm-svn: 190093
2013-09-05 19:48:28 +00:00
Rafael Espindola d21ac19bda Remove unused argument.
llvm-svn: 190090
2013-09-05 19:15:21 +00:00
Nick Lewycky 2c88067a46 Declare missing dependency on AliasAnalysis. Patch by Liu Xin!
llvm-svn: 190035
2013-09-05 08:19:58 +00:00
Rafael Espindola b7c0b4a327 Rename some variables to match the style guide.
I am about to patch this code, and this makes the diff far more readable.

llvm-svn: 189982
2013-09-04 20:08:46 +00:00
Rafael Espindola b832d49822 Small simplification given that insert of an empty range is a nop.
llvm-svn: 189971
2013-09-04 18:53:21 +00:00
Rafael Espindola 49a6c153c9 Refactor duplicated logic to a helper function.
No functionality change.

llvm-svn: 189969
2013-09-04 18:37:36 +00:00
Rafael Espindola 9406516af1 Remove dead code.
llvm-svn: 189967
2013-09-04 18:16:02 +00:00
Rafael Espindola 128c5ea902 Revert "Add r159136 back now that pr13124 has been fixed."
This reverts commit r189886.

I found a corner case where this optimization is not valid:

Say we have a "linkonce_odr unnamed_addr" in two translation units:
* In TU 1 this optimization kicks in and makes it hidden.
* In TU 2 it gets const merged with a constant that is *not* unnamed_addr,
  resulting in a non unnamed_addr constant with default visibility.
* The static linker rules for combining visibility then produce a hidden
  symbol, which is incorrect from the point of view of the non unnamed_addr
  constant.

The one place we can do this is when we know that the symbol is not used from
another TU in the same shared object, i.e., during LTO. I will move it there.

llvm-svn: 189954
2013-09-04 16:09:01 +00:00
Tim Northover dc647a2603 InstCombine: allow unmasked icmps to be combined with logical ops
"(icmp op i8 A, B)" is equivalent to "(icmp op i8 (A & 0xff), B)" as a
degenerate case. Allowing this as a "masked" comparison when analysing "(icmp)
&/| (icmp)" allows us to combine them in more cases.

rdar://problem/7625728

llvm-svn: 189931
2013-09-04 11:57:17 +00:00
Tim Northover c0756c454c InstCombine: look for masked compares with subset relation
Even in cases which aren't universally optimisable like "(A & B) != 0 && (A &
C) != 0", the masks can make one of the comparisons completely redundant. In
this case, since we've gone to the effort of spotting masked comparisons we
should combine them.
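For example, a hypothetical instance of the subset case:

; (%a & 1) != 0 implies (%a & 3) != 0, so the second compare is redundant and
; the whole expression can be reduced to the first compare
define i1 @subset_masks(i32 %a) {
  %m1 = and i32 %a, 1
  %c1 = icmp ne i32 %m1, 0
  %m2 = and i32 %a, 3
  %c2 = icmp ne i32 %m2, 0
  %r = and i1 %c1, %c2
  ret i1 %r
}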

rdar://problem/7625728

llvm-svn: 189930
2013-09-04 11:57:13 +00:00
Rafael Espindola 5eb7df68bf Add r159136 back now that pr13124 has been fixed.
Original message:
If a constant or a function has linkonce_odr linkage and unnamed_addr, mark
hidden. Being linkonce_odr guarantees that it is available in every dso that
needs it. Being a constant/function with unnamed_addr guarantees that the
copies don't have to be merged.

llvm-svn: 189886
2013-09-03 23:34:36 +00:00
Michael Gottesman 469a80cb30 [objc-arc] Remove dead code from previous commit.
llvm-svn: 189870
2013-09-03 22:40:56 +00:00