Commit Graph

185 Commits

Author SHA1 Message Date
Rafael Espindola e40238069e The TargetData is not used for the isPowerOfTwo determination; it was never
used in the first place.  It was simply passed to the function and to the
recursive invocations.  Drop the parameter and update the callers for the
new signature.

Patch by Saleem Abdulrasool!

llvm-svn: 169988
2012-12-12 16:52:40 +00:00
Chandler Carruth 80d3e56c73 Add support to ValueTracking for determining that a pointer is non-null
by virtue of inbounds GEPs that preclude a null pointer.

This is a very common pattern in the code generated by std::vector and
other standard library routines which use allocators that test for null
pervasively. This is one step closer to teaching Clang+LLVM to be able
to produce an empty function for:

  void f() {
    std::vector<int> v;
    v.push_back(1);
    v.push_back(2);
    v.push_back(3);
    v.push_back(4);
  }

This is related to getting them to completely fold SmallVector
push_back sequences into constants when inlining and other optimizations
make that a possibility.

llvm-svn: 169573
2012-12-07 02:08:58 +00:00
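
A minimal C++ sketch of why those null checks become dead (an assumed shape of the allocator pattern described above, not LLVM code): the buffer comes from throwing operator new, and the end pointer is formed by in-bounds pointer arithmetic from it, so neither can be null.

  #include <cstddef>
  #include <new>

  int *grow(std::size_t n) {
    // Throwing operator new never returns null.
    int *buf = static_cast<int *>(::operator new(n * sizeof(int)));
    // In-bounds pointer arithmetic from a non-null base (an inbounds GEP in
    // the IR) cannot yield a null pointer either.
    int *end = buf + n;
    // The pervasive allocator-style null test is therefore provably dead and
    // can be folded away once ValueTracking knows `end` is non-null.
    // (The sketch ignores deallocation for brevity.)
    if (end == nullptr)
      return nullptr;
    return end;
  }
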
Michael Ilseman 0f12837be0 Have CannotBeNegativeZero() be aware of the nsz fast-math flag
llvm-svn: 169452
2012-12-06 00:07:09 +00:00
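
For context, a standalone C++ illustration (not LLVM code) of why the sign of zero matters here: the fold "x + 0.0 -> x" is wrong precisely when x is -0.0, and the nsz flag declares the sign of zero irrelevant, which is what lets CannotBeNegativeZero-style reasoning enable such folds.

  #include <cmath>
  #include <cstdio>

  int main() {
    double x = -0.0;
    double y = x + 0.0;  // IEEE 754: -0.0 + +0.0 is +0.0
    // Prints "signbit(x)=1 signbit(y)=0": the fold would change the sign.
    std::printf("signbit(x)=%d signbit(y)=%d\n",
                (int)std::signbit(x), (int)std::signbit(y));
    return 0;
  }
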
Chandler Carruth ed0881b2a6 Use the new script to sort the includes of every file under lib.
Sooooo many of these had incorrect or strange main module includes.
I have manually inspected all of these, and fixed the main module
include to be the nearest plausible thing I could find. If you own or
care about any of these source files, I encourage you to take some time
and check that these edits were sensible. I can't have broken anything
(I strictly added headers, and reordered them, never removed), but they
may not be the headers you'd really like to identify as containing the
API being implemented.

Many forward declarations and missing includes were added to header
files to allow them to parse cleanly when included first. The main
module rule does in fact have its merits. =]

llvm-svn: 169131
2012-12-03 16:50:05 +00:00
Chandler Carruth 5da3f0512e Revert the majority of the next patch in the address space series:
r165941: Resubmit the changes to llvm core to update the functions to
         support different pointer sizes on a per address space basis.

Despite this commit log, this change primarily changed stuff outside of
VMCore, and those changes do not carry any tests for correctness (or
even plausibility), and we have consistently found questionable or flat
out incorrect cases in these changes. Most of them are probably correct,
but we need to devise a system that makes it more clear when we have
handled the address space concerns correctly, and ideally each pass that
gets updated would receive an accompanying test case that exercises that
pass specifically w.r.t. alternate address spaces.

However, from this commit, I have retained the new C API entry points.
Those were an orthogonal change that probably should have been split
apart, but they seem entirely good.

In several places the changes were very obvious cleanups with no actual
multiple address space code added; these I have not reverted when
I spotted them.

In a few other places there were merge conflicts due to a cleaner
solution being implemented later, often not using address spaces at all.
In those cases, I've preserved the new code which isn't address space
dependent.

This is part of my ongoing effort to clean out the partial address space
code, which carries high risk and low test coverage and is not likely to be
finished before the 3.2 release, which looms ever closer. Duncan and I would both
like to see the above issues addressed before we return to these
changes.

llvm-svn: 167222
2012-11-01 09:14:31 +00:00
Nadav Rotem 15198e94d2 Fix a crash in SimplifyDemandedBits of vectors of pointers.
PR14183.

llvm-svn: 166785
2012-10-26 17:17:05 +00:00
Nadav Rotem 8255ceb2cf Revert 166726 because it may have broken a number of SPEC tests. PR14183.
llvm-svn: 166739
2012-10-25 23:51:48 +00:00
Nadav Rotem bb4cfb5ee1 Fix a crash in ValueTracking. Add support for vectors of pointers.
llvm-svn: 166726
2012-10-25 21:52:52 +00:00
Micah Villmow 4bb926d91d Resubmit the changes to llvm core to update the functions to support different pointer sizes on a per address space basis.
llvm-svn: 165941
2012-10-15 16:24:29 +00:00
Micah Villmow 0c61134d8d Revert 165732 for further review.
llvm-svn: 165747
2012-10-11 21:27:41 +00:00
Micah Villmow 083189730e Add in the first iteration of support for llvm/clang/lldb to allow variable per address space pointer sizes to be optimized correctly.
llvm-svn: 165726
2012-10-11 17:21:41 +00:00
Micah Villmow cdfe20b97f Move TargetData to DataLayout.
llvm-svn: 165402
2012-10-08 16:38:25 +00:00
Duncan Sands 271ea6cdc5 The alignment of an sret parameter is known: it must be at least the
alignment of the return type.  Teach the optimizers this.

llvm-svn: 165226
2012-10-04 13:36:31 +00:00
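
In source terms (a hedged sketch, not the LLVM change itself): returning a large aggregate by value is lowered to passing a hidden pointer to storage for the result, and that storage must be at least as aligned as the return type, which is the fact the optimizers are being taught.

  #include <cstdio>

  struct Big { alignas(16) double d[4]; };

  // Conceptually lowered as: void make(Big *sret_slot) -- the hidden sret
  // pointer must point at storage aligned to at least alignof(Big).
  Big make() { return Big{}; }

  int main() {
    Big b = make();
    std::printf("alignof(Big) = %zu, &b = %p\n",
                alignof(Big), static_cast<void *>(&b));
    return 0;
  }
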
Sylvestre Ledru 91ce36c986 Revert 'Fix a typo 'iff' => 'if''. 'iff' is an abbreviation of 'if and only if'. See: http://en.wikipedia.org/wiki/If_and_only_if Commit 164767
llvm-svn: 164768
2012-09-27 10:14:43 +00:00
Sylvestre Ledru 721cffd53a Fix a typo 'iff' => 'if'
llvm-svn: 164767
2012-09-27 09:59:43 +00:00
Richard Smith 228e6d4cf3 Fix integer undefined behavior due to signed left shift overflow in LLVM.
Reviewed offline by chandlerc.

llvm-svn: 162623
2012-08-24 23:29:28 +00:00
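
A hedged, standalone illustration of the class of bug being fixed (the specific sites are in the commit, not here): left-shifting a signed value into or past its sign bit is undefined behaviour in C++, so such code should shift an unsigned value and convert afterwards.

  #include <cstdint>
  #include <cstdio>

  int main() {
    unsigned bit = 31;
    // int bad = 1 << bit;  // undefined behaviour: shifts into int's sign bit
    // Doing the shift in an unsigned type is well defined; the conversion back
    // is at worst implementation-defined, never UB.
    std::int32_t ok = static_cast<std::int32_t>(std::uint32_t{1} << bit);
    std::printf("%d\n", ok);
    return 0;
  }
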
Nuno Lopes 0d44a50426 PHINode::hasConstantValue(): return undef if the PHI is fully recursive.
Thanks Duncan for the idea

llvm-svn: 159687
2012-07-03 21:15:40 +00:00
Dan Gohman ed7c24e2d9 Teach DeadStoreElimination to eliminate exit-block stores with phi addresses.
llvm-svn: 156558
2012-05-10 18:57:38 +00:00
Duncan Sands 34c4869cf6 Just mark the sign bit as known zero, rather than marking other,
irrelevant bits of the LHS as known zero.  Fixes PR12541.

llvm-svn: 155818
2012-04-30 11:56:58 +00:00
Chandler Carruth 28192c9398 Fix ValueTracking to conclude that debug intrinsics are safe to
speculate. Without this, loop rotate (among many other places) would
suddenly stop working in the presence of debug info. I found this
looking at loop rotate, and have augmented its tests with a reduction
out of a very hot loop in yacr2 where failing to do this rotation costs
sometimes more than 10% in runtime performance, perturbing numerous
downstream optimizations.

This should have no impact on performance without debug info, but the
change in performance when debug info is enabled can be extreme. As
a consequence (and this is how I got to this yak), any profiling of
performance problems should be treated with deep suspicion -- the results may
have been wildly inaccurate if debug info was enabled for profiling. =/
Just a heads up.

llvm-svn: 154263
2012-04-07 19:22:18 +00:00
Rafael Espindola ba0a6cabb8 Always compute all the bits in ComputeMaskedBits.
This allows us to keep passing reduced masks to SimplifyDemandedBits, but
know about all the bits if SimplifyDemandedBits fails. This allows instcombine
to simplify cases like the one in the included testcase.

llvm-svn: 154011
2012-04-04 12:51:34 +00:00
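
For readers unfamiliar with the analysis, a minimal standalone sketch (not the LLVM implementation) of the "known bits" bookkeeping ComputeMaskedBits performs: track which bits are definitely zero and definitely one, and propagate them through operations.

  #include <cstdint>
  #include <cstdio>

  struct Known {
    std::uint32_t zero = 0;  // bits known to be 0
    std::uint32_t one = 0;   // bits known to be 1
  };

  // AND: a bit is known 1 only if it is known 1 in both operands, and known 0
  // if it is known 0 in either operand.
  Known knownAnd(Known a, Known b) {
    return Known{a.zero | b.zero, a.one & b.one};
  }

  int main() {
    Known x;                 // nothing known about x
    Known c{~0xF0u, 0xF0u};  // the constant 0xF0: every bit is known
    Known r = knownAnd(x, c);
    std::printf("known-zero mask of (x & 0xF0): 0x%08x\n", (unsigned)r.zero);  // 0xffffff0f
    return 0;
  }
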
Rafael Espindola 80c540e656 Teach CodeGen's version of computeMaskedBits to understand the range metadata.
This is the CodeGen equivalent of r153747. I tested that there is no noticeable
performance difference with any combination of -O0/-O2/-g when compiling
gcc as a single compilation unit.

llvm-svn: 153817
2012-03-31 18:14:00 +00:00
Rafael Espindola 53190539db Add computeMaskedBitsLoad back, as it was the change to instsimplify that
caused the slowdown last time.

llvm-svn: 153747
2012-03-30 15:52:11 +00:00
Chad Rosier e27081d348 Revert r153521 as it's causing large regressions on the nightly testers.
Original commit message for r153521 (aka r153423):
Use the new range metadata in computeMaskedBits and add a new optimization to
instruction simplify that lets us remove an and when loding a boolean value.

llvm-svn: 153587
2012-03-28 18:42:50 +00:00
Chad Rosier 8e6dbccd03 Reapply r153423; the original commit was fine. The failing test, distray, had
undefined behavior, which Rafael was kind enough to fix.

Original commit message for r153423:
Use the new range metadata in computeMaskedBits and add a new optimization to
instruction simplify that lets us remove an and when loading a boolean value.

llvm-svn: 153521
2012-03-27 17:44:52 +00:00
Chad Rosier 08e57e5ccf Revert r153423 as this is causing failures on our internal nightly testers.
Original commit message:
Use the new range metadata in computeMaskedBits and add a new optimization to
instruction simplify that lets us remove an and when loading a boolean value.

llvm-svn: 153452
2012-03-26 18:07:14 +00:00
Rafael Espindola df9b4adb82 Use the new range metadata in computeMaskedBits and add a new optimization to
instruction simplify that lets us remove an and when loading a boolean value.

llvm-svn: 153423
2012-03-26 01:44:11 +00:00
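
A small standalone sketch of why the range metadata lets the `and` go away (assuming a lower bound of 0 and a byte-sized value; this is not the LLVM code): if a loaded value is known to lie in [0, 2) -- a boolean stored in a byte -- then every bit above bit 0 is already known zero, so `x & 1` is just `x`.

  #include <cstdint>
  #include <cstdio>

  // Known-zero bits implied by an exclusive upper bound, assuming the lower
  // bound is 0 (a simplified stand-in for what !range metadata provides).
  std::uint8_t knownZeroFromUpperBound(std::uint8_t hiExclusive) {
    std::uint8_t mask = 0;
    while (mask + 1u < hiExclusive)           // smallest all-ones value >= hi-1
      mask = static_cast<std::uint8_t>((mask << 1) | 1u);
    return static_cast<std::uint8_t>(~mask);  // everything above it is known 0
  }

  int main() {
    std::uint8_t kz = knownZeroFromUpperBound(2);  // a bool loaded from an i8
    // Prints 0xfe: only bit 0 can ever be set, so (x & 1) == x and the and
    // instruction can be removed.
    std::printf("known zero: 0x%02x\n", (unsigned)kz);
    return 0;
  }
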
Nick Lewycky fa30607eca Factor out the multiply analysis code in ComputeMaskedBits and apply it to the
overflow checking multiply intrinsic as well.

Add a test for this, updating the test from grep to FileCheck.

llvm-svn: 153028
2012-03-18 23:28:48 +00:00
Nick Lewycky fea3e00e09 Factor out the analysis of addition and subtraction in ComputeMaskedBits. Reuse
it to analyze extractvalue(llvm.[us](add|sub).with.overflow.*) intrinsics!

llvm-svn: 152398
2012-03-09 09:23:50 +00:00
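
For orientation, a hedged sketch of what these intrinsics look like at the source level (the GCC/Clang builtin below is one form that lowers to them): the value half of an overflow-checked add is just an ordinary wrapped add, which is why sharing the addition analysis pays off.

  #include <cstdio>

  int main() {
    unsigned a = 0xFFFFFFF0u, b = 0x20u, sum = 0;
    // The extracted value element of the overflow intrinsic is the wrapped
    // sum, so known-bits facts about a plain add apply to it as well.
    bool overflowed = __builtin_add_overflow(a, b, &sum);
    std::printf("sum=%u overflowed=%d\n", sum, (int)overflowed);
    return 0;
  }
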
Nick Lewycky 1d57ee341a No functionality change. Type::isSized() can be expensive, so avoid calling it
until after other inexpensive tests.

llvm-svn: 152195
2012-03-07 02:27:53 +00:00
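
The general pattern behind this, as a trivial standalone sketch with hypothetical names (not the LLVM code): order &&-chained tests so the cheap ones run first, letting short-circuit evaluation skip the expensive one on the common path.

  #include <cstdio>

  bool cheapCheck(int x) { return (x & 1) != 0; }

  bool expensiveCheck(int x) {
    // Stand-in for something costly such as Type::isSized().
    return x > 100;
  }

  bool isInteresting(int x) {
    // Cheap test first: && short-circuits, so expensiveCheck rarely runs.
    return cheapCheck(x) && expensiveCheck(x);
  }

  int main() {
    std::printf("%d %d\n", (int)isInteresting(7), (int)isInteresting(101));  // 0 1
    return 0;
  }
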
Eli Friedman af3c6fe51e A few more cases of missing masking in ComputeMaskedBits; found by inspection.
llvm-svn: 152070
2012-03-05 23:22:40 +00:00
Eli Friedman a8b75ac798 Make sure we don't return bits outside the mask in ComputeMaskedBits. PR12189.
llvm-svn: 152066
2012-03-05 23:09:40 +00:00
Chris Lattner 8213c8af29 Remove some dead code and tidy things up now that vectors use ConstantDataVector
instead of always using ConstantVector.

llvm-svn: 149912
2012-02-06 21:56:39 +00:00
Bill Wendling 0aef16afd5 [unwind removal] Remove all of the code for the dead 'unwind' instruction. There
were no 'unwind' instructions being generated before this, so this is in effect
a no-op.

llvm-svn: 149906
2012-02-06 21:44:22 +00:00
Chris Lattner cf9e8f6968 reapply the patches reverted in r149470 that reenable ConstantDataArray,
but with a critical fix to the SelectionDAG code that optimizes copies
from strings into immediate stores: the previous code stopped reading
string data at the first nul.  Address this by adding a new argument to
llvm::getConstantStringInfo, preserving the behavior before the patch.

llvm-svn: 149800
2012-02-05 02:29:43 +00:00
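
The failure mode being fixed, shown in plain C++ (a hedged illustration, not the SelectionDAG code): a constant initializer can contain meaningful bytes after an embedded NUL, so anything lowering a copy from it must read the full data rather than stopping at the first '\0'.

  #include <cstdio>
  #include <cstring>

  int main() {
    static const char data[] = "ab\0cd";   // 6 bytes, including the embedded NUL
    char out[sizeof data];
    std::memcpy(out, data, sizeof data);   // must copy all 6 bytes, not just "ab"
    std::printf("%c%c\n", out[3], out[4]); // prints "cd"
    return 0;
  }
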
Argyrios Kyrtzidis 17c981a45b Revert Chris' commits up to r149348 that started causing VMCoreTests unit test to fail.
These are:

r149348
r149351
r149352
r149354
r149356
r149357
r149361
r149362
r149364
r149365

llvm-svn: 149470
2012-02-01 04:51:17 +00:00
Chris Lattner 997348e9fe remove the last vestiges of llvm::GetConstantStringInfo, in CodeGen.
llvm-svn: 149356
2012-01-31 05:09:17 +00:00
Chris Lattner 108423a94a Change ConstantArray::get to form a ConstantDataArray when possible,
kicking in the big win of ConstantDataArray.  As part of this, change
the implementation of GetConstantStringInfo in ValueTracking to work
with ConstantDataArray (and not ConstantArray) making it dramatically,
amazingly, more efficient in the process and renaming it to 
getConstantStringInfo.

This keeps around a GetConstantStringInfo entrypoint that (grossly)
forwards to getConstantStringInfo and constructs the std::string 
required, but existing clients should move over to 
getConstantStringInfo instead.

llvm-svn: 149351
2012-01-31 04:42:22 +00:00
Chris Lattner 61a1d6cb81 progress on making the world safe for ConstantDataVector. While
we're at it, allow PatternMatch's "neg" pattern to match integer
vector negations, and enhance ComputeNumSignBits to handle
shl of vectors.

llvm-svn: 149082
2012-01-26 21:37:55 +00:00
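
The sign-bit rule being extended to vectors, checked with a small standalone sketch (not the LLVM implementation): shifting left by c discards c copies of the sign bit, so NumSignBits(x << c) is at least NumSignBits(x) - c, and never less than 1.

  #include <cstdio>

  // Number of high bits equal to the sign bit (including the sign bit itself).
  int numSignBits32(int v) {
    unsigned u = static_cast<unsigned>(v);
    unsigned sign = u >> 31;
    int n = 0;
    for (int bit = 31; bit >= 0 && ((u >> bit) & 1u) == sign; --bit)
      ++n;
    return n;
  }

  int main() {
    // -32 is -4 shifted left by 3: the count drops from 30 to 27, i.e. by 3.
    std::printf("%d %d\n", numSignBits32(-4), numSignBits32(-32));
    return 0;
  }
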
Chris Lattner 6705883ad8 use Constant::getAggregateElement to simplify a bunch of code.
llvm-svn: 148934
2012-01-25 06:48:06 +00:00
Chris Lattner 9be59599b3 Use the right method to get the # elements in a CDS.
llvm-svn: 148897
2012-01-25 01:27:20 +00:00
Chris Lattner f7eb543380 teach valuetracking about ConstantDataSequential
llvm-svn: 148790
2012-01-24 07:54:10 +00:00
Dan Gohman 7ac046a261 Generalize isSafeToSpeculativelyExecute to work on arbitrary
Values, rather than just Instructions, since it's interesting
for ConstantExprs too.

llvm-svn: 147560
2012-01-04 23:01:09 +00:00
Benjamin Kramer 9442cd01f6 PatternMatch: Introduce a matcher for instructions with the "exact" bit. Use it to simplify a few matchers.
llvm-svn: 147403
2012-01-01 17:55:30 +00:00
Benjamin Kramer 4ee5747fdd ComputeMaskedBits: Make knownzero computation more aggressive for ctlz with undef zero.
unsigned foo(unsigned x) { return 31 - __builtin_clz(x); }
now compiles into a single "bsrl" instruction on x86.

llvm-svn: 147255
2011-12-24 17:31:46 +00:00
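
Why the single bsrl is valid, sketched in standalone C++ (an illustration, not the LLVM change): with the zero case undefined, ctlz(x) lies in [0, 31], so its upper bits are known zero, and 31 - clz(x) is exactly the index of the highest set bit -- the quantity bsrl computes.

  #include <cstdio>

  int main() {
    for (unsigned x = 1; x != 0; x <<= 1) {
      int fromClz = 31 - __builtin_clz(x);  // GCC/Clang builtin; x must be nonzero
      int bsrIndex = 0;                     // index of the highest set bit
      for (unsigned t = x; t >>= 1;)
        ++bsrIndex;
      if (fromClz != bsrIndex)
        std::printf("mismatch at %u\n", x);
    }
    std::printf("31 - clz(x) matches the highest-set-bit index for these values\n");
    return 0;
  }
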
Nick Lewycky b4039f633c Make some intrinsics safe to speculatively execute.
llvm-svn: 147036
2011-12-21 05:52:02 +00:00
Dan Gohman 75d7d5e988 Move Instruction::isSafeToSpeculativelyExecute out of VMCore and
into Analysis as a standalone function, since there's no need for
it to be in VMCore. Also, update it to use isKnownNonZero and
other goodies available in Analysis, making it more precise,
enabling more aggressive optimization.

llvm-svn: 146610
2011-12-14 23:49:11 +00:00
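
One way the extra precision helps, as a hedged standalone sketch (hypothetical helper names, not the LLVM API): a division is only safe to speculate when the divisor is known to be non-zero, which is exactly the kind of fact isKnownNonZero supplies.

  #include <cstdio>

  // Stand-in for a known-bits query: the divisor is provably non-zero when at
  // least one of its bits is known to be 1.
  bool knownNonZero(unsigned knownOneBits) { return knownOneBits != 0; }

  bool safeToSpeculateUDiv(unsigned divisorKnownOneBits) {
    // Without this fact the division might trap, so it must not be hoisted.
    return knownNonZero(divisorKnownOneBits);
  }

  int main() {
    std::printf("%d %d\n",
                (int)safeToSpeculateUDiv(0x1u),
                (int)safeToSpeculateUDiv(0x0u));  // 1 0
    return 0;
  }
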
Chad Rosier 8abf65a130 Probably not a good idea to convert a single vector load into a memcpy. We
don't do this now, but add a test case to prevent this from happening in the
future.
Additional test for rdar://9892684

llvm-svn: 145879
2011-12-06 00:19:08 +00:00
Nadav Rotem 3924cb0267 Add support for vectors of pointers.
llvm-svn: 145801
2011-12-05 06:29:09 +00:00
Duncan Sands ca6f8ddbf8 Fix a theoretical problem (not seen in the wild): if different instances of a
weak variable are compiled by different compilers, such as GCC and LLVM, while
LLVM may increase the alignment to the preferred alignment there is no reason to
think that GCC will use anything more than the ABI alignment.  Since it is the
GCC version that might end up in the final program (as the linkage is weak), it
is wrong to increase the alignment of loads from the global up to the preferred
alignment as the alignment might only be the ABI alignment.

Increasing alignment up to the ABI alignment might be OK, but I'm not totally
convinced that it is.  It seems better to just leave the alignment of weak
globals alone.

llvm-svn: 145413
2011-11-29 18:26:38 +00:00
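
The scenario above in source terms (a hedged sketch using the GCC/Clang weak attribute): several translation units, possibly built by different compilers, each provide a weak definition, the linker keeps only one of them, and the surviving definition is guaranteed no more than the ABI alignment.

  #include <cstdio>

  // Weak definition: another object file's definition may be the one that
  // survives linking, and it may have been emitted with only the ABI
  // alignment for int, so assuming anything more when loading is unsound.
  __attribute__((weak)) int shared_counter = 0;

  int main() {
    std::printf("shared_counter lives at %p\n",
                static_cast<void *>(&shared_counter));
    return 0;
  }
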