Commit Graph

4091 Commits

Chris Lattner 96b8826542 speed up CGP a bit by scanning predecessors through phi operands
instead of with pred_begin/end.

llvm-svn: 96078
2010-02-13 04:04:42 +00:00
Dan Gohman 5b18f039eb Fix a pruning heuristic which implicitly assumed that SmallPtrSet is
deterministically sorted.

llvm-svn: 96071
2010-02-13 02:06:02 +00:00
Dan Gohman 2b75de97c0 Reapply 95979, a compile-time speedup, now that the bug it exposed is fixed.
llvm-svn: 96005
2010-02-12 19:35:25 +00:00
Dan Gohman 363f847ec6 Fix this code to avoid dereferencing an end() iterator in
offset distributions it doesn't expect.

llvm-svn: 96002
2010-02-12 19:20:37 +00:00
Daniel Dunbar e0b2c69d3c Revert "Reverse the order for collecting the parts of an addrec. The order", it
is breaking llvm-gcc bootstrap.

llvm-svn: 95988
2010-02-12 17:27:08 +00:00
Dan Gohman 0194f58047 Reverse the order for collecting the parts of an addrec. The order
doesn't matter, except that ScalarEvolution tends to need less time
to fold the results this way.

llvm-svn: 95979
2010-02-12 11:08:26 +00:00
Dan Gohman 45774ce0ad Reapply the new LoopStrengthReduction code, with compile time and
bug fixes, and with improved heuristics for analyzing foreign-loop
addrecs.

This change also flattens IVUsers, eliminating the stride-oriented
groupings, which makes it easier to work with.

llvm-svn: 95975
2010-02-12 10:34:29 +00:00
Chris Lattner c053cbbc4d Make DSE only scan blocks that are reachable from the entry
block.  Other blocks may have pointer cycles that will crash
basicaa and other alias analyses.  In any case, there is no
point wasting cycles optimizing dead blocks.  This fixes 
rdar://7635088

llvm-svn: 95852
2010-02-11 05:11:54 +00:00
Chris Lattner d924f63692 Make jump threading honor x|undef -> true and x&undef -> false,
instead of considering x|undef -> x, which may not be true.

llvm-svn: 95850
2010-02-11 04:40:44 +00:00
Devang Patel 03936a1880 Ignore dbg info intrinsics.
llvm-svn: 95828
2010-02-11 00:20:49 +00:00
Dan Gohman 4a618827de Fix "the the" and similar typos.
llvm-svn: 95781
2010-02-10 16:03:48 +00:00
Eric Christopher ad1aa86276 Pull these back out; they're a little too aggressive and
time-consuming for a simple optimization.

llvm-svn: 95671
2010-02-09 17:29:18 +00:00
Eric Christopher be2f0b2b7b Add file in here too.
llvm-svn: 95641
2010-02-09 01:11:03 +00:00
Eric Christopher 9f85e7eb16 Add a new pass to do llvm.objsize lowering using SCEV.
Initial skeleton and SCEVUnknown lowering implemented,
the rest should come relatively quickly.  Move testcase
to new directory.

Move pass to right before SimplifyLibCalls - which is
moved down a bit so we can take advantage of a few opts.

llvm-svn: 95628
2010-02-09 00:35:38 +00:00
Jakob Stoklund Olesen 5f9ead2714 Don't unroll loops containing function calls.
llvm-svn: 95454
2010-02-05 23:21:31 +00:00
Jakob Stoklund Olesen 916f48a054 Teach SimplifyCFG about magic pointer constants.
Weird code sometimes uses pointer constants other than null. This patch
teaches SimplifyCFG to build switch instructions in those cases.

Code like this:

void f(const char *x) {
  if (!x)
    puts("null");
  else if ((uintptr_t)x == 1)
    puts("one");
  else if (x == (char*)2 || x == (char*)3)
    puts("two");
  else if ((intptr_t)x == 4)
    puts("four");
  else
    puts(x);
}

Now becomes a switch:

define void @f(i8* %x) nounwind ssp {
entry:
  %magicptr23 = ptrtoint i8* %x to i64            ; <i64> [#uses=1]
  switch i64 %magicptr23, label %if.else16 [
    i64 0, label %if.then
    i64 1, label %if.then2
    i64 2, label %if.then9
    i64 3, label %if.then9
    i64 4, label %if.then14
  ]

Note that LLVM's own DenseMap uses magic pointers.

llvm-svn: 95439
2010-02-05 22:03:18 +00:00
Dan Gohman 4739e41ce9 Implement releaseMemory in CodeGenPrepare and free the BackEdges
container data. This prevents it from holding onto dangling
pointers and potentially behaving unpredictably.

llvm-svn: 95409
2010-02-05 19:24:11 +00:00
Bob Wilson 27dfb1e1a4 Do not reassociate expressions with i1 type. SimplifyCFG converts some
short-circuited conditions to AND/OR expressions, and those expressions
are often converted back to a short-circuited form in code gen.  The
original source order may have been optimized to take advantage of the
expected values, and if we reassociate them, we change the order and
subvert that optimization.  Radar 7497329.

llvm-svn: 95333
2010-02-04 23:32:37 +00:00
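To make the ordering concern in the entry above concrete, here is a small hypothetical C++ sketch (the function names and the "usually false" assumption are illustrative, not taken from the commit):

#include <cstdio>

// Both tests are cheap and side-effect free, so SimplifyCFG may collapse the
// short-circuit '&&' into a single i1 AND.  The author deliberately put the
// almost-always-false test first.
static bool rare(int x)    { return x == 42; }
static bool pricier(int x) { return x * x + 3 * x > 1000; }

bool check(int x) {
  // If Reassociate reorders the i1 operands of that AND, codegen may re-expand
  // the expression into branches that test 'pricier' first, subverting the
  // source-level ordering chosen for the expected values.
  return rare(x) && pricier(x);
}

int main() { std::printf("%d\n", check(7)); }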
Bob Wilson 04365c5f72 Adjust the heuristics used to decide when SROA is likely to be profitable.
The SRThreshold value makes perfect sense for checking if an entire aggregate
should be promoted to a scalar integer, but it is not so good for splitting
an aggregate into its separate elements.  A struct may contain a large embedded
array along with some scalar fields that would benefit from being split apart
by SROA.  Even if the total aggregate size is large, it may still be good to
perform SROA.  Thus, the most important piece of this patch is simply moving
the aggregate size comparison vs. SRThreshold so that it guards only the
aggregate promotion.

We have also been checking the number of elements to decide if an aggregate
should be split up.  The limit of "SRThreshold/4" seemed rather arbitrary,
and I don't think it's very useful to derive this limit from SRThreshold
anyway.  I've collected some data showing that the current default limit of
32 (since SRThreshold defaults to 128) is a reasonable cutoff for struct
types.  One thing suggested by the data is that distinguishing between structs
and arrays might be useful.  There are (obviously) a lot more large arrays
than large structs (as measured by the number of elements and not the total
size -- a large array inside a struct still counts as a single element given
the way we do SROA right now).  Out of 8377 arrays where we successfully
performed SROA while compiling a large set of benchmarks, only 16 of them had
more than 8 elements.  And, for those 16 arrays, it's not at all clear that
SROA was actually beneficial.  So, to offset the compile time cost of
investigating more large structs for SROA, the patch lowers the limit on array
elements to 8.

This fixes Apple Radar 7563690.

llvm-svn: 95224
2010-02-03 17:23:56 +00:00
Evan Cheng 27a41d5473 Revert 94937 and move the noreturn check to codegen.
llvm-svn: 95198
2010-02-03 03:55:59 +00:00
Bob Wilson 76e8c59509 Fix some comment typos.
llvm-svn: 95170
2010-02-03 00:33:21 +00:00
Eric Christopher d86233c118 Recommit this, looks like it wasn't the cause.
llvm-svn: 95165
2010-02-03 00:21:58 +00:00
Eric Christopher e67d01a9a8 Hopefully temporarily revert this.
llvm-svn: 95154
2010-02-02 23:01:31 +00:00
Eric Christopher 4264e7e46f Re-add strcmp and known size object size checking optimization.
Passed bootstrap and nightly test run here.

llvm-svn: 95145
2010-02-02 22:10:43 +00:00
Chris Lattner 302240d73e fix a crash in loop unswitch on a loop invariant vector condition.
llvm-svn: 95055
2010-02-02 02:26:54 +00:00
Eric Christopher 14dfc3f6df Don't need to check the last argument since it'll always be bool. We also
don't use TargetData here.

llvm-svn: 95040
2010-02-02 00:51:45 +00:00
Eric Christopher 9afa973203 More indentation/tabification fixes.
llvm-svn: 95036
2010-02-02 00:13:06 +00:00
Eric Christopher 1408234753 Untabify previous commit.
llvm-svn: 95035
2010-02-02 00:06:55 +00:00
Eric Christopher 56e4182c49 Formatting.
llvm-svn: 95027
2010-02-01 23:25:03 +00:00
Bob Wilson d517b52012 Add an option to GVN to remove all partially redundant loads. This is currently
disabled by default.  This divides the existing load PRE code into 2 phases:
first it checks that it is safe to move the load to each of the predecessors
where it is unavailable, and then if it is safe, the code is changed to move
the load.  Radar 7571861.

llvm-svn: 95007
2010-02-01 21:17:14 +00:00
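As a rough illustration of the partial redundancy targeted by the entry above (an assumed C++ example, not from the commit):

// The load of *p is already available when 'c' is true (it happened in the
// if-block) but not when 'c' is false.  The new load-PRE phases first prove
// it is safe to place a load on the path where it is missing, and only then
// insert it, making the load in the return fully redundant and removable.
int sum(int *p, bool c) {
  int s = 0;
  if (c)
    s = *p;        // load available along this path
  return s + *p;   // partially redundant: redundant only when c is true
}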
Evan Cheng d86d3fe0c3 Do not mark no-return calls tail calls. It'll screw up special calls like longjmp, and it doesn't make much sense for performance reasons. If my logic is faulty, please let me know.
llvm-svn: 94937
2010-01-31 00:59:31 +00:00
Bob Wilson 56600a15ad Check alignment of loads when deciding whether it is safe to execute them
unconditionally.  Besides checking the offset, also check that the underlying
object is aligned as much as the load itself.

llvm-svn: 94875
2010-01-30 04:42:39 +00:00
Eric Christopher 5a0e174863 Revert my last couple of patches. They appear to have broken bison.
llvm-svn: 94841
2010-01-29 21:16:24 +00:00
Bob Wilson 7c42b9d51e Improve isSafeToLoadUnconditionally to recognize that GEPs with constant
indices are safe if the result is known to be within the bounds of the
underlying object.

llvm-svn: 94829
2010-01-29 19:19:08 +00:00
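A minimal sketch of the kind of case this enables, assuming a local array so the bounds of the underlying object are known (not from the commit):

int pick(bool c) {
  int buf[4] = {1, 2, 3, 4};
  // Both loads go through GEPs with constant indices that are provably within
  // the bounds of 'buf', so each load is safe to execute unconditionally; the
  // optimizer can therefore speculate them out of the control flow instead of
  // keeping one load behind each arm of the branch.
  return c ? buf[1] : buf[3];
}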
Eric Christopher 9b3c02b7da Make strcpy_chk lower to strcpy if we have a safe size.
llvm-svn: 94783
2010-01-29 01:37:11 +00:00
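For context, a hedged C++ sketch of what "a safe size" means here; the fortify builtin shown is the usual entry point, but the exact lowering rules are not spelled out in the commit:

#include <cstring>

char buf[16];

void copy_greeting() {
  // Fortified sources emit __strcpy_chk(dst, src, object-size-of-dst).  When
  // the destination is provably large enough for the source (6 bytes into a
  // 16-byte buffer here), the checked call can be lowered to a plain strcpy,
  // dropping the runtime overflow check.
  __builtin___strcpy_chk(buf, "hello", sizeof(buf));
}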
Bill Wendling 48816a0b3f Generic reformatting and comment fixing. No functionality change.
llvm-svn: 94771
2010-01-29 00:52:43 +00:00
Bill Wendling 8277838cf8 Add newline to debugging output, and fix some grammar-os in comment.
llvm-svn: 94765
2010-01-29 00:27:39 +00:00
Benjamin Kramer 40582a891c Use the less expensive getName function instead of getNameStr.
llvm-svn: 94683
2010-01-27 19:46:52 +00:00
Bob Wilson 70c8fe5e4e Remove check for an impossible condition: the condition of the while loop has
already checked that TmpBB->getSinglePredecessor() is non-null.

llvm-svn: 94451
2010-01-25 21:28:05 +00:00
Bob Wilson fc060e4337 Change Value::getUnderlyingObject to have the MaxLookup value specified as a
parameter with a default value, instead of just hardcoding it in the
implementation.  The limit of MaxLookup = 6 was introduced in r69151 to fix
a performance problem with O(n^2) behavior in instcombine, but the scalarrepl
pass is relying on getUnderlyingObject to go all the way back to an AllocaInst.
Making the limit part of the method signature makes it clear that by default
the result is limited and should help avoid similar problems in the future.
This fixes pr6126.

llvm-svn: 94433
2010-01-25 18:26:54 +00:00
Chris Lattner 823aed16f9 make -fno-rtti the default unless a directory builds with REQUIRES_RTTI.
llvm-svn: 94378
2010-01-24 20:43:08 +00:00
Chris Lattner 29b15c5cfd third bug from PR6119: the xor dupe extension allows
for arbitrary terminators in predecessors, don't assume
it is a conditional or uncond branch.  The testcase shows
an example where they can happen with switches.

llvm-svn: 94323
2010-01-23 19:21:31 +00:00
Chris Lattner ba2d0b89ff add an early out to ProcessBranchOnXOR to speed it up, and
handle the case when we can infer an input to the xor
from all inputs that agree, instead of going into an
infinite loop.  Another part of PR6119.

llvm-svn: 94321
2010-01-23 19:16:25 +00:00
Chris Lattner de5ab4860f fix a crash in jump threading, PR6119
llvm-svn: 94319
2010-01-23 18:56:07 +00:00
Eric Christopher ba7cd4c393 Reapply 94059 while fixing the calling convention setup
for strcpy.

llvm-svn: 94287
2010-01-23 05:29:06 +00:00
Bob Wilson 6c0c8d41b4 Revert 94059. It is breaking the MultiSource/Benchmarks/Prolangs-C/bison
test on ARM.

llvm-svn: 94198
2010-01-22 19:16:40 +00:00
Chris Lattner 7ba0661f27 Stop building RTTI information for *most* llvm libraries. Notable
missing ones are libsupport, libsystem and libvmcore.  libvmcore is
currently blocked on bugpoint, which uses EH.  Once it stops using
EH, we can switch it off.

This #if 0's out 3 unit tests, because gtest requires RTTI information.
Suggestions welcome on how to fix this.

llvm-svn: 94164
2010-01-22 06:49:46 +00:00
Dan Gohman 045f81981a Revert LoopStrengthReduce.cpp to pre-r94061 for now.
llvm-svn: 94123
2010-01-22 00:46:49 +00:00
Victor Hernandez 1df65186d1 DbgInfoIntrinsics no longer appear in an instruction's use list, so clean up the code that looks for them when iterating over uses, and remove OnlyUsedByDbgInfoIntrinsics().
llvm-svn: 94111
2010-01-21 23:05:53 +00:00
Dan Gohman b1ee154b6b When inserting expressions for post-increment users which contain
loop-variant components, adds must be inserted after the increment.
Keep track of the increment position for this case, and insert
these adds in the correct location.

llvm-svn: 94110
2010-01-21 23:01:22 +00:00
Dan Gohman cb8d577eb2 Include IVUsers information in LSR's debug output.
llvm-svn: 94108
2010-01-21 22:46:32 +00:00
Dan Gohman 29916e023d Prune the search for candidate formulae if the number of register
operands exceeds the number of registers used in the initial
solution, as that wouldn't lead to a profitable solution anyway.

llvm-svn: 94107
2010-01-21 22:42:49 +00:00
Dan Gohman c903499ff8 Add a comment.
llvm-svn: 94104
2010-01-21 21:31:09 +00:00
Dan Gohman 51ad99d2c5 Re-implement the main strength-reduction portion of LoopStrengthReduction.
This new version is much more aggressive about doing "full" reduction in
cases where it reduces register pressure, and also more aggressive about
rewriting induction variables to count down (or up) to zero when doing so
reduces register pressure.

It currently uses fairly simplistic algorithms for finding reuse
opportunities, but it introduces a new framework that allows it to combine
multiple strategies at once to form hybrid solutions, instead of doing
all full-reduction or all base+index.

llvm-svn: 94061
2010-01-21 02:09:26 +00:00
Eric Christopher fa863258d0 Add strcpy_chk -> strcpy support for "don't know" object size
answers.  This will update as object size checking gets better information.

llvm-svn: 94059
2010-01-21 01:04:38 +00:00
Dan Gohman ca19445d08 When doing address-mode sinking, expand the base register first, rather
than the scaled register. This makes it more likely that subsequent
AddrModeMatcher queries will match the new address the same way as the
old, instead of accidentally matching what had been the base register
as the new scaled register, and then failing to match the scaled register.
This fixes some problems with address-mode sinking multiple muls into a
block, which will be a lot more common with some upcoming
LoopStrengthReduction changes.

llvm-svn: 93935
2010-01-19 22:45:06 +00:00
Bob Wilson 58d59fe394 Fix a crash in scalarrepl for memcpy/memmove where the source and destination
are the same.  I had already fixed a similar problem where the source and
destination were different bitcasts derived from the same alloca, but the
previous fix still did not handle the case where both operands are exactly
the same value.  Radar 7552893.

llvm-svn: 93848
2010-01-19 04:32:48 +00:00
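A rough reproduction of the degenerate case described above (assumed, simplified C++; frontends can emit such self-copies, e.g. for self-assignment):

#include <cstring>

struct Pair { int a, b; };

int self_copy() {
  Pair p = {1, 2};
  // Source and destination of the memcpy are the exact same value (the same
  // alloca), not merely two different bitcasts of it; scalarrepl has to treat
  // this as a no-op when splitting the aggregate rather than crashing.
  std::memcpy(&p, &p, sizeof(p));
  return p.a + p.b;
}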
Owen Anderson cdea3572fa Convert some of the dynamic opcode lookups into static ones.
llvm-svn: 93693
2010-01-17 19:33:27 +00:00
Chris Lattner 573da8ac90 1) Use the new SimplifyInstructionsInBlock routine instead of the copy
in JT.

2) When cloning blocks for PHI or xor conditions, use
instsimplify to simplify the code as we go.  This allows us to 
squish common cases early in JT which opens up opportunities for
subsequent iterations, and allows it to completely simplify the
testcase.

llvm-svn: 93253
2010-01-12 20:41:47 +00:00
Chris Lattner af7855d571 tidy up
llvm-svn: 93222
2010-01-12 02:07:50 +00:00
Chris Lattner eb73bdb2e1 Teach jump threading to duplicate small blocks when the branch
condition is a xor with a phi node.  This eliminates nonsense
like this from 176.gcc in several places:

 LBB166_84:
        testl   %eax, %eax
-       setne   %al
-       xorb    %cl, %al
-       notb    %al
-       testb   $1, %al
-       je      LBB166_85
+       je      LBB166_69
+       jmp     LBB166_85

This is rdar://7391699

llvm-svn: 93221
2010-01-12 02:07:17 +00:00
Chris Lattner 6a19ed0b86 some cleanup, and make it obvious that ProcessJumpOnPHI only works
on branches by renaming it and checking for a branch at the call site.

llvm-svn: 93208
2010-01-11 23:41:09 +00:00
Chris Lattner ab7087ad66 only factor from expressions whose uses are empty and whose
base is the right expression type.  This fixes PR5981.

llvm-svn: 93045
2010-01-09 06:01:36 +00:00
Duncan Sands 4a8b15dc74 Suppress an unused variable warning when assertions are off;
remove some trailing whitespace while there.

llvm-svn: 93008
2010-01-08 17:51:48 +00:00
Benjamin Kramer 76e2766442 Use a do-while loop instead of while + boolean.
llvm-svn: 92912
2010-01-07 13:50:07 +00:00
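The change is the usual 'changed' loop pattern; roughly, with a hypothetical work function standing in for the real code:

// Hypothetical work function: returns true while it keeps making progress.
static int Budget = 3;
static bool simplifyOnce() { return --Budget > 0; }

// Before: a boolean primed to true just to enter the loop.
void runWithFlag() {
  bool Changed = true;
  while (Changed)
    Changed = simplifyOnce();
}

// After: a do-while states directly that the body runs at least once and
// drops the artificial initial value.
void runWithDoWhile() {
  bool Changed;
  do
    Changed = simplifyOnce();
  while (Changed);
}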
Eric Christopher 2cdb806fd8 Move the object size intrinsic optimization to inst-combine and make
it work for any integer size return type.

llvm-svn: 92853
2010-01-06 20:04:44 +00:00
Mikhail Glushenkov 40d2429b28 Formatting.
llvm-svn: 92831
2010-01-06 09:20:39 +00:00
Benjamin Kramer d2564e3afb Move remaining stuff to the isInteger predicate.
llvm-svn: 92771
2010-01-05 21:05:54 +00:00
Benjamin Kramer a81a6dff0d Convert a ton of simple integer type equality tests to the new predicate.
llvm-svn: 92760
2010-01-05 20:07:06 +00:00
Dan Gohman b5358003fb Set Changed properly after calling DeleteDeadPHIs.
llvm-svn: 92735
2010-01-05 16:31:45 +00:00
Dan Gohman 28943873e6 Use do+while instead of while for loops which obviously have a
non-zero trip count. Use SmallVector's pop_back_val().

llvm-svn: 92734
2010-01-05 16:27:25 +00:00
Chris Lattner f741d72b84 fix an infinite loop in reassociate building emacs.
llvm-svn: 92679
2010-01-05 04:55:35 +00:00
David Greene 241992382e Change errs() to dbgs().
llvm-svn: 92624
2010-01-05 01:27:47 +00:00
David Greene e0b9789593 Change errs() to dbgs().
llvm-svn: 92623
2010-01-05 01:27:44 +00:00
David Greene 6bc0776343 Change errs() to dbgs().
llvm-svn: 92622
2010-01-05 01:27:39 +00:00
David Greene 3a79df0993 Change errs() to dbgs().
llvm-svn: 92620
2010-01-05 01:27:33 +00:00
David Greene 0fd862254e Change errs() to dbgs().
llvm-svn: 92619
2010-01-05 01:27:30 +00:00
David Greene d17c3916d0 Change errs() to dbgs().
llvm-svn: 92617
2010-01-05 01:27:24 +00:00
David Greene 9ddc6e2e12 Change errs() to dbgs().
llvm-svn: 92615
2010-01-05 01:27:21 +00:00
David Greene 1efdb45562 Change errs() to dbgs().
llvm-svn: 92614
2010-01-05 01:27:19 +00:00
David Greene 2e6efc441f Change errs() to dbgs().
llvm-svn: 92613
2010-01-05 01:27:17 +00:00
David Greene 389fc3b9f6 Change errs() to dbgs().
llvm-svn: 92612
2010-01-05 01:27:15 +00:00
David Greene 74e2d4917d Change errs() to dbgs().
llvm-svn: 92611
2010-01-05 01:27:11 +00:00
David Greene 48c86bedbd Change errs() to dbgs().
llvm-svn: 92610
2010-01-05 01:27:09 +00:00
David Greene 0dd384cfd0 Change errs() to dbgs().
llvm-svn: 92609
2010-01-05 01:27:06 +00:00
David Greene d9c355d590 Change errs() to dbgs().
llvm-svn: 92608
2010-01-05 01:27:04 +00:00
Devang Patel be94f23992 Remove dead debug info intrinsics:
 Intrinsic::dbg_stoppoint
 Intrinsic::dbg_region_start
 Intrinsic::dbg_region_end
 Intrinsic::dbg_func_start
AutoUpgrade simply ignores these intrinsics now.

llvm-svn: 92557
2010-01-05 01:10:40 +00:00
Mikhail Glushenkov 6a8ac8ce8f 80-col violations, trailing whitespace.
llvm-svn: 92470
2010-01-04 07:55:25 +00:00
Chris Lattner c0e6640d3a move instcombine to its own library, it's past time.
llvm-svn: 92459
2010-01-04 06:23:24 +00:00
Chris Lattner 2d91231d82 implement an instcombine xform needed by clang's codegen
on the example in PR4216.  This doesn't trigger in the testsuite,
so I'd really appreciate someone scrutinizing the logic for
correctness.

llvm-svn: 92458
2010-01-04 06:03:59 +00:00
Chris Lattner 48218e42cd pull my debug hooks out, I'm done with this xform for now.
llvm-svn: 92446
2010-01-03 06:58:48 +00:00
Nick Lewycky 475d3d1215 Small cleanups, refactor some duplicated code into a single method. No
functionality change.

llvm-svn: 92445
2010-01-03 04:39:07 +00:00
Chris Lattner fca0c8f93a generalize the previous transformation to handle indexing into
arrays of structs and other arrays, so long as all the subsequent
indexes are constants.  This triggers frequently for stuff like:

@divisions = internal constant [29 x [2 x i32]] [[2 x i32] zeroinitializer, [2 x i32] [i32 0, i32 1], [2 x i32] [i32 0, i32 2], [2 x i32] [i32 0, i32 1], [2 x i32] zeroinitializer, [2 x i32] [i32 0, i32 1], [2 x i32] [i32 0, i32 1], [2 x i32] [i32 0, i32 2], [2 x i32] [i32 0, i32 2], [2 x i32] zeroinitializer, [2 x i32] zeroinitializer, [2 x i32] zeroinitializer, [2 x i32] [i32 0, i32 2], [2 x i32] [i32 0, i32 1], [2 x i32] zeroinitializer, [2 x i32] [i32 1, i32 0], [2 x i32] [i32 1, i32 1], [2 x i32] [i32 1, i32 1], [2 x i32] [i32 1, i32 2], [2 x i32] [i32 1, i32 1], [2 x i32] [i32 1, i32 0], [2 x i32] [i32 1, i32 2], [2 x i32] [i32 1, i32 2], [2 x i32] [i32 1, i32 0], [2 x i32] [i32 1, i32 0], [2 x i32] [i32 1, i32 0], [2 x i32] [i32 1, i32 1], [2 x i32] [i32 1, i32 2], [2 x i32] [i32 1, i32 2]], align 32 ; <[29 x [2 x i32]]*> [#uses=50]

	  %623 = getelementptr inbounds [29 x [2 x i32]]* @divisions, i64 0, i64 %619, i64 0 ; <i32*> [#uses=1]
	   %684 = icmp eq i32 %683, 999 

also for the "my_defs" table in 'gs', etc.

llvm-svn: 92444
2010-01-03 03:03:27 +00:00
Nick Lewycky ff9cd7ace7 Cleanup.
llvm-svn: 92436
2010-01-03 00:55:31 +00:00
Chris Lattner 98ad2b56cc teach instcombine to optimize idioms like A[i]&42 == 0. This
occurs in 403.gcc in mode_mask_array, in safe-ctype.c (which
is copied in multiple apps) in _sch_istable, etc.

llvm-svn: 92427
2010-01-02 22:08:28 +00:00
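A hedged sketch of the idiom in C++ (hypothetical table contents, in the spirit of _sch_istable and mode_mask_array):

// A small constant table of flag bytes.
static const unsigned char Flags[8] = {0, 2, 8, 42, 0, 34, 40, 42};

bool hasNoFlagBits(unsigned i) {
  // A load from a constant global followed by a bit test.  Because every
  // table entry is a known constant, instcombine can fold the load + mask +
  // compare into a computation on the index alone, removing the memory access.
  return (Flags[i & 7] & 42) == 0;
}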
Chris Lattner b56bef45f8 Teach the table lookup optimization to generate range compares
when a consecutive sequence of elements all satisfies the 
predicate.  Like the double compare case, this generates better
code than the magic constant case and generalizes to more than
32/64 element array lookups.

Here are some examples where it triggers.  From 403.gcc, most
accesses to the rtx_class array are handled, e.g.:

@rtx_class = constant [153 x i8] c"xxxxxmmmmmmmmxxxxxxxxxxxxmxxxxxxiiixxxxxxxxxxxxxxxxxxxooxooooooxxoooooox3x2c21c2222ccc122222ccccaaaaaa<<<<<<<<<<<<<<<<<<111111111111bbooxxxxxxxxxxcc2211x", align 32 ; <[153 x i8]*> [#uses=547]
   %142 = icmp eq i8 %141, 105
@rtx_class = constant [153 x i8] c"xxxxxmmmmmmmmxxxxxxxxxxxxmxxxxxxiiixxxxxxxxxxxxxxxxxxxooxooooooxxoooooox3x2c21c2222ccc122222ccccaaaaaa<<<<<<<<<<<<<<<<<<111111111111bbooxxxxxxxxxxcc2211x", align 32 ; <[153 x i8]*> [#uses=543]
	   %165 = icmp eq i8 %164, 60      

Also, most of the 59-element arrays (mode_class/rid_to_yy, etc) 
optimized before are actually range compares.  This lets 32-bit
machines optimize them.

400.perlbmk has stuff like this:

400.perlbmk: PL_regkind, even for 32-bit:
@PL_regkind = constant [62 x i8] c"\00\00\02\02\02\06\06\06\06\09\09\0B\0B\0D\0E\0E\0E\11\12\12\14\14\16\16\18\18\1A\1A\1C\1C\1E\1F !!!$$&'((((,-.///88886789:;8$", align 32 ; <[62 x i8]*> [#uses=4]
	   %811 = icmp ne i8 %810, 33 

@PL_utf8skip = constant [256 x i8] c"\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\03\03\03\03\03\03\03\03\03\03\03\03\03\03\03\03\04\04\04\04\04\04\04\04\05\05\05\05\06\06\07\0D", align 32 ; <[256 x i8]*> [#uses=94]
	   %12 = icmp ult i8 %10, 2
           
etc.

llvm-svn: 92426
2010-01-02 21:50:18 +00:00
Chris Lattner e199d2df80 theoretically the negate we find could be in a different function; check
for this case.

llvm-svn: 92425
2010-01-02 21:46:33 +00:00
Chris Lattner 2fa4ec70fc use enums for the over/underdefined markers for clarity. Switch
to using -2/-3 instead of -1/-2 for a future xform.

llvm-svn: 92423
2010-01-02 20:20:33 +00:00
Chris Lattner 351e22aa36 remove the random sampling framework, which is not maintained anymore.
If there is interest, it can be resurrected from SVN.  PR4912.

llvm-svn: 92422
2010-01-02 20:07:03 +00:00
Nick Lewycky a67519be12 Fix logic error in previous commit. The != case needs to become an or, not an
and.

llvm-svn: 92419
2010-01-02 16:14:56 +00:00