Commit Graph

14749 Commits

Eli Friedman a5e244c08d Don't crash on variable insertelement on ARM. PR10258.
llvm-svn: 142871
2011-10-24 23:08:52 +00:00
Bill Wendling 57e3aaad89 Check the visibility of the global variable before placing it into the stubs
table. A hidden variable could potentially end up in both lists.
<rdar://problem/10336715>

llvm-svn: 142869
2011-10-24 23:05:43 +00:00
Jim Grosbach 3ea0657d54 ARM assembly parsing and encoding for VLD1 w/ writeback.
One- and two-register list variants.

llvm-svn: 142861
2011-10-24 22:16:58 +00:00
Nick Lewycky a58fb48a55 Now that we look at all the header PHIs, we need to consider all of them
when deciding that the loop has stopped evolving. Fixes a miscompile in the gcc
torture testsuite!

llvm-svn: 142843
2011-10-24 21:02:38 +00:00
Owen Anderson 295b1e84ce Fix a NEON disassembly case that was broken in the recent refactorings. As more of this code gets refactored, a lot of these manual decoding hooks should get smaller and/or go away entirely.
llvm-svn: 142817
2011-10-24 18:04:29 +00:00
Dan Gohman 2c9bda1512 Remove the explicit request for "Latency" scheduling from MSP430,
as the Latency scheduler is going away.

llvm-svn: 142811
2011-10-24 17:53:16 +00:00
Dan Gohman c32af340fc Change the default scheduler from Latency to ILP, since Latency
is going away.

llvm-svn: 142810
2011-10-24 17:45:02 +00:00
Jim Grosbach 3adec13c3e Update test for r142801.
llvm-svn: 142806
2011-10-24 17:26:26 +00:00
Benjamin Kramer d48d52e7b2 XFAIL test on leak checkers.
llvm-svn: 142804
2011-10-24 17:24:05 +00:00
Chandler Carruth 7111f4564c Remove return heuristics from the static branch probabilities, and
introduce no-return or unreachable heuristics.

The return heuristics from the Ball and Larus paper don't work well in
practice as they pessimize early return paths. The only return
heuristics with a good hit rate are those for:
 - NULL return
 - Constant return
 - Negative integer return

Only the last of these three can possibly require significant code for
the returning block, and even the last is fairly rare and usually also
a constant. As a consequence, even for the cold return paths, there is
little code on that return path, and so little code density to be gained
by sinking it. The places where sinking these blocks is valuable (inner
loops) will already be weighted appropriately as the edge is a loop-exit
branch.

All of this aside, early returns are nearly as common as all three of
these return categories, and should actually be predicted as taken!
Rather than muddy the waters of the static predictions, just remain
silent on returns and let the CFG itself dictate any layout or other
issues.

However, the return heuristic was flagging one very important case:
unreachable. Unfortunately it still gave a 1/4 chance of the
branch-to-unreachable occurring. It also didn't do a rigorous job of
finding those blocks which post-dominate an unreachable block.

This patch builds a more powerful analysis that should flag all branches
to blocks known to then reach unreachable. It also has better worst-case
runtime complexity by not looping through successors for each block. The
previous code would perform an N^2 walk in the event of a single entry
block branching to N successors with a switch where each successor falls
through to the next and they finally fall through to a return.

Test case added for noreturn heuristics. Also doxygen comments improved
along the way.

llvm-svn: 142793
2011-10-24 12:01:08 +00:00
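To make the distinction above concrete, here is a minimal C sketch
(hypothetical, not from the patch): the edge into a block that reaches
a noreturn call such as abort() should now be flagged as very unlikely,
while the ordinary early return deliberately gets no static prediction.

  #include <stdlib.h>

  int checked_div(int a, int b) {
    if (b == 0)
      abort();    /* block reaches a noreturn call: edge predicted cold */
    if (a == 0)
      return 0;   /* plain early return: no static prediction applied */
    return a / b;
  }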
Nick Lewycky 9be7f277e4 Reapply r142781 with fix. Original message:
Enhance SCEV's brute force loop analysis to handle multiple PHI nodes in the
  loop header when computing the trip count.

  With this, we now constant evaluate:
    struct ListNode { const struct ListNode *next; int i; };
    static const struct ListNode node1 = {0, 1};
    static const struct ListNode node2 = {&node1, 2};
    static const struct ListNode node3 = {&node2, 3};
    int test() {
      int sum = 0;
      for (const struct ListNode *n = &node3; n != 0; n = n->next)
        sum += n->i;
      return sum;
    }

llvm-svn: 142790
2011-10-24 06:57:05 +00:00
Nick Lewycky dd1d3df524 A dead malloc, a free(NULL) and a free(undef) are all trivially dead
instructions.

This doesn't introduce any optimizations we weren't doing before (except
potentially due to pass ordering issues), but now passes will eliminate them
sooner as part of their own cleanups.

llvm-svn: 142787
2011-10-24 04:35:36 +00:00
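Two of these patterns have direct C spellings; a free(undef) only
arises at the IR level from an undef pointer, so it is omitted from
this hedged sketch:

  #include <stdlib.h>

  void dead_calls(void) {
    void *p = malloc(32);  /* result never used: a dead malloc */
    (void)p;
    free(NULL);            /* free(NULL) is defined to do nothing */
  }

Both calls are trivially dead, so any pass doing its own instruction
cleanup can now delete them.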
Nick Lewycky 9d28c26d77 Speculatively revert r142781. Bots are showing
Assertion `i_nocapture < OperandTraits<PHINode>::operands(this) && "getOperand() out of range!"' failed.
coming out of indvars.

llvm-svn: 142786
2011-10-24 04:00:25 +00:00
Nick Lewycky 1700007ecc Enhance SCEV's brute force loop analysis to handle multiple PHI nodes in the
loop header when computing the trip count.

With this, we now constant evaluate:
  struct ListNode { const struct ListNode *next; int i; };
  static const struct ListNode node1 = {0, 1};
  static const struct ListNode node2 = {&node1, 2};
  static const struct ListNode node3 = {&node2, 3};
  int test() {
    int sum = 0;
    for (const struct ListNode *n = &node3; n != 0; n = n->next)
      sum += n->i;
    return sum;
  }

llvm-svn: 142781
2011-10-23 23:43:14 +00:00
Craig Topper b05d9e9bea Add X86 SARX, SHRX, and SHLX instructions.
llvm-svn: 142779
2011-10-23 22:18:24 +00:00
Chandler Carruth 1c8ace0e89 Teach the BranchProbabilityInfo pass to print its results, and use that
to bring it under direct test instead of merely indirectly testing it in
the BlockFrequencyInfo pass.

The next step is to start adding tests for the various heuristics
employed, and to start fixing those heuristics once they're under test.

llvm-svn: 142778
2011-10-23 21:21:50 +00:00
Chandler Carruth bd1be4d01c Completely re-write the algorithm behind MachineBlockPlacement based on
discussions with Andy. Fundamentally, the previous algorithm is
counterproductive on several fronts and prioritizes things which
aren't necessarily the most important: static branch prediction.

The new algorithm uses the existing loop CFG structure information to
walk through the CFG itself to layout blocks. It coalesces adjacent
blocks within the loop where the CFG allows based on the most likely
path taken. Finally, it topologically orders the block chains that have
been formed. This allows it to choose a (mostly) topologically valid
ordering which still prioritizes fallthrough within the structural
constraints.

As a final twist in the algorithm, it does violate the CFG when it
discovers a "hot" edge, that is, an edge that is more than 4x hotter than
the competing edges in the CFG. These are forcibly merged into
a fallthrough chain.

Future transformations that need to be added are rotation of loop exit
conditions to be fallthrough, and better isolation of cold block chains.
I'm also planning on adding statistics to model how well the algorithm
does at laying out blocks based on the probabilities it receives.

The old tests mostly still pass, and I have some new tests to add, but
the nested loops are still behaving very strangely. This almost seems
like working-as-intended as it rotated the exit branch to be
fallthrough, but I'm not convinced this is actually the best layout. It
is well supported by the probabilities for loops we currently get, but
those are pretty broken for nested loops, so this may change later.

llvm-svn: 142743
2011-10-23 09:18:45 +00:00
Craig Topper 980d59832a Add X86 RORX instruction
llvm-svn: 142741
2011-10-23 07:34:00 +00:00
Cameron Zwarich 057fbb1a10 The element insertion code in scalar replacement doesn't handle incorrect
element types, even though the element extraction code does. It is surprising
that this bug has been here for so long. Fixes <rdar://problem/10318778>.

llvm-svn: 142740
2011-10-23 07:02:10 +00:00
Craig Topper e94d277db8 Add X86 MULX instruction for disassembler.
llvm-svn: 142738
2011-10-23 00:33:32 +00:00
Nick Lewycky 52340ac5f8 Oops! Fix test I forgot to submit as part of r142735.
llvm-svn: 142736
2011-10-22 22:07:31 +00:00
Nick Lewycky 32f8051d66 A non-escaping malloc in the entry block is not unlike an alloca. Do dead-store
elimination on them too.

llvm-svn: 142735
2011-10-22 21:59:35 +00:00
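A minimal sketch of the pattern (hypothetical function, assuming the
buffer provably never escapes): the malloc behaves like an alloca, so
the store into it is dead and can be eliminated.

  #include <stdlib.h>

  int no_escape(void) {
    int *p = malloc(sizeof(int));  /* entry block, never escapes */
    if (p)
      *p = 42;                     /* never read back: dead store */
    free(p);
    return 0;
  }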
Nick Lewycky a6674c7fc9 Make SCEV's brute force analysis stronger in two ways. Firstly, we should be
able to constant fold load instructions where the argument is a constant.
Second, we should be able to watch multiple PHI nodes through the loop; this
patch only supports PHIs in loop headers, more can be done here.

With this patch, we now constant evaluate:
  static const int arr[] = {1, 2, 3, 4, 5};
  int test() {
    int sum = 0;
    for (int i = 0; i < 5; ++i) sum += arr[i];
    return sum;
  }

llvm-svn: 142731
2011-10-22 19:58:20 +00:00
Nadav Rotem e649d66552 Fix pr11193.
SHL inserts zeros from the right, so even when the original
sign_extend_inreg value was only 1 bit wide, we need to sra.

llvm-svn: 142724
2011-10-22 12:39:25 +00:00
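A small worked example of the point (hypothetical widths, and assuming
'>>' on signed integers is an arithmetic shift, as on the targets in
question): sign_extend_inreg of an n-bit value lowers to a left shift
followed by an arithmetic right shift, and even for n = 1 a logical
shift would lose the sign.

  #include <stdint.h>

  int32_t sext_i1(uint32_t x) {      /* sign-extend bit 0 to 32 bits */
    int32_t t = (int32_t)(x << 31);  /* shl: bit 0 becomes the sign bit */
    return t >> 31;                  /* sra: fills with sign, 0 or -1 */
  }                                  /* srl would yield 0 or 1 instead */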
Jim Grosbach 11c0b347c6 Assembly parsing for 4-register sequential variant of VLD2.
llvm-svn: 142704
2011-10-21 23:58:57 +00:00
Jim Grosbach 118b38cbf1 Assembly parsing for 2-register sequential variant of VLD2.
llvm-svn: 142691
2011-10-21 22:21:10 +00:00
Eli Friedman 688db1d6d0 Remap blockaddress correctly when inlining a function. Fixes PR10162.
llvm-svn: 142684
2011-10-21 20:45:19 +00:00
Jim Grosbach 846bcff7c7 Assembly parsing for 4-register variant of VLD1.
llvm-svn: 142682
2011-10-21 20:35:01 +00:00
Jim Grosbach c4360fe575 Assembly parsing for 3-register variant of VLD1.
llvm-svn: 142675
2011-10-21 20:02:19 +00:00
Eli Friedman ce818277fc Extend instcombine's shufflevector simplification to handle more cases where the input and output vectors have different sizes. Patch by Xiaoyi Guo.
llvm-svn: 142671
2011-10-21 19:06:29 +00:00
Jim Grosbach 2f2e3c4737 ARM VLD parsing and encoding.
Next step in the ongoing saga of NEON load/store assembly parsing. Handle
VLD1 instructions that take a two-register register list.

Adjust the instruction definitions to only have the single encoded register
as an operand. The super-register from the pseudo is kept as an implicit def,
so passes which come after pseudo-expansion still know that the instruction
defines the other subregs.

llvm-svn: 142670
2011-10-21 18:54:25 +00:00
Nadav Rotem 5e00bb5feb Fix pr11194. When promoting and splitting integers we need to use
ZExtPromotedInteger and SExtPromotedInteger based on the operation we legalize.

SetCC return type needs to be legalized via PromoteTargetBoolean.

llvm-svn: 142660
2011-10-21 17:35:19 +00:00
Chandler Carruth 70a38058b1 Don't hard code the desired alignment for loops -- it isn't 16-bytes on
all x86 systems. Sorry for the breakage.

llvm-svn: 142656
2011-10-21 16:41:39 +00:00
Nadav Rotem d315157f12 1. Fix the widening of SETCC in WidenVecOp_SETCC. Use the correct return CC type.
2. Fix a typo in CONCAT_VECTORS which exposed the bug in #1.

llvm-svn: 142648
2011-10-21 11:42:07 +00:00
Chandler Carruth 8b9737cb54 Add loop aligning to MachineBlockPlacement based on review discussion so
it's a bit more plausible to use this instead of CodePlacementOpt. The
code for this was shamelessly stolen from CodePlacementOpt, and then
trimmed down a bit. There doesn't seem to be much utility in returning
true/false from this pass as we may or may not have rewritten all of the
blocks. Also, the statistic of counting how many loops were aligned
doesn't seem terribly important so I removed it. If folks would like it
to be included, I'm happy to add it back.

This was probably the most egregious of the missing features, and now
I'm going to start gathering some performance numbers and looking at
specific loop structures that have different layout between the two.

Test is updated to include both basic loop alignment and nested loop
alignment.

llvm-svn: 142645
2011-10-21 08:57:37 +00:00
Chandler Carruth ddfeaafdfb Add a very basic test for MachineBlockPlacement. This is essentially the
canonical example I used when developing it, and is one of the primary
motivating real-world use cases for __builtin_expect (when buried under
a macro).

I'm working on more test cases here, but I'm trying to make sure both
that the pass is doing the right thing with the test cases and that they
aren't too brittle to changes elsewhere in the code generation pipeline.

Feedback and/or suggestions on how to test this are very welcome.
Especially feedback on whether testing the block comments is a good
strategy; I couldn't find any good examples to steal from but all the
other ideas I had were a lot uglier or more fragile.

llvm-svn: 142644
2011-10-21 08:01:56 +00:00
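The motivating pattern looks roughly like this hedged sketch; the macro
name and loop body are invented, only __builtin_expect itself is taken
from the commit message:

  #define UNLIKELY(x) __builtin_expect(!!(x), 0)

  int sum_positive(const int *a, int n) {
    int sum = 0;
    for (int i = 0; i < n; ++i) {
      if (UNLIKELY(a[i] < 0))  /* cold path: placed out of line */
        continue;
      sum += a[i];             /* hot path: kept as fallthrough */
    }
    return sum;
  }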
Craig Topper 039a79067a Remove the X86 BLSI, BLSMSK, and BLSR intrinsics and replace them with custom isel lowering code.
llvm-svn: 142642
2011-10-21 06:55:01 +00:00
Owen Anderson 16c8fc5191 Revert r142618, r142622, and r142624, which were based on an incorrect reading of the ARMv7 docs.
llvm-svn: 142626
2011-10-20 22:23:58 +00:00
Owen Anderson 608c60c773 Fix decoding tests for fixed MSR encodings.
llvm-svn: 142624
2011-10-20 22:01:48 +00:00
Owen Anderson 48da0ed477 Fix tests for corrected MSR encodings.
llvm-svn: 142622
2011-10-20 21:53:19 +00:00
Jim Grosbach 9036c5cf2b ARM VLD1/VST1 (one register, no writeback) assembly parsing and encoding.
llvm-svn: 142583
2011-10-20 15:04:25 +00:00
Jim Grosbach 3ad44e50b3 Tidy up formatting.
llvm-svn: 142582
2011-10-20 14:57:47 +00:00
Jim Grosbach 8db25984a9 ARM VTBX (one register) assembly parsing and encoding.
llvm-svn: 142581
2011-10-20 14:48:50 +00:00
Eli Friedman 1923a330e6 Refactor code from inlining and globalopt that checks whether a function definition is unused, and enhance it so it can tell that functions which are only used by a blockaddress are in fact dead. This probably doesn't happen much on most code, but the Linux kernel's _THIS_IP_ can trigger this issue with blockaddress. (GlobalDCE can also handle the given testcase, but we only run that at -O3.) Found while looking at PR11180.
llvm-svn: 142572
2011-10-20 05:23:42 +00:00
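For reference, _THIS_IP_ is built on GCC's address-of-label extension,
which lowers to a blockaddress referencing the enclosing function; the
exact definition below is quoted from memory and should be treated as
an approximation:

  #define _THIS_IP_  ({ __label__ __here; __here: (unsigned long)&&__here; })

  unsigned long where(void) {
    return _THIS_IP_;  /* the label's only use is taking its address */
  }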
Nick Lewycky 462098824f "@string = constant i8 0" is a valid i8* string of length zero. Analyze that
correctly in GetStringLength, fixing PR11181!

llvm-svn: 142558
2011-10-20 00:34:35 +00:00
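In C terms, the global named in the message is a single NUL byte, i.e.
a well-formed string of length zero, which GetStringLength must analyze
as such (a hedged illustration):

  const char string[1] = {0};  /* like "@string = constant i8 0" */
  /* strlen(string) == 0 */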
Chad Rosier add38c12b8 Revert 142337. Thumb1 still doesn't support dynamic stack realignment. :(
llvm-svn: 142557
2011-10-20 00:07:12 +00:00
Evan Cheng 54d678fff4 Fix TLS lowering bug. The CopyFromReg must be glued to the TLSCALL. rdar://10291355
llvm-svn: 142550
2011-10-19 22:22:54 +00:00
Nadav Rotem 8824472a25 Improve code generation for vselect on SSE2:
When checking the availability of instructions using the TLI, a 'promoted'
instruction IS available. It means that the value is bitcasted to another type
for which there is an operation. The correct check for the availability of an
instruction is to check if it should be expanded.

llvm-svn: 142542
2011-10-19 20:43:16 +00:00
Rafael Espindola e0d0908356 Fix parsing of a line with only a # in it.
llvm-svn: 142537
2011-10-19 18:48:52 +00:00
James Molloy 2d768fd379 Use literal pool loads instead of MOVW/MOVT for materializing global addresses when optimizing for size.
On spec/gcc, this caused a codesize improvement of ~1.9% for ARM mode and ~4.9% for Thumb(2) mode. This is
codesize including literal pools.

The pools themselves doubled in size for ARM mode and quintupled for Thumb mode, suggesting that there
is perhaps still redundancy in LLVM's use of constant pools that could be decreased by sharing entries.

Fixes PR11087.

llvm-svn: 142530
2011-10-19 14:11:07 +00:00