Commit Graph

27 Commits

Evan Cheng e4b8ac9fef Add a peephole optimization for pairs of bitcasts, e.g.
v2 = bitcast v1
...
v3 = bitcast v2
...
   = v3
=>
v2 = bitcast v1
...
   = v1
if v1 and v3 are in the same register class.

Bitcasts between i32 and fp (and other type pairs) are often not no-ops since
the values live in different register classes. These bitcast instructions are
often left behind because they sit in different basic blocks and cannot be
eliminated by DAG combine.

rdar://9104514

llvm-svn: 127668
2011-03-15 05:13:13 +00:00
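
The shape of that fold, as a hedged sketch (helper name and structure are illustrative, not the commit's actual code):

    // Sketch: fold 'v3 = bitcast (v2 = bitcast v1)' back to v1 when the
    // register classes of v1 and v3 match.
    bool optimizeBitcastPair(MachineInstr *MI, MachineRegisterInfo *MRI) {
      unsigned DstReg = MI->getOperand(0).getReg();        // v3
      unsigned SrcReg = MI->getOperand(1).getReg();        // v2
      if (!TargetRegisterInfo::isVirtualRegister(SrcReg))
        return false;
      MachineInstr *DefMI = MRI->getVRegDef(SrcReg);       // v2 = bitcast v1
      if (!DefMI || !DefMI->isBitcast())
        return false;
      unsigned OrigReg = DefMI->getOperand(1).getReg();    // v1
      if (MRI->getRegClass(DstReg) != MRI->getRegClass(OrigReg))
        return false;                                      // classes differ; keep it
      MRI->replaceRegWith(DstReg, OrigReg);                // '= v3' becomes '= v1'
      MI->eraseFromParent();                               // drop 'v3 = bitcast v2'
      return true;
    }
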
Evan Cheng 98196b4ebb Fix thinko: Cmp can be the first instruction in an MBB.
llvm-svn: 125552
2011-02-15 05:00:24 +00:00
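
The guard in question is the standard one before stepping an iterator backwards (illustrative fragment):

    // Hedged sketch: never decrement past the start of the block.
    MachineBasicBlock::iterator I = CmpInstr;
    if (I == CmpInstr->getParent()->begin())
      return false;  // Cmp is the first instruction; nothing precedes it
    --I;             // now safe to inspect the preceding instruction
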
Evan Cheng 9bf3f8e08b Fix PR8854. Track inserted copies to avoid read-before-write. Sorry, it's hard to reduce this to a sensible small test case.
llvm-svn: 125523
2011-02-14 21:50:37 +00:00
Jakob Stoklund Olesen 2fb5b31578 Simplify a bunch of isVirtualRegister() and isPhysicalRegister() logic.
These functions no longer assert when passed 0; they simply return false instead.

No functional change intended.

llvm-svn: 123155
2011-01-10 02:58:51 +00:00
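
In practice this lets call sites drop their hand-rolled zero checks, e.g. (illustrative before/after; processVirtReg is a hypothetical caller):

    // Before: callers had to screen out the invalid register 0 themselves.
    if (Reg != 0 && TargetRegisterInfo::isVirtualRegister(Reg))
      processVirtReg(Reg);
    // After: isVirtualRegister(0) simply returns false.
    if (TargetRegisterInfo::isVirtualRegister(Reg))
      processVirtReg(Reg);
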
Evan Cheng 6eb516dbea Do not model all INLINEASM instructions as having unmodeled side effects.
Instead, encode the LLVM IR-level property "HasSideEffects" in an operand (shared
with IsAlignStack). Added MachineInstr::hasUnmodeledSideEffects() to check
the operand when the instruction is an INLINEASM.

This allows memory instructions to be moved around INLINEASM instructions.

llvm-svn: 123044
2011-01-07 23:50:32 +00:00
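
A sketch of what the check can look like; the operand index and flag names below follow present-day InlineAsm conventions and are an assumption about this commit's exact layout:

    bool hasUnmodeledSideEffects(const MachineInstr *MI) {
      if (MI->getOpcode() == TargetOpcode::INLINEASM) {
        // The extra-info immediate packs HasSideEffects / IsAlignStack bits.
        unsigned ExtraInfo = MI->getOperand(InlineAsm::MIOp_ExtraInfo).getImm();
        return (ExtraInfo & InlineAsm::Extra_HasSideEffects) != 0;
      }
      return MI->getDesc().hasUnmodeledSideEffects();
    }
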
Evan Cheng 0638c20e7c DBG_VALUE does not have any side effects; it also makes no sense to mark it as cheap as a copy.
llvm-svn: 123031
2011-01-07 21:08:26 +00:00
Evan Cheng 7f8ab6ee8b Remove ARM isel hacks that fold large immediates into pairs of add, sub, and,
or xor instructions. The 32-bit immediate moves can be hoisted out of loops by
machine LICM, but the isel hacks were preventing that.

Instead, let the peephole optimization pass recognize registers that are defined
by immediates, and let the ARM target hook fold the immediates in.

Other changes: 1) Do not fold and / xor into cmp to isel TST / TEQ instructions
if there are multiple uses. This happens when the 'and' is live out; machine
sinking would have sunk the computation, and that ends up pessimizing code. The
peephole pass would recognize situations where the 'and' can be toggled to
define CPSR and eliminate the comparison anyway.

2) Move the peephole pass to after machine LICM, sink, and CSE to avoid blocking
important optimizations.

rdar://8663787, rdar://8241368

llvm-svn: 119548
2010-11-17 20:13:28 +00:00
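
The peephole half of this is small: spot a vreg defined by an immediate move and hand its user to the target hook. A sketch around the TargetInstrInfo::FoldImmediate hook, with iterator details simplified:

    // Hedged sketch: DefMI materializes an immediate into Reg; if Reg has a
    // single (non-debug) use, let the target try to fold the constant in.
    if (DefMI->getDesc().isMoveImmediate() && MRI->hasOneNonDBGUse(Reg)) {
      MachineInstr *UseMI = &*MRI->use_nodbg_begin(Reg);
      if (TII->FoldImmediate(UseMI, DefMI, Reg, MRI))
        DefMI->eraseFromParent();   // immediate now encoded in the user
    }
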
Evan Cheng 2ce016c7f8 Code clean up. The peephole pass should be the one updating the instruction
iterator, not TII->OptimizeCompareInstr.

llvm-svn: 119186
2010-11-15 21:20:45 +00:00
Bill Wendling c6627eec13 When we look at instructions to convert into flag-setting ('s') forms, we need to look
at more than just those which define CPSR. You can have this situation:

(1)  subs  ...
(2)  sub   r6, r5, r4
(3)  movge ...
(4)  cmp   r6, 0
(5)  movge ...

We cannot convert (2) to "subs" because (3) is using the CPSR set by
(1). There's an analogous situation here:

(1)  sub   r1, r2, r3
(2)  sub   r4, r5, r6
(3)  cmp   r4, ...
(4)  movge ...
(5)  cmp   r1, ...
(6)  movge ...

We cannot convert (1) to "subs" because of the intervening use of CPSR.

llvm-svn: 117950
2010-11-01 20:41:43 +00:00
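
Both examples boil down to one legality rule: between the arithmetic instruction and the cmp it would absorb, CPSR may be neither read nor redefined. A conservative scan, illustrative only (assumes a TargetRegisterInfo *TRI in scope):

    // Hedged sketch: give up if anything between MI and CmpInstr touches CPSR.
    MachineBasicBlock::iterator I = MI, E = CmpInstr;
    for (++I; I != E; ++I)
      if (I->readsRegister(ARM::CPSR, TRI) ||
          I->modifiesRegister(ARM::CPSR, TRI))
        return false;
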
Bill Wendling 7a23c1fb7d The testcase is now XFAILed. Sorry about the breakage.
llvm-svn: 117904
2010-11-01 05:50:55 +00:00
Eric Christopher ef5a1c3ec3 Revert r117876 for now, it's causing more testsuite failures.
llvm-svn: 117879
2010-10-31 22:42:55 +00:00
Bill Wendling 0392f1b437 Disable the peephole optimizer until 186.crafty on armv6 is fixed. Here is what
appears to be happening:

Without the peephole optimizer:
  (1)   sub     r6, r6, #32
        orr     r12, r12, lr, lsl r9
        orr     r2, r2, r3, lsl r10
  (x)   cmp     r6, #0
        ldr     r9, LCPI2_10
        ldr     r10, LCPI2_11
  (2)   sub     r8, r8, #32
  (a)   movge   r12, lr, lsr r6
  (y)   cmp     r8, #0
LPC2_10:
        ldr     lr, [pc, r10]
  (b)   movge   r2, r3, lsr r8

With the peephole optimizer:
        ldr     r9, LCPI2_10
        ldr     r10, LCPI2_11
  (1*)  subs    r6, r6, #32
  (2*)  subs    r8, r8, #32
  (a*)  movge   r12, lr, lsr r6
  (b*)  movge   r2, r3, lsr r8

(1) is used by (x) for the conditional move at (a). (2) is used by (y) for the
conditional move at (b). After the peephole optimizer runs, the flags resulting
from (1*) are ignored and only the flags from (2*) are considered for both
conditional moves.

llvm-svn: 117876
2010-10-31 22:07:12 +00:00
Owen Anderson 6c18d1aac0 Get rid of static constructors for pass registration. Instead, every pass exposes an initializeMyPassFunction(), which
must be called in the pass's constructor.  This function uses static dependency declarations to recursively initialize
the pass's dependencies.

Clients that only create passes through the createFooPass() APIs will require no changes.  Clients that want to use the
CommandLine options for passes will need to manually call the appropriate initialization functions in PassInitialization.h
before parsing command-line arguments.

I have tested this with all standard configurations of clang and llvm-gcc on Darwin.  It is possible that there are problems
with the static dependencies that will only be visible with non-standard options.  If you encounter any crash in pass
registration/creation, please send the testcase to me directly.

llvm-svn: 116820
2010-10-19 17:21:58 +00:00
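
For a pass in this file the new pattern looks roughly like this (illustrative; registration strings are approximate):

    INITIALIZE_PASS(PeepholeOptimizer, "peephole-opt",
                    "Peephole Optimizations", false, false)

    PeepholeOptimizer::PeepholeOptimizer() : MachineFunctionPass(ID) {
      // Runs the generated initializer, which also initializes dependencies.
      initializePeepholeOptimizerPass(*PassRegistry::getPassRegistry());
    }
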
Bill Wendling 337a31133b Don't recompute MachineRegisterInfo in the Optimize* method.
llvm-svn: 116750
2010-10-18 21:22:31 +00:00
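
That is, fetch it once per function and stash it in a member for the helpers to reuse (sketch; runOnAllBlocks is a hypothetical driver):

    bool PeepholeOptimizer::runOnMachineFunction(MachineFunction &MF) {
      TII = MF.getTarget().getInstrInfo();
      MRI = &MF.getRegInfo();       // cached member; the Optimize* helpers read this
      return runOnAllBlocks(MF);    // hypothetical walk over the blocks
    }
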
Owen Anderson 8ac477ffb5 Begin adding static dependence information to passes, which will allow us to
perform initialization without static constructors AND without explicit initialization
by the client.  For the moment, passes are required to initialize both their
(potential) dependencies and any passes they preserve.  I hope to be able to relax
the latter requirement in the future.

llvm-svn: 116334
2010-10-12 19:48:12 +00:00
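
Declared dependencies ride along in the registration macros; schematically (the dependency shown is only an example):

    INITIALIZE_PASS_BEGIN(PeepholeOptimizer, "peephole-opt",
                          "Peephole Optimizations", false, false)
    INITIALIZE_PASS_DEPENDENCY(MachineDominatorTree)  // example dependency
    INITIALIZE_PASS_END(PeepholeOptimizer, "peephole-opt",
                        "Peephole Optimizations", false, false)
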
Owen Anderson df7a4f2515 Now with fewer extraneous semicolons!
llvm-svn: 115996
2010-10-07 22:25:06 +00:00
Gabor Greif adbbb93d3d Move the search for the appropriate AND instruction
into OptimizeCompareInstr.
This necessitates passing CmpValue around,
so the virtual functions are widened to accommodate it.

No functionality changes.

llvm-svn: 114428
2010-09-21 12:01:15 +00:00
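
The widened hooks then have roughly this shape (signatures approximate for the era):

    // AnalyzeCompare extracts the compared register and immediate;
    // OptimizeCompareInstr now also receives CmpValue, so the search for the
    // matching AND can live inside it.
    virtual bool AnalyzeCompare(const MachineInstr *MI, unsigned &SrcReg,
                                int &CmpMask, int &CmpValue) const;
    virtual bool OptimizeCompareInstr(MachineInstr *CmpInstr, unsigned SrcReg,
                                      int CmpMask, int CmpValue,
                                      const MachineRegisterInfo *MRI) const;
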
Gabor Greif f08b36d386 Must not peephole away side effects.
llvm-svn: 113848
2010-09-14 20:46:08 +00:00
Bill Wendling 27dddd1fd1 Rename ConvertToSetZeroFlag to something more general.
llvm-svn: 113670
2010-09-11 00:13:50 +00:00
Bill Wendling d0a5f4e238 No need to recompute the SrcReg and CmpValue.
llvm-svn: 113666
2010-09-10 23:46:12 +00:00
Bill Wendling 041230014c Move some of the decision logic for converting an instruction into one that sets
the 'zero' bit down into the back-end. There are other cases where this logic
isn't sufficient, so they should be handled separately.

llvm-svn: 113665
2010-09-10 23:34:19 +00:00
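
The canonical ARM instance: a sub whose result is compared against zero can become subs, making the cmp redundant. A sketch of the back-end decision (the flag-setting opcode here is hypothetical shorthand):

    // Hedged sketch: rewrite "sub rD, rN, rM ; cmp rD, #0" as "subs rD, rN, rM".
    if (CmpValue == 0 && MI->getOpcode() == ARM::SUBrr &&
        MI->getOperand(0).getReg() == SrcReg) {
      MI->setDesc(TII->get(ARM::SUBSrr));  // hypothetical flag-setting form
      CmpInstr->eraseFromParent();         // CPSR now set by the sub itself
      return true;
    }
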
Bill Wendling aee679bf35 Modify the comparison optimizations in the peephole optimizer to update the
iterator when an optimization takes place. This allows us to do more insane
things with the code than just removing an instruction or two.

llvm-svn: 113640
2010-09-10 21:55:43 +00:00
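
Concretely, the helper receives the iterator and may reposition it, and the driver loop resumes wherever the helper left off (sketch):

    // Hedged sketch of the driver: OptimizeCmpInstr may erase or insert
    // instructions and advances MII itself when it reports a change.
    MachineBasicBlock::iterator MII = MBB->begin();
    while (MII != MBB->end()) {
      MachineInstr *MI = &*MII;
      if (MI->getDesc().isCompare() && OptimizeCmpInstr(MI, MBB, MII))
        Changed = true;   // MII already points at the next instruction
      else
        ++MII;
    }
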
Bill Wendling 6628431a91 Remove the now-unneeded command line flag that enables 'optimize compares.'
llvm-svn: 112287
2010-08-27 20:39:09 +00:00
Bill Wendling 0757820f8f Turn optimize compares back on, with a fix: we needed to test that a machine
operand was a register before checking whether it was defined.

llvm-svn: 110733
2010-08-10 21:38:11 +00:00
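
The fix in miniature (illustrative):

    // Hedged sketch: isDef() is only meaningful on register operands,
    // so test isReg() first.
    if (MO.isReg() && MO.isDef() && MO.getReg() == SrcReg)
      return false;  // the compared register is redefined here
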
Dan Gohman a53f4e23e4 Revert r110718; it broke clang-i386-darwin9.
llvm-svn: 110726
2010-08-10 20:49:33 +00:00
Bill Wendling 558f822bc7 Turn optimize cmps on by default so that we can get some testing by the nightly
ARM testers.

llvm-svn: 110718
2010-08-10 20:23:02 +00:00
Bill Wendling ca67835eaa Merge the OptimizeExts and OptimizeCmps passes into one PeepholeOptimizer
pass. This pass should grow to encompass all of the small, fine-grained
optimization passes, reducing compile time and increasing happiness.

llvm-svn: 110627
2010-08-09 23:59:04 +00:00