Commit Graph


David Blaikie 46a9f016c5 More dead code removal (using -Wunreachable-code)
llvm-svn: 148578
2012-01-20 21:51:11 +00:00
Jakob Stoklund Olesen d110e2a83f Reapply r146997, "Heed spill slot alignment on ARM."
Now that canRealignStack() understands frozen reserved registers, it is
safe to use it for aligned spill instructions.

It will only return true if the registers reserved at the beginning of
register allocation allow for dynamic stack realignment.

<rdar://problem/10625436>

llvm-svn: 147579
2012-01-05 00:26:57 +00:00
Jakob Stoklund Olesen 1b7f2a7638 Revert r146997, "Heed spill slot alignment on ARM."
This patch caused a miscompilation of oggenc because a frame pointer was
suddenly needed halfway through register allocation.

<rdar://problem/10625436>

llvm-svn: 147487
2012-01-03 22:34:35 +00:00
Jim Grosbach c80a264386 ARM NEON assembly parsing for VLD2 to all lanes instructions.
llvm-svn: 147069
2011-12-21 19:40:55 +00:00
Jakob Stoklund Olesen b95c102c2f Heed spill slot alignment on ARM.
Use the spill slot alignment as well as the local variable alignment to
determine when the stack needs to be realigned. This works now that the
ARM target can always realign the stack by using a base pointer.

Still respect the ARMBaseRegisterInfo::canRealignStack() function
vetoing a realigned stack.  Don't use aligned spill code in that case.

llvm-svn: 146997
2011-12-20 22:15:04 +00:00
Evan Cheng da103bf9ec Model ARM predicated write as read-mod-write. e.g.
r0 = mov #0
r0 = moveq #1

Then the second instruction has an implicit data dependency on the first
instruction. Sadly I have yet to come up with a small test case that
demonstrates the post-ra scheduler taking advantage of this.

llvm-svn: 146583
2011-12-14 20:00:08 +00:00
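
A minimal standalone sketch of the dependence this modeling creates, using an
invented toy instruction record rather than LLVM's MachineInstr classes:

    #include <vector>

    struct Instr {
      int def;                // register written by this instruction
      bool predicated;        // executes conditionally (e.g. moveq)?
      std::vector<int> uses;  // registers read explicitly
    };

    // With read-mod-write modeling, a predicated def of r0 also reads r0's
    // old value, so "r0 = moveq #1" depends on "r0 = mov #0" and a post-ra
    // scheduler must not reorder or separate the pair.
    bool dependsOn(const Instr &later, const Instr &earlier) {
      for (int u : later.uses)
        if (u == earlier.def)   // ordinary data dependence
          return true;
      return later.predicated && later.def == earlier.def;
    }
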
Evan Cheng 7fae11b231 - Add MachineInstrBundle.h and MachineInstrBundle.cpp. This includes a function
to finalize MI bundles (i.e. adding the BUNDLE instruction and computing the
  register def and use lists of the BUNDLE instruction) and a pass to unpack bundles.
- Teach more MachineBasicBlock and MachineInstr methods to be bundle aware.
- Switch Thumb2 IT block to MI bundles and delete the hazard recognizer hack to
  prevent IT blocks from being broken apart.

llvm-svn: 146542
2011-12-14 02:11:42 +00:00
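
The def/use computation for the BUNDLE instruction can be pictured roughly as
below (invented types; implicit operands and sub-registers are glossed over):

    #include <set>
    #include <vector>

    struct Instr {
      std::set<int> defs, uses;
    };

    // The BUNDLE header carries the union of the bundled instructions'
    // defs, plus every use that is not defined earlier inside the bundle
    // (those reads are internal to the bundle, not live-in).
    Instr makeBundleHeader(const std::vector<Instr> &bundle) {
      Instr header;
      std::set<int> definedSoFar;
      for (const Instr &mi : bundle) {
        for (int u : mi.uses)
          if (!definedSoFar.count(u))
            header.uses.insert(u);
        for (int d : mi.defs) {
          header.defs.insert(d);
          definedSoFar.insert(d);
        }
      }
      return header;
    }
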
Jim Grosbach d146a02c79 ARM assembly parsing and encoding for VLD2 with writeback.
Refactor the instructions into fixed writeback and register-stride
writeback variants to simplify the offset operand (no more optional
register operand using reg0). This is a simpler representation and allows
the assembly parser to more easily handle these instructions.

Add tests for the instruction variants now supported.

llvm-svn: 146278
2011-12-09 21:28:25 +00:00
Evan Cheng 7f8e563a69 Add a bundle-aware API for querying instruction properties and switch the code
generator to it. For non-bundle instructions, these behave exactly the same
as the MC layer API.

For properties like mayLoad / mayStore, look into the bundle: if any of the
bundled instructions has the property, return true.
For properties like isPredicable, only return true if *all* of the bundled
instructions have the property.
For properties like canFoldAsLoad, isCompare, conservatively return false for
bundles.

llvm-svn: 146026
2011-12-07 07:15:52 +00:00
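
As a rough illustration of those any/all rules (a standalone sketch, not the
actual MachineInstr interface):

    #include <vector>

    struct InstrDesc {
      bool mayLoad = false;
      bool mayStore = false;
      bool isPredicable = false;
    };

    // "May" properties: true if ANY instruction in the bundle has them.
    bool bundleMayLoad(const std::vector<InstrDesc> &bundle) {
      for (const InstrDesc &d : bundle)
        if (d.mayLoad)
          return true;
      return false;
    }

    // "All" properties: true only if EVERY bundled instruction has them.
    bool bundleIsPredicable(const std::vector<InstrDesc> &bundle) {
      for (const InstrDesc &d : bundle)
        if (!d.isPredicable)
          return false;
      return !bundle.empty();
    }

    // Properties like canFoldAsLoad or isCompare: conservatively false.
    bool bundleCanFoldAsLoad(const std::vector<InstrDesc> &) {
      return false;
    }
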
Jakob Stoklund Olesen cc6bfa8e79 Revert r145971: "Use conservative size estimate for tBR_JTr."
This caused more offset errors.

llvm-svn: 145980
2011-12-06 22:41:31 +00:00
Evan Cheng 2a81dd4a3c First chunk of MachineInstr bundle support.
1. Added opcode BUNDLE
2. Taught MachineInstr class to deal with bundled MIs
3. Changed MachineBasicBlock iterator to skip over bundled MIs; added an iterator to walk all the MIs
4. Taught MachineBasicBlock methods about bundled MIs

llvm-svn: 145975
2011-12-06 22:12:01 +00:00
Jakob Stoklund Olesen 33fe130e12 Use conservative size estimate for tBR_JTr.
This pseudo-instruction contains a .align directive in its expansion, so
the total size may vary by 2 bytes.

It is too difficult to accurately keep track of this alignment
directive, so just use the worst-case size instead.

llvm-svn: 145971
2011-12-06 21:55:39 +00:00
Jim Grosbach a68c9a847e ARM parsing for VLD1 all lanes, with writeback.
llvm-svn: 145510
2011-11-30 19:35:44 +00:00
Jakob Stoklund Olesen 653183fd5c Enable -widen-vmovs by default.
This will widen 32-bit register vmov instructions to 64-bit when
possible.  The 64-bit vmovd instructions can then be translated to NEON
vorr instructions by the execution dependency fix pass.

The copies are only widened if they are marked as clobbering the whole
D-register.

llvm-svn: 144734
2011-11-15 23:53:18 +00:00
Jay Foad 465101bb0e Make use of MachinePointerInfo::getFixedStack. This removes all mention
of PseudoSourceValue from lib/Target/.

llvm-svn: 144632
2011-11-15 07:34:52 +00:00
Jim Grosbach 17ec1a19e5 ARM assembly parsing and encoding for VLD1 with writeback.
Four entry register lists.

llvm-svn: 142882
2011-10-25 00:14:01 +00:00
Jim Grosbach 30c39c8bf2 Nuke dead code. Nothing generates the VLD1d64QPseudo_UPD instruction.
llvm-svn: 142877
2011-10-24 23:40:46 +00:00
Jim Grosbach 92fd05ecdc ARM assembly parsing and encoding for VLD1 w/ writeback.
Three entry register list variation.

llvm-svn: 142876
2011-10-24 23:26:05 +00:00
Jim Grosbach 2098cb1e6f ARM refactor am6offset usage for VLD1.
Split am6offset into fixed and register offset variants so the instruction
encodings are explicit rather than relying on a magic reg0 marker.
Needed to be able to parse these.

llvm-svn: 142853
2011-10-24 21:45:13 +00:00
Andrew Trick 88b2450adc Use ARM/t2PseudoInst class from ARM/Thumb2 special adds/subs patterns.
Clean up the patterns, fix comments, and avoid confusing both tools
and coders. Note that the special adds/subs SelectionDAG nodes no
longer have the dummy cc_out operand.

llvm-svn: 142397
2011-10-18 19:18:52 +00:00
Jakob Stoklund Olesen 39c31a77b8 Fix -widen-vmovs liveness issues.
When widening a copy, we are reading a larger register that may not be
live.  Use an <undef> flag to tell the register scavenger and machine
code verifier that we know the value isn't defined.

We now widen:

  %S6<def> = COPY %S4<kill>, %D3<imp-def>

into:

  %D3<def> = VMOVD %D2<undef>, pred:14, pred:%noreg, %S4<imp-use,kill>

This also keeps the <kill> flag on %S4 so we don't inadvertently kill a
live value in %S5.

Finally, ensure that ARMBaseInstrInfo::setExecutionDomain() preserves
the <undef> flag when converting VMOVD to VORR.

llvm-svn: 141746
2011-10-12 00:06:23 +00:00
Jakob Stoklund Olesen da7c0f8f7d Move -widen-vmovs to ARMBaseInstrInfo::expandPostRAPseudo().
The VMOVS widening needs to look at the implicit COPY operands.  Trying
to dig out the COPY instruction from an iterator in copyPhysReg() is the
wrong approach.

The expandPostRAPseudo() hook gets to look at COPY instructions before
they are converted to copyPhysReg() calls.

llvm-svn: 141619
2011-10-11 00:59:06 +00:00
Bill Wendling 4a4772fae2 Use the ARMConstantPoolMBB class to handle the MBB values.
llvm-svn: 140943
2011-10-01 09:30:42 +00:00
Bill Wendling c214cb055d Use the new ARMConstantPoolSymbol class to handle external symbols.
llvm-svn: 140939
2011-10-01 08:58:29 +00:00
Bill Wendling 7753d66468 Switch over to using ARMConstantPoolConstant for global variables, functions,
and block addresses.

llvm-svn: 140936
2011-10-01 08:00:54 +00:00
Bill Wendling 69bc3de4fc Create a machine basic block in the constant pool and retrieve the symbol for an MBB.
llvm-svn: 140824
2011-09-29 23:50:42 +00:00
Jakob Stoklund Olesen f7ad189033 Use ExecutionDepsFix instead of NEONMoveFix.
This enables NEON domain tracking across basic blocks, but should
otherwise do the same thing.

llvm-svn: 140772
2011-09-29 02:48:41 +00:00
Jakob Stoklund Olesen f9b71a2e01 Implement TII::get/setExecutionDomain() for ARM.
llvm-svn: 140653
2011-09-27 22:57:21 +00:00
Andrew Trick 924123acb3 Lower ARM adds/subs to add/sub after adding optional CPSR operand.
This is still a hack until we can teach tblgen to generate the
optional CPSR operand rather than an implicit CPSR def. But the
strangeness is now limited to the selection DAG. ADD/SUB MIs no
longer have implicit CPSR defs, nor do we allow flag-setting variants
of these opcodes in machine code. There are several corner cases to
consider, and getting one wrong would previously lead to nasty
miscompilation. It's not the first time I've debugged one, so this
time I added enough verification to ensure it won't happen again.

llvm-svn: 140228
2011-09-21 02:20:46 +00:00
Andrew Trick 3f1fdf1b31 whitespace
llvm-svn: 140227
2011-09-21 02:17:37 +00:00
Owen Anderson eb3f0fbdce Fix an ambiguously nested if.
llvm-svn: 139431
2011-09-09 23:13:02 +00:00
Owen Anderson 29cfe6c368 Thumb unconditional branches are allowed in IT blocks, and therefore should have a predicate operand, unlike conditional branches.
llvm-svn: 139415
2011-09-09 21:48:23 +00:00
Jakob Stoklund Olesen cd893390f5 Put VMOVS widening under a command line option, off by default.
It appears that our use of the imp-use and imp-def flags with
sub-registers is not yet robust enough to support this.

The failing test case is complicated; I am working on a reduction.

<rdar://problem/10044201>

llvm-svn: 138861
2011-08-31 17:00:02 +00:00
Jim Grosbach e364ad540a Clean up Thumb load/store multiple definitions.
There is no non-writeback store multiple instruction in Thumb1, so
don't define one. As a result load multiple is the only instantiation of
the multiclass, so refactor that away entirely.

llvm-svn: 138338
2011-08-23 17:41:15 +00:00
Chad Rosier 61f92efb5c Remove the VMOVQQ pseudo instruction.
llvm-svn: 138177
2011-08-20 00:52:40 +00:00
Jakob Stoklund Olesen 59015c8b17 Add <imp-def> operands to QQ and QQQQ stack loads.
This pleases the register scavenger and brings
test/CodeGen/ARM/2011-08-12-vmovqqqq-pseudo.ll a little closer to
working with -verify-machineinstrs.

llvm-svn: 138164
2011-08-20 00:17:45 +00:00
Chad Rosier be7625161e VMOVQQQQs pseudo instructions are only created by ARMBaseInstrInfo::copyPhysReg.
Therefore, rather than generating a pseudo instruction which is later expanded,
generate the necessary instructions in place.

llvm-svn: 138163
2011-08-20 00:17:25 +00:00
Owen Anderson 732f82c463 Rewrite some ARM InstrInfo functions to be more accepting of arbitrary register subclasses. Hopefully this fixes some buildbots.
llvm-svn: 137223
2011-08-10 17:21:20 +00:00
Jakob Stoklund Olesen 6a14dc01ff Promote VMOVS to VMOVD when possible.
On Cortex-A8, we use the NEON v2f32 instructions for f32 arithmetic. For
better latency, we also send D-register copies down the NEON pipeline by
translating them to vorr instructions.

This patch promotes even S-register copies to D-register copies when
possible so they can also go down the NEON pipeline.  Example:

        vldr.32 s0, LCPI0_0
    loop:
        vorr    d1, d0, d0
    loop2:
        ...
        vadd.f32        d1, d1, d16

The vorr instruction looked like this after regalloc:

    %S2<def> = COPY %S0, %D1<imp-def>

Copies involving odd S-registers, and copies that don't define the full
D-register are left alone.

llvm-svn: 137182
2011-08-09 23:41:44 +00:00
Jakob Stoklund Olesen c04a66b48e Implement isLoadFromStackSlotPostFE and isStoreToStackSlotPostFE for ARM.
They improve the verbose assembly.

llvm-svn: 137069
2011-08-08 21:45:32 +00:00
Owen Anderson b595ed0085 Split up the ARM so_reg ComplexPattern into so_reg_reg and so_reg_imm, allowing us to distinguish the encodings that use shifted registers from those that use shifted immediates. This is necessary to allow the fixed-length decoder to distinguish things like BICS vs LDRH.
llvm-svn: 135693
2011-07-21 18:54:16 +00:00
Evan Cheng a20cde31e7 Sink ARMMCExpr and ARMAddressingModes into MC layer. First step to separate ARM MC code from target.
llvm-svn: 135636
2011-07-20 23:34:39 +00:00
Owen Anderson 454e1c7abb Remove VMOVDneon and VMOVQ, which are just aliases for VORR. This continues to simplify the path towards an auto-generated disassembler.
llvm-svn: 135290
2011-07-15 18:46:47 +00:00
Evan Cheng bc153d49b7 Next round of MC refactoring. This patch factors MC table instantiations, MC
registration and creation code into XXXMCDesc libraries.

llvm-svn: 135184
2011-07-14 20:59:42 +00:00
Owen Anderson 651b230ca0 Add a target-independent entry to MCInstrDesc to describe the encoded size of an opcode. Switch ARM over to using that rather than its own special MCInstrDesc bits.
llvm-svn: 135106
2011-07-13 23:22:26 +00:00
Jakub Staszak 9b07c0ab6b Use BranchProbability instead of floating points in IfConverter.
llvm-svn: 134858
2011-07-10 02:58:07 +00:00
Evan Cheng 703a0fbf39 Hide the call to InitMCInstrInfo into tblgen generated ctor.
llvm-svn: 134244
2011-07-01 17:57:27 +00:00
Jim Grosbach d86f34d631 Refactor away tSpill and tRestore pseudos in ARM backend.
The tSpill and tRestore instructions are just copies of the tSTRspi and
tLDRspi instructions, respectively. Just use those directly instead.

llvm-svn: 134092
2011-06-29 20:26:39 +00:00
Evan Cheng 194c3dc01f Move CallFrameSetupOpcode and CallFrameDestroyOpcode to TargetInstrInfo.
llvm-svn: 134030
2011-06-28 21:14:33 +00:00
Evan Cheng 1e210d08d8 Merge XXXGenRegisterNames.inc into XXXGenRegisterInfo.inc
llvm-svn: 134024
2011-06-28 20:07:07 +00:00
Evan Cheng 6cc775f905 - Rename TargetInstrDesc, TargetOperandInfo to MCInstrDesc and MCOperandInfo and
sink them into MC layer.
- Added MCInstrInfo, which captures the tablegen-generated static data. Change
TargetInstrInfo so it's based off MCInstrInfo.

llvm-svn: 134021
2011-06-28 19:10:37 +00:00
Chris Lattner 1d0c25756e use the MachineInstrBuilder operator-> to simplify some code.
There are probably more instances of this floating around.

llvm-svn: 130474
2011-04-29 05:24:29 +00:00
Evan Cheng 7d6cd4902e Change A9 scheduling itineraries' VLD* / VST* entries to default to "aligned". That
is, assume addresses are 64-bit aligned (which should be the more common
case). If the address is found not to be aligned, getOperandLatency()
adjusts the operand latency computation by one to compensate.
rdar://9294833

llvm-svn: 129742
2011-04-19 01:21:49 +00:00
Cameron Zwarich 9c65e4d69c Add ORR and EOR to the CMP peephole optimizer. It's hard to get isel to generate
a case involving EOR, so I only added a test for ORR.

llvm-svn: 129610
2011-04-15 21:24:38 +00:00
Cameron Zwarich 0829b3065a The AND instruction leaves the V flag unmodified, so it falls victim to the same
problem as all of the other instructions we fold with CMPs.

llvm-svn: 129602
2011-04-15 20:45:00 +00:00
Cameron Zwarich 93eae1571c Add missing register forms of instructions to the ARM CMP-folding code. This
fixes <rdar://problem/9287901>.

llvm-svn: 129599
2011-04-15 20:28:28 +00:00
Chris Lattner 0ab5e2cded Fix a ton of comment typos found by codespell. Patch by
Luis Felipe Strano Moraes!

llvm-svn: 129558
2011-04-15 05:18:47 +00:00
Cameron Zwarich 8001850ee8 Fix a typo.
llvm-svn: 129429
2011-04-13 06:39:16 +00:00
Owen Anderson bdff1c997a Teach the ARM peephole optimizer that RSB, RSC, ADC, and SBC can be used for folded comparisons, just like ADD and SUB.
llvm-svn: 129038
2011-04-06 23:35:59 +00:00
Owen Anderson d6c5a741b5 Get rid of the non-writeback versions VLDMDB and VSTMDB, which don't actually exist.
llvm-svn: 128461
2011-03-29 16:45:53 +00:00
Evan Cheng f098bf1199 Nasty bug in ARMBaseInstrInfo::produceSameValue(). The MachineConstantPoolEntry
entries being compared may not be ARMConstantPoolValue. Without checking
whether they are ARMConstantPoolValue first, and if the stars and moons
are aligned properly, the equality test may return true (when the first few
words of two Constants' values happen to be identical) and very bad things can
happen.

rdar://9125354

llvm-svn: 128203
2011-03-24 06:20:03 +00:00
Evan Cheng 425489d397 Cmp peephole optimization isn't always safe for signed arithmetic.
int tries = INT_MAX;
while (tries > 0) {
  tries--;
}

The check should be:
        subs    r4, #1
        cmp     r4, #0
        bgt     LBB0_1

The subs can set the overflow V bit when r4 is INT_MAX+1 (which loop
canonicalization apparently does in this case). cmp #0 would have cleared
it while not changing the N and Z bits. Since BGT is dependent on the V
bit, i.e. (N == V) && !Z, it is not safe to eliminate the cmp #0.

rdar://9172742

llvm-svn: 128179
2011-03-23 22:52:04 +00:00
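
The flag behavior is easy to reproduce with a small standalone model of the
relevant ARM semantics (illustrative only; the narrowing conversion is
assumed to wrap):

    #include <cstdint>
    #include <cstdio>

    struct Flags { bool N, Z, V; };

    // Flags from "subs rd, rn, #imm" (rd = rn - imm); V = signed overflow.
    Flags subs(int32_t rn, int32_t imm, int32_t &rd) {
      int64_t wide = (int64_t)rn - imm;
      rd = (int32_t)wide;
      return { rd < 0, rd == 0, wide != rd };
    }

    // Flags from "cmp rd, #0": subtracting 0 cannot overflow, so V = 0.
    Flags cmp0(int32_t rd) { return { rd < 0, rd == 0, false }; }

    // BGT is taken when (N == V) && !Z.
    bool bgt(Flags f) { return f.N == f.V && !f.Z; }

    int main() {
      int32_t r4 = INT32_MIN;  // i.e. INT_MAX + 1 after wraparound
      int32_t out;
      Flags s = subs(r4, 1, out);  // sets V; result wraps to INT_MAX
      Flags c = cmp0(out);         // same N and Z, but V cleared
      std::printf("bgt after subs: %d, after cmp #0: %d\n", bgt(s), bgt(c));
      return 0;
    }

Run as written, the two predicates disagree (0 vs. 1) for r4 = INT32_MIN,
which is exactly the case the eliminated cmp #0 was protecting against.
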
Anton Korobeynikov e7410dd0d5 Preliminary support for ARM frame save directives emission via MI flags.
This is just a very first approximation of how this should be done
(e.g. ARM-only for now). More to follow.

llvm-svn: 127101
2011-03-05 18:43:32 +00:00
Evan Cheng 2f2435d026 Last round of fixes for movw + movt global address codegen.
1. Fixed ARM pc adjustment.
2. Fixed dynamic-no-pic codegen
3. CSE of pc-relative load of global addresses.

It's now enabled by default for Darwin.

llvm-svn: 123991
2011-01-21 18:55:51 +00:00
Andrew Trick 47ff14b091 Convert -enable-sched-cycles and -enable-sched-hazard to -disable
flags. They are still not enabled in this revision.

Added TargetInstrInfo::isZeroCost() to fix a fundamental problem with
the scheduler's model of operand latency in the selection DAG.

Generalized unit tests to work with sched-cycles.

llvm-svn: 123969
2011-01-21 05:51:33 +00:00
Evan Cheng 028ccbfcbf Don't be overly aggressive with CSE of "ldr constantpool". If it's a pc-relative
value, the "add pc" must be CSE'ed at the same time. We could follow the same
approach as T2 by adding pseudo instructions that combine the ldr + "add pc".
But the better approach is to use movw + movt (which I will enable soon), so
I'll leave this as a TODO.

llvm-svn: 123949
2011-01-20 23:55:07 +00:00
Evan Cheng b8b0ad80a8 Sorry, several patches in one.
TargetInstrInfo:
Change produceSameValue() to take MachineRegisterInfo as an optional argument.
When in SSA form, targets can use it to make more aggressive equality analysis.

Machine LICM:
1. Eliminate isLoadFromConstantMemory, use MI.isInvariantLoad instead.
2. Fix a bug which prevented CSE of instructions which are not re-materializable.
3. Use improved form of produceSameValue.

ARM:
1. Teach ARM produceSameValue to look past some PIC labels.
2. Look for operands from different loads of different constant pool entries
   which have same values.
3. Re-implement PIC GA materialization using movw + movt. Combine the pair with
   a "add pc" or "ldr [pc]" to form pseudo instructions. This makes it possible
   to re-materialize the instruction, allow machine LICM to hoist the set of
   instructions out of the loop and make it possible to CSE them. It's a bit
   hacky, but it significantly improve code quality.
4. Some minor bug fixes as well.

With the fixes, using movw + movt to materialize GAs significantly outperforms the
load-from-constantpool method. 186.crafty and 255.vortex improved > 20%, 254.gap
and 176.gcc ~10%.

llvm-svn: 123905
2011-01-20 08:34:58 +00:00
Evan Cheng dfce83c8f5 Materialize GA addresses with movw + movt pairs for Darwin in PIC mode. e.g.
movw    r0, :lower16:(L_foo$non_lazy_ptr-(LPC0_0+4))
        movt    r0, :upper16:(L_foo$non_lazy_ptr-(LPC0_0+4))
LPC0_0:
        add     r0, pc, r0

It's not yet enabled by default as some tests are failing. I suspect bugs in
downstream tools.

llvm-svn: 123619
2011-01-17 08:03:18 +00:00
Jakob Stoklund Olesen 2fb5b31578 Simplify a bunch of isVirtualRegister() and isPhysicalRegister() logic.
These functions no longer assert when passed 0, but simply return false instead.

No functional change intended.

llvm-svn: 123155
2011-01-10 02:58:51 +00:00
Evan Cheng 078b0b095e Recognize inline asm 'rev $0, $1' as a bswap intrinsic call.
llvm-svn: 123048
2011-01-08 01:24:27 +00:00
Andrew Trick 10ffc2b6c2 Various bits of framework needed for precise machine-level selection
DAG scheduling during isel. Most new functionality is currently
guarded by -enable-sched-cycles and -enable-sched-hazard.

Added InstrItineraryData::IssueWidth field, currently derived from
ARM itineraries, but could be initialized differently on other targets.

Added ScheduleHazardRecognizer::MaxLookAhead to indicate whether it is
active, and if so how many cycles of state it holds.

Added SchedulingPriorityQueue::HasReadyFilter to allowing gating entry
into the scheduler's available queue.

ScoreboardHazardRecognizer now accesses the ScheduleDAG in order to
get information about its SUnits, provides RecedeCycle for bottom-up
scheduling, correctly computes scoreboard depth, tracks IssueCount, and
considers potential stall cycles when checking for hazards.

ScheduleDAGRRList now models machine cycles and hazards (under
flags). It tracks MinAvailableCycle, drives the hazard recognizer and
priority queue's ready filter, manages a new PendingQueue, properly
accounts for stall cycles, etc.

llvm-svn: 122541
2010-12-24 05:03:26 +00:00
Andrew Trick c416ba612b whitespace
llvm-svn: 122539
2010-12-24 04:28:06 +00:00
Bob Wilson 651eaa02b8 Remove the rest of the *_sfp Neon instruction patterns.
Use the same COPY_TO_REGCLASS approach as for the 2-register *_sfp instructions.
This change made a big difference in the code generated for the
CodeGen/Thumb2/cross-rc-coalescing-2.ll test: The coalescer is still doing
a fine job, but some instructions that were previously moved outside the loop
are not moved now.  It's using fewer VFP registers now, which is generally
a good thing, so I think the estimates for register pressure changed and that
affected the LICM behavior.  Since that isn't obviously wrong, I've just
changed the test file.  This completes the work for Radar 8711675.

llvm-svn: 121730
2010-12-13 23:02:37 +00:00
Jim Grosbach 327cf8ee5f Refactor the ARM CMPz* patterns to just use the normal CMP instructions when
possible. They were duplicates for everything except the source pattern
before.

llvm-svn: 121179
2010-12-07 20:41:06 +00:00
Evan Cheng 62c7b5bf76 Making use of VFP / NEON floating point multiply-accumulate / subtraction is
difficult on current ARM implementations for a few reasons.
1. Even though a single vmla has latency that is one cycle shorter than a pair
   of vmul + vadd, a RAW hazard during the first (4? on Cortex-a8) can cause
   additional pipeline stall. So it's frequently better to simply codegen
   vmul + vadd.
2. A vmla followed by a vmul, vmadd, or vsub causes the second fp instruction to
   stall for 4 cycles. We need to schedule them apart.
3. A vmla followed by a vmla is a special case. Obviously, issuing back-to-back RAW
   vmla + vmla is very bad. But this isn't ideal either:
     vmul
     vadd
     vmla
   Instead, we want to expand the second vmla:
     vmla
     vmul
     vadd
   Even with the 4 cycle vmul stall, the second sequence is still 2 cycles
   faster.

Up to now, isel simply avoids codegen'ing fp vmla / vmls. This works well enough
but it isn't the optimal solution. This patch attempts to make it possible to
use vmla / vmls in cases where it is profitable.

A. Add missing isel predicates which cause vmla to be codegen'ed.
B. Make sure the fmul in (fadd (fmul)) has a single use. We don't want to
   compute a fmul and a fmla.
C. Add additional isel checks for vmla, avoiding cases where vmla is feeding into
   fp instructions (except for the #3 exceptional case).
D. Add ARM hazard recognizer to model the vmla / vmls hazards.
E. Add a special pre-regalloc case to expand vmla / vmls when it's likely the
   vmla / vmls will trigger one of the special hazards.

Work in progress, only A+B are enabled.

llvm-svn: 120960
2010-12-05 22:04:16 +00:00
Jim Grosbach 81af4f9eb1 Rename t2 TBB and TBH instructions to reference that they encode the jump table
data. Next up, pseudo-izing them.

llvm-svn: 120320
2010-11-29 21:28:32 +00:00
Anton Korobeynikov d08fbd19f5 Move callee-saved regs spills / reloads to TFI
llvm-svn: 120228
2010-11-27 23:05:03 +00:00
Eric Christopher b006fc9c07 Rewrite stack callee saved spills and restores to use push/pop instructions.
Remove movePastCSLoadStoreOps and associated code for simple pointer
increments. Update routines that depended upon other opcodes for save/restore.

Adjust all testcases accordingly.

llvm-svn: 119725
2010-11-18 19:40:05 +00:00
Evan Cheng 2d4e42fba6 Silence compiler warnings.
llvm-svn: 119610
2010-11-18 01:43:23 +00:00
Evan Cheng 7f8ab6ee8b Remove ARM isel hacks that fold large immediates into a pair of add, sub, and,
and xor. The 32-bit move immediates can be hoisted out of loops by machine
LICM but the isel hacks were preventing them.

Instead, let the peephole optimization pass recognize registers that are defined by
immediates and the ARM target hook will fold the immediates in.

Other changes include 1) do not fold and / xor into cmp to isel TST / TEQ
instructions if there are multiple uses. This happens when the 'and' is live
out, machine sink would have sinked the computation and that ends up pessimizing
code. The peephole pass would recognize situations where the 'and' can be
toggled to define CPSR and eliminate the comparison anyway.

2) Move peephole pass to after machine LICM, sink, and CSE to avoid blocking
important optimizations.

rdar://8663787, rdar://8241368

llvm-svn: 119548
2010-11-17 20:13:28 +00:00
Evan Cheng 655364797e Simplify code that toggles the optional operand to ARM::CPSR.
llvm-svn: 119484
2010-11-17 08:06:50 +00:00
Bill Wendling a68e3a5397 Encode the multi-load/store instructions with their respective modes ('ia',
'db', 'ib', 'da') instead of having that mode as a separate field in the
instruction. It's more convenient for the asm parser and much more readable for
humans.
<rdar://problem/8654088>

llvm-svn: 119310
2010-11-16 01:16:36 +00:00
Evan Cheng 2ce016c7f8 Code clean up. The peephole pass should be the one updating the instruction
iterator, not TII->OptimizeCompareInstr.

llvm-svn: 119186
2010-11-15 21:20:45 +00:00
Eric Christopher b90f7004cf Revert this temporarily.
llvm-svn: 118827
2010-11-11 19:47:02 +00:00
Eric Christopher e6283f950d Change the prologue and epilogue to use push/pop for the low ARM registers.
llvm-svn: 118823
2010-11-11 19:26:03 +00:00
Evan Cheng debf9c502a Two sets of changes. Sorry they are intermingled.
1. Fix pre-ra scheduler so it doesn't try to push instructions above calls to
   "optimize for latency". Call instructions don't have the right latency and
   this is more likely to introduce spills.
2. Fix if-converter cost function. For ARM, it should use instruction latencies,
   not # of micro-ops, since a multi-latency instruction is completely executed
   even when the predicate is false. Also, some instructions will be "slower"
   when they are predicated due to the register def becoming implicit input.
   rdar://8598427

llvm-svn: 118135
2010-11-03 00:45:17 +00:00
Bill Wendling c6627eec13 When we look at instructions to convert to setting the 's' flag, we need to look
at more than those which define CPSR. You can have this situation:

(1)  subs  ...
(2)  sub   r6, r5, r4
(3)  movge ...
(4)  cmp   r6, 0
(5)  movge ...

We cannot convert (2) to "subs" because (3) is using the CPSR set by
(1). There's an analogous situation here:

(1)  sub   r1, r2, r3
(2)  sub   r4, r5, r6
(3)  cmp   r4, ...
(5)  movge ...
(6)  cmp   r1, ...
(7)  movge ...

We cannot convert (1) to "subs" because of the intervening use of CPSR.

llvm-svn: 117950
2010-11-01 20:41:43 +00:00
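
Both situations reduce to the same conservative check, sketched here with an
invented block representation:

    #include <cstddef>
    #include <vector>

    struct MI {
      bool definesCPSR = false;
      bool readsCPSR = false;
    };

    // The candidate 'sub' sits at index def, the 'cmp' at index cmp, in the
    // same block. Converting the sub to 'subs' is only safe if nothing in
    // between defines or reads CPSR -- which rejects both examples above.
    bool safeToConvert(const std::vector<MI> &block,
                       std::size_t def, std::size_t cmp) {
      for (std::size_t i = def + 1; i < cmp; ++i)
        if (block[i].definesCPSR || block[i].readsCPSR)
          return false;
      return true;
    }
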
Evan Cheng 99cce36cf5 Fix fpscr <-> GPR latency info.
llvm-svn: 117737
2010-10-29 23:16:55 +00:00
Evan Cheng 6c1414f9c2 Avoiding overly aggressive latency scheduling. If the two nodes share an
operand and one of them has a single use that is a live out copy, favor the
one that is live out. Otherwise it will be difficult to eliminate the copy
if the instruction is a loop induction variable update. e.g.

BB:
sub r1, r3, #1
str r0, [r2, r3]
mov r3, r1
cmp
bne BB

=>

BB:
str r0, [r2, r3]
sub r3, r3, #1
cmp
bne BB

This fixed the recent 256.bzip2 regression.

llvm-svn: 117675
2010-10-29 18:09:28 +00:00
Evan Cheng ff310737e5 Re-commit 117518 and 117519 now that ARM MC test failures are out of the way.
llvm-svn: 117531
2010-10-28 06:47:08 +00:00
Evan Cheng e2c211c1b9 Revert 117518 and 117519 for now. They changed scheduling and cause MC tests to fail. Ugh.
llvm-svn: 117520
2010-10-28 02:00:25 +00:00
Evan Cheng ff1c862f8e - Assign load / store with shifter op address modes the right itinerary classes.
- For now, loads with the [r, r] addressing mode are treated the same as the
  [r, r lsl/lsr/asr #] variants. ARMBaseInstrInfo::getOperandLatency() should
  identify the former case and reduce the output latency by 1.
- Also identify the [r, r << 2] case. This special form of shifter addressing mode
  is "free".

llvm-svn: 117519
2010-10-28 01:49:06 +00:00
Jim Grosbach 338de3ee56 Refactor ARM STR/STRB instruction patterns into STR{B}i12 and STR{B}rs, like
the LDR instructions have. This makes the literal/register forms of the
instructions explicit and allows us to assign scheduling itineraries
appropriately. rdar://8477752

llvm-svn: 117505
2010-10-27 23:12:14 +00:00
Jim Grosbach 8bf1483a3d The immediate operand of an LDRi12 instruction doesn't need the addrmode2
encoding tricks. Handle the 'imm doesn't fit in the insn' case.

llvm-svn: 117454
2010-10-27 16:50:31 +00:00
Jim Grosbach 9d2d1f0f00 LDRi12 machine instructions handle negative offset operands normally (simple
integer values), not with the addrmode2 encoding.

llvm-svn: 117429
2010-10-27 01:19:41 +00:00
Jim Grosbach 5a7c715470 Split ARM::LDRB into LDRBi12 and LDRBrs. Adjust accordingly. Continuing on
rdar://8477752.

llvm-svn: 117419
2010-10-27 00:19:44 +00:00
Jim Grosbach 1e4d9a17c2 First part of refactoring ARM addrmode2 (load/store) instructions to be more
explicit about the operands. Split out the different variants into separate
instructions. This gives us the ability to, among other things, assign
different scheduling itineraries to the variants. rdar://8477752.

llvm-svn: 117409
2010-10-26 22:37:02 +00:00
Evan Cheng e96b8d7ab6 Use instruction itinerary to determine what instructions are 'cheap'.
llvm-svn: 117348
2010-10-26 02:08:50 +00:00
Chandler Carruth 82058c05f8 Move the remaining attribute macros to systematic names based on the attribute
name and prefixed with 'LLVM_'.

llvm-svn: 117203
2010-10-23 08:40:19 +00:00
Evan Cheng ad79526471 Latency between CPSR def and branch is zero.
llvm-svn: 117192
2010-10-23 02:04:38 +00:00
Evan Cheng 63c7608c34 Re-enable register pressure aware machine licm with fixes. Hoist() may have
erased the instruction during LICM so UpdateRegPressureAfter() should not
reference it afterwards.

llvm-svn: 116845
2010-10-19 18:58:51 +00:00
Daniel Dunbar 418204e523 Revert r116781 "- Add a hook for target to determine whether an instruction def
is", which breaks some nightly tests.

llvm-svn: 116816
2010-10-19 17:14:24 +00:00
Evan Cheng 8249dfe6ce - Add a hook for target to determine whether an instruction def is
"long latency" enough to hoist even if it may increase spilling. Reloading
  a value from spill slot is often cheaper than performing an expensive
  computation in the loop. For X86, that means machine LICM will hoist
  SQRT, DIV, etc. ARM will be somewhat aggressive with VFP and NEON
  instructions.
- Enable register pressure aware machine LICM by default.

llvm-svn: 116781
2010-10-19 00:55:07 +00:00
Bill Wendling 337a31133b Don't recompute MachineRegisterInfo in the Optimize* method.
llvm-svn: 116750
2010-10-18 21:22:31 +00:00
Bill Wendling 59ebe44049 Check to make sure that the iterator isn't at the beginning of the basic block
before decrementing. <rdar://problem/8529919>

llvm-svn: 116126
2010-10-09 00:03:48 +00:00
Evan Cheng 412e37bd34 Code refactoring.
llvm-svn: 116002
2010-10-07 23:12:15 +00:00
Evan Cheng 1958cefd69 Model operand cycles of vldm / vstm; also fixes scheduling itineraries of vldr / vstr, etc.
llvm-svn: 115898
2010-10-07 01:50:48 +00:00
Jim Grosbach 24ab1ce8c2 Clean up MOVi32imm and t2MOVi32imm pseudo instruction definitions.
llvm-svn: 115853
2010-10-06 22:01:26 +00:00
Evan Cheng 49d4c0bd18 - Add TargetInstrInfo::getOperandLatency() to compute operand latencies. This
allows targets to correctly compute latency for cases where static scheduling
  itineraries aren't sufficient, e.g. variable_ops instructions such as
  ARM::ldm.
  This also allows targets without scheduling itineraries to compute operand
  latencies, e.g. X86 can return (approximated) latencies for high-latency
  instructions such as division.
- Compute operand latencies for those defined by load multiple instructions,
  e.g. ldm and those used by store multiple instructions, e.g. stm.

llvm-svn: 115755
2010-10-06 06:27:31 +00:00
Michael J. Spencer 70ac5fa42c fix MSVC 2010 build.
llvm-svn: 115594
2010-10-05 06:00:43 +00:00
Michael J. Spencer e7f00cbb7c Cleanup Whitespace.
llvm-svn: 115593
2010-10-05 06:00:33 +00:00
Owen Anderson f31f33ea89 Thread the determination of branch prediction hit rates back through the if-conversion heuristic APIs. For now,
stick with a constant estimate of 90% (branch predictors are good!), but we might find that we want to provide
more nuanced estimates in the future.

llvm-svn: 115364
2010-10-01 22:45:50 +00:00
Owen Anderson 2ecba4a07e Make the spelling of the flags for old-style if-conversion heuristics consistent between ARM and Thumb2.
llvm-svn: 115341
2010-10-01 20:33:47 +00:00
Owen Anderson b9b63ee031 Temporarily add a flag to make it easier to compare the new-style ARM if
conversion heuristics to the old-style ones.

llvm-svn: 115239
2010-09-30 23:48:38 +00:00
Gabor Greif d36e3e8850 improve heuristics to find the 'and' corresponding to 'tst' to also catch opportunities on thumb2
added some doxygen along the way.

llvm-svn: 115033
2010-09-29 10:12:08 +00:00
Owen Anderson a3181e2d79 Add a subtarget hook for reporting the misprediction penalty. Use this to provide more precise
cost modeling for if-conversion.  Now if only we had a way to estimate the misprediction probability.

Adjust CodeGen/ARM/ifcvt10.ll.  The pipeline on Cortex-A8 is long enough that it is still profitable
to predicate an ldm, but the shorter pipeline on Cortex-A9 makes it unprofitable.

llvm-svn: 114995
2010-09-28 21:57:50 +00:00
Owen Anderson 88af7d00fc Part one of switching to using a more sane heuristic for determining if-conversion profitability.
Rather than having arbitrary cutoffs, actually try to cost model the conversion.

For now, the constants are tuned to more or less match our existing behavior, but these will be
changed to reflect realistic values as this work proceeds.

llvm-svn: 114973
2010-09-28 18:32:13 +00:00
Eric Christopher bf86fd3c47 80-col fixups.
llvm-svn: 114943
2010-09-28 04:18:29 +00:00
Evan Cheng 1596f7f6f3 Fix r114632. Return if the only terminator is an unconditional branch after the redundant ones are deleted.
llvm-svn: 114688
2010-09-23 19:42:03 +00:00
Evan Cheng 66c8cd2b32 If there are multiple unconditional branches terminating a block, eliminate all
but the first one. Those will never be executed. There was logic to do this
but it was faulty.

llvm-svn: 114632
2010-09-23 06:54:40 +00:00
Evan Cheng d757c88bba OptimizeCompareInstr should avoid iterating past the beginning of the MBB when the 'and' instruction is after the comparison.
llvm-svn: 114506
2010-09-21 23:49:07 +00:00
Gabor Greif 1a25ae88ff Fix buglet when the TST instruction directly uses the AND result.
I am unable to write a test for this case, help is solicited, though...
What I did is to tickle the code in the debugger and verify that we do the right thing.

llvm-svn: 114430
2010-09-21 13:30:57 +00:00
Gabor Greif adbbb93d3d Move the search for the appropriate AND instruction
into OptimizeCompareInstr.
This necessitates the passing of CmpValue around,
so widen the virtual functions to accommodate.

No functionality changes.

llvm-svn: 114428
2010-09-21 12:01:15 +00:00
Chris Lattner e3d864b857 convert targets to the new MF.getMachineMemOperand interface.
llvm-svn: 114391
2010-09-21 04:39:43 +00:00
Jakob Stoklund Olesen 44857a38fa Remember VLDMQ.
llvm-svn: 114026
2010-09-15 21:40:11 +00:00
Jakob Stoklund Olesen b929c7173d Add missing break.
llvm-svn: 114025
2010-09-15 21:40:09 +00:00
Jakob Stoklund Olesen 33005d1327 Recognize VST1q64Pseudo and VSTMQ as stack slot stores.
Recognize VLD1q64Pseudo as a stack slot load.

Reject these if they are loading or storing a subregister. The API (and
VirtRegRewriter) doesn't know how to deal with that.

llvm-svn: 113985
2010-09-15 17:27:09 +00:00
Bob Wilson 660d7ecf32 Reapply Gabor's 113839, 113840, and 113876 with a fix for a problem
encountered while building llvm-gcc for arm.  This is probably the same issue
that the ppc buildbot hit. llvm::prior works on a MachineBasicBlock::iterator,
not a plain MachineInstr.

llvm-svn: 113983
2010-09-15 17:12:08 +00:00
Gabor Greif 9ae4b271f2 the darwin9-powerpc buildbot keeps consistently crashing,
backing out following to get it back to green,
so I can investigate in peace:

svn merge -c -113840  llvm/test/CodeGen/ARM/arm-and-tst-peephole.ll
svn merge -c -113876 -c -113839 llvm/lib/Target/ARM/ARMBaseInstrInfo.cpp

llvm-svn: 113980
2010-09-15 16:53:07 +00:00
Jakob Stoklund Olesen 11f5be3b86 Move ARM is{LoadFrom,StoreTo}StackSlot closer to their siblings so they won't be
forgotten in the future.

Coalesce identical cases in switch.

No functional changes intended.

llvm-svn: 113979
2010-09-15 16:36:26 +00:00
Bob Wilson 2c00b5098a Spelling fix.
llvm-svn: 113978
2010-09-15 16:28:21 +00:00
Bob Wilson b1e9d4bff1 Use VLD1/VST1 pseudo instructions for loadRegFromStackSlot and
storeRegToStackSlot.

llvm-svn: 113918
2010-09-15 01:48:05 +00:00
Gabor Greif b54e9387ab an attempt to salvage the darwin9-powerpc buildbot, which could be miscompiling this line
llvm-svn: 113876
2010-09-14 22:25:16 +00:00
Gabor Greif d0cef1e2ef Eliminate a 'tst' that immediately follows an 'and'
by morphing the 'and' to its recording form 'andS'.

This is basically a test commit into this area, to
see whether the bots like me. Several generalizations
can be applied and various avenues of code simplification
are open. I'll introduce those as I go.

I am aware of stylistic input from Bill Wendling, about
where to put the analysis complexity, but I am positive
that we can move things around easily and will find a
satisfactory solution.

llvm-svn: 113839
2010-09-14 09:23:22 +00:00
Bill Wendling 27dddd1fd1 Rename ConvertToSetZeroFlag to something more general.
llvm-svn: 113670
2010-09-11 00:13:50 +00:00
Bill Wendling d0a5f4e238 No need to recompute the SrcReg and CmpValue.
llvm-svn: 113666
2010-09-10 23:46:12 +00:00
Bill Wendling 041230014c Move some of the decision logic for converting an instruction into one that sets
the 'zero' bit down into the back-end. There are other cases where this logic
isn't sufficient, so they should be handled separately.

llvm-svn: 113665
2010-09-10 23:34:19 +00:00
Bill Wendling aee679bf35 Modify the comparison optimizations in the peephole optimizer to update the
iterator when an optimization took place. This allows us to do more insane
things with the code than just remove an instruction or two.

llvm-svn: 113640
2010-09-10 21:55:43 +00:00
Jim Grosbach 1f77ee5691 Add a missing case to duplicateCPV() for LSDA constants. Add a FIXME. rdar://8302157
llvm-svn: 113637
2010-09-10 21:38:22 +00:00
Evan Cheng bf4070756f Teach if-converter to be more careful with predicating instructions that would
take multiple cycles to decode.
For the current if-converter clients (actually only ARM), the instructions that
are predicated on false are not nops. They would still take machine cycles to
decode. Micro-coded instructions such as LDM / STM can potentially take multiple
cycles to decode. The if-converter should not treat them as non-micro-coded
simple instructions.

llvm-svn: 113570
2010-09-10 01:29:16 +00:00
Evan Cheng 367a5df8cf For each instruction itinerary class, specify the number of micro-ops each
instruction in the class would be decoded to, or zero if the number of
uOPs must be determined dynamically.

This will be used to determine the cost-effectiveness of predicating a
micro-coded instruction.

llvm-svn: 113513
2010-09-09 18:18:55 +00:00
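
One way such counts could feed the predication cost, sketched with invented
names (not the actual TargetInstrInfo/itinerary interface):

    #include <vector>

    struct ItinClass {
      unsigned numMicroOps;  // 0 = must be determined dynamically
    };

    // Decode cost of predicating a block: instructions predicated false
    // still occupy decode slots, so micro-coded instructions (LDM / STM)
    // make if-conversion proportionally less attractive.
    unsigned predicationDecodeCost(const std::vector<ItinClass> &block) {
      unsigned cost = 0;
      for (const ItinClass &c : block)
        cost += c.numMicroOps ? c.numMicroOps : 1;  // assume at least 1 uop
      return cost;
    }
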
Jim Grosbach 19cb2f4c67 remove obsolete comment
llvm-svn: 113337
2010-09-08 03:51:44 +00:00
Jim Grosbach 136d035e45 correct spill code to properly determine if dynamic stack realignment is
present in the function and thus whether aligned load/store instructions can
be used.

llvm-svn: 113323
2010-09-08 00:26:59 +00:00
Bob Wilson 13ce07fa92 Change ARM VFP VLDM/VSTM instructions to use addressing mode #4, just like
all the other LDM/STM instructions.  This fixes asm printer crashes when
compiling with -O0.  I've changed one of the NEON tests (vst3.ll) to run
with -O0 to check this in the future.

Prior to this change VLDM/VSTM used addressing mode #5, but not really.
The offset field was used to hold a count of the number of registers being
loaded or stored, and the AM5 opcode field was expanded to specify the IA
or DB mode, instead of the standard ADD/SUB specifier.  Much of the backend
was not aware of these special cases. The crashes occurred when rewriting
a frameindex caused the AM5 offset field to be changed so that it did not
have a valid submode.  I don't know exactly what changed to expose this now.
Maybe we've never done much with -O0 and NEON.  Regardless, there's no longer
any reason to keep a count of the VLDM/VSTM registers, so we can use
addressing mode #4 and clean things up in a lot of places.

llvm-svn: 112322
2010-08-27 23:18:17 +00:00
Bill Wendling ad2aa57774 Minor simplification. Gets rid of a needless temporary.
llvm-svn: 111430
2010-08-18 21:32:07 +00:00
Bill Wendling 79553bad50 Handle ARM compares as well as converting for ARM adds, subs, and thumb2's adds.
llvm-svn: 110762
2010-08-11 00:23:00 +00:00
Bill Wendling 0757820f8f Turn optimize compares back on with fix. We needed to test that a machine op was
a register before checking if it was defined.

llvm-svn: 110733
2010-08-10 21:38:11 +00:00
Bill Wendling 798617b1ab Use the "isCompare" machine instruction attribute instead of calling the
relatively expensive comparison analyzer on each instruction. Also rename the
comparison analyzer method to something more in line with what it actually does.

This pass will eventually be folded into the Machine CSE pass.

llvm-svn: 110539
2010-08-08 05:04:59 +00:00
Bill Wendling 7de9d52c13 Add the Optimize Compares pass (disabled by default).
This pass tries to remove comparison instructions when possible. For instance,
if you have this code:

   sub r1, 1
   cmp r1, 0
   bz  L1

and "sub" either sets the same flag as the "cmp" instruction or could be
converted to set the same flag, then we can eliminate the "cmp" instruction
altogether. This is important for ARM, where the ALU instructions could set the
CPSR flag, but need a special suffix ('s') to do so.

llvm-svn: 110423
2010-08-06 01:32:48 +00:00
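
In outline, the transformation looks like this (a simplified standalone
sketch; the real pass must also verify that the flag semantics match, as
other entries in this log show):

    enum Opcode { SUB, SUBS, CMP };

    struct MI {
      Opcode op;
      int reg;    // register written (or compared, for CMP)
      int imm;
      bool erased = false;
    };

    // Fold "cmp rX, #0" into a preceding "sub rX, ..." by switching the
    // sub to its flag-setting 's' form and deleting the compare.
    bool optimizeCompare(MI &def, MI &cmp) {
      if (cmp.op != CMP || cmp.imm != 0 || def.reg != cmp.reg)
        return false;
      if (def.op != SUB)
        return false;
      def.op = SUBS;  // now sets CPSR itself
      cmp.erased = true;
      return true;
    }
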
Jim Grosbach d343166a0b Many Thumb2 instructions can reference the full ARM register set (i.e.,
have 4 bits per register in the operand encoding), but have undefined
behavior when the operand value is 13 or 15 (SP and PC, respectively).
The trivial coalescer in linear scan sometimes will merge a copy from
SP into a subsequent instruction which uses the copy, and if that
instruction cannot legally reference SP, we get bad code such as:
  mls r0,r9,r0,sp
instead of:
  mov r2, sp
  mls r0, r9, r0, r2

This patch adds a new register class for use by Thumb2 that excludes
the problematic registers (SP and PC) and is used instead of GPR
for those operands which cannot legally reference PC or SP. The
trivial coalescer explicitly requires that the register class
of the destination for the COPY instruction contain the source
register for the COPY to be considered for coalescing. This prevents
errant instructions like that above.

PR7499

llvm-svn: 109842
2010-07-30 02:41:01 +00:00
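
The coalescer-side guard can be pictured like this (a sketch with invented
register numbering, not LLVM's register class machinery):

    #include <set>

    // rGPR-style class: the general-purpose registers minus SP (r13) and
    // PC (r15).
    const std::set<int> GPR  = {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15};
    const std::set<int> rGPR = {0,1,2,3,4,5,6,7,8,9,10,11,12,14};

    // Only coalesce "dst = COPY src" when the destination's register class
    // also contains the source register. A COPY from SP into an operand
    // constrained to rGPR is left alone, avoiding code like "mls ..., sp".
    bool canCoalesceCopy(const std::set<int> &dstClass, int srcReg) {
      return dstClass.count(srcReg) != 0;
    }
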