Commit Graph

390 Commits

Author SHA1 Message Date
Bill Wendling 988a47d7e5 Make sure we add the predicate after all of the registers are added.
<rdar://problem/12183003>

llvm-svn: 162703
2012-08-27 22:12:44 +00:00
Jakob Stoklund Olesen 74e6f9fc65 Add a missing def flag.
*** Bad machine code: Explicit definition marked as use ***
- function:    test_cos
- basic block: BB#0 L.entry (0x7ff2a2024fd0)
- instruction: VSETLNi32 %D11, %D11<undef>, %R0, 0, pred:14, pred:%noreg, %Q5<imp-use,kill>, %Q5<imp-def>
- operand 0:   %D11

llvm-svn: 162247
2012-08-21 00:34:53 +00:00
Jakob Stoklund Olesen 7b1a2e8f02 Avoid folding ADD instructions with FI operands.
PEI can't handle the pseudo-instructions. This can be removed when the
pseudo-instructions are replaced by normal predicated instructions.

Fixes PR13628.

llvm-svn: 162130
2012-08-17 20:55:34 +00:00
Tim Northover f66181530f Implement NEON domain switching for scalar <-> S-register vmovs on ARM
llvm-svn: 162094
2012-08-17 11:32:52 +00:00
Jakob Stoklund Olesen 0ea1fce6b4 Add ADD and SUB to the predicable ARM instructions.
It is not my plan to duplicate the entire ARM instruction set with
predicated versions. We need a way of representing predicated
instructions in SSA form without requiring a separate opcode.

Then the pseudo-instructions can go away.

llvm-svn: 162061
2012-08-16 23:21:55 +00:00
Jakob Stoklund Olesen c19bf0282d Handle ARM MOVCC optimization in PeepholeOptimizer.
Use the target independent select analysis hooks.

llvm-svn: 162060
2012-08-16 23:14:20 +00:00
Jakob Stoklund Olesen 6cb96120f1 Fold predicable instructions into MOVCC / t2MOVCC.
The ARM select instructions are just predicated moves. If the select is
the only use of an operand, the instruction defining the operand can be
predicated instead, saving one instruction and decreasing register
pressure.

This implementation can turn AND/ORR/EOR instructions into their
corresponding ANDCC/ORRCC/EORCC variants. Ideally, we should be able to
predicate any instruction, but we don't yet support predicated
instructions in SSA form.

llvm-svn: 161994
2012-08-15 22:16:39 +00:00
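For illustration only (not part of the commit message), a hypothetical before/after
sketch of this folding with made-up registers: the AND feeding the select is
predicated directly, so the separate predicated move disappears.

        @ before: materialize the value, then select it with a predicated move
        and     r2, r0, r1
        cmp     r3, #0
        moveq   r4, r2
        @ after: predicate the AND itself (the ANDCC variant), saving one instruction
        cmp     r3, #0
        andeq   r4, r0, r1
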
Anton Korobeynikov 3a4fdfeceb Recognize vst1.64 / vld1.64 with 3 and 4 regs as stores to / loads from the stack
(these correspond to spilling / reloading regs in the DTriple / DQuad reg classes).
No testcase, found by inspection.

llvm-svn: 161300
2012-08-04 13:22:14 +00:00
Anton Korobeynikov 218aaf6d04 Add stack spill / reload instructions for the DTriple and DQuad register classes,
which had been overlooked. This fixes PR13377

llvm-svn: 161299
2012-08-04 13:16:12 +00:00
Sylvestre Ledru 35521e2310 Fix a typo (the the => the)
llvm-svn: 160621
2012-07-23 08:51:15 +00:00
Manman Ren 88a0d3313b ARM: fix typo in comments
llvm-svn: 160093
2012-07-11 23:47:00 +00:00
Manman Ren 34cb93e192 ARM: Fix optimizeCompare to correctly check the safety condition.
Removing the cmp is safe if CPSR is killed or re-defined.
When we are done with the basic block, check whether CPSR is live-out;
do not optimize away the cmp if CPSR is live-out.

llvm-svn: 160090
2012-07-11 22:51:44 +00:00
Andrew Trick 21cca97d95 Revert accidental checkin.
My last checkin was apparently not the branch I intended. It was missing one change (added by chandlerc), and contained a spurious change.

llvm-svn: 159548
2012-07-02 19:12:29 +00:00
Andrew Trick f161e391f8 Reapply "Make NumMicroOps a variable in the subtarget's instruction itinerary."
Reapplies r159406 with minor cleanup. The regressions appear to have been spurious.

llvm-svn: 159541
2012-07-02 18:10:42 +00:00
Manman Ren b1b3db6802 ARM: Clean up optimizeCompare in peephole, no functional change.
Use getUniqueVRegDef.
Replace a loop with existing interfaces: modifiesRegister and readsRegister.
Factor out code into inline functions and simplify the code.

llvm-svn: 159470
2012-06-29 22:06:19 +00:00
Manman Ren 6fa76dc0e0 Add SrcReg2 to analyzeCompare and optimizeCompareInstr to handle Compare
instructions with two register operands.

llvm-svn: 159465
2012-06-29 21:33:59 +00:00
Andrew Trick 51a8cf77b8 Revert "Make NumMicroOps a variable in the subtarget's instruction itinerary."
This reverts commit r159406. I noticed a performance regression so I'll back out for now.

llvm-svn: 159411
2012-06-29 07:10:41 +00:00
Andrew Trick 1f50152b2d Make NumMicroOps a variable in the subtarget's instruction itinerary.
The TargetInstrInfo::getNumMicroOps API does not change, but soon it
will be used by MachineScheduler. Now each subtarget can specify the
number of micro-ops per itinerary class. For ARM, this is currently
always dynamic (-1), because it is used for load/store multiple which
depends on the number of register operands.

Zero is now a valid number of micro-ops. This can be used for
nop pseudo-instructions or instructions that the hardware can squash
during dispatch.

llvm-svn: 159406
2012-06-29 03:23:18 +00:00
Evan Cheng a75127871c Add a missing check to avoid dereferencing null. No sensible test case possible. Sorry. rdar://11745134
llvm-svn: 159236
2012-06-26 22:54:59 +00:00
Manman Ren 606953fbe7 ARM: update peephole optimization.
More condition codes are included when deciding whether to remove cmp after
a sub instruction. Specifically, we extend from GE|LT|GT|LE to 
GE|LT|GT|LE|HS|LS|HI|LO|EQ|NE. If we have "sub a, b; cmp b, a; movhs", we
should be able to replace it with "sub a, b; movls".

rdar: 11725965
llvm-svn: 159166
2012-06-25 21:49:38 +00:00
Andrew Trick 77d0b88999 ARM scheduling fix: don't guess at implicit operand latency.
This is a minor drive-by fix with no robust way to unit test.
As an example see neon-div.ll:
SU(16):   %Q8<def> = VMOVLsv4i32 %D17, pred:14, pred:%noreg, %Q8<imp-use,kill>
 val SU(1): Latency=2 Reg=%Q8
...should be latency=1

llvm-svn: 158960
2012-06-22 02:50:33 +00:00
Andrew Trick 3ccb1b8cf9 ARM scheduling fix: compute predicated implicit use properly.
Minor drive-by fix to clean up the latency computation. Calling
getOperandLatency with a deliberately incorrect operand index does not
give you the latency you want.

llvm-svn: 158959
2012-06-22 02:50:31 +00:00
Andrew Trick a5d24ca453 Continue factoring computeOperandLatency. Use it for ARM hasHighOperandLatency.
llvm-svn: 158164
2012-06-07 19:42:04 +00:00
Andrew Trick 5b1cadf9f7 ARM getOperandLatency rewrite.
Match expectations of the new latency API. Cleanup and make the logic consistent.

llvm-svn: 158163
2012-06-07 19:42:00 +00:00
Andrew Trick 3564bdfa61 ARM getOperandLatency should return -1 for unknown, consistent with API
llvm-svn: 158162
2012-06-07 19:41:58 +00:00
Andrew Trick fb1a74c2b2 Fix ARM getInstrLatency logic to work with the current API.
llvm-svn: 158161
2012-06-07 19:41:55 +00:00
Andrew Trick 4544606c71 misched: API for minimum vs. expected latency.
Minimum latency determines per-cycle scheduling groups.
Expected latency determines critical path and cost.

llvm-svn: 158021
2012-06-05 21:11:27 +00:00
Craig Topper 2fbd130a79 Mark a static table as const. Shrink opcode size in static tables to uint16_t. Simplify loop iterating over one of those tables. No functional change intended.
llvm-svn: 157367
2012-05-24 03:59:11 +00:00
David Blaikie 81a84bd841 Fix use of uninitialized variable.
Found by GCC's maybe-uninitialized.

llvm-svn: 156780
2012-05-14 21:48:19 +00:00
Manman Ren 0d5ec28ccc Add space before an open parenthesis in control flow statements.
llvm-svn: 156620
2012-05-11 15:36:46 +00:00
Manman Ren dc8ad0058f ARM: peephole optimization to remove cmp instruction
This patch will optimize the following cases:
  sub r1, r3 | sub r1, imm
  cmp r3, r1 or cmp r1, r3 | cmp r1, imm
  bge L1

TO
  subs r1, r3
  bge  L1 or ble L1

If the branch instruction can use the flags from "sub", then we can replace
"sub" with "subs" and eliminate the "cmp" instruction.

rdar: 10734411
llvm-svn: 156599
2012-05-11 01:30:47 +00:00
Manman Ren b555b382bd Revert: 156550 "ARM: peephole optimization to remove cmp instruction"
This commit broke an external linux bot and gave a compile-time warning.

llvm-svn: 156556
2012-05-10 18:49:43 +00:00
Manman Ren c860887b2d ARM: peephole optimization to remove cmp instruction
This patch will optimize the following cases:
  sub r1, r3 | sub r1, imm
  cmp r3, r1 or cmp r1, r3 | cmp r1, imm
  bge L1

TO
  subs r1, r3
  bge  L1 or ble L1

If the branch instruction can use the flags from "sub", then we can replace
"sub" with "subs" and eliminate the "cmp" instruction.

rdar: 10734411
llvm-svn: 156550
2012-05-10 16:48:21 +00:00
Jakob Stoklund Olesen 0a5b72f0e4 Implement ARMBaseInstrInfo::commuteInstruction() for MOVCCr.
A MOVCCr instruction can be commuted by inverting the condition. This
can help reduce register pressure and remove unnecessary copies in some
cases.

<rdar://problem/11182914>

llvm-svn: 154033
2012-04-04 18:23:42 +00:00
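For illustration only (not part of the commit message), a hypothetical sketch of the
commutation with made-up registers: inverting the condition swaps the "true" and
"false" operands, which can let the register allocator avoid a copy when the other
operand already lives in the destination register.

        @ r0 = (EQ) ? r1 : r2, written with the copy of r2 first
        mov     r0, r2
        moveq   r0, r1
        @ after commuting MOVCCr (EQ -> NE), the copy of r1 comes first instead
        mov     r0, r1
        movne   r0, r2
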
Jakob Stoklund Olesen caa6bd273f Handle register copies for the new ARM register classes.
ARM recently gained DPair, DTriple, and DQuad register classes.
Update copyPhysReg() to handle copies in these register classes.

No test case, it is difficult to make the register allocator emit the
odd copies reliably. The missing DPair copy caused a failure on
partialsums in the nightly test suite.

<rdar://problem/11147997>

llvm-svn: 153686
2012-03-29 21:10:40 +00:00
Jakob Stoklund Olesen 9e512120b7 Spill DPair registers, not just QPR.
The arm_neon intrinsics can create virtual registers from the DPair
register class which allows both even-odd and odd-even D-register pairs.

This fixes PR12389.

llvm-svn: 153603
2012-03-28 21:20:32 +00:00
Evan Cheng a2b48d985b ARM has a peephole optimization which looks for a def / use pair. The def
produces a 32-bit immediate which is consumed by the use. It tries to
fold the immediate by breaking it into two parts and folding them into the
immediate fields of two uses, e.g.
       movw    r2, #40885
       movt    r3, #46540
       add     r0, r0, r3
=>
       add.w   r0, r0, #3019898880
       add.w   r0, r0, #30146560
;
However, this transformation is incorrect if the user produces a flag, e.g.
       movw    r2, #40885
       movt    r3, #46540
       adds    r0, r0, r3
=>
       add.w   r0, r0, #3019898880
       adds.w  r0, r0, #30146560
Note the adds.w may not set the carry flag even if the original sequence
would.

rdar://11116189

llvm-svn: 153484
2012-03-26 23:31:00 +00:00
Craig Topper 5fa0caafc0 Prune includes and replace uses of ARMRegisterInfo.h with ARMBaseRegisterInfo.h
llvm-svn: 153422
2012-03-26 00:45:15 +00:00
Jim Grosbach 13a292cc74 ARM refactor more NEON VLD/VST instructions to use composite physregs
Register pair VLD1/VLD2 all-lanes instructions. Kill off more of the
pseudos as a result.

llvm-svn: 152150
2012-03-06 22:01:44 +00:00
Jakob Stoklund Olesen d9b427ee65 Add <imp-def> operands when reloading into physregs.
When an instruction only writes sub-registers, it is still necessary to
add an <imp-def> operand for the super-register.  When reloading into a
virtual register, rewriting will add the operand, but when loading
directly into a physical register, the <imp-def> operand is still
necessary.

llvm-svn: 152095
2012-03-06 02:48:17 +00:00
Jim Grosbach c988e0c521 ARM refactor away a bunch of VLD/VST pseudo instructions.
With the new composite physical registers to represent arbitrary pairs
of DPR registers, we don't need the pseudo-registers anymore. Get rid of
a bunch of them that use DPR register pairs and just use the real
instructions directly instead.

llvm-svn: 152045
2012-03-05 19:33:30 +00:00
Jakob Stoklund Olesen f729ceae04 Use <def,undef> operands when spilling NEON bundles.
MachineOperands that define part of a virtual register must have an
<undef> flag if they are not intended as read-modify-write operands.

The old trick of adding an <imp-def> operand doesn't work any longer.

Fixes PR12177.

llvm-svn: 152008
2012-03-04 18:40:30 +00:00
Jim Grosbach 617f84ddbd ARM implement TargetInstrInfo::getNoopForMachoTarget()
Without this hook, functions w/ a completely empty body (including no
epilogue) will cause an MCEmitter assertion failure.

For example,
define internal fastcc void @empty_function() {
  unreachable
}

rdar://10947471

llvm-svn: 151673
2012-02-28 23:53:30 +00:00
Jakob Stoklund Olesen 5f37f1c39d Clarify ARM calling conventions.
llvm-svn: 151113
2012-02-22 01:07:19 +00:00
Jakob Stoklund Olesen 6909faaf35 Calls don't really change the stack pointer.
Even if a call instruction has %SP<imp-def> operands, it doesn't change
the value of the stack pointer.

llvm-svn: 151104
2012-02-21 23:47:43 +00:00
Jia Liu b22310fda6 Fix Emacs tags and some comments for ARM, CellSPU, Hexagon, MBlaze, MSP430, PPC, PTX, Sparc, X86, XCore.
llvm-svn: 150878
2012-02-18 12:03:15 +00:00
Jakob Stoklund Olesen 4fad5b2b9e Handle regmask operands in ARMInstrInfo.
llvm-svn: 150833
2012-02-17 19:23:15 +00:00
Jakob Stoklund Olesen 96732a438d Fix ARMBaseInstrInfo::getInstrLatency for calls.
Calls always clobber CPSR.

llvm-svn: 150831
2012-02-17 19:07:59 +00:00
Craig Topper e55c556a24 Convert assert(0) to llvm_unreachable
llvm-svn: 149961
2012-02-07 02:50:20 +00:00
Evan Cheng 613d6d3b43 DefinesPredicate should only look for def operands. Patch by Ludwig Meier.
llvm-svn: 149846
2012-02-05 19:55:04 +00:00
David Blaikie 46a9f016c5 More dead code removal (using -Wunreachable-code)
llvm-svn: 148578
2012-01-20 21:51:11 +00:00
Jakob Stoklund Olesen d110e2a83f Reapply r146997, "Heed spill slot alignment on ARM."
Now that canRealignStack() understands frozen reserved registers, it is
safe to use it for aligned spill instructions.

It will only return true if the registers reserved at the beginning of
register allocation allow for dynamic stack realignment.

<rdar://problem/10625436>

llvm-svn: 147579
2012-01-05 00:26:57 +00:00
Jakob Stoklund Olesen 1b7f2a7638 Revert r146997, "Heed spill slot alignment on ARM."
This patch caused a miscompilation of oggenc because a frame pointer was
suddenly needed halfway through register allocation.

<rdar://problem/10625436>

llvm-svn: 147487
2012-01-03 22:34:35 +00:00
Jim Grosbach c80a264386 ARM NEON assembly parsing for VLD2 to all lanes instructions.
llvm-svn: 147069
2011-12-21 19:40:55 +00:00
Jakob Stoklund Olesen b95c102c2f Heed spill slot alignment on ARM.
Use the spill slot alignment as well as the local variable alignment to
determine when the stack needs to be realigned. This works now that the
ARM target can always realign the stack by using a base pointer.

Still respect the ARMBaseRegisterInfo::canRealignStack() function
vetoing a realigned stack.  Don't use aligned spill code in that case.

llvm-svn: 146997
2011-12-20 22:15:04 +00:00
Evan Cheng da103bf9ec Model ARM predicated writes as read-modify-write, e.g.
r0 = mov #0
r0 = moveq #1

Then the second instruction has an implicit data dependency on the first
instruction. Sadly I have yet to come up with a small test case that
demonstrates the post-ra scheduler taking advantage of this.

llvm-svn: 146583
2011-12-14 20:00:08 +00:00
Evan Cheng 7fae11b231 - Add MachineInstrBundle.h and MachineInstrBundle.cpp. This includes a function
  to finalize MI bundles (i.e. adding the BUNDLE instruction and computing the register
  def and use lists of the BUNDLE instruction) and a pass to unpack bundles.
- Teach more MachineBasicBlock and MachineInstr methods to be bundle aware.
- Switch Thumb2 IT blocks to MI bundles and delete the hazard recognizer hack to
  prevent IT blocks from being broken apart.

llvm-svn: 146542
2011-12-14 02:11:42 +00:00
Jim Grosbach d146a02c79 ARM assembly parsing and encoding for VLD2 with writeback.
Refactor the instructions into fixed writeback and register-stride
writeback variants to simplify the offset operand (no more optional
register operand using reg0). This is a simpler representation and allows
the assembly parser to more easily handle these instructions.

Add tests for the instruction variants now supported.

llvm-svn: 146278
2011-12-09 21:28:25 +00:00
Evan Cheng 7f8e563a69 Add bundle aware API for querying instruction properties and switch the code
generator to it. For non-bundle instructions, these behave exactly the same
as the MC layer API.

For properties like mayLoad / mayStore, look into the bundle and if any of the
bundled instructions has the property it would return true.
For properties like isPredicable, only return true if *all* of the bundled
instructions have the property.
For properties like canFoldAsLoad, isCompare, conservatively return false for
bundles.

llvm-svn: 146026
2011-12-07 07:15:52 +00:00
Jakob Stoklund Olesen cc6bfa8e79 Revert r145971: "Use conservative size estimate for tBR_JTr."
This caused more offset errors.

llvm-svn: 145980
2011-12-06 22:41:31 +00:00
Evan Cheng 2a81dd4a3c First chunk of MachineInstr bundle support.
1. Added opcode BUNDLE
2. Taught MachineInstr class to deal with bundled MIs
3. Changed MachineBasicBlock iterator to skip over bundled MIs; added an iterator to walk all the MIs
4. Taught MachineBasicBlock methods about bundled MIs

llvm-svn: 145975
2011-12-06 22:12:01 +00:00
Jakob Stoklund Olesen 33fe130e12 Use conservative size estimate for tBR_JTr.
This pseudo-instruction contains a .align directive in its expansion, so
the total size may vary by 2 bytes.

It is too difficult to accurately keep track of this alignment
directive, so just use the worst-case size instead.

llvm-svn: 145971
2011-12-06 21:55:39 +00:00
Jim Grosbach a68c9a847e ARM parsing for VLD1 all lanes, with writeback.
llvm-svn: 145510
2011-11-30 19:35:44 +00:00
Jakob Stoklund Olesen 653183fd5c Enable -widen-vmovs by default.
This will widen 32-bit register vmov instructions to 64-bit when
possible.  The 64-bit vmovd instructions can then be translated to NEON
vorr instructions by the execution dependency fix pass.

The copies are only widened if they are marked as clobbering the whole
D-register.

llvm-svn: 144734
2011-11-15 23:53:18 +00:00
Jay Foad 465101bb0e Make use of MachinePointerInfo::getFixedStack. This removes all mention
of PseudoSourceValue from lib/Target/.

llvm-svn: 144632
2011-11-15 07:34:52 +00:00
Jim Grosbach 17ec1a19e5 ARM assembly parsing and encoding for VLD1 with writeback.
Four-entry register lists.

llvm-svn: 142882
2011-10-25 00:14:01 +00:00
Jim Grosbach 30c39c8bf2 Nuke dead code. Nothing generates the VLD1d64QPseudo_UPD instruction.
llvm-svn: 142877
2011-10-24 23:40:46 +00:00
Jim Grosbach 92fd05ecdc ARM assembly parsing and encoding for VLD1 w/ writeback.
Three-entry register list variation.

llvm-svn: 142876
2011-10-24 23:26:05 +00:00
Jim Grosbach 2098cb1e6f ARM refactor am6offset usage for VLD1.
Split am6offset into fixed and register-offset variants so the instruction
encodings are explicit rather than relying on a magic reg0 marker.
Needed to be able to parse these.

llvm-svn: 142853
2011-10-24 21:45:13 +00:00
Andrew Trick 88b2450adc Use the ARM/t2PseudoInst class for ARM/Thumb2 special adds/subs patterns.
Clean up the patterns, fix comments, and avoid confusing both tools
and coders. Note that the special adds/subs SelectionDAG nodes no
longer have the dummy cc_out operand.

llvm-svn: 142397
2011-10-18 19:18:52 +00:00
Jakob Stoklund Olesen 39c31a77b8 Fix -widen-vmovs liveness issues.
When widening a copy, we are reading a larger register that may not be
live.  Use an <undef> flag to tell the register scavenger and machine
code verifier that we know the value isn't defined.

We now widen:

  %S6<def> = COPY %S4<kill>, %D3<imp-def>

into:

  %D3<def> = VMOVD %D2<undef>, pred:14, pred:%noreg, %S4<imp-use,kill>

This also keeps the <kill> flag on %S4 so we don't inadvertently kill a
live value in %S5.

Finally, ensure that ARMBaseInstrInfo::setExecutionDomain() preserves
the <undef> flag when converting VMOVD to VORR.

llvm-svn: 141746
2011-10-12 00:06:23 +00:00
Jakob Stoklund Olesen da7c0f8f7d Move -widen-vmovs to ARMBaseInstrInfo::expandPostRAPseudo().
The VMOVS widening needs to look at the implicit COPY operands.  Trying
to dig out the COPY instruction from an iterator in copyPhysReg() is the
wrong approach.

The expandPostRAPseudo() hook gets to look at COPY instructions before
they are converted to copyPhysReg() calls.

llvm-svn: 141619
2011-10-11 00:59:06 +00:00
Bill Wendling 4a4772fae2 Use the ARMConstantPoolMBB class to handle the MBB values.
llvm-svn: 140943
2011-10-01 09:30:42 +00:00
Bill Wendling c214cb055d Use the new ARMConstantPoolSymbol class to handle external symbols.
llvm-svn: 140939
2011-10-01 08:58:29 +00:00
Bill Wendling 7753d66468 Switch over to using ARMConstantPoolConstant for global variables, functions,
and block addresses.

llvm-svn: 140936
2011-10-01 08:00:54 +00:00
Bill Wendling 69bc3de4fc Create a machine basic block in the constant pool and retrieve the symbol for an MBB.
llvm-svn: 140824
2011-09-29 23:50:42 +00:00
Jakob Stoklund Olesen f7ad189033 Use ExecutionDepsFix instead of NEONMoveFix.
This enables NEON domain tracking across basic blocks, but should
otherwise do the same thing.

llvm-svn: 140772
2011-09-29 02:48:41 +00:00
Jakob Stoklund Olesen f9b71a2e01 Implement TII::get/setExecutionDomain() for ARM.
llvm-svn: 140653
2011-09-27 22:57:21 +00:00
Andrew Trick 924123acb3 Lower ARM adds/subs to add/sub after adding optional CPSR operand.
This is still a hack until we can teach tblgen to generate the
optional CPSR operand rather than an implicit CPSR def. But the
strangeness is now limited to the selection DAG. ADD/SUB MI's no
longer have implicit CPSR defs, nor do we allow flag setting variants
of these opcodes in machine code. There are several corner cases to
consider, and getting one wrong would previously lead to nasty
miscompilation. It's not the first time I've debugged one, so this
time I added enough verification to ensure it won't happen again.

llvm-svn: 140228
2011-09-21 02:20:46 +00:00
Andrew Trick 3f1fdf1b31 whitespace
llvm-svn: 140227
2011-09-21 02:17:37 +00:00
Owen Anderson eb3f0fbdce Fix an ambiguously nested if.
llvm-svn: 139431
2011-09-09 23:13:02 +00:00
Owen Anderson 29cfe6c368 Thumb unconditional branches are allowed in IT blocks, and therefore should have a predicate operand, unlike conditional branches.
llvm-svn: 139415
2011-09-09 21:48:23 +00:00
Jakob Stoklund Olesen cd893390f5 Put VMOVS widening under a command line option, off by default.
It appears that our use of the imp-use and imp-def flags with
sub-registers is not yet robust enough to support this.

The failing test case is complicated; I am working on a reduction.

<rdar://problem/10044201>

llvm-svn: 138861
2011-08-31 17:00:02 +00:00
Jim Grosbach e364ad540a Clean up Thumb load/store multiple definitions.
There is no non-writeback store multiple instruction in Thumb1, so
don't define one. As a result load multiple is the only instantiation of
the multiclass, so refactor that away entirely.

llvm-svn: 138338
2011-08-23 17:41:15 +00:00
Chad Rosier 61f92efb5c Remove the VMOVQQ pseudo instruction.
llvm-svn: 138177
2011-08-20 00:52:40 +00:00
Jakob Stoklund Olesen 59015c8b17 Add <imp-def> operands to QQ and QQQQ stack loads.
This pleases the register scavenger and brings
test/CodeGen/ARM/2011-08-12-vmovqqqq-pseudo.ll a little closer to
working with -verify-machineinstrs.

llvm-svn: 138164
2011-08-20 00:17:45 +00:00
Chad Rosier be7625161e VMOVQQQQs pseudo instructions are only created by ARMBaseInstrInfo::copyPhysReg.
Therefore, rather than generating a pseudo instruction, which is later expanded,
generate the necessary instructions in place.

llvm-svn: 138163
2011-08-20 00:17:25 +00:00
Owen Anderson 732f82c463 Rewrite some ARM InstrInfo functions to be more accepting of arbitrary register subclasses. Hopefully this fixes some buildbots.
llvm-svn: 137223
2011-08-10 17:21:20 +00:00
Jakob Stoklund Olesen 6a14dc01ff Promote VMOVS to VMOVD when possible.
On Cortex-A8, we use the NEON v2f32 instructions for f32 arithmetic. For
better latency, we also send D-register copies down the NEON pipeline by
translating them to vorr instructions.

This patch promotes even S-register copies to D-register copies when
possible so they can also go down the NEON pipeline.  Example:

        vldr.32 s0, LCPI0_0
    loop:
        vorr    d1, d0, d0
    loop2:
        ...
        vadd.f32        d1, d1, d16

The vorr instruction looked like this after regalloc:

    %S2<def> = COPY %S0, %D1<imp-def>

Copies involving odd S-registers, and copies that don't define the full
D-register are left alone.

llvm-svn: 137182
2011-08-09 23:41:44 +00:00
Jakob Stoklund Olesen c04a66b48e Implement isLoadFromStackSlotPostFE and isStoreToStackSlotPostFE for ARM.
They improve the verbose assembly.

llvm-svn: 137069
2011-08-08 21:45:32 +00:00
Owen Anderson b595ed0085 Split up the ARM so_reg ComplexPattern into so_reg_reg and so_reg_imm, allowing us to distinguish the encodings that use shifted registers from those that use shifted immediates. This is necessary to allow the fixed-length decoder to distinguish things like BICS vs LDRH.
llvm-svn: 135693
2011-07-21 18:54:16 +00:00
Evan Cheng a20cde31e7 Sink ARMMCExpr and ARMAddressingModes into MC layer. First step to separate ARM MC code from target.
llvm-svn: 135636
2011-07-20 23:34:39 +00:00
Owen Anderson 454e1c7abb Remove VMOVDneon and VMOVQ, which are just aliases for VORR. This continues to simplify the path towards an auto-generated disassembler.
llvm-svn: 135290
2011-07-15 18:46:47 +00:00
Evan Cheng bc153d49b7 Next round of MC refactoring. This patch factors MC table instantiations, MC
registration, and creation code into XXXMCDesc libraries.

llvm-svn: 135184
2011-07-14 20:59:42 +00:00
Owen Anderson 651b230ca0 Add a target-independent entry to MCInstrDesc to describe the encoded size of an opcode. Switch ARM over to using that rather than its own special MCInstrDesc bits.
llvm-svn: 135106
2011-07-13 23:22:26 +00:00
Jakub Staszak 9b07c0ab6b Use BranchProbability instead of floating points in IfConverter.
llvm-svn: 134858
2011-07-10 02:58:07 +00:00
Evan Cheng 703a0fbf39 Hide the call to InitMCInstrInfo in the tblgen-generated ctor.
llvm-svn: 134244
2011-07-01 17:57:27 +00:00
Jim Grosbach d86f34d631 Refactor away tSpill and tRestore pseudos in ARM backend.
The tSpill and tRestore instructions are just copies of the tSTRspi and
tLDRspi instructions, respectively. Just use those directly instead.

llvm-svn: 134092
2011-06-29 20:26:39 +00:00
Evan Cheng 194c3dc01f Move CallFrameSetupOpcode and CallFrameDestroyOpcode to TargetInstrInfo.
llvm-svn: 134030
2011-06-28 21:14:33 +00:00
Evan Cheng 1e210d08d8 Merge XXXGenRegisterNames.inc into XXXGenRegisterInfo.inc
llvm-svn: 134024
2011-06-28 20:07:07 +00:00