Commit Graph

584 Commits

Weiming Zhao 090edf7e67 temporarily revert the patch due to some conflicts
llvm-svn: 175107
2013-02-13 23:24:40 +00:00
Weiming Zhao 0632a4b002 Bug fix 13622: Add paired register support for inline asm with 64-bit data on ARM
llvm-svn: 175088
2013-02-13 21:43:02 +00:00
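Not from the commit itself — a minimal C sketch of the kind of source this enables, assuming GCC/Clang-style inline asm where an "r" constraint on a 64-bit value yields a register pair and the ARM operand modifiers %Q/%R name its low and high words:

    /* Hypothetical example: swap the two 32-bit halves of a 64-bit value.
     * "=&r" on a long long asks for an (early-clobbered) register pair;
     * %Q selects the low word of the pair, %R the high word. */
    static inline long long swap_halves(long long v) {
        long long r;
        __asm__("mov %Q0, %R1\n\t"   /* low(r)  = high(v) */
                "mov %R0, %Q1"       /* high(r) = low(v)  */
                : "=&r"(r)
                : "r"(v));
        return r;
    }
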
Chandler Carruth 9fb823bbd4 Move all of the header files which are involved in modelling the LLVM IR
into their new header subdirectory: include/llvm/IR. This matches the
directory structure of lib, and begins to correct a long standing point
of file layout clutter in LLVM.

There are still more header files to move here, but I wanted to handle
them in separate commits to make tracking what files make sense at each
layer easier.

The only really questionable files here are the target intrinsic
tablegen files. But that's a battle I'd rather not fight today.

I've updated both CMake and Makefile build systems (I think, and my
tests think, but I may have missed something).

I've also re-sorted the includes throughout the project. I'll be
committing updates to Clang, DragonEgg, and Polly momentarily.

llvm-svn: 171366
2013-01-02 11:36:10 +00:00
Evan Cheng eae6d2ccea LLVM sdisel normalizes bit extraction of the form:
((x & 0xff00) >> 8) << 2
to
 (x >> 6) & 0x3fc

This is general goodness since it folds a left shift into the mask. However,
the trailing zeros in the mask prevent the ARM backend from using the bit
extraction instructions. And worse, the mask materialization may require an
additional instruction. This comes up fairly frequently when the result of
the bit twiddling is used as a memory address. e.g.

 = ptr[(x & 0xFF0000) >> 16]

We want to generate:
  ubfx   r3, r1, #16, #8
  ldr.w  r3, [r0, r3, lsl #2]

vs.
  mov.w  r9, #1020
  and.w  r2, r9, r1, lsr #14
  ldr    r2, [r0, r2]

Add a late ARM specific isel optimization to
ARMDAGToDAGISel::PreprocessISelDAG(). It folds the left shift to the
'base + offset' address computation; change the mask to one which doesn't have
trailing zeros and enable the use of ubfx.

Note the optimization has to be done late since it's target specific and we
don't want to change the DAG normalization. It's also fairly restrictive
as shifter operands are not always free. It's only done for lsl #1 / #2. It's
known to be free on some cpus and they are most common for address
computation.

This is a slight win for blowfish, rijndael, etc.

rdar://12870177

llvm-svn: 170581
2012-12-19 20:16:09 +00:00
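As a standalone sanity check of the algebra behind that normalization (illustrative, not from the commit) — both forms select the same eight bits:

    #include <assert.h>
    #include <stdint.h>

    int main(void) {
        /* ((x & 0xff00) >> 8) << 2 moves bits 8..15 to positions 2..9;
           (x >> 6) & 0x3fc keeps exactly the same bits. */
        for (uint32_t x = 0; x <= 0xffff; ++x)
            assert((((x & 0xff00u) >> 8) << 2) == ((x >> 6) & 0x3fcu));
        return 0;
    }
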
Chandler Carruth ed0881b2a6 Use the new script to sort the includes of every file under lib.
Sooooo many of these had incorrect or strange main module includes.
I have manually inspected all of these, and fixed the main module
include to be the nearest plausible thing I could find. If you own or
care about any of these source files, I encourage you to take some time
and check that these edits were sensible. I can't have broken anything
(I strictly added headers, and reordered them, never removed), but they
may not be the headers you'd really like to identify as containing the
API being implemented.

Many forward declarations and missing includes were added to header
files to allow them to parse cleanly when included first. The main
module rule does in fact have its merits. =]

llvm-svn: 169131
2012-12-03 16:50:05 +00:00
Silviu Baranga 93aefa5f2c Added atomic 64 min/max/umin/umax intrinsics support in the ARM backend.
llvm-svn: 168886
2012-11-29 14:41:25 +00:00
Weiming Zhao 9578222e0d Rename methods like PairSRegs() to createSRegpairNode() to meet our coding
style requirement.

llvm-svn: 168229
2012-11-17 00:23:35 +00:00
Weiming Zhao 8f56f88661 Remove hard coded registers in ARM ldrexd and strexd instructions
This patch replaces the hard coded GPR pair [R0, R1] of
Intrinsic:arm_ldrexd and [R2, R3] of Intrinsic:arm_strexd with
even/odd GPRPair reg class.
This is similar to the lowering of the atomic_64 operations.

llvm-svn: 168207
2012-11-16 21:55:34 +00:00
Bob Wilson e8a549cd92 Add LLVM support for Swift.
llvm-svn: 164899
2012-09-29 21:43:49 +00:00
Sylvestre Ledru 91ce36c986 Revert 'Fix a typo 'iff' => 'if''. iff is an abbreviation of if and only if. See: http://en.wikipedia.org/wiki/If_and_only_if Commit 164767
llvm-svn: 164768
2012-09-27 10:14:43 +00:00
Sylvestre Ledru 721cffd53a Fix a typo 'iff' => 'if'
llvm-svn: 164767
2012-09-27 09:59:43 +00:00
Dmitri Gribenko 5485acd440 Fix Doxygen issues:
* wrap code blocks in \code ... \endcode;
* refer to parameter names in paragraphs correctly (\arg is not what most
  people want -- it starts a new paragraph);
* use \param instead of \arg to document parameters in order to be consistent
  with the rest of the codebase.

llvm-svn: 163902
2012-09-14 14:57:36 +00:00
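For reference, a small hypothetical declaration in the style the commit moves toward (\param, \p, and \code/\endcode are standard Doxygen commands):

    /// Extracts an unsigned bitfield from \p Val.
    ///
    /// \param Val   the value to extract from
    /// \param Lsb   index of the least significant bit of the field
    /// \param Width width of the field in bits
    ///
    /// Typical use:
    /// \code
    ///   unsigned Field = extractBits(Val, 4, 8);
    /// \endcode
    unsigned extractBits(unsigned Val, unsigned Lsb, unsigned Width);
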
Silviu Baranga b47bb94f93 This patch introduces A15 as a target in LLVM.
llvm-svn: 163803
2012-09-13 15:05:10 +00:00
Arnold Schwaighofer f00fb1c581 Patch to implement UMLAL/SMLAL instructions for the ARM architecture
This patch corrects the definition of umlal/smlal instructions and adds support
for matching them to the ARM dag combiner.

Bug 12213

Patch by Yin Ma!

llvm-svn: 163136
2012-09-04 14:37:49 +00:00
Jakob Stoklund Olesen e1014e7b98 Remove the CAND/COR/CXOR custom ISD nodes and their select code.
These nodes are no longer needed because the peephole pass can fold
CMOV+AND into ANDCC etc.

llvm-svn: 162179
2012-08-18 21:49:50 +00:00
Jakob Stoklund Olesen 2ec0c41e01 Add missing Rfalse operand to the predicated pseudo-instructions.
When predicating this instruction:

  Rd = ADD Rn, Rm

We need an extra operand to represent the value given to Rd when the
predicate is false:

  Rd = ADDCC Rfalse, Rn, Rm, pred

The Rd and Rfalse operands are different registers while in SSA form.
Rfalse is tied to Rd to make sure they get the same register during
register allocation.

Previously, Rd and Rn were tied, but that is not required.

Compare to MOVCC:

  Rd = MOVCC Rfalse, Rtrue, pred

llvm-svn: 161955
2012-08-15 16:17:24 +00:00
Arnold Schwaighofer b73da9453c Revert 161581: Patch to implement UMLAL/SMLAL instructions for the ARM
architecture

It broke MultiSource/Applications/JM/ldecod/ldecod on armv7 Thumb at -O0 -g and
armv7 Thumb at -O3.

llvm-svn: 161736
2012-08-12 05:11:56 +00:00
Arnold Schwaighofer 81b2eec1ab Patch to implement UMLAL/SMLAL instructions for the ARM architecture
This patch corrects the definition of umlal/smlal instructions and adds support
for matching them to the ARM dag combiner.

Bug 12213

Patch by Yin Ma!

llvm-svn: 161581
2012-08-09 15:25:52 +00:00
Jim Grosbach 96e8a8dc6d Clean up formatting.
llvm-svn: 161133
2012-08-01 20:33:02 +00:00
Jim Grosbach b437a8c5d5 Tidy up.
llvm-svn: 161132
2012-08-01 20:33:00 +00:00
Craig Topper 01736f866a Make some opcode tables static and const. Allows code to avoid making copies to pass the tables around.
llvm-svn: 157373
2012-05-24 05:17:00 +00:00
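A hypothetical C sketch of the point: a static const table has a fixed address, so callers can pass a pointer to it rather than copying the array:

    #include <stdint.h>

    /* Lives in read-only storage; never copied. */
    static const uint16_t DRegOpcodes[] = { 0x1a0, 0x1a1, 0x1a2, 0x1a3 };

    static uint16_t pickOpcode(const uint16_t *Table, unsigned Lanes) {
        return Table[Lanes & 3];      /* indexes the shared table in place */
    }

    uint16_t selectOpcode(unsigned Lanes) {
        return pickOpcode(DRegOpcodes, Lanes);
    }
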
Tim Northover 6699a60b0e Test commit.
llvm-svn: 155626
2012-04-26 08:24:07 +00:00
Jim Grosbach 6e536de1a1 ARM 'vuzp.32 Dd, Dm' is a pseudo-instruction.
While there is an encoding for it in VUZP, the result of that is undefined,
so we should avoid it. Define the instruction as a pseudo for VTRN.32
instead, as the ARM ARM indicates.

rdar://11222366

llvm-svn: 154511
2012-04-11 17:40:18 +00:00
Jim Grosbach 4640c8169f ARM 'vzip.32 Dd, Dm' is a pseudo-instruction.
While there is an encoding for it in VZIP, the result of that is undefined,
so we should avoid it. Define the instruction as a pseudo for VTRN.32
instead, as the ARM ARM indicates.

rdar://11221911

llvm-svn: 154505
2012-04-11 16:53:25 +00:00
Jim Grosbach 13a292cc74 ARM refactor more NEON VLD/VST instructions to use composite physregs
Register pair VLD1/VLD2 all-lanes instructions. Kill off more of the
pseudos as a result.

llvm-svn: 152150
2012-03-06 22:01:44 +00:00
Jim Grosbach c988e0c521 ARM refactor away a bunch of VLD/VST pseudo instructions.
With the new composite physical registers to represent arbitrary pairs
of DPR registers, we don't need the pseudo-registers anymore. Get rid of
a bunch of them that use DPR register pairs and just use the real
instructions directly instead.

llvm-svn: 152045
2012-03-05 19:33:30 +00:00
Duncan Sands a354d58f8d Remove unused variable.
llvm-svn: 151251
2012-02-23 11:01:22 +00:00
Evan Cheng e87681cf34 Optimize a couple of common patterns involving conditional moves where the false
value is zero. Instead of a cmov + op, issue a conditional op. e.g.
    cmp   r9, r4
    mov   r4, #0
    moveq r4, #1
    orr   lr, lr, r4

should be:
    cmp   r9, r4
    orreq lr, lr, #1

That is, optimize (or x, (cmov 0, y, cond)) to (or.cond x, y). Similarly extend
this to xor as well as (and x, (cmov -1, y, cond)) => (and.cond x, y).

It's possible to extend this to ADD and SUB but I don't think they are common.

rdar://8659097

llvm-svn: 151224
2012-02-23 01:19:06 +00:00
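In C terms, a hypothetical function producing the (or x, (cmov 0, y, cond)) pattern this combine targets:

    /* The ternary's false value is zero, so the cmov feeding the OR can
     * become a single predicated orreq instead of mov + moveq + orr. */
    unsigned set_flag(unsigned flags, unsigned a, unsigned b) {
        return flags | (a == b ? 1u : 0u);
    }
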
Craig Topper e55c556a24 Convert assert(0) to llvm_unreachable
llvm-svn: 149961
2012-02-07 02:50:20 +00:00
David Blaikie 46a9f016c5 More dead code removal (using -Wunreachable-code)
llvm-svn: 148578
2012-01-20 21:51:11 +00:00
Jim Grosbach 74ac7d50a1 ARM updating VST2 pseudo-lowering fixed vs. register update.
rdar://10663487

llvm-svn: 147876
2012-01-10 21:11:12 +00:00
Jim Grosbach c80a264386 ARM NEON assembly parsing for VLD2 to all lanes instructions.
llvm-svn: 147069
2011-12-21 19:40:55 +00:00
Jim Grosbach 88ac761aa4 ARM NEON refactor VST2 w/ writeback instructions.
In addition to improving the representation, this adds support for assembly
parsing of these instructions.

llvm-svn: 146588
2011-12-14 21:32:11 +00:00
Jim Grosbach d146a02c79 ARM assembly parsing and encoding for VLD2 with writeback.
Refactor the instructions into fixed writeback and register-stride
writeback variants to simplify the offset operand (no more optional
register operand using reg0). This is a simpler representation and allows
the assembly parser to more easily handle these instructions.

Add tests for the instruction variants now supported.

llvm-svn: 146278
2011-12-09 21:28:25 +00:00
Jim Grosbach 5ee209ce3a ARM assembly parsing and encoding for four-register VST1.
llvm-svn: 145450
2011-11-29 22:58:48 +00:00
Jim Grosbach 98d032fd67 ARM assembly parsing and encoding for three-register VST1.
llvm-svn: 145442
2011-11-29 22:38:04 +00:00
Jim Grosbach 05df460269 ARM VST1 w/ writeback assembly parsing and encoding.
llvm-svn: 143369
2011-10-31 21:50:31 +00:00
Jakob Stoklund Olesen e5a6adceac Also set addrmode6 alignment when align==size.
Previously, we were only setting the alignment bits on over-aligned
loads and stores.

llvm-svn: 143160
2011-10-27 22:39:16 +00:00
Jim Grosbach 12a39540bb ARM isel for vld1, opcode selection for register stride post-index pseudos.
llvm-svn: 143158
2011-10-27 22:25:42 +00:00
Jim Grosbach 2098cb1e6f ARM refactor am6offset usage for VLD1.
Split am6offset into fixed and register offset variants so the instruction
encodings are explicit rather than relying on a magic reg0 marker.
This is needed to be able to parse these.

llvm-svn: 142853
2011-10-24 21:45:13 +00:00
Eli Friedman 4c42be5b32 Fix misc warnings. Patch by Joe Abbey.
llvm-svn: 142332
2011-10-18 03:17:34 +00:00
Bill Wendling a7d697e4a6 Reapply r141365 now that PR11107 is fixed.
llvm-svn: 141591
2011-10-10 22:59:55 +00:00
Bill Wendling 47aac51043 Revert r141365. It was causing MultiSource/Benchmarks/MiBench/consumer-lame to
hang, and possibly SPEC/CINT2006/464_h264ref.

llvm-svn: 141560
2011-10-10 18:27:30 +00:00
Anton Korobeynikov e45373520d Disable ABS optimization for Thumb1 target; we don't have the necessary instructions there.
llvm-svn: 141481
2011-10-08 08:38:45 +00:00
Anton Korobeynikov 318d6bae80 Peephole optimization for ABS on ARM.
Patch by Ana Pazos!

llvm-svn: 141365
2011-10-07 16:15:08 +00:00
Cameron Zwarich 842f99a6ee Always merge profitable shifts on A9, not just when they have a single use.
llvm-svn: 141248
2011-10-05 23:39:02 +00:00
Cameron Zwarich 87aa18378e Remove a check from ARM shifted operand isel helper methods, which were blocking
merging an lsl #2 that has multiple uses on A9. This shift is free, so there is
no problem merging it in multiple places. Other unprofitable shifts will not be
merged.

llvm-svn: 141247
2011-10-05 23:38:50 +00:00
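A hypothetical C example of the case being unblocked — one lsl #2 implicitly feeds two addressing computations:

    /* On A9 the shift is free as a shifter operand, so it is profitable
     * to fold "i << 2" into both loads, e.g. ldr r0, [rA, rI, lsl #2],
     * even though the shift has two uses. */
    int sum_at(const int *a, const int *b, int i) {
        return a[i] + b[i];
    }
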
Cameron Zwarich 2226b4be09 Add braces around something that throws me for a loop.
llvm-svn: 141173
2011-10-05 08:59:10 +00:00
Cameron Zwarich 6a7aa237cc There is no point in setting out-parameters for a ComplexPattern function when
it returns false, at least as far as I could tell by reading the code.

llvm-svn: 141172
2011-10-05 08:59:05 +00:00
Jakob Stoklund Olesen 2056d15bd9 Also match negative offsets for addrmode3 and addrmode5.
Math is hard, and isScaledConstantInRange() always returned false for
negative constants.  It was doing unsigned division of negative numbers
before casting back to signed.

llvm-svn: 140425
2011-09-23 22:10:33 +00:00
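A minimal C illustration of the arithmetic mistake described (not the LLVM code itself):

    #include <stdio.h>

    int main(void) {
        int Offset = -8, Scale = 4;
        /* Dividing as unsigned first turns -8 into a huge value, so the
           result is garbage once interpreted as signed again. */
        int Bad  = (int)((unsigned)Offset / (unsigned)Scale); /* 1073741822 */
        int Good = Offset / Scale;                            /* -2 */
        printf("bad=%d good=%d\n", Bad, Good);
        return 0;
    }
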
Jim Grosbach e7e2aca322 Tidy up a few 80 column violations.
llvm-svn: 139636
2011-09-13 20:30:37 +00:00
Owen Anderson 939cd21248 When performing instruction selection for LDR_PRE_IMM/LDRB_PRE_IMM, we still need to preserve the sign of the index. This fixes miscompilations of Quicksort in the nightly testsuite, and hopefully others as well.
<rdar://problem/10046188>

llvm-svn: 138885
2011-08-31 20:00:11 +00:00
Eli Friedman 1ccecbb9d3 64-bit atomic cmpxchg for ARM.
llvm-svn: 138868
2011-08-31 17:52:22 +00:00
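For context, a standard C11 sketch of the operation gaining support here (illustrative; this 64-bit compare-and-swap is the kind of thing lowered to an ldrexd/strexd loop on ARM):

    #include <stdatomic.h>
    #include <stdio.h>

    int main(void) {
        _Atomic long long Counter = 5;
        long long Expected = 5;
        /* Succeeds only if Counter still holds Expected. */
        if (atomic_compare_exchange_strong(&Counter, &Expected, 9))
            printf("swapped, counter is now %lld\n", (long long)Counter);
        return 0;
    }
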
Eli Friedman c3f9c4a852 Some 64-bit atomic operations on ARM. 64-bit cmpxchg coming next.
llvm-svn: 138845
2011-08-31 00:31:29 +00:00
Owen Anderson 4d5c8f894d addrmode_imm12 and addrmode2_offset encode their immediate values differently. Update the manual instruction selection code that was encoding them the addrmode2 way even though LDR_PRE_IMM/LDRB_PRE_IMM had switched to addrmode_imm12. Should fix a number of nightly test failures.
llvm-svn: 138758
2011-08-29 20:16:50 +00:00
Owen Anderson fd60f60ed1 Fix ARM codegen breakage caused by r138653.
llvm-svn: 138657
2011-08-26 21:12:37 +00:00
Owen Anderson 16d33f36d5 invalid-LDR_PRE-arm.txt was already passing, but for the wrong reasons. We were failing to specify enough fixed bits of LDR_PRE/LDRB_PRE, resulting in decoding conflicts. Separate them into immediate vs. register versions, allowing us to specify the necessary fixed bits. This in turn results in the test being decoded properly, and being rejected as UNPREDICTABLE rather than a hard failure.
llvm-svn: 138653
2011-08-26 20:43:14 +00:00
Jim Grosbach 1b8457a84c Thumb1 ADD/SUB SP instructions are predicable in Thumb2 mode.
Add the predicate operand to the instructions. Update the back end
accordingly where the instructions are used. Restrict the SP operands
to actually only be SP, as otherwise these break assembly parsing for the
normal instruction variants.

llvm-svn: 138445
2011-08-24 17:46:13 +00:00
Jim Grosbach f0c95cadc7 ARM refactor indexed store instructions.
Refactor STR[B] pre and post indexed instructions to use addressing modes for
memory operands, which is necessary for assembly parsing and is more consistent
with the rest of the memory instruction definitions. Make some incremental
progress on refactoring away the mega-operand addrmode2 along the way, which
is nice.

llvm-svn: 136978
2011-08-05 20:35:44 +00:00
Jim Grosbach 03f56d9de6 ARM parsing and encoding of SBFX and UBFX.
Encode the width operand as it encodes in the instruction, which simplifies
the disassembler and the encoder, by using the imm1_32 operand def. Add a
diagnostic for the context-sensitive constraint that the width must be in
the range [1,32-lsb].

llvm-svn: 136264
2011-07-27 21:09:25 +00:00
Owen Anderson 2aedba6c5e Split am2offset into register addend and immediate addend forms, necessary for allowing the fixed-length disassembler to distinguish between SBFX and STR_PRE.
llvm-svn: 136141
2011-07-26 20:54:26 +00:00
Owen Anderson 3fa7ca84d9 Fix test failures caused by my so_reg refactoring.
llvm-svn: 135785
2011-07-22 18:30:30 +00:00
Owen Anderson 0491270f99 Get rid of the extraneous GPR operand on so_reg_imm operands, which in turn necessitates a lot of changes to related bits.
llvm-svn: 135722
2011-07-21 23:38:37 +00:00
Owen Anderson b595ed0085 Split up the ARM so_reg ComplexPattern into so_reg_reg and so_reg_imm, allowing us to distinguish the encodings that use shifted registers from those that use shifted immediates. This is necessary to allow the fixed-length decoder to distinguish things like BICS vs LDRH.
llvm-svn: 135693
2011-07-21 18:54:16 +00:00
Evan Cheng a20cde31e7 Sink ARMMCExpr and ARMAddressingModes into MC layer. First step to separate ARM MC code from target.
llvm-svn: 135636
2011-07-20 23:34:39 +00:00
Evan Cheng 6cc775f905 - Rename TargetInstrDesc, TargetOperandInfo to MCInstrDesc and MCOperandInfo and
sink them into MC layer.
- Added MCInstrInfo, which captures the tablegen generated static data. Change
TargetInstrInfo so it's based off MCInstrInfo.

llvm-svn: 134021
2011-06-28 19:10:37 +00:00
Owen Anderson 5fc8b77f83 Change the REG_SEQUENCE SDNode to take an explicit register class ID as its first operand. This operand is lowered away by the time we reach MachineInstrs, so the actual register-allocation handling of them doesn't need to change.
This is intended to support using REG_SEQUENCE SDNode's with type MVT::untyped, and is part of the long road to eliminating some of the hacks we currently use to support register pairs and other strange constraints, particularly on ARM NEON.

llvm-svn: 133178
2011-06-16 18:17:13 +00:00
Bruno Cardoso Lopes 325110f30d Add support for ARM ldrexd/strexd intrinsics. They both use i32 register pairs
to load/store i64 values. Since there's no current support to explicitly
declare such restrictions, implement it by using specific hardcoded register
pairs during isel.

llvm-svn: 132248
2011-05-28 04:07:29 +00:00
Eli Friedman 468dfabce0 Zap a couple now-unused functions.
llvm-svn: 130557
2011-04-29 22:56:48 +00:00
Bob Wilson 0858c3aaed This patch combines several changes from Evan Cheng for rdar://8659675.
Making use of VFP / NEON floating point multiply-accumulate / subtraction is
difficult on current ARM implementations for a few reasons.
1. Even though a single vmla has latency that is one cycle shorter than a pair
   of vmul + vadd, a RAW hazard during the first few cycles (4? on Cortex-A8)
   can cause an additional pipeline stall. So it's frequently better to simply
   codegen vmul + vadd.
2. A vmla followed by a vmul, vmadd, or vsub causes the second fp instruction to
   stall for 4 cycles. We need to schedule them apart.
3. A vmla followed by a vmla is a special case. Obviously, issuing back-to-back
   RAW vmla + vmla is very bad. But this isn't ideal either:
     vmul
     vadd
     vmla
   Instead, we want to expand the second vmla:
     vmla
     vmul
     vadd
   Even with the 4 cycle vmul stall, the second sequence is still 2 cycles
   faster.

Up to now, isel simply avoids codegen'ing fp vmla / vmls. This works well enough
but it isn't the optimal solution. This patch attempts to make it possible to
use vmla / vmls in cases where it is profitable.

A. Add missing isel predicates which cause vmla to be codegen'ed.
B. Make sure the fmul in (fadd (fmul)) has a single use. We don't want to
   compute a fmul and a fmla.
C. Add additional isel checks for vmla, avoid cases where vmla is feeding into
   fp instructions (except for the exceptional case 3 above).
D. Add ARM hazard recognizer to model the vmla / vmls hazards.
E. Add a special pre-regalloc case to expand vmla / vmls when it's likely the
   vmla / vmls will trigger one of the special hazards.

Enable these fp vmlx codegen changes for Cortex-A9.

llvm-svn: 129775
2011-04-19 18:11:57 +00:00
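A hypothetical C illustration of point B above:

    /* In f() the multiply has a single use, so fusing it into a vmla
     * loses nothing.  In g() the product is also needed on its own, so
     * fusing would end up computing both a vmul and a vmla. */
    float f(float a, float b, float c) {
        return a + b * c;      /* single use: vmla candidate */
    }

    float g(float a, float b, float c, float *prod) {
        float m = b * c;
        *prod = m;             /* second use of the multiply */
        return a + m;          /* better left as vmul + vadd */
    }
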
Evan Cheng 4079133796 Do not lose mem_operands while lowering VLD / VST intrinsics.
llvm-svn: 129738
2011-04-19 00:04:03 +00:00
Owen Anderson 6d55745d2f Reduce code duplication.
llvm-svn: 127899
2011-03-18 19:46:58 +00:00
Bill Wendling e1fd78f2bc Generate a VTBL instruction instead of a series of loads and stores when we
can. As Nate pointed out, VTBL isn't super performant, but it *has* to be better
than this:

_shuf:
@ BB#0:       @ %entry
  push        {r4, r7, lr}
  add         r7, sp, 
  sub         sp, 
  mov         r4, sp
  bic         r4, r4, 
  mov         sp, r4
  mov         r2, sp
  vmov        d16, r0, r1
  orr         r0, r2, 
  orr         r3, r2, 
  vst1.8      {d16[0]}, [r3]
  vst1.8      {d16[5]}, [r0]
  subs        r4, r7, 
  orr         r0, r2, 
  vst1.8      {d16[4]}, [r0]
  orr         r0, r2, 
  vst1.8      {d16[4]}, [r0]
  orr         r0, r2, 
  vst1.8      {d16[0]}, [r0]
  orr         r0, r2, 
  vst1.8      {d16[2]}, [r0]
  orr         r0, r2, 
  vst1.8      {d16[1]}, [r0]
  vst1.8      {d16[3]}, [r2]
  vldr.64     d16, [sp]
  vmov        r0, r1, d16
  mov         sp, r4
  pop         {r4, r7, pc}

The "illegal" testcase in vext.ll is no longer illegal.
<rdar://problem/9078775>

llvm-svn: 127630
2011-03-14 23:02:38 +00:00
Jim Grosbach 2fee5327aa Remove dead code. These ARM instruction definitions no longer exist.
llvm-svn: 127509
2011-03-11 23:15:02 +00:00
Bob Wilson 00d09428fe Remove unused conditional negate operations.
llvm-svn: 127090
2011-03-05 16:54:31 +00:00
Bob Wilson e3ecd5fb9b Add patterns to use post-increment addressing for Neon VST1-lane instructions.
llvm-svn: 126477
2011-02-25 06:42:42 +00:00
Chris Lattner 46c01a30f4 Enhance ComputeMaskedBits to know that aligned frameindexes
have their low bits set to zero.  This allows us to optimize
out explicit stack alignment code like in stack-align.ll:test4 when
it is redundant.

Doing this causes the code generator to start turning FI+cst into
FI|cst all over the place, which is general goodness (that is the
canonical form) except that various pieces of the code generator
don't handle OR aggressively.  Fix this by introducing a new
SelectionDAG::isBaseWithConstantOffset predicate, and using it
in places that are looking for ADD(X,CST).  The ARM backend in
particular was missing a lot of addressing mode folding opportunities
around OR.

llvm-svn: 125470
2011-02-13 22:25:43 +00:00
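A quick C check of the identity that makes the ADD-to-OR view sound (illustrative, assuming an 8-byte-aligned base):

    #include <assert.h>
    #include <stdint.h>

    int main(void) {
        /* When the base's low bits are known zero and the constant fits
           in them, addition and OR are the same operation. */
        for (uint32_t base = 0; base < 1024; base += 8)
            for (uint32_t cst = 0; cst < 8; ++cst)
                assert(base + cst == (base | cst));
        return 0;
    }
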
Bob Wilson 06fce87c4a Add codegen support for using post-increment NEON load/store instructions.
The vld1-lane, vld1-dup and vst1-lane instructions do not yet support using
post-increment versions, but all the rest of the NEON load/store instructions
should be handled now.

llvm-svn: 125014
2011-02-07 17:43:21 +00:00
Bob Wilson a609b8954e Change VLD3/4 and VST3/4 for quad registers to not update the address register.
These operations are expanded to pairs of loads or stores, and the first one
uses the address register update to produce the address for the second one.
So far, the second load/store has also updated the address register, just
for convenience, since that output has never been used.  In anticipation of
actually supporting post-increment updates for these operations, this changes
the non-updating operations to use a non-updating load/store for the second
instruction.

llvm-svn: 125013
2011-02-07 17:43:15 +00:00
Evan Cheng b8b0ad80a8 Sorry, several patches in one.
TargetInstrInfo:
Change produceSameValue() to take MachineRegisterInfo as an optional argument.
When in SSA form, targets can use it to make more aggressive equality analysis.

Machine LICM:
1. Eliminate isLoadFromConstantMemory, use MI.isInvariantLoad instead.
2. Fix a bug which prevented CSE of instructions which are not re-materializable.
3. Use improved form of produceSameValue.

ARM:
1. Teach ARM produceSameValue to look past some PIC labels.
2. Look for operands from different loads of different constant pool entries
   which have same values.
3. Re-implement PIC GA materialization using movw + movt. Combine the pair with
   a "add pc" or "ldr [pc]" to form pseudo instructions. This makes it possible
   to re-materialize the instruction, allows machine LICM to hoist the set of
   instructions out of the loop, and makes it possible to CSE them. It's a bit
   hacky, but it significantly improves code quality.
4. Some minor bug fixes as well.

With the fixes, using movw + movt to materialize GAs significantly outperforms the
load from constantpool method. 186.crafty and 255.vortex improved > 20%, 254.gap
and 176.gcc ~10%.

llvm-svn: 123905
2011-01-20 08:34:58 +00:00
Daniel Dunbar e0cd9ac096 ARM/ISel: Factor out isScaledConstantInRange() helper.
llvm-svn: 123823
2011-01-19 15:12:16 +00:00
Evan Cheng dfce83c8f5 Materialize GA addresses with movw + movt pairs for Darwin in PIC mode. e.g.
        movw    r0, :lower16:(L_foo$non_lazy_ptr-(LPC0_0+4))
        movt    r0, :upper16:(L_foo$non_lazy_ptr-(LPC0_0+4))
LPC0_0:
        add     r0, pc, r0

It's not yet enabled by default as some tests are failing. I suspect bugs in
downstream tools.

llvm-svn: 123619
2011-01-17 08:03:18 +00:00
Anton Korobeynikov 62acecd7e1 Model operand restrictions of mul-like instructions on ARMv5 via
earlyclobber stuff. This should fix PRs 2313 and 8157.

Unfortunately, no testcase, since it'd be dependent on register
assignments.

llvm-svn: 122663
2011-01-01 20:38:38 +00:00
Andrew Trick c416ba612b whitespace
llvm-svn: 122539
2010-12-24 04:28:06 +00:00
Chris Lattner 3e5fbd74ed rename MVT::Flag to MVT::Glue. "Flag" is a terrible name for
something that just glues two nodes together, even if it is
sometimes used for flags.

llvm-svn: 122310
2010-12-21 02:38:05 +00:00
Bob Wilson 261aad8e16 Use PairDRegs to implement ConcatVectors. No functionality change.
llvm-svn: 122017
2010-12-17 01:21:08 +00:00
Jim Grosbach bfef309d11 Thumb1 had two patterns for the same load-from-constant-pool instruction.
Canonicalize on tLDRpci and remove tLDRcp.

llvm-svn: 121920
2010-12-15 23:52:36 +00:00
Bill Wendling 832a5daab5 Reapply r121808 now that the missing patterns have been supplied.
llvm-svn: 121820
2010-12-15 01:03:19 +00:00
Bill Wendling 20480d26e9 Revert r121808 until I can fix the build.
llvm-svn: 121815
2010-12-15 00:04:00 +00:00
Bill Wendling 00adcd6ed9 Make the ISel selections for LDR/STR the same as before the LDRr/LDRi split. In
particular, we want

   ldr r2, [r3]

to be equivalent to

   ldr r2, [r3, #0]

and not

   ldr r2, [r3, r0]

llvm-svn: 121808
2010-12-14 23:40:49 +00:00
Bill Wendling 092a7bdf9f The tLDR et al instructions were emitting either a reg/reg or reg/imm
instruction based on the t_addrmode_s# mode and what it returned. There is some
obvious badness to this. In particular, it's hard to do MC-encoding when the
instruction may change out from underneath you after the t_addrmode_s# variable
is finally resolved.

The solution is to revert a long-ago change that merged the reg/reg and reg/imm
versions. There is the addition of several new addressing modes. They no longer
have extraneous operands associated with them. I.e., if it's reg/reg we don't
have to have a dummy zero immediate tacked on to the SDNode.

There are some obvious cleanups here, which will happen shortly.

llvm-svn: 121747
2010-12-14 03:36:38 +00:00
Bob Wilson d29b38c893 Fix some invalid alignments for Neon vld-dup and vld/st-lane instructions.
Alignments smaller than the total size of the memory being loaded or stored
are not allowed, unless the alignment is 8 bytes.  Add tests for this, too.

llvm-svn: 121506
2010-12-10 19:37:42 +00:00
Evan Cheng 62c7b5bf76 Making use of VFP / NEON floating point multiply-accumulate / subtraction is
difficult on current ARM implementations for a few reasons.
1. Even though a single vmla has latency that is one cycle shorter than a pair
   of vmul + vadd, a RAW hazard during the first few cycles (4? on Cortex-A8)
   can cause an additional pipeline stall. So it's frequently better to simply
   codegen vmul + vadd.
2. A vmla followed by a vmul, vmadd, or vsub causes the second fp instruction to
   stall for 4 cycles. We need to schedule them apart.
3. A vmla followed by a vmla is a special case. Obviously, issuing back-to-back
   RAW vmla + vmla is very bad. But this isn't ideal either:
     vmul
     vadd
     vmla
   Instead, we want to expand the second vmla:
     vmla
     vmul
     vadd
   Even with the 4 cycle vmul stall, the second sequence is still 2 cycles
   faster.

Up to now, isel simply avoids codegen'ing fp vmla / vmls. This works well enough
but it isn't the optimal solution. This patch attempts to make it possible to
use vmla / vmls in cases where it is profitable.

A. Add missing isel predicates which cause vmla to be codegen'ed.
B. Make sure the fmul in (fadd (fmul)) has a single use. We don't want to
   compute a fmul and a fmla.
C. Add additional isel checks for vmla, avoid cases where vmla is feeding into
   fp instructions (except for the exceptional case 3 above).
D. Add ARM hazard recognizer to model the vmla / vmls hazards.
E. Add a special pre-regalloc case to expand vmla / vmls when it's likely the
   vmla / vmls will trigger one of the special hazards.

Work in progress, only A+B are enabled.

llvm-svn: 120960
2010-12-05 22:04:16 +00:00
Bob Wilson 431ac4ef50 Add support for NEON VLD4-dup instructions.
The encoding for alignment in VLD4-dup instructions is still a work in progress.

llvm-svn: 120356
2010-11-30 00:00:35 +00:00
Bob Wilson 77ab165afe Add support for NEON VLD3-dup instructions.
llvm-svn: 120312
2010-11-29 19:35:29 +00:00
Bob Wilson 2d790df105 Add support for NEON VLD2-dup instructions.
llvm-svn: 120236
2010-11-28 06:51:26 +00:00
Evan Cheng a5f048485f Fix a cut-n-paste error.
llvm-svn: 119866
2010-11-19 23:01:16 +00:00
Evan Cheng 39c81c0a55 Avoid isel movcc of large immediates when the large immediate is available in a register. These immediates aren't free.
llvm-svn: 119558
2010-11-17 20:56:30 +00:00
Evan Cheng 2bcb8daa44 Add conditional move of large immediate.
llvm-svn: 118968
2010-11-13 02:25:14 +00:00
Evan Cheng 8ce967e393 Fix an obvious typo which inverted an immediate.
llvm-svn: 118951
2010-11-13 00:27:47 +00:00
Evan Cheng 0fc8084a64 Add conditional mvn instructions.
llvm-svn: 118935
2010-11-12 22:42:47 +00:00
Duncan Sands 1462777017 Simplify uses of MVT and EVT. An MVT can be compared directly
with a SimpleValueType, while an EVT supports equality and
inequality comparisons with SimpleValueType.

llvm-svn: 118169
2010-11-03 12:17:33 +00:00
Jim Grosbach c6af2b4066 Break ARM addrmode4 (load/store multiple base address) into its constituent
parts. Represent the operation mode as an optional operand instead.
rdar://8614429

llvm-svn: 118137
2010-11-03 01:01:43 +00:00
Bob Wilson dd9fbaa9c0 Add support for alignment operands on VLD1-lane instructions.
This is another part of the fix for Radar 8599955.

llvm-svn: 117976
2010-11-01 23:40:51 +00:00
Evan Cheng 59bbc545e0 Shifter ops are not always free. Do not fold them (especially to form
complex load / store addressing mode) when they have higher cost and
when they have more than one use.

llvm-svn: 117509
2010-10-27 23:41:30 +00:00
Jim Grosbach 1e4d9a17c2 First part of refactoring ARM addrmode2 (load/store) instructions to be more
explicit about the operands. Split out the different variants into separate
instructions. This gives us the ability to, among other things, assign
different scheduling itineraries to the variants. rdar://8477752.

llvm-svn: 117409
2010-10-26 22:37:02 +00:00
Jim Grosbach d37f0715b1 trailing whitespace
llvm-svn: 117050
2010-10-21 19:38:40 +00:00
Bob Wilson b6d61dc291 Support alignment for NEON vld-lane and vst-lane instructions.
llvm-svn: 116776
2010-10-19 00:16:32 +00:00
Jim Grosbach 5b255c2dd6 Allow use of the 16-bit literal move instruction in CMOVs for Thumb2 mode.
llvm-svn: 115890
2010-10-07 00:53:56 +00:00
Jim Grosbach 742adc328a Allow use of the 16-bit literal move instruction in CMOVs for ARM mode.
llvm-svn: 115884
2010-10-07 00:42:42 +00:00
Jim Grosbach 0860520527 Add specializations of addrmode2 that allow differentiating those forms
which require the use of the shifter-operand. This will be used to split
the ldr/str instructions such that those versions needing the shifter operand
can get a different scheduling itinerary, as in some cases, the use of the
shifter can cause different scheduling than the simpler forms.

llvm-svn: 115066
2010-09-29 19:03:54 +00:00
Jim Grosbach c7b10f3745 Add braces for legibility.
llvm-svn: 115043
2010-09-29 17:32:29 +00:00
Bob Wilson 7fbbe9a43a Set alignment operand for NEON VST instructions.
llvm-svn: 114709
2010-09-23 23:42:37 +00:00
Bob Wilson 9eeb890172 Set alignment operand for NEON VLD instructions.
llvm-svn: 114696
2010-09-23 21:43:54 +00:00
Chris Lattner 0e023ea02a fix a long standing wart: all the ComplexPattern's were being
passed the root of the match, even though only a few patterns
actually needed this (one in X86, several in ARM [which should
be refactored anyway], and some in CellSPU that I don't feel 
like detangling).   Instead of requiring all ComplexPatterns to
take the dead root, have targets opt into getting the root by
putting SDNPWantRoot on the ComplexPattern.

llvm-svn: 114471
2010-09-21 20:31:19 +00:00
Eric Christopher 726838a3e5 Fix QOpcode assignment to Opc.
llvm-svn: 113837
2010-09-14 08:31:25 +00:00
Bob Wilson c597fd3b4a Convert some VTBL and VTBX instructions to use pseudo instructions prior to
register allocation.  Remove the NEONPreAllocPass, which is no longer needed.
Yeah!!

llvm-svn: 113818
2010-09-13 23:55:10 +00:00
Bob Wilson d5c57a5ed4 Switch all the NEON vld-lane and vst-lane instructions over to the new
pseudo-instruction approach.  Change ARMExpandPseudoInsts to use a table
to record all the NEON load/store information.

llvm-svn: 113812
2010-09-13 23:01:35 +00:00
Chris Lattner f43cb302ca remove some dead code. t2addrmode_imm8s4 is never used in a
pattern, so there is no need to define a matching function.

llvm-svn: 113122
2010-09-05 22:51:11 +00:00
Bob Wilson 35fafca587 Finish converting the rest of the NEON VLD instructions to use pseudo-
instructions prior to regalloc.  Since it's getting a little close to
the 2.8 branch deadline, I'll have to leave the rest of the instructions
handled by the NEONPreAllocPass for now, but I didn't want to leave half
of the VLD instructions converted and the other half not.

llvm-svn: 112983
2010-09-03 18:16:02 +00:00
Bob Wilson 75a6408f88 Convert VLD1 and VLD2 instructions to use pseudo-instructions until
after regalloc.

llvm-svn: 112825
2010-09-02 16:00:54 +00:00
Chris Lattner 39eccb4754 temporarily revert r112664, it is causing a decoding conflict, and
the testcases should be merged.

llvm-svn: 112711
2010-09-01 16:00:50 +00:00
Bill Wendling 6789f8b6ae We have a chance for an optimization. Consider this code:
int x(int t) {
  if (t & 256)
    return -26;
  return 0;
}

We generate this:

     tst.w   r0, #256
     mvn     r0, #25
     it      eq
     moveq   r0, #0

while gcc generates this:

     ands    r0, r0, #256
     it      ne
     mvnne   r0, #25
     bx      lr

Scandalous really!

During ISel time, we can look for this particular pattern. One where we have a
"MOVCC" that uses the flag off of a CMPZ that itself is comparing an AND
instruction to 0. Something like this (greatly simplified):

  %r0 = ISD::AND ...
  ARMISD::CMPZ %r0, 0         @ sets [CPSR]
  %r0 = ARMISD::MOVCC 0, -26  @ reads [CPSR]

All we have to do is convert the "ISD::AND" into an "ARM::ANDS" that sets [CPSR]
when it's zero. The zero value will already be in the %r0 register and we only
need to change it if the AND wasn't zero. Easy!

llvm-svn: 112664
2010-08-31 22:41:22 +00:00
Bob Wilson 950882be07 Use pseudo instructions for VST1 and VST2.
llvm-svn: 112357
2010-08-28 05:12:57 +00:00
Bob Wilson 8ee9394750 We don't need to custom-select VLDMQ and VSTMQ anymore.
llvm-svn: 112336
2010-08-28 00:20:11 +00:00
Bob Wilson 13ce07fa92 Change ARM VFP VLDM/VSTM instructions to use addressing mode 5, just like
all the other LDM/STM instructions.  This fixes asm printer crashes when
compiling with -O0.  I've changed one of the NEON tests (vst3.ll) to run
with -O0 to check this in the future.

Prior to this change VLDM/VSTM used addressing mode 5, but not really.
The offset field was used to hold a count of the number of registers being
loaded or stored, and the AM5 opcode field was expanded to specify the IA
or DB mode, instead of the standard ADD/SUB specifier.  Much of the backend
was not aware of these special cases.  The crashes occurred when rewriting
a frameindex caused the AM5 offset field to be changed so that it did not
have a valid submode.  I don't know exactly what changed to expose this now.
Maybe we've never done much with -O0 and NEON.  Regardless, there's no longer
any reason to keep a count of the VLDM/VSTM registers, so we can use
addressing mode 5 and clean things up in a lot of places.

llvm-svn: 112322
2010-08-27 23:18:17 +00:00
Bob Wilson 97919e9c59 Use pseudo instructions for VST3.
llvm-svn: 112208
2010-08-26 18:51:29 +00:00
Bob Wilson 4cec44975e Use pseudo instructions for VST1d64Q.
llvm-svn: 112170
2010-08-26 05:33:30 +00:00
Bob Wilson 9392b0e960 Start converting NEON load/stores to use pseudo instructions, beginning here
with the VST4 instructions.  Until after register allocation, we want to
represent sets of adjacent registers by a single super-register.  These
VST4 pseudo instructions have a single QQ or QQQQ source register operand.
They get expanded to the real VST4 instructions with 4 separate D register
operands.  Once this conversion is complete, we'll be able to remove the
NEONPreAllocPass and avoid some fragile and hacky code elsewhere.

llvm-svn: 112108
2010-08-25 23:27:42 +00:00
Jakob Stoklund Olesen e2cbaf6ed7 Don't call tablegen'ed Predicate_* functions in the ARM target.
llvm-svn: 111277
2010-08-17 20:39:04 +00:00
Evan Cheng 59069ec784 Add -disable-shifter-op to disable isel of shifter ops. On Cortex-A9 the shifts cost extra instructions, so it might be better to emit them separately to take advantage of dual-issue.
llvm-svn: 109934
2010-07-30 23:33:54 +00:00
Bob Wilson 5bc8a79e7f Also use REG_SEQUENCE for VTBX instructions.
llvm-svn: 107743
2010-07-07 00:08:54 +00:00
Bob Wilson 3ed511bc6b Use REG_SEQUENCE nodes to make the table registers for VTBL instructions be
allocated to consecutive registers.

llvm-svn: 107730
2010-07-06 23:36:25 +00:00
Duncan Sands 78ad27ca2b Remove an unused and a pointless variable.
llvm-svn: 107131
2010-06-29 13:00:29 +00:00
Dan Gohman f1d8304fe3 Eliminate unnecessary uses of getZExtValue().
llvm-svn: 106279
2010-06-18 14:22:04 +00:00
Bob Wilson 01ac8f9fc0 Remove the hidden "neon-reg-sequence" option. The reg sequences are working
now, so there's no need to disable them.

llvm-svn: 106155
2010-06-16 21:34:01 +00:00
Bob Wilson d8a9a04739 For NEON vectors with 32- or 64-bit elements, select BUILD_VECTORs and
VECTOR_SHUFFLEs to REG_SEQUENCE instructions.  The standard ISD::BUILD_VECTOR
node corresponds closely to REG_SEQUENCE but I couldn't use it here because
its operands do not get legalized.  That is pretty awful, but I guess it
makes sense for other targets.  Instead, I have added an ARM-specific version
of BUILD_VECTOR that will have its operands properly legalized.
This fixes the rest of Radar 7872877.

llvm-svn: 105439
2010-06-04 00:04:02 +00:00
Dale Johannesen d679ff7330 Early implementation of tail call for ARM.
A temporary flag -arm-tail-calls defaults to off,
so there is no functional change by default.
Intrepid users may try this; simple cases work
but there are bugs.

llvm-svn: 105413
2010-06-03 21:09:53 +00:00
Jim Grosbach 84511e1526 Clean up 80 column violations. No functional change.
llvm-svn: 105350
2010-06-02 21:53:11 +00:00
Bob Wilson b6112e8706 Add the cc_out operand for t2RSBrs instructions. I missed this when I changed
the instruction class for t2RSB to add that operand in svn r104582.
Radar 8033757.

llvm-svn: 104907
2010-05-28 00:27:15 +00:00
Jakob Stoklund Olesen 8d042c0269 Fix a few places that depended on the numeric value of subreg indices.
Add assertions in places that depend on consecutive indices.

llvm-svn: 104510
2010-05-24 17:13:28 +00:00
Jakob Stoklund Olesen 6c47d6423c Switch ARMRegisterInfo.td to use SubRegIndex and eliminate the parallel enums
from ARMRegisterInfo.h

llvm-svn: 104508
2010-05-24 16:54:32 +00:00
Evan Cheng e89f5ae9d4 Target instruction selection should copy memoperands.
llvm-svn: 104110
2010-05-19 06:06:09 +00:00
Evan Cheng 3d98b996ff Turn on -neon-reg-sequence by default.
Using NEON load / store multiple instructions will no longer create gobs of vmov of D registers!

llvm-svn: 103960
2010-05-17 19:51:20 +00:00
Evan Cheng 298e6b82eb Model vst lane instructions with REG_SEQUENCE.
llvm-svn: 103898
2010-05-16 03:27:48 +00:00
Evan Cheng 9e688cbcc9 Model 128-bit vld lane with REG_SEQUENCE.
llvm-svn: 103868
2010-05-15 07:53:37 +00:00
Evan Cheng 0cbd11dfb2 Model 64-bit lane vld with REG_SEQUENCE.
llvm-svn: 103851
2010-05-15 01:36:29 +00:00
Evan Cheng cb78e5558b Model VST*_UPD and VST*odd_UPD pair with REG_SEQUENCE.
llvm-svn: 103833
2010-05-14 22:54:52 +00:00
Evan Cheng cfa7d02d6e Model VLD*_UPD and VLD*odd_UPD pair with REG_SEQUENCE.
llvm-svn: 103790
2010-05-14 18:54:59 +00:00
Evan Cheng ca21cc8b13 Fix comments.
llvm-svn: 103749
2010-05-14 00:21:45 +00:00
Evan Cheng e276c18385 Model some vst3 and vst4 with reg_sequence.
llvm-svn: 103453
2010-05-11 01:19:40 +00:00
Evan Cheng 630063aa0d Model some vld3 instructions with REG_SEQUENCE.
llvm-svn: 103437
2010-05-10 21:26:24 +00:00
Evan Cheng c2ae5f546f Model vld2 / vst2 with reg_sequence.
llvm-svn: 103411
2010-05-10 17:34:18 +00:00
Bob Wilson f765e1f34a Add a missing break statement to fix unintentional fall-through
(replacing the previous patch for the same issue).

llvm-svn: 103183
2010-05-06 16:05:26 +00:00
Jim Grosbach 5e3cccb1e4 Fix unintentional fallthrough. Patch by Edmund Grimley-Evans <Edmund.Grimley-Evans@arm.com>
llvm-svn: 103181
2010-05-06 15:32:49 +00:00
Evan Cheng d85631e700 Model CONCAT_VECTORS of two 64-bit values as a REG_SEQUENCE.
llvm-svn: 103104
2010-05-05 18:28:36 +00:00
Evan Cheng 8e6b40a881 With -neon-reg-sequence, model forming a Q register from a pair of consecutive D registers as a REG_SEQUENCE.
llvm-svn: 103047
2010-05-04 20:39:49 +00:00
Jim Grosbach 825cb299cd Update ARM DAGtoDAG for matching UBFX instruction for unsigned bitfield
extraction. This fixes PR5998.

llvm-svn: 102144
2010-04-22 23:24:18 +00:00
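A hypothetical C function with the shift-and-mask shape that maps onto UBFX:

    /* (x >> 7) & 0x1f extracts a 5-bit field starting at bit 7, i.e.
     * ubfx r0, r0, #7, #5 on ARMv6T2 and later. */
    unsigned field(unsigned x) {
        return (x >> 7) & 0x1f;
    }
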
Dan Gohman 21cea8ac2e Use const qualifiers with TargetLowering. This eliminates several
const_casts, and it reinforces the design of the Target classes being
immutable.

SelectionDAGISel::IsLegalToFold is now a static member function, because
PIC16 uses it in an unconventional way. There is more room for API
cleanup here.

And PIC16's AsmPrinter no longer uses TargetLowering.

llvm-svn: 101635
2010-04-17 15:26:15 +00:00
Evan Cheng 3da64f7672 Use getAL() rather than a magic constant.
llvm-svn: 101446
2010-04-16 05:46:06 +00:00
Evan Cheng f7f97b4bbd Use default lowering of DYNAMIC_STACKALLOC. As far as I can tell, ARM isel is doing the right thing and codegen looks correct for both Thumb and Thumb2.
llvm-svn: 101410
2010-04-15 22:20:34 +00:00
Evan Cheng 1ba1428577 ARM SelectDYN_ALLOC should emit a copy from SP rather than referencing SP directly. In cases where there are two dyn_allocs in the same BB, it would have caused the old SP value to be reused, and badness ensues. rdar://7493908
LLVM is generating poor code for dynamic alloca; I'll fix that later.

llvm-svn: 101383
2010-04-15 18:42:28 +00:00
Bob Wilson 59f75bba24 Fix VLDMQ and VSTMQ instructions to use the correct encoding and address modes.
These instructions are only needed for codegen, so I've removed all the
explicit encoding bits for now; they should be set in the same way as for
VLDMD and VSTMD whenever we add encodings for VFP.  The use of addrmode5
requires that the instructions be custom-selected so that the number of
registers can be set in the AM5Opc value.

llvm-svn: 99309
2010-03-23 18:54:46 +00:00
Bob Wilson cc0a2a75a0 Change VST1 instructions for loading Q register values to operate on pairs
of D registers.  Add a separate VST1q instruction with a Q register
source operand for use by storeRegToStackSlot.

llvm-svn: 99265
2010-03-23 06:20:33 +00:00
Bob Wilson 340861d29e Change VLD1 instructions for loading Q register values to operate on pairs
of D registers.  Add a separate VLD1q instruction with a Q register
destination operand for use by loadRegFromStackSlot.

llvm-svn: 99261
2010-03-23 05:25:43 +00:00
Bob Wilson c53a1125ff Rename some VLD1/VST1 instructions to match the implementation, i.e., the
corresponding NEON instructions, instead of the operation they are currently
used for.

llvm-svn: 99189
2010-03-22 18:13:18 +00:00
Bob Wilson ae08a736d6 Re-commit r98683 ("remove redundant writeback flag from ARM address mode 6")
with changes to add a separate optional register update argument.  Change all
the NEON instructions with address register writeback to use it.

llvm-svn: 99095
2010-03-20 22:13:40 +00:00
Bob Wilson c0795f8b87 Rename some instructions for consistency and sanity: use "_UPD" suffix for
load/stores with address register writeback, and use "odd" suffix to distinguish
instructions to access odd numbered registers (instead of "a" and "b").
No functional changes.

llvm-svn: 99066
2010-03-20 18:35:24 +00:00
Bob Wilson c7ba918b84 Revert 98683. It is breaking something in the disassembler.
llvm-svn: 98692
2010-03-16 23:01:13 +00:00
Bob Wilson c953bca10b Remove redundant writeback flag from ARM address mode 6. Also remove the
optional register update argument, which is currently unused -- when we add
support for that, it can just be a separate operand.

llvm-svn: 98683
2010-03-16 21:44:40 +00:00
Chris Lattner f98f124a73 Sink InstructionSelect() out of each target into SDISel, and rename it
DoInstructionSelection.  Inline "SelectRoot" into it from DAGISelHeader.
Sink some other stuff out of DAGISelHeader into SDISel.

Eliminate the various 'Indent' stuff from various targets, which dates
to when isel was recursive.

 17 files changed, 114 insertions(+), 430 deletions(-)

llvm-svn: 97555
2010-03-02 06:34:30 +00:00
Evan Cheng 5e73ff2e3a Split SelectionDAGISel::IsLegalAndProfitableToFold to
IsLegalToFold and IsProfitableToFold. The generic version of the latter simply checks whether the folding candidate has a single use.

This allows the target isel routines more flexibility in deciding whether folding makes sense. The specific case we are interested in is folding constant pool loads with multiple uses.

llvm-svn: 96255
2010-02-15 19:41:07 +00:00
Chris Lattner b06015aa69 move target-independent opcodes out of TargetInstrInfo
into TargetOpcodes.h.  #include the new TargetOpcodes.h
into MachineInstr.  Add new inline accessors (like isPHI())
to MachineInstr, and start using them throughout the 
codebase.

llvm-svn: 95687
2010-02-09 19:54:29 +00:00
Evan Cheng 6c0fb92c03 Fix r93758. Use isel patterns instead of c++ selection code to select rbit and make sure we pick different instructions for ARM vs. Thumb2.
llvm-svn: 93829
2010-01-19 00:44:15 +00:00
Jim Grosbach 8546ec9c14 Patch by David Conrad:
"On ARMv6T2 this turns cttz into rbit, clz instead of the 4 instruction
 sequence it is now."

llvm-svn: 93758
2010-01-18 19:58:49 +00:00
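A standalone C check of the identity behind that lowering — reversing the bits turns count-trailing-zeros into count-leading-zeros (illustrative, using GCC/Clang builtins):

    #include <assert.h>
    #include <stdint.h>

    /* Portable model of ARM's rbit: reverse the 32 bits of x. */
    static uint32_t rbit(uint32_t x) {
        uint32_t r = 0;
        for (int i = 0; i < 32; ++i)
            r |= ((x >> i) & 1u) << (31 - i);
        return r;
    }

    int main(void) {
        for (uint32_t x = 1; x < 100000; ++x)   /* builtins undefined at 0 */
            assert((uint32_t)__builtin_ctz(x) == (uint32_t)__builtin_clz(rbit(x)));
        return 0;
    }
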
Bob Wilson 55d2ebda31 Fix an off-by-one error that caused the chain operand to be dropped from Neon
vector load-lane and store-lane instructions.

llvm-svn: 93673
2010-01-17 05:58:23 +00:00
Dan Gohman ea6f91ff64 Change SelectCode's argument from SDValue to SDNode *, to make it more
clear what information these functions are actually using.

This is also a micro-optimization, as passing a SDNode * around is
simpler than passing a { SDNode *, int } by value or reference.

llvm-svn: 92564
2010-01-05 01:24:18 +00:00
Anton Korobeynikov 2522908653 Materialize global addresses via a movt/movw pair; this is always better
than doing the same via constpool:
1. Load from constpool costs 3 cycles on A9, movt/movw pair - just 2.
2. Load from constpool might stall up to 300 cycles due to cache miss.
3. Movt/movw does not use load/store unit.
4. Less constpool entries => better compiler performance.

This is only enabled on ELF systems, since darwin does not have needed
relocations (yet).

llvm-svn: 89720
2009-11-24 00:44:37 +00:00
Evan Cheng a33fc86be3 Add predicate operand to NEON instructions. Fix lots (but not all) of the 80 col violations in ARMInstrNEON.td.
llvm-svn: 89542
2009-11-21 06:21:52 +00:00
Evan Cheng 81a2851bcb Fix codegen of conditional move of immediates. We were not making use of the immediate forms of cmov instructions at all.
llvm-svn: 89423
2009-11-20 00:54:03 +00:00
Evan Cheng b6c7704a8d Refactor cmov selection code out to a separate function. No functionality change.
llvm-svn: 89396
2009-11-19 21:45:22 +00:00
Evan Cheng 82adca8373 80 col violation.
llvm-svn: 89337
2009-11-19 08:16:50 +00:00
Jim Grosbach d7cf55cd0e Use Unified Assembly Syntax for the ARM backend.
llvm-svn: 86494
2009-11-09 00:11:35 +00:00
Jim Grosbach d1d002a6fe Support alignment specifier for NEON vld/vst instructions
llvm-svn: 86404
2009-11-07 21:25:39 +00:00
Dan Gohman b15f4a1cbd Remove uninteresting and confusing debug output.
llvm-svn: 86149
2009-11-05 18:47:09 +00:00
Bob Wilson e90a4aa703 Prune unnecessary include.
llvm-svn: 85805
2009-11-02 16:58:31 +00:00
Johnny Chen b678a56fef Test commit. Added '.' to the comment line.
llvm-svn: 85255
2009-10-27 17:25:15 +00:00
Evan Cheng 0f55e9ce2e Don't generate sbfx / ubfx with negative lsb field. Patch by David Conrad.
llvm-svn: 84813
2009-10-22 00:40:00 +00:00
Evan Cheng 786b15fe12 Match more patterns to movt.
llvm-svn: 84751
2009-10-21 08:15:52 +00:00
Bob Wilson ad03cf02f6 Remove unused variables to fix build warning.
llvm-svn: 84144
2009-10-14 21:40:45 +00:00
Bob Wilson c350cdf3b3 Refactor code to select NEON VST intrinsics.
llvm-svn: 84122
2009-10-14 18:32:29 +00:00
Bob Wilson 12b4799787 Refactor code to select NEON VLD intrinsics.
llvm-svn: 84117
2009-10-14 17:28:52 +00:00
Bob Wilson 93117bc499 More refactoring. NEON vst lane intrinsics can share almost all the code with
vld lane intrinsics.

llvm-svn: 84110
2009-10-14 16:46:45 +00:00
Bob Wilson 4145e3ac8d Refactor code for selecting NEON load lane intrinsics.
llvm-svn: 84109
2009-10-14 16:19:03 +00:00
Bob Wilson b62d160b3c More Neon clean-up: avoid the need for custom-lowering vld/st-lane intrinsics
by creating TargetConstants during instruction selection instead of during
legalization.

llvm-svn: 84042
2009-10-13 22:29:24 +00:00
Bob Wilson 3b51560ae4 Revise ARM inline assembly memory operands to require the memory address to
be in a register.  The previous use of ARM address mode 2 was completely
arbitrary and inappropriate for Thumb.  Radar 7137468.

llvm-svn: 84022
2009-10-13 20:50:28 +00:00
Sandeep Patel 7460e0822f Fix method name in comment, per Bob Wilson.
llvm-svn: 84017
2009-10-13 20:25:58 +00:00
Sandeep Patel 423e42b371 Add ARMv6T2 SBFX/UBFX instructions. Approved by Anton Korobeynikov.
llvm-svn: 84009
2009-10-13 18:59:48 +00:00
Bob Wilson 84e7967fae Add codegen support for NEON vst4lane intrinsics with 128-bit vectors.
llvm-svn: 83600
2009-10-09 00:01:36 +00:00
Bob Wilson c409030838 Add codegen support for NEON vst3lane intrinsics with 128-bit vectors.
llvm-svn: 83598
2009-10-08 23:51:31 +00:00