Commit Graph

77 Commits

Author SHA1 Message Date
Jay Foad 465101bb0e Make use of MachinePointerInfo::getFixedStack. This removes all mention
of PseudoSourceValue from lib/Target/.

llvm-svn: 144632
2011-11-15 07:34:52 +00:00
Jim Grosbach 1b8457a84c Thumb1 ADD/SUB SP instructions are predicable in Thumb2 mode.
Add the predicate operand to the instructions. Update the back end
accordingly where the instructions are used. Restrict the SP operands
to actually only be SP, as otherwise these break assembly parsing for the
normal instruction variants.

llvm-svn: 138445
2011-08-24 17:46:13 +00:00
Owen Anderson 12d13efa21 Handle new register classes in Thumb2 mode. Should fix the ARM buildbots.
llvm-svn: 137364
2011-08-11 21:52:38 +00:00
Evan Cheng a20cde31e7 Sink ARMMCExpr and ARMAddressingModes into MC layer. First step to separate ARM MC code from target.
llvm-svn: 135636
2011-07-20 23:34:39 +00:00
Jim Grosbach e9cc901814 Refactor the ARM Thumb1 tMOVr instruction family.
Merge the tMOVr, tMOVgpr2tgpr, tMOVtgpr2gpr, and tMOVgpr2gpr instructions
into tMOVr. There's no need to keep them separate. Giving the tMOVr
instruction the proper GPR register class for its operands is sufficient
to give the register allocator enough information to do the right thing
directly.

llvm-svn: 134204
2011-06-30 23:38:17 +00:00
Jim Grosbach b98ab91e39 Thumb1 register to register MOV instruction is predicable.
Fix a FIXME and allow predication (in Thumb2) for the T1 register to
register MOV instructions. This allows some better codegen with
if-conversion (as seen in the test updates), plus it lays the groundwork
for pseudo-izing the tMOVCC instructions.

llvm-svn: 134197
2011-06-30 22:10:46 +00:00
Jim Grosbach cfe3b14d77 Kill dead code.
llvm-svn: 134131
2011-06-30 02:23:05 +00:00
Jim Grosbach a8a8067dec Remove redundant Thumb2 ADD/SUB SP instruction definitions.
Unlike Thumb1, Thumb2 does not have dedicated encodings for adjusting the
stack pointer. It can just use the normal add-register-immediate encoding
since it can use all registers as a source, not just R0-R7. The extra
instruction definitions are just duplicates of the normal instructions with
the (not well enforced) constraint that the source register was SP.

llvm-svn: 134114
2011-06-29 23:25:04 +00:00
Evan Cheng 1e210d08d8 Merge XXXGenRegisterNames.inc into XXXGenRegisterInfo.inc
llvm-svn: 134024
2011-06-28 20:07:07 +00:00
Evan Cheng 6cc775f905 - Rename TargetInstrDesc, TargetOperandInfo to MCInstrDesc and MCOperandInfo and
sink them into MC layer.
- Added MCInstrInfo, which captures the tablegen generated static data. Change
TargetInstrInfo so it's based off MCInstrInfo.

llvm-svn: 134021
2011-06-28 19:10:37 +00:00
Anton Korobeynikov e7410dd0d5 Preliminary support for ARM frame save directives emission via MI flags.
This is just a very first approximation of how this should be done
(e.g. ARM-only for now). More to follow.

llvm-svn: 127101
2011-03-05 18:43:32 +00:00
Evan Cheng 666cf56668 Guard against de-referencing MBB.end().
llvm-svn: 126192
2011-02-22 07:07:59 +00:00
Evan Cheng 87a9f19f9c Skipping over debugvalue instructions to determine whether the split spot is in an IT block. rdar://9030770
llvm-svn: 126159
2011-02-21 23:40:47 +00:00
Evan Cheng 62c7b5bf76 Making use of VFP / NEON floating point multiply-accumulate / subtraction is
difficult on current ARM implementations for a few reasons.
1. Even though a single vmla has latency that is one cycle shorter than a pair
   of vmul + vadd, a RAW hazard during the first few cycles (4? on Cortex-a8)
   can cause an additional pipeline stall. So it's frequently better to simply
   codegen vmul + vadd.
2. A vmla followed by a vmul, vmadd, or vsub causes the second fp instruction to
   stall for 4 cycles. We need to schedule them apart.
3. A vmla followed by a vmla is a special case. Obviously, issuing back-to-back
   RAW vmla + vmla is very bad. But this isn't ideal either:
     vmul
     vadd
     vmla
   Instead, we want to expand the second vmla:
     vmla
     vmul
     vadd
   Even with the 4 cycle vmul stall, the second sequence is still 2 cycles
   faster.

Up to now, isel has simply avoided codegen'ing fp vmla / vmls. This works well
enough but it isn't the optimal solution. This patch attempts to make it
possible to use vmla / vmls in cases where it is profitable.

A. Add missing isel predicates which cause vmla to be codegen'ed.
B. Make sure the fmul in (fadd (fmul)) has a single use. We don't want to
   compute a fmul and a fmla.
C. Add additional isel checks for vmla, avoiding cases where vmla is feeding into
   fp instructions (except for the #3 exceptional case).
D. Add ARM hazard recognizer to model the vmla / vmls hazards.
E. Add a special pre-regalloc case to expand vmla / vmls when it's likely the
   vmla / vmls will trigger one of the special hazards.

Work in progress, only A+B are enabled.

llvm-svn: 120960
2010-12-05 22:04:16 +00:00
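A small standalone C++ sketch (hypothetical, not the actual ARM pre-regalloc
expansion code) of the idea behind item E in the commit above: when a vmla would
read the accumulator produced by an immediately preceding vmla, expand it into
vmul + vadd so the back-to-back RAW vmla hazard never forms.

    #include <string>
    #include <vector>

    struct Inst { std::string Op; int Dst, Src1, Src2, Acc; };

    // If I is a vmla whose accumulator comes straight from another vmla,
    // return the expanded vmul + vadd pair; otherwise return I unchanged.
    // Register numbers and the scratch register are hypothetical.
    std::vector<Inst> expandIfBackToBackMLA(const Inst &I, const Inst &AccDef,
                                            int ScratchReg) {
      if (I.Op == "vmla" && AccDef.Op == "vmla" && AccDef.Dst == I.Acc)
        return {{"vmul", ScratchReg, I.Src1, I.Src2, -1}, // scratch = src1 * src2
                {"vadd", I.Dst, I.Acc, ScratchReg, -1}};  // dst = acc + scratch
      return {I};
    }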
Evan Cheng debf9c502a Two sets of changes. Sorry they are intermingled.
1. Fix pre-ra scheduler so it doesn't try to push instructions above calls to
   "optimize for latency". Call instructions don't have the right latency and
   this is more likely to introduce spills.
2. Fix if-converter cost function. For ARM, it should use instruction latencies,
   not # of micro-ops, since multi-latency instructions are completely executed
   even when the predicate is false. Also, some instructions will be "slower"
   when they are predicated due to the register def becoming an implicit input.
   rdar://8598427

llvm-svn: 118135
2010-11-03 00:45:17 +00:00
Owen Anderson f31f33ea89 Thread the determination of branch prediction hit rates back through the if-conversion heuristic APIs. For now,
stick with a constant estimate of 90% (branch predictors are good!), but we might find that we want to provide
more nuanced estimates in the future.

llvm-svn: 115364
2010-10-01 22:45:50 +00:00
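A minimal standalone C++ sketch (hypothetical names and cycle counts, not the
actual if-conversion heuristic) of the kind of comparison the commit above
describes: weigh the cost of executing both blocks predicated against the
expected cost of keeping the branch under a fixed 90% prediction hit rate.

    // Hypothetical cycle counts; a real cost model would derive these from
    // per-instruction latencies.
    struct BlockCosts {
      unsigned TrueCycles;        // cycles in the "then" block
      unsigned FalseCycles;       // cycles in the "else" block
      unsigned MispredictPenalty; // cycles lost on a mispredicted branch
    };

    bool shouldIfConvert(const BlockCosts &C, double PredictHitRate = 0.90) {
      // If-converted: every instruction in both blocks issues, predicated.
      double Predicated = C.TrueCycles + C.FalseCycles;
      // Branchy: assume either side runs about half the time, plus the
      // expected misprediction penalty.
      double Branchy = 0.5 * (C.TrueCycles + C.FalseCycles) +
                       (1.0 - PredictHitRate) * C.MispredictPenalty;
      return Predicated <= Branchy;
    }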
Owen Anderson 671d57865e Provide an option to restore old-style if-conversion heuristics for Thumb2.
llvm-svn: 115339
2010-10-01 20:28:06 +00:00
Owen Anderson 88af7d00fc Part one of switching to using a more sane heuristic for determining if-conversion profitability.
Rather than having arbitrary cutoffs, actually try to cost model the conversion.

For now, the constants are tuned to more or less match our existing behavior, but these will be
changed to reflect realistic values as this work proceeds.

llvm-svn: 114973
2010-09-28 18:32:13 +00:00
Chris Lattner e3d864b857 convert targets to the new MF.getMachineMemOperand interface.
llvm-svn: 114391
2010-09-21 04:39:43 +00:00
Evan Cheng bf4070756f Teach if-converter to be more careful with predicating instructions that would
take multiple cycles to decode.
For the current if-converter clients (actually only ARM), the instructions that
are predicated on false are not nops. They would still take machine cycles to
decode. Micro-coded instructions such as LDM / STM can potentially take multiple
cycles to decode. The if-converter should not treat them as simple,
non-micro-coded instructions.

llvm-svn: 113570
2010-09-10 01:29:16 +00:00
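A tiny C++ sketch (hypothetical, not the actual cost function) of the point in
the commit above: an instruction predicated on false is not free, and a
micro-coded LDM/STM should be charged roughly one decode slot per register in
its list rather than a single cycle.

    // Decode-cycle cost of an instruction whose predicate turns out false.
    // NumRegsInList only matters for load/store-multiple style instructions.
    unsigned predicatedFalseDecodeCost(bool IsMicroCoded, unsigned NumRegsInList) {
      if (IsMicroCoded)
        return NumRegsInList; // e.g. LDM/STM: roughly one micro-op per register
      return 1;               // a simple instruction still takes one decode slot
    }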
Jim Grosbach d343166a0b Many Thumb2 instructions can reference the full ARM register set (i.e.,
have 4 bits per register in the operand encoding), but have undefined
behavior when the operand value is 13 or 15 (SP and PC, respectively).
The trivial coalescer in linear scan sometimes will merge a copy from
SP into a subsequent instruction which uses the copy, and if that
instruction cannot legally reference SP, we get bad code such as:
  mls r0,r9,r0,sp
instead of:
  mov r2, sp
  mls r0, r9, r0, r2

This patch adds a new register class for use by Thumb2 that excludes
the problematic registers (SP and PC) and is used instead of GPR
for those operands which cannot legally reference PC or SP. The
trivial coalescer explicitly requires that the register class
of the destination for the COPY instruction contain the source
register for the COPY to be considered for coalescing. This prevents
errant instructions like that above.

PR7499

llvm-svn: 109842
2010-07-30 02:41:01 +00:00
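A standalone C++ sketch (hypothetical types, not the actual coalescer or the
real register class definition) of the rule the fix above relies on: a COPY is
only a coalescing candidate if the destination's register class also contains
the source register, so a Thumb2 class that excludes SP and PC keeps copies
from SP out of instructions that must not read SP.

    #include <set>

    using Reg = unsigned;
    struct RegClass { std::set<Reg> Members; };

    // Hypothetical encodings: r0-r12 = 0-12, SP = 13, LR = 14, PC = 15.
    const Reg SP = 13, PC = 15;

    // The trivial coalescer's precondition: the destination class must
    // contain the source register of the COPY for it to be considered.
    bool isCoalescableCopy(const RegClass &DstClass, Reg SrcReg) {
      return DstClass.Members.count(SrcReg) != 0;
    }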
Jakob Stoklund Olesen d7b33002dd Replace copyRegToReg with copyPhysReg for ARM.
llvm-svn: 108078
2010-07-11 06:33:54 +00:00
Bob Wilson 83b993a977 The t2MOVi16 and t2MOVTi16 instructions do not set CPSR. Trying to add
a CPSR operand to them causes an assertion failure, so apparently these
instructions haven't been getting a lot of use.

llvm-svn: 107147
2010-06-29 16:25:11 +00:00
Evan Cheng 0c30739cbb Change if-cvt options to something that is actually usable.
llvm-svn: 107121
2010-06-29 05:37:59 +00:00
Evan Cheng 02b184de5b Change if-conversion block size limit checks to add some flexibility.
llvm-svn: 106901
2010-06-25 22:42:03 +00:00
Evan Cheng 37bb617f8a Tail merging pass shall not break up IT blocks. rdar://8115404
llvm-svn: 106517
2010-06-22 01:18:16 +00:00
Evan Cheng 2d51c7c592 Allow ARM if-converter to be run after post allocation scheduling.
- This fixed a number of bugs in if-converter, tail merging, and post-allocation
  scheduler. If-converter now runs branch folding / tail merging first to
  maximize if-conversion opportunities.
- Also changed the t2IT instruction slightly. It now defines the ITSTATE
  register which is read by instructions in the IT block.
- Added Thumb2 specific hazard recognizer to ensure the scheduler doesn't
  change the instruction ordering in the IT block (since IT mask has been
  finalized). It also ensures no other instructions can be scheduled between
  instructions in the IT block.

This is not yet enabled.

llvm-svn: 106344
2010-06-18 23:09:54 +00:00
Dale Johannesen 44f9dfc9cf Next round of tail call changes. Register used in a tail
call must not be callee-saved; following x86, add a new
regclass to represent this.  Also fixes a couple of bugs.
Still disabled by default; Thumb doesn't work yet.

llvm-svn: 106053
2010-06-15 22:08:33 +00:00
Evan Cheng a0746bd50a Allow target to place 2-address pass inserted copies in better spots. Thumb2 will use this to try to avoid breaking up IT blocks.
llvm-svn: 105745
2010-06-09 19:26:01 +00:00
Jim Grosbach 84511e1526 Clean up 80 column violations. No functional change.
llvm-svn: 105350
2010-06-02 21:53:11 +00:00
Dan Gohman 779c69bbc5 Add a DebugLoc argument to TargetInstrInfo::copyRegToReg, so that it
doesn't have to guess.

llvm-svn: 103194
2010-05-06 20:33:48 +00:00
Evan Cheng efb126a665 Add argument TargetRegisterInfo to loadRegFromStackSlot and storeRegToStackSlot.
llvm-svn: 103193
2010-05-06 19:06:44 +00:00
Bob Wilson 25f85947a3 Handle register-to-register copies within the tGPR class.
Radar 7896289

llvm-svn: 102396
2010-04-26 23:20:08 +00:00
Chris Lattner 6f306d7d30 use DebugLoc default ctor instead of DebugLoc::getUnknownLoc()
llvm-svn: 100214
2010-04-02 20:16:16 +00:00
Jim Grosbach 44313db557 Thumb2 storeFrom/LoadToStackSlot() need to handle tGPR regs directly, not pass
through to the generic version. The generic functions use STR/LDR, but T2
needs the t2STR/t2LDR instead so we get the addressing mode correct.

llvm-svn: 99678
2010-03-27 00:09:12 +00:00
Bob Wilson 0bfbd9b68c Fix a crash compiling 254.gap for Thumb2. The Thumb2 add/sub with 12-bit
immediate instructions cannot set the condition codes, so they do not have
the extra cc_out operand.  We hit an assertion during tail duplication
because the instruction being duplicated had more operands than expected.

llvm-svn: 98001
2010-03-08 22:56:15 +00:00
Bob Wilson 5638c36efd Handle AddrMode6 (for NEON load/stores) in Thumb2's rewriteT2FrameIndex.
Radar 7614112.

llvm-svn: 95456
2010-02-06 00:24:38 +00:00
Jakob Stoklund Olesen bdc17f6840 Remove predicates when changing an add into an unpredicable mov.
Since the mov is executed unconditionally, make sure that the add didn't have
any predicate.

llvm-svn: 93909
2010-01-19 21:08:28 +00:00
Dan Gohman 047a767d74 Remove the target hook TargetInstrInfo::BlockHasNoFallThrough in favor of
MachineBasicBlock::canFallThrough(), which is target-independent and more
thorough.

llvm-svn: 90634
2009-12-05 00:44:40 +00:00
Evan Cheng fe864425cb Refactor code.
llvm-svn: 86423
2009-11-08 00:15:23 +00:00
Jim Grosbach 4e9f379554 80-column cleanup of file header comments
llvm-svn: 86408
2009-11-07 22:00:39 +00:00
Evan Cheng 8b5278a466 t2ldrpci_pic can be used for blockaddress as well.
llvm-svn: 86400
2009-11-07 19:40:04 +00:00
Evan Cheng a8e8a7c976 Refactor code. Fix a potential missing check. Teach isIdentical() about tLDRpci_pic.
llvm-svn: 86330
2009-11-07 04:04:34 +00:00
Evan Cheng 7ff831962a - Add TargetInstrInfo::isIdentical(). It's similar to MachineInstr::isIdentical
except it doesn't care if the definitions' virtual registers differ. This is
  used by machine LICM and other MI passes to perform CSE.
- Teach Thumb2InstrInfo::isIdentical() to check whether two t2LDRpci_pic are
  identical. Since pc relative constantpool entries are always different, this
  requires it to check whether the values can actually be the same.

llvm-svn: 86328
2009-11-07 03:52:02 +00:00
Evan Cheng 207b246650 - Add pseudo instructions tLDRpci_pic and t2LDRpci_pic which do a pc-relative
  load of a GV from the constantpool and then add pc. This makes the code
  sequence rematerializable so it can be hoisted by machine licm.
- Add a late pass to break these pseudo instructions into a number of real
  instructions. Also move the code in Thumb2 IT pass that breaks up t2MOVi32imm
  to this pass. This is done before post regalloc scheduling to allow the
  scheduler to properly schedule these instructions. It also allows them to be
  if-converted and shrunk by later passes.

llvm-svn: 86304
2009-11-06 23:52:48 +00:00
Anton Korobeynikov 14635da94b Use NEON reg-reg moves where profitable. This reduces "domain-cross" stalls, since we used to mix VFP and NEON code (the former was used for reg-reg moves).
llvm-svn: 85764
2009-11-02 00:10:38 +00:00
Evan Cheng 1a4492be97 Fix a couple more places where we are creating ld / st instructions without memoperands.
llvm-svn: 85746
2009-11-01 22:04:35 +00:00
Bob Wilson 73789b848d Add a Thumb BRIND pattern. Change the ARM BRIND assembly to separate the
opcode and operand with a tab.  Check for these instructions in the usual
places.

llvm-svn: 85411
2009-10-28 18:26:41 +00:00
Bob Wilson 967bf27de2 Handle AddrMode4 for Thumb2 in rewriteT2FrameIndex. This occurs for
VLDM/VSTM instructions, and without this check, the code assumes that an
offset is allowed, as it would be with VLDR/VSTR.  The asm printer,
however, silently drops the offset, producing incorrect code.  Since the
address register in this case is either the stack or frame pointer, the
spill location ends up conflicting with some other stack slot or with
outgoing arguments on the stack.

llvm-svn: 81879
2009-09-15 17:56:18 +00:00
Evan Cheng 7a37b1a2ca Fix PR4789. Teach eliminateFrameIndex how to handle VLDRQ and VSTRQ which cannot fold any immediate offset.
llvm-svn: 80191
2009-08-27 01:23:50 +00:00