Commit Graph

124 Commits

Author SHA1 Message Date
Evan Cheng 83e0d481ae Make ARM and Thumb2 32-bit immediate materialization into a single 32-bit pseudo
instruction. This makes it re-materializable.

Thumb2 will split it back out into two instructions so the IT pass will generate the
right mask. Also, this exposes opportunities to optimize the movw to a 16-bit move.
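
As a rough illustration (constant and register chosen arbitrarily, not from the
commit), the pseudo for 0x12345678 would eventually expand to the pair:

      movw r0, #0x5678      @ write the low halfword
      movt r0, #0x1234      @ write the high halfword; r0 = 0x12345678

When the upper halfword is zero the movt is unnecessary, and a small enough
constant leaves the remaining movw as a candidate for the 16-bit move mentioned
above.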

llvm-svn: 82982
2009-09-28 09:14:39 +00:00
Anton Korobeynikov c30d816d7a Fix thinko in my recent movt commit: it's not safe to remat movt, since it has an input reg argument.
Disable rematting of it for now.

llvm-svn: 82975
2009-09-28 07:26:46 +00:00
Anton Korobeynikov 7c2b1e71c1 Use movt/movw pair to materialize 32 bit constants on ARMv6T2+.
This should be better than a single load from the constant pool.
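
A sketch of the difference (the label and constant are illustrative, not from
the commit):

Before:
      ldr  r0, LCPI0_0       @ pc-relative load from the literal pool
LCPI0_0:
      .long 0x12345678

After (ARMv6T2+):
      movw r0, #0x5678
      movt r0, #0x1234

This trades a load (and a constant-pool entry) for two cheap ALU instructions.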

llvm-svn: 82948
2009-09-27 23:52:58 +00:00
Chris Lattner 7b26fce23e Rename TargetAsmInfo (and its subclasses) to MCAsmInfo.
llvm-svn: 79763
2009-08-22 20:48:53 +00:00
Evan Cheng f43cf709cb Remove ARM specific getInlineAsmLength. We'll rely on the simpler (and faster) generic algorithm for now. If more accurate computation is needed, we'll rely on the disassembler.
llvm-svn: 78032
2009-08-04 01:56:09 +00:00
Chris Lattner e98a3c3ca3 Move the getInlineAsmLength virtual method from TAI to TII, where
the only real caller (GetFunctionSizeInBytes) uses it.

The custom ARM implementation of this is basically reimplementing
an assembler poorly for negligible gain.  It should be removed 
IMNSHO, but I'll leave that to ARMish folks to decide.

llvm-svn: 77877
2009-08-02 05:20:37 +00:00
Evan Cheng 780748d565 - More refactoring. This gets rid of all of the getOpcode calls.
- This change also makes it possible to switch between ARM / Thumb on a
  per-function basis.
- Fixed the thumb2 routine which expands reg + arbitrary immediate. It was
  using ARM so_imm logic.
- Use movw and movt to do reg + imm when profitable (see the sketch after this
  list).
- Other code clean ups and minor optimizations.
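
A hedged sketch of the movw/movt reg + imm case (registers and the constant are
illustrative): an immediate that fits neither the ARM so_imm encoding nor the
Thumb2 modified-immediate encoding can be added as

      movw r2, #0x2345      @ r2 = 0x00002345
      movt r2, #0x1         @ r2 = 0x00012345
      add  r0, r1, r2       @ r0 = r1 + 0x12345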

llvm-svn: 77300
2009-07-28 05:48:47 +00:00
Evan Cheng 38b7eee164 More DCE.
llvm-svn: 77231
2009-07-27 18:48:45 +00:00
Evan Cheng 18688f431d Get rid of more dead code.
llvm-svn: 77227
2009-07-27 18:38:54 +00:00
Evan Cheng 056c669e93 Get rid of some more getOpcode calls.
This also fixes potential problems in ARMBaseInstrInfo routines not recognizing thumb1 instructions when 32-bit and 16-bit instructions mix.

llvm-svn: 77218
2009-07-27 18:20:05 +00:00
Evan Cheng c47e109335 Use t2LDRi12 and t2STRi12 to load / store to / from stack frames. Eliminate more getOpcode calls.
llvm-svn: 77181
2009-07-27 03:14:20 +00:00
Evan Cheng f3a1fce8ae Change Thumb2 jumptable codegen to one that uses two-level jumps:
Before:
      adr r12, #LJTI3_0_0
      ldr pc, [r12, +r0, lsl #2]
LJTI3_0_0:
      .long    LBB3_24
      .long    LBB3_30
      .long    LBB3_31
      .long    LBB3_32

After:
      adr r12, #LJTI3_0_0
      add pc, r12, +r0, lsl #2
LJTI3_0_0:
      b.w    LBB3_24
      b.w    LBB3_30
      b.w    LBB3_31
      b.w    LBB3_32

This has several advantages.
1. This will make it easier to optimize this to a TBB / TBH instruction +
   (smaller) table (see the sketch below).
2. This eliminates the need for an ugly asm printer hack to force the addresses
   to be thumb addresses (bit 0 set to one).
3. Same codegen for pic and non-pic.
4. This eliminates the need to align the table, so the constantpool island pass
   won't have to over-estimate the size.

Based on my calculation, the latter is probably slightly faster as well, since
ldr pc with a shifted address is very slow. That is, it should be a win as long
as the HW implementation can do a reasonable job of predicting the second
branch.
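
For context on point 1, an illustrative TBB sketch (not part of this commit;
the label is hypothetical):

      tbb  [pc, r0]          @ byte-sized, halfword-scaled branch offsets
JTI3_0_0:
      .byte (LBB3_24-JTI3_0_0)/2
      .byte (LBB3_30-JTI3_0_0)/2
      .byte (LBB3_31-JTI3_0_0)/2
      .byte (LBB3_32-JTI3_0_0)/2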

llvm-svn: 77024
2009-07-25 00:33:29 +00:00
Evan Cheng 6cfbe61361 FLDD, FLDS, FCPYD, FCPYS, FSTD, FSTS, VMOVD, VMOVQ map to the same instructions on all sub-targets.
llvm-svn: 76925
2009-07-24 00:53:56 +00:00
David Goodwin cdd405d804 Correctly handle the Thumb-2 imm8 addrmode. Specialize frame index elimination more exactly for Thumb-2 to get better code gen.
llvm-svn: 76919
2009-07-24 00:16:18 +00:00
David Goodwin 6deba28c6f Fix frame index elimination to correctly handle thumb-2 addressing modes that don't allow negative offsets. During frame elimination, convert a *i12 opcode to a *i8 opcode when necessary due to a negative offset.
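
An illustrative sketch of the two forms (offsets chosen arbitrarily): the *i12
variants encode only non-negative offsets, so a negative frame offset has to use
the *i8 variant.

      ldr  r0, [sp, #8]      @ t2LDRi12: unsigned 12-bit offset, 0..4095
      ldr  r0, [sp, #-4]     @ t2LDRi8:  8-bit offset form, allows negative offsets
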
llvm-svn: 76883
2009-07-23 17:06:46 +00:00
Evan Cheng 84517443ca Let callers decide the sub-register index on the def operand of rematerialized instructions.
Avoid remat'ing instructions whose defs have sub-register indices for now. It's just really really hard to get all the cases right.

llvm-svn: 75900
2009-07-16 09:20:10 +00:00
David Goodwin 03ab0bbb24 Generalize opcode selection in ARMBaseRegisterInfo.
llvm-svn: 75036
2009-07-08 20:28:28 +00:00
David Goodwin af7451b674 Checkpoint Thumb2 Instr info work. Generalized base code so that it can be shared between ARM and Thumb2. Not yet activated because register information must be generalized first.
llvm-svn: 75010
2009-07-08 16:09:28 +00:00
David Goodwin ade05a37f1 Checkpoint refactoring of ThumbInstrInfo and ThumbRegisterInfo into Thumb1InstrInfo, Thumb2InstrInfo, Thumb1RegisterInfo and Thumb2RegisterInfo. Move methods from ARMInstrInfo to ARMBaseInstrInfo to prepare for sharing with Thumb2.
llvm-svn: 74731
2009-07-02 22:18:33 +00:00
Evan Cheng d379e896ff Handle IMPLICIT_DEF with isUndef operand marker, part 2. This patch moves the code to annotate machineoperands to LiveIntervalAnalysis. It also adds markers for implicit_defs that define physical registers. The rest is just a lot of details.
llvm-svn: 74580
2009-07-01 01:59:31 +00:00
David Goodwin 28d6d87244 Improve Thumb-2 jump table support.
llvm-svn: 74549
2009-06-30 19:50:22 +00:00
David Goodwin 27303cde82 Add conditional and unconditional thumb-2 branch. Add thumb-2 jump table.
llvm-svn: 74543
2009-06-30 18:04:13 +00:00
Anton Korobeynikov 0f2158b35f Simplify a bit
llvm-svn: 74385
2009-06-27 12:59:03 +00:00
Anton Korobeynikov a1b5b18bd0 ARM refactoring. Step 2: split RegisterInfo
llvm-svn: 74384
2009-06-27 12:16:40 +00:00
Anton Korobeynikov 99152f3a2c Split thumb-related stuff into separate classes.
Step 1: ARMInstructionInfo => {ARM,Thumb}InstructionInfo

llvm-svn: 74329
2009-06-26 21:28:53 +00:00
Bob Wilson 2e076c4e02 Add support for ARM's Advanced SIMD (NEON) instruction set.
This is still a work in progress but most of the NEON instruction set
is supported.

llvm-svn: 73919
2009-06-22 23:27:02 +00:00
Anton Korobeynikov 5d28cb204f GNU as refuses to assemble the "pop {}" instruction. Do not emit it
(this is the case when we have a thumb vararg function with a single
callee-saved register, which is handled separately).

llvm-svn: 73529
2009-06-16 18:49:08 +00:00
Jim Grosbach 06928192ae Update the names of the exception handling sjlj intrinsics to
llvm.eh.sjlj.* for better clarity as to their purpose and scope. Add
a description of llvm.eh.sjlj.setjmp to ExceptionHandling.html.
(llvm.eh.sjlj.longjmp documentation coming when that implementation is
added).

llvm-svn: 71758
2009-05-14 00:46:35 +00:00
Bill Wendling f7b83c7ae7 Change MachineInstrBuilder::addReg() to take a flag instead of a list of
booleans. This gives a better indication of what the "addReg()" is
doing. Remembering what all of those booleans mean isn't easy, especially if you
aren't spending all of your time in that code.

I took Jakob's suggestion and made it illegal to pass in "true" for the
flag. This should hopefully prevent any unintended misuse of this (by reverting
to the old way of using addReg()).

llvm-svn: 71722
2009-05-13 21:33:08 +00:00
Jim Grosbach aeca45dd6f Add support for GCC compatible builtin setjmp and longjmp intrinsics. This is
a supporting preliminary patch for GCC-compatible SjLJ exception handling.
Note that these intrinsics are not designed to be invoked directly by the user,
but rather used by the front-end as target hooks for exception handling.

llvm-svn: 71610
2009-05-12 23:59:14 +00:00
Jim Grosbach fde2110aa9 PR2985 / <rdar://problem/6584986>
When compiling in Thumb mode, only the low (R0-R7) registers are available
for most instructions. Breaking the low registers into a new register class
handles this. Uses of R12, SP, etc, are handled explicitly where needed
with copies inserted to move results into low registers where the rest of
the code generator can deal with them.
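
A minimal sketch (registers chosen for illustration) of the kind of copy this
inserts:

      mov  r2, r12           @ copy the high register into a low register
      add  r2, r2, #4        @ subsequent low-register-only Thumb instructions can use r2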

llvm-svn: 68545
2009-04-07 20:34:09 +00:00
Bob Wilson 6bedd59894 Wrap some lines to fix indentation problems.
llvm-svn: 68405
2009-04-03 21:08:42 +00:00
Bob Wilson d24b794f31 Fix some comments.
llvm-svn: 68404
2009-04-03 20:53:25 +00:00
Dan Gohman 2af1f85f1f Factor out the code to add a MachineOperand to a MachineInstrBuilder.
llvm-svn: 64891
2009-02-18 05:45:50 +00:00
Dale Johannesen 7647da67ea Remove refs to non-DebugLoc versions of BuildMI from ARM.
llvm-svn: 64429
2009-02-13 02:25:56 +00:00
Dale Johannesen 6b8c76a910 Eliminate a couple of non-DebugLoc BuildMI variants.
Modify callers.

llvm-svn: 64409
2009-02-12 23:08:38 +00:00
Bill Wendling f6d609a227 Move debug loc info along when the spiller creates new instructions.
llvm-svn: 64342
2009-02-12 00:02:55 +00:00
Evan Cheng 64dfcacd5f Turns out AnalyzeBranch can modify the mbb being analyzed. This is a nasty
surprise to some callers, e.g. the register coalescer. For now, add a parameter
that tells AnalyzeBranch whether it's safe to modify the mbb. A better
solution is out there, but I don't have time to deal with it right now.

llvm-svn: 64124
2009-02-09 07:14:22 +00:00
Evan Cheng 066757eea1 Move getPointerRegClass from TargetInstrInfo to TargetRegisterInfo.
llvm-svn: 63938
2009-02-06 17:43:24 +00:00
Bill Wendling e3c78361d3 Create DebugLoc information in FastISel. Several temporary methods were
created. Specifically, those BuildMIs which use
"DebugLoc::getUnknownLoc()". I'll remove them soon.

llvm-svn: 63584
2009-02-03 00:55:04 +00:00
Evan Cheng c544cb0eca Change TargetInstrInfo::isMoveInstr to return source and destination sub-register indices as well.
llvm-svn: 62600
2009-01-20 19:12:24 +00:00
Evan Cheng d5021730c8 Preliminary ARM debug support based on patch by Mikael of FlexyCore.
llvm-svn: 60851
2008-12-10 21:54:21 +00:00
Dan Gohman 3f86b51333 Split foldMemoryOperand into public non-virtual and protected virtual
parts, and add target-independent code to add/preserve
MachineMemOperands.

llvm-svn: 60488
2008-12-03 18:43:12 +00:00
Dan Gohman 0b2732598c Add more const qualifiers. This fixes build breakage from r59540.
llvm-svn: 59542
2008-11-18 19:49:32 +00:00
Evan Cheng 3620e685b5 Minor code restructuring. No functionality change.
llvm-svn: 58643
2008-11-03 21:02:39 +00:00
Dan Gohman 33332bce17 Const-ify several TargetInstrInfo methods.
llvm-svn: 57622
2008-10-16 01:49:15 +00:00
Dan Gohman 0d1e9a8e04 Switch the MachineOperand accessors back to the short names like
isReg, etc., from isRegister, etc.

llvm-svn: 57006
2008-10-03 15:45:36 +00:00
Owen Anderson 27fb3dcbc7 Make TargetInstrInfo::copyRegToReg return a bool indicating whether the copy requested
was inserted or not.  This allows bitcast in fast isel to properly handle the case
where an appropriate reg-to-reg copy is not available.

llvm-svn: 55375
2008-08-26 18:03:31 +00:00
Owen Anderson 4f6bf04616 Convert uses of std::vector in TargetInstrInfo to SmallVector. This change had to be propagated down into all the targets and up into all clients of this API.
llvm-svn: 54802
2008-08-14 22:49:33 +00:00
Dan Gohman 3b46030375 Pool-allocation for MachineInstrs, MachineBasicBlocks, and
MachineMemOperands. The pools are owned by MachineFunctions.

This drastically reduces the number of calls to malloc/free made
during the "Emit" phase of scheduling, as well as later phases
in CodeGen. Combined with other changes, this speeds up the
"instruction selection" phase of CodeGen by 10% in some cases.

llvm-svn: 53212
2008-07-07 23:14:23 +00:00