Commit Graph

227 Commits

Author SHA1 Message Date
Evan Cheng e37da03e60 More pseudo instruction scheduling itinerary fixes.
llvm-svn: 114768
2010-09-24 22:41:41 +00:00
Evan Cheng 1d35ad62cc Fix scheduling itinerary for pseudo mov immediate instructions which expand into two real instructions.
llvm-svn: 114766
2010-09-24 22:03:46 +00:00
Owen Anderson 2c5df619c4 Revert r114703 and r114702, removing the isConditionalMove flag from instructions. After further
reflection, this isn't going to achieve the purpose I intended it for.  Back to the drawing board!

llvm-svn: 114710
2010-09-23 23:45:25 +00:00
Owen Anderson bd57e0ce3d Add isConditionalMove bits to X86 and ARM instructions.
llvm-svn: 114703
2010-09-23 22:57:01 +00:00
Chris Lattner 0e023ea02a fix a long-standing wart: all the ComplexPatterns were being
passed the root of the match, even though only a few patterns
actually needed this (one in X86, several in ARM [which should
be refactored anyway], and some in CellSPU that I don't feel 
like detangling).   Instead of requiring all ComplexPatterns to
take the dead root, have targets opt into getting the root by
putting SDNPWantRoot on the ComplexPattern.

llvm-svn: 114471
2010-09-21 20:31:19 +00:00
Evan Cheng 722cd122dc Fix LDM_RET scheduling itinerary.
llvm-svn: 113435
2010-09-08 22:57:08 +00:00
Chris Lattner f43cb302ca remove some dead code. t2addrmode_imm8s4 is never used in a
pattern, so there is no need to define a matching function.

llvm-svn: 113122
2010-09-05 22:51:11 +00:00
Chris Lattner 39eccb4754 temporarily revert r112664; it is causing a decoding conflict, and
the testcases should be merged.

llvm-svn: 112711
2010-09-01 16:00:50 +00:00
Bill Wendling 6789f8b6ae We have a chance for an optimization. Consider this code:
int x(int t) {
  if (t & 256)
    return -26;
  return 0;
}

We generate this:

     tst.w   r0, #256
     mvn     r0, #25
     it      eq
     moveq   r0, #0

while gcc generates this:

     ands    r0, r0, #256
     it      ne
     mvnne   r0, #25
     bx      lr

Scandalous really!

During ISel time, we can look for this particular pattern. One where we have a
"MOVCC" that uses the flag off of a CMPZ that itself is comparing an AND
instruction to 0. Something like this (greatly simplified):

  %r0 = ISD::AND ...
  ARMISD::CMPZ %r0, 0         @ sets [CPSR]
  %r0 = ARMISD::MOVCC 0, -26  @ reads [CPSR]

All we have to do is convert the "ISD::AND" into an "ARM::ANDS" that sets [CPSR]
when it's zero. The zero value will already be in the %r0 register and we only
need to change it if the AND wasn't zero. Easy!
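
After the combine, the selected sequence should look roughly like this (a sketch in the same
greatly simplified notation, added here for illustration rather than taken from the patch):

  %r0 = ARM::ANDS ...           @ same AND, but also sets [CPSR]
  %r0 = ARMISD::MOVCC %r0, -26  @ reads [CPSR]; keeps the 0 already in %r0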

llvm-svn: 112664
2010-08-31 22:41:22 +00:00
Bill Wendling 87bb14c566 Use the existing T2I_bin_s_irs pattern instead of creating T2I_bin_sw_irs, which
is meant to do exactly the same thing. Thanks to Jim Grosbach for pointing this
out! :-)

llvm-svn: 112538
2010-08-30 22:05:23 +00:00
Jim Grosbach fef37287a8 Make ARM add rN, sp, #imm instructions rematerializable. That's how the address of locals is calculated, so this should
help relieve register pressure a bit. Recalculating the local address is
almost always going to be better than spilling.

llvm-svn: 112503
2010-08-30 19:49:58 +00:00
Bill Wendling f8dfa461fa Create Thumb2sI_cpsr and T2sI_cpsr. These new classes indicate that CPSR is the
optional modified register (instead of reg0). Along with r112461 it will make
sure that the optional define of CPSR is marked as "def" and will thus mark the
instructions using these classes (t2ANDS*) as setting the 's' flag.

llvm-svn: 112462
2010-08-30 01:47:35 +00:00
Bill Wendling df9ec17d53 - Add a parameter to T2I_bin_irs for those patterns which set the S bit.
- Create T2I_bin_sw_irs to be like T2I_bin_w_irs, except that it sets the S bit.

llvm-svn: 112399
2010-08-29 03:55:31 +00:00
Bill Wendling b0dc465c04 Rename ANDflag to ANDS, which is less stupid.
llvm-svn: 112395
2010-08-29 03:06:09 +00:00
Bill Wendling 0a65116cce Create an ARMISD::AND node. This node is exactly like the "ARM::AND" node, but
it sets the CPSR register.

llvm-svn: 112393
2010-08-29 03:02:11 +00:00
Jim Grosbach 074d22e1ac Restrict the register to tGPR to make sure the str instruction will be
encodable as a 16-bit wide instruction.

llvm-svn: 112195
2010-08-26 17:02:47 +00:00
Dan Gohman 10b20b2b81 Revert r112176; it broke test/CodeGen/Thumb2/thumb2-cmn.ll.
llvm-svn: 112191
2010-08-26 15:50:25 +00:00
Bill Wendling a9a0599b39 There seems to be a (potential) hardware bug with the CMN instruction and
comparison with 0. These two pieces of code should give identical results:

  rsbs r1, r1, 0
  cmp  r0, r1
  mov  r0, #0
  it   ls
  mov  r0, #1

and:

  cmn  r0, r1
  mov  r0, #0
  it   ls
  mov  r0, #1

However, the CMN gives the *opposite* result when r1 is 0. This is because the
carry flag is set in the CMP case but not in the CMN case. In short, the CMP
instruction computes r0 plus the (logical) NOT of 0 plus a carry-in of 1 (the
"carry bit" parameter to AddWithCarry is defined as 1 for CMP), so the carry
flag is always set when r1 is 0. The CMN instruction doesn't perform a NOT of 0,
and its carry-in is defined as 0, so there is never a "carry" out of this
AddWithCarry.

The AddWithCarry in the CMP case seems to be relying upon the identity:

  ~x + 1 = -x

However, when x is 0 and unsigned, this doesn't hold:

   x = 0
  ~x = 0xFFFF FFFF
  ~x + 1 = 0x1 0000 0000
  (-x = 0) != (0x1 0000 0000 = ~x + 1)

Therefore, we should disable *all* versions of CMN, especially when comparing
against zero, until we can limit when the CMN instruction is used (when we know
that the RHS is not 0) or when we have a hardware fix for this.

(See the ARM docs for the "AddWithCarry" pseudo-code.)
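
To make the carry difference concrete, here is a small C model of AddWithCarry's
carry-out, added for illustration (it is not code from this patch). CMP r0, r1
corresponds to AddWithCarry(r0, NOT(r1), 1) and CMN r0, r1 to
AddWithCarry(r0, r1, 0):

#include <stdint.h>
#include <stdio.h>

/* Carry-out of the AddWithCarry() pseudo-code: set when the unsigned sum
   does not fit in 32 bits. */
static int carry_out(uint32_t x, uint32_t y, int carry_in) {
  uint64_t sum = (uint64_t)x + (uint64_t)y + (uint64_t)carry_in;
  return sum > 0xFFFFFFFFULL;
}

int main(void) {
  uint32_t r0 = 5, r1 = 0;
  printf("CMP carry: %d\n", carry_out(r0, ~r1, 1)); /* always 1 when r1 is 0 */
  printf("CMN carry: %d\n", carry_out(r0, r1, 0));  /* always 0 when r1 is 0 */
  return 0;
}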

This is related to <rdar://problem/7569620>.

llvm-svn: 112176
2010-08-26 09:07:33 +00:00
Bill Wendling 768d3b510c Add the "isCompare" attribute to the defm instead of each individual instr.
llvm-svn: 111481
2010-08-19 00:05:48 +00:00
Jakob Stoklund Olesen e2cbaf6ed7 Don't call tablegen'ed Predicate_* functions in the ARM target.
llvm-svn: 111277
2010-08-17 20:39:04 +00:00
Jim Grosbach 62800a990b 80 column cleanup.
llvm-svn: 111266
2010-08-17 18:39:16 +00:00
Bob Wilson 942b10f511 Change ARM PKHTB and PKHBT instructions to use a shift_imm operand to avoid
printing "lsl #0".  This fixes the remaining parts of pr7792.  Make
corresponding changes for encoding/decoding these instructions.

llvm-svn: 111251
2010-08-17 17:23:19 +00:00
Bob Wilson 804f6159f1 Generalize a pattern for PKHTB: an SRL of 16-31 bits will guarantee
that the high halfword is zero.  The shift need not be exactly 16 bits.
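
A quick C check of that claim (illustrative only, not part of the patch): for a
logical right shift by anything from 16 through 31, the high halfword of the
result is always zero.

#include <assert.h>
#include <stdint.h>

int main(void) {
  uint32_t x = 0xFFFFFFFFu;                 /* worst-case input */
  for (unsigned s = 16; s <= 31; ++s)
    assert(((x >> s) & 0xFFFF0000u) == 0);  /* high halfword is clear */
  return 0;
}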

llvm-svn: 111196
2010-08-16 22:26:55 +00:00
Bob Wilson 481d7a9ab4 Rename sat_shift operand to shift_imm, in preparation for using it for other
instructions besides saturate instructions.  No functional changes.

llvm-svn: 111168
2010-08-16 18:27:34 +00:00
Bob Wilson bffc757df7 T2I_rbin_irs rr variant is for disassembly only, so don't provide a pattern.
llvm-svn: 111068
2010-08-14 03:18:29 +00:00
Bob Wilson 4577f37d49 Add a Thumb2 t2RSBrr instruction for disassembly only.
This fixes another part of PR7792.

llvm-svn: 111057
2010-08-13 23:24:25 +00:00
Bob Wilson 15b3c3d0ac Move the Thumb2 SSAT and USAT optional shift operator out of the
instruction opcode.  This fixes part of PR7792.

llvm-svn: 111047
2010-08-13 21:48:10 +00:00
Evan Cheng 91033bed94 Really control isel of barrier instructions with cpu feature.
llvm-svn: 110787
2010-08-11 06:36:31 +00:00
Evan Cheng 6e809de90c - Add subtarget feature -mattr=+db which determines whether an ARM CPU has the
memory and synchronization barrier dmb and dsb instructions.
- Change instruction names to something more sensible (matching name of actual
  instructions).
- Added tests for memory barrier codegen.

llvm-svn: 110785
2010-08-11 06:22:01 +00:00
Daniel Dunbar 740c50385c ARM: Quote $p in an asm string.
llvm-svn: 110780
2010-08-11 04:46:10 +00:00
Evan Cheng 5415713d9a CBZ and CBNZ are implemented.
llvm-svn: 110745
2010-08-10 23:27:11 +00:00
Evan Cheng fa16acae44 Delete some unused instructions.
llvm-svn: 110710
2010-08-10 19:36:22 +00:00
Bill Wendling 798617b1ab Use the "isCompare" machine instruction attribute instead of calling the
relatively expensive comparison analyzer on each instruction. Also rename the
comparison analyzer method to something more in line with what it actually does.

This pass will eventually be folded into the Machine CSE pass.

llvm-svn: 110539
2010-08-08 05:04:59 +00:00
Bob Wilson b128824b60 Move newlines before inline jumptables from the asm strings in .td files to
the jtblock_operand print methods.  This avoids extra newlines in the
disassembler's output.  PR7757.

llvm-svn: 109948
2010-07-31 06:28:10 +00:00
Jim Grosbach d343166a0b Many Thumb2 instructions can reference the full ARM register set (i.e.,
have 4 bits per register in the operand encoding), but have undefined
behavior when the operand value is 13 or 15 (SP and PC, respectively).
The trivial coalescer in linear scan sometimes will merge a copy from
SP into a subsequent instruction which uses the copy, and if that
instruction cannot legally reference SP, we get bad code such as:
  mls r0,r9,r0,sp
instead of:
  mov r2, sp
  mls r0, r9, r0, r2

This patch adds a new register class for use by Thumb2 that excludes
the problematic registers (SP and PC) and is used instead of GPR
for those operands which cannot legally reference PC or SP. The
trivial coalescer explicitly requires that the register class
of the destination for the COPY instruction contain the source
register for the COPY to be considered for coalescing. This prevents
errant instructions like that above.

PR7499

llvm-svn: 109842
2010-07-30 02:41:01 +00:00
Nate Begeman c4a96c0e8c Add builtins for ssat/usat, similar to RealView's __ssat and __usat intrinsics.
llvm-svn: 109813
2010-07-29 22:48:09 +00:00
Nate Begeman 7010a71ac4 Add intrinsics __builtin_arm_qadd & __builtin_arm_qsub to allow access to the QADD & QSUB instructions.
They behave identically to the RealView __qadd & __qsub instruction intrinsics.
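
A possible usage sketch (the builtin names come from this commit; the int
prototypes and saturating semantics are assumptions based on the QADD/QSUB
instructions, and this only compiles when targeting ARM):

int saturating_sum(int a, int b) {
  return __builtin_arm_qadd(a, b);  /* signed 32-bit saturating add */
}

int saturating_diff(int a, int b) {
  return __builtin_arm_qsub(a, b);  /* signed 32-bit saturating subtract */
}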

llvm-svn: 109770
2010-07-29 17:56:55 +00:00
Jim Grosbach 716a596cf7 Remove incorrect substitution pattern for UXTB16. It wrongly assumed the input shift was actually a rotate. rdar://8240138
llvm-svn: 109693
2010-07-28 23:17:45 +00:00
Jim Grosbach 3680f70c9d Using BIC for immediates needs an extra bump for its complexity to get
instruction selection to prefer it when possible. rdar://7903972

llvm-svn: 108844
2010-07-20 16:07:04 +00:00
Jim Grosbach 11013eda5a Add basic support to code-gen the ARM/Thumb2 bit-field insert (BFI) instruction
and a combine pattern to use it for setting a bit-field to a constant
value. More to come for non-constant stores.
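
For illustration (an assumed source-level pattern, not an example from the
commit), the kind of constant bit-field store this targets looks like:

struct Flags { unsigned a : 4, b : 8, c : 20; };

void set_b(struct Flags *f) {
  /* Storing a constant into a bit-field: the clear-mask-then-OR sequence can
     be combined into a single bit-field insert of the constant's bits. */
  f->b = 5;
}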

llvm-svn: 108570
2010-07-16 23:05:05 +00:00
Jim Grosbach a90af1ba38 Improve 64-bit subtraction of immediates when parts of the immediate can fit
in the literal field of an instruction. E.g.,
long long foo(long long a) {
  return a - 734439407618LL;
}
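
Concretely (a worked split added here for illustration, not from the commit):
both 32-bit halves of this particular constant are small values that fit in the
literal field, so each half of the 64-bit subtract can take an immediate operand
instead of first materializing the whole constant.

#include <stdint.h>
#include <stdio.h>

int main(void) {
  int64_t imm = 734439407618LL;
  printf("low 32 bits:  %u\n", (uint32_t)imm);         /* 2   */
  printf("high 32 bits: %u\n", (uint32_t)(imm >> 32)); /* 171 */
  return 0;
}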

rdar://7038284

llvm-svn: 108339
2010-07-14 17:45:16 +00:00
Bob Wilson 1aef53403f Add missing address register update to t2LDM_RET instruction.
Patch by Brian Lucas. PR7636.

llvm-svn: 108332
2010-07-14 16:02:13 +00:00
Evan Cheng b59dd8f10a PR7503: uxtb16 is not available for ARMv7-M. Patch by Brian G. Lucas.
llvm-svn: 107122
2010-06-29 05:38:36 +00:00
Eli Friedman 246c41d93e Always allow Thumb-2 SXTB, SXTH, UXTB, and UXTH. Fixes PR7324.
llvm-svn: 106770
2010-06-24 18:20:04 +00:00
Jim Grosbach 523e554afa LEApcrelJT shouldn't be marked as neverHasSideEffects, as we don't want it
being moved away from the jump table it references. rdar://8104340

llvm-svn: 106483
2010-06-21 21:27:27 +00:00
Evan Cheng 2d51c7c592 Allow ARM if-converter to be run after post allocation scheduling.
- This fixed a number of bugs in if-converter, tail merging, and post-allocation
  scheduler. If-converter now runs branch folding / tail merging first to
  maximize if-conversion opportunities.
- Also changed the t2IT instruction slightly. It now defines the ITSTATE
  register which is read by instructions in the IT block.
- Added Thumb2 specific hazard recognizer to ensure the scheduler doesn't
  change the instruction ordering in the IT block (since IT mask has been
  finalized). It also ensures no other instructions can be scheduled between
  instructions in the IT block.

This is not yet enabled.

llvm-svn: 106344
2010-06-18 23:09:54 +00:00
Jim Grosbach 84511e1526 Clean up 80 column violations. No functional change.
llvm-svn: 105350
2010-06-02 21:53:11 +00:00
Jim Grosbach 0b20fdaff0 Cosmetic cleanup. No functional change.
llvm-svn: 104974
2010-05-28 17:51:20 +00:00
Jim Grosbach 37eb2c24b9 make sure accesses to set up the jmpbuf don't get moved after it by the scheduler. Add a missing \n.
llvm-svn: 104967
2010-05-28 17:37:40 +00:00
Jim Grosbach faa3abbe39 Update the saved stack pointer in the sjlj function context following either
an alloca() or an llvm.stackrestore(). rdar://8031573

llvm-svn: 104900
2010-05-27 23:49:24 +00:00