Commit Graph

1073 Commits

Author SHA1 Message Date
Eric Christopher 174d872702 Sometimes isPredicable lies to us and tells us we don't need the operands.
Go ahead and add them on when we might want to use them and let
later passes remove them.

Fixes rdar://9118569

llvm-svn: 127518
2011-03-12 01:09:29 +00:00
Jim Grosbach 6d371ce37e Properly pseudo-ize the ARM LDMIA_RET instruction. This has the nice side-
effect that we get proper instruction printing using the "pop" mnemonic for it.

llvm-svn: 127502
2011-03-11 22:51:41 +00:00
Cameron Zwarich 338d362200 Roll r127459 back in:
Optimize trivial branches in CodeGenPrepare, which often get created from the
lowering of objectsize intrinsics. Unfortunately, a number of tests were relying
on llc not optimizing trivial branches, so I had to add an option to allow them
to continue to test what they originally tested.

This fixes <rdar://problem/8785296> and <rdar://problem/9112893>.

llvm-svn: 127498
2011-03-11 21:52:04 +00:00
Daniel Dunbar 94ccb27b43 Revert r127459, "Optimize trivial branches in CodeGenPrepare, which often get
created from the", it broke some GCC test suite tests.

llvm-svn: 127477
2011-03-11 19:30:30 +00:00
Cameron Zwarich cc27b3acc4 Optimize trivial branches in CodeGenPrepare, which often get created from the
lowering of objectsize intrinsics. Unfortunately, a number of tests were relying
on llc not optimizing trivial branches, so I had to add an option to allow them
to continue to test what they originally tested.

This fixes <rdar://problem/8785296> and <rdar://problem/9112893>.

llvm-svn: 127459
2011-03-11 04:54:27 +00:00
Evan Cheng adb9c03e41 Avoid replacing the value of a directly stored load with the stored value if the load is indexed. rdar://9117613.
llvm-svn: 127440
2011-03-11 00:48:56 +00:00
Jim Grosbach 62a7b473af Properly pseudo-ize MOVCCr and MOVCCs.
llvm-svn: 127434
2011-03-10 23:56:09 +00:00
Bob Wilson 45acbd03db Fix a compiler crash where a Glue value had multiple uses. Radar 9049552.
llvm-svn: 127198
2011-03-08 01:17:20 +00:00
Joerg Sonnenberger 62f759791a Be nice to Xcore and the XMOS assembler and avoid quoting section names
that contain only letters, digits and the characters "_" and ".".

llvm-svn: 127028
2011-03-04 20:03:14 +00:00
Devang Patel 906df92d5c XFAIL for all. These tests are darwin specific anyway.
llvm-svn: 127022
2011-03-04 19:38:10 +00:00
Devang Patel a0d73fd65e Disable ARMGlobalMerge on darwin. The debugger is not yet able to extract an individual variable's info from a merged global.
llvm-svn: 127019
2011-03-04 19:11:05 +00:00
Joerg Sonnenberger 852ab890b5 Bug#9033: For the ELF assembler output, always quote the section name.
llvm-svn: 126963
2011-03-03 22:31:08 +00:00
Cameron Zwarich 5dd2aa2615 Eliminate the unused CodeGenPrepare option to split critical edges.
llvm-svn: 126825
2011-03-02 03:31:46 +00:00
Bill Wendling 3b1459b810 Narrow right shifts need to encode their immediates differently from a normal
shift.

   16-bit: imm6<5:3> = '001', 8 - <imm> is encoded in imm6<2:0>
   32-bit: imm6<5:4> = '01', 16 - <imm> is encoded in imm6<3:0>
   64-bit: imm6<5> = '1', 32 - <imm> is encoded in imm6<4:0>

llvm-svn: 126723
2011-03-01 01:00:59 +00:00
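
For illustration, a minimal C sketch (hypothetical helper, not the actual LLVM encoder) of how the imm6 field described in the commit above could be assembled from the element size and shift amount:

    #include <assert.h>
    #include <stdint.h>

    /* Narrow right shift: the shift amount is stored biased, per element size. */
    static uint32_t encodeNarrowShiftImm6(unsigned elemBits, unsigned shift) {
        switch (elemBits) {
        case 16: assert(shift >= 1 && shift <= 8);  return 0x08u | (8  - shift); /* imm6<5:3> = '001' */
        case 32: assert(shift >= 1 && shift <= 16); return 0x10u | (16 - shift); /* imm6<5:4> = '01'  */
        case 64: assert(shift >= 1 && shift <= 32); return 0x20u | (32 - shift); /* imm6<5>   = '1'   */
        }
        assert(0 && "unexpected element size");
        return 0;
    }
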
Jakob Stoklund Olesen 87d7408f29 Fix typo introduced by r126661: "Fix a typo which ..."
llvm-svn: 126666
2011-02-28 19:18:59 +00:00
Evan Cheng 6e3d443646 Fix a typo which caused a DAG combine crash. rdar://9059537.
llvm-svn: 126661
2011-02-28 18:45:27 +00:00
Bob Wilson e3ecd5fb9b Add patterns to use post-increment addressing for Neon VST1-lane instructions.
llvm-svn: 126477
2011-02-25 06:42:42 +00:00
Devang Patel b52040da17 Move arch specific tests in arch specific directories.
llvm-svn: 126401
2011-02-24 19:06:27 +00:00
Devang Patel 37e056e455 Check only relevant strings in output to increase stability of the tests.
llvm-svn: 126338
2011-02-23 22:35:57 +00:00
Evan Cheng d6b641e5bc More fcopysign correctness and performance fixes.
The previous codegen for the slow path (when values are in VFP / NEON
registers) was incorrect if the source is NaN.

The new codegen uses NEON vbsl instruction to copy the sign bit. e.g.
        vmov.i32        d1, #0x80000000
        vbsl    d1, d2, d0
If NEON is not available, it uses integer instructions to copy the sign bit.
rdar://9034702

llvm-svn: 126295
2011-02-23 02:24:55 +00:00
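
For readers unfamiliar with vbsl, a small C model of what the sequence above computes per 32-bit lane (hypothetical helper, illustration only): with the mask preloaded into the destination, vbsl keeps the sign bit from the sign operand and everything else from the magnitude operand.

    #include <stdint.h>

    static uint32_t copysign_bits(uint32_t magnitude, uint32_t sign) {
        const uint32_t mask = 0x80000000u;           /* vmov.i32 d1, #0x80000000             */
        return (mask & sign) | (~mask & magnitude);  /* vbsl d1, d2, d0: (d1 & d2) | (~d1 & d0) */
    }
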
Evan Cheng 2ce663031f available_externally (hidden or not) GVs are always accessed via stubs. rdar://9027648.
llvm-svn: 126191
2011-02-22 06:58:34 +00:00
Bob Wilson 60f50bc9f1 PR9139: Specify ARM/Darwin triple for vector-DAGCombine.ll test.
The i64_buildvector test in this file relies on the alignment of i64 and
f64 types being the same, which is true for Darwin but not AAPCS.

llvm-svn: 125525
2011-02-14 22:12:50 +00:00
Nate Begeman fa62d50481 Implement sdiv & udiv for <4 x i16> and <8 x i8> NEON vector types.
This avoids moving each element to the integer register file and calling __divsi3 etc. on it.

llvm-svn: 125402
2011-02-11 20:53:29 +00:00
Evan Cheng 2da1c95993 Fix buggy fcopysign lowering.
This
define float @foo(float %x, float %y) nounwind readnone {
entry:
  %0 = tail call float @copysignf(float %x, float %y) nounwind readnone
  ret float %0
}

Was compiled to:
    vmov     s0, r1
    bic      r0, r0, #-2147483648
    vmov     s1, r0
    vcmpe.f32    s0, #0
    vmrs         apsr_nzcv, fpscr
    it           lt
    vneglt.f32   s1, s1
    vmov         r0, s1
    bx           lr

This fails to copy the sign of -0.0f because it's lost during the float to int
conversion. Also, it's sub-optimal when the inputs are in GPR registers.

Now it uses integer 'and' + 'or' operations when it's profitable. And it's correct!
    lsrs    r1, r1, #31
    bfi     r0, r1, #31, #1
    bx      lr
rdar://8984306

llvm-svn: 125357
2011-02-11 02:28:55 +00:00
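
A C sketch of the integer sequence shown above (hypothetical helper, illustration only): shift the sign bit of y down (lsrs r1, r1, #31), then insert it as the top bit of x (bfi r0, r1, #31, #1).

    #include <stdint.h>

    static uint32_t copysignf_bits(uint32_t x, uint32_t y) {
        uint32_t sign = y >> 31;                  /* lsrs r1, r1, #31     */
        return (x & 0x7FFFFFFFu) | (sign << 31);  /* bfi  r0, r1, #31, #1 */
    }
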
Andrew Trick 7d90bf5551 PostRA antidependence breaker unit test for PR8986.
llvm-svn: 125091
2011-02-08 17:42:05 +00:00
Andrew Trick c5daa45c8d PostRA antidependence breaker unit test for rdar://8959122.
llvm-svn: 125090
2011-02-08 17:41:12 +00:00
Evan Cheng e1a4ac9b5b Fix an obvious typo which caused an isel assertion. rdar://8964854.
llvm-svn: 125023
2011-02-07 18:50:47 +00:00
Bob Wilson 06fce87c4a Add codegen support for using post-increment NEON load/store instructions.
The vld1-lane, vld1-dup and vst1-lane instructions do not yet support using
post-increment versions, but all the rest of the NEON load/store instructions
should be handled now.

llvm-svn: 125014
2011-02-07 17:43:21 +00:00
Jason W Kim 85b0af177f Rework some .ARM.attribute work for improved gcc compatibility.
Unified EmitTextAttribute for both Asm and Obj emission (.cpu only)
Added necessary cortex-A8 related attrs for codegen compat tests.

llvm-svn: 124995
2011-02-07 00:49:53 +00:00
Evan Cheng d42641c6b5 Given a pair of floating point load and store, if there are no other uses of
the load, then it may be legal to transform the load and store to integer
load and store of the same width.

This is done if the target specified the transformation as profitable. e.g.
On arm, this can transform:
vldr.32 s0, []
vstr.32 s0, []

to

ldr r12, []
str r12, []

rdar://8944252

llvm-svn: 124708
2011-02-02 01:06:55 +00:00
Eric Christopher ebd8db7d00 Add a testcase for my last checkin.
llvm-svn: 124358
2011-01-27 06:01:17 +00:00
Evan Cheng d6093ff4cb Don't merge restore with tail call instruction.
llvm-svn: 124167
2011-01-25 01:28:33 +00:00
Evan Cheng 2f2435d026 Last round of fixes for movw + movt global address codegen.
1. Fixed ARM pc adjustment.
2. Fixed dynamic-no-pic codegen
3. CSE of pc-relative load of global addresses.

It's now enabled by default for Darwin.

llvm-svn: 123991
2011-01-21 18:55:51 +00:00
Andrew Trick bd428ec50f Enable support for precise scheduling of the instruction selection
DAG. Disable using "-disable-sched-cycles".

For ARM, this enables a framework for modeling the cpu pipeline and
counting stalls. It also activates several heuristics to drive
scheduling based on the model. Scheduling is inherently imprecise at
this stage, and until spilling is improved it may defeat attempts to
schedule. However, this framework provides greater control over
tuning codegen.

Although the flag is not target-specific, it should have very little
effect on the default scheduler used by x86. The only two changes that
affect x86 are:
- scheduling a high-latency operation bumps the current cycle so independent
  operations can have their latency covered. i.e. two independent 4
  cycle operations can produce results in 4 cycles, not 8 cycles.
- Two operations with equal register pressure impact and no
  latency-based stalls on their uses will be prioritized by depth before height
  (height is irrelevant if no stalls occur in the schedule below this point).

llvm-svn: 123971
2011-01-21 06:19:05 +00:00
Andrew Trick 47ff14b091 Convert -enable-sched-cycles and -enable-sched-hazard to -disable
flags. They are still not enabled in this revision.

Added TargetInstrInfo::isZeroCost() to fix a fundamental problem with
the scheduler's model of operand latency in the selection DAG.

Generalized unit tests to work with sched-cycles.

llvm-svn: 123969
2011-01-21 05:51:33 +00:00
Evan Cheng 028ccbfcbf Don't be overly aggressive with CSE of "ldr constantpool". If it's a pc-relative
value, the "add pc" must be CSE'ed at the same time. We could follow the same
approach as T2 by adding pseudo instructions that combine the ldr + "add pc".
But the better approach is to use movw + movt (which I will enable soon), so
I'll leave this as a TODO.

llvm-svn: 123949
2011-01-20 23:55:07 +00:00
Evan Cheng f2e914be15 Add test.
llvm-svn: 123906
2011-01-20 08:38:21 +00:00
Evan Cheng b8b0ad80a8 Sorry, several patches in one.
TargetInstrInfo:
Change produceSameValue() to take MachineRegisterInfo as an optional argument.
When in SSA form, targets can use it to make more aggressive equality analysis.

Machine LICM:
1. Eliminate isLoadFromConstantMemory, use MI.isInvariantLoad instead.
2. Fix a bug which prevented CSE of instructions which are not re-materializable.
3. Use improved form of produceSameValue.

ARM:
1. Teach ARM produceSameValue to look past some PIC labels.
2. Look for operands from different loads of different constant pool entries
   which have same values.
3. Re-implement PIC GA materialization using movw + movt. Combine the pair with
   a "add pc" or "ldr [pc]" to form pseudo instructions. This makes it possible
   to re-materialize the instruction, allow machine LICM to hoist the set of
   instructions out of the loop and make it possible to CSE them. It's a bit
   hacky, but it significantly improves code quality.
4. Some minor bug fixes as well.

With the fixes, using movw + movt to materialize GAs significantly outperforms the
load-from-constantpool method. 186.crafty and 255.vortex improved > 20%, 254.gap
and 176.gcc ~10%.

llvm-svn: 123905
2011-01-20 08:34:58 +00:00
Eric Christopher bb14f65672 If we can, lower the multiply part of a umulo/smulo call to a libcall
with an invalid type, then split the result and perform the overflow check
normally.

Fixes the 32-bit parts of rdar://8622122 and rdar://8774702.

llvm-svn: 123864
2011-01-20 00:29:24 +00:00
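
Roughly, the expansion described above amounts to the following C sketch for the unsigned case (hypothetical helper, shown at 32 bits for brevity): do the multiply at double width (the libcall), split the result, and flag overflow if the high half is non-zero.

    #include <stdint.h>

    static uint32_t umulo32(uint32_t a, uint32_t b, int *overflow) {
        uint64_t wide = (uint64_t)a * (uint64_t)b;   /* stand-in for the wide-multiply libcall */
        *overflow = (uint32_t)(wide >> 32) != 0;     /* check the split-off high half          */
        return (uint32_t)wide;
    }
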
Devang Patel 2d9e532a3a Fix debug info for merged global.
llvm-svn: 123862
2011-01-20 00:02:16 +00:00
Evan Cheng dfce83c8f5 Materialize GA addresses with movw + movt pairs for Darwin in PIC mode. e.g.
movw    r0, :lower16:(L_foo$non_lazy_ptr-(LPC0_0+4))
        movt    r0, :upper16:(L_foo$non_lazy_ptr-(LPC0_0+4))
LPC0_0:
        add     r0, pc, r0

It's not yet enabled by default as some tests are failing. I suspect bugs in
downstream tools.

llvm-svn: 123619
2011-01-17 08:03:18 +00:00
Eric Christopher 3904343c68 Even if we don't have 7 bytes of stack space we may need to save and
restore the stack pointer from the frame pointer on thumbv6.

Fixes rdar://8819685

llvm-svn: 123196
2011-01-11 00:16:04 +00:00
Evan Cheng 078b0b095e Recognize inline asm 'rev $0, $1' as a bswap intrinsic call.
llvm-svn: 123048
2011-01-08 01:24:27 +00:00
Bob Wilson 6f2b8966ca Lower some BUILD_VECTORS using VEXT+shuffle.
Patch by Tim Northover.

llvm-svn: 123035
2011-01-07 21:37:30 +00:00
Bob Wilson 99da75c17d Add testcases for PR8411 (vget_low and vget_high implemented as shuffles).
llvm-svn: 122997
2011-01-07 06:44:14 +00:00
Bob Wilson 8265d56638 Add ARM patterns to match EXTRACT_SUBVECTOR nodes.
Also fix an off-by-one in SelectionDAGBuilder that was preventing shuffle
vectors from being translated to EXTRACT_SUBVECTOR.
Patch by Tim Northover.

The test changes are needed to keep those spill-q tests from testing aligned
spills and restores.  If the only aligned stack objects are spill slots, we
no longer realign the stack frame.  Prior to this patch, an EXTRACT_SUBVECTOR
was legalized by loading from the stack, which created an aligned frame index.
Now, however, there is nothing except the spill slot in the stack frame, so
I added an aligned alloca.

llvm-svn: 122995
2011-01-07 04:59:04 +00:00
Bob Wilson 914df82a2e PR8921: LDM/POP do not support interworking prior to v5t.
llvm-svn: 122970
2011-01-06 19:24:41 +00:00
Anton Korobeynikov 879be84aa5 Update the test
llvm-svn: 122666
2011-01-01 20:57:26 +00:00
Bob Wilson 36be00ceb3 Radar 8803471: Fix expansion of ARM BCCi64 pseudo instructions.
If the basic block containing the BCCi64 (or BCCZi64) instruction ends with
an unconditional branch, that branch needs to be deleted before appending
the expansion of the BCCi64 to the end of the block.

llvm-svn: 122521
2010-12-23 22:45:49 +00:00
Bob Wilson 1a20c2aedd Add ARM-specific DAG combining to cast i64 vector element load/stores to f64.
Type legalization splits up i64 values into pairs of i32 values, which leads
to poor quality code when inserting or extracting i64 vector elements.
If the vector element is loaded or stored, it can be treated as an f64 value
and loaded or stored directly from a VPR register.  Use the pre-legalization
DAG combiner to cast those vector elements to f64 types so that the type
legalizer won't mess them up.  Radar 8755338.

llvm-svn: 122319
2010-12-21 06:43:19 +00:00
Chris Lattner 2b43f2df2b move this test into the ARM test so that it is only run when the arm backend
is enabled.

llvm-svn: 122163
2010-12-19 02:58:14 +00:00
Bob Wilson 00871c71e9 Fix result type of Neon floating-point comparisons against zero.
The result vector elements are always integers.  Radar 8782191.

llvm-svn: 122112
2010-12-18 00:04:33 +00:00
Bill Wendling 3fff1fd49b During local stack slot allocation, the materializeFrameBaseRegister function
may be called. If the entry block is empty, the insertion point iterator will be
the "end()" value. Calling ->getParent() on it (among others) causes problems.

Modify materializeFrameBaseRegister to take the machine basic block and insert
the frame base register at the beginning of that block. (It's very similar to
what the code does already. The only difference is that it will always insert
at the beginning of the entry block instead of after a previous materialization
of the frame base register. I doubt that that matters here.)

<rdar://problem/8782198>

llvm-svn: 122104
2010-12-17 23:09:14 +00:00
Bob Wilson 5408144add Fix a DAGCombiner crash when folding binary vector operations with constant
BUILD_VECTOR operands where the element type is not legal.  I had previously
changed this code to insert TRUNCATE operations, but that was just wrong.

llvm-svn: 122102
2010-12-17 23:06:49 +00:00
Bob Wilson 494d7f0367 Combine several vector-related DAGCombiner tests.
llvm-svn: 122101
2010-12-17 23:06:46 +00:00
Bob Wilson bfc6904fc6 Fix crash compiling a QQQQ REG_SEQUENCE for a Neon vld3_lane operation.
Radar 8776599

llvm-svn: 122018
2010-12-17 01:21:12 +00:00
Jason W Kim 4c1386add9 1. ARM/MC/ELF: A few more ELF relocs for .o
2. Fixed EmitLocalCommonSymbol for ELF (Yes, they exist. :)
   Test added.

llvm-svn: 121951
2010-12-16 03:12:17 +00:00
Eric Christopher 347f4c32e8 Don't handle -arm-long-calls in fast isel for now.
llvm-svn: 121919
2010-12-15 23:47:29 +00:00
Bob Wilson fa27a8621c Add Neon VCVT instructions for f32 <-> f16 conversions.
Clang is now providing intrinsics for these and so we need to support them
in the backend.  Radar 8068427.

llvm-svn: 121902
2010-12-15 22:14:12 +00:00
Evan Cheng c177813755 (bfi A, (and B, C1), C2) -> (bfi A, B, C2) iff C1 & C2 == C1. rdar://8458663
llvm-svn: 121746
2010-12-14 03:22:07 +00:00
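
A rough C model of why a fold like this is safe (architectural bfi semantics; the DAG node's mask encoding differs in detail, and the helper is hypothetical): bfi only reads the low width bits of its source, so an AND whose mask keeps all of those bits is a no-op.

    #include <stdint.h>

    /* BFI Rd, Rn, #lsb, #width: insert Rn[width-1:0] into Rd[lsb+width-1:lsb]. */
    static uint32_t bfi(uint32_t dst, uint32_t src, unsigned lsb, unsigned width) {
        uint32_t low = (width == 32) ? ~0u : ((1u << width) - 1u);
        return (dst & ~(low << lsb)) | ((src & low) << lsb);
    }
    /* bfi(A, B & C1, lsb, width) == bfi(A, B, lsb, width) whenever C1 keeps
       every bit in [width-1:0], since those are the only source bits bfi reads. */
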
Jason W Kim 1296055841 fix fixme case typo :-)
llvm-svn: 121743
2010-12-14 01:42:38 +00:00
Jason W Kim 0e909c5f9c First cut of ARM/MC/ELF PIC relocations.
Test has fixme, to move to .s -> .o test when AsmParser works better.

llvm-svn: 121732
2010-12-13 23:16:07 +00:00
Evan Cheng 3434575704 (or (and (shl A, #shamt), mask), B) => ARMbfi B, A, ~mask where lsb(mask) == #shamt. rdar://8752056
llvm-svn: 121606
2010-12-11 04:11:38 +00:00
Bob Wilson 9375d27460 Add float patterns for Neon vld1-lane/dup and vst1-lane operations.
llvm-svn: 121583
2010-12-10 22:13:32 +00:00
Bob Wilson d29b38c893 Fix some invalid alignments for Neon vld-dup and vld/st-lane instructions.
Alignments smaller than the total size of the memory being loaded or stored,
unless the alignment is 8 bytes, are not allowed.  Add tests for this, too.

llvm-svn: 121506
2010-12-10 19:37:42 +00:00
Jim Grosbach 5fccad84a3 ARM stm/ldm instructions require more than one register in the register list.
Otherwise, a plain str/ldr should be used instead. Make sure we account for
that in prologue/epilogue code generation.
rdar://8745460

llvm-svn: 121391
2010-12-09 18:31:13 +00:00
Jason W Kim c79c5f6e8c ARM/MC/ELF TPsoft is now a proper pseudo inst.
Added test to check bl __aeabi_read_tp gets emitted properly for ELF/ASM
as well as ELF/OBJ (including fixup)

Also added support for ELF::R_ARM_TLS_IE32

llvm-svn: 121312
2010-12-08 23:14:44 +00:00
Evan Cheng 775ead3293 Fix a bad prologue / epilogue codegen bug where the compiler would emit illegal
vpush instructions to save / restore VFP / NEON registers like this:
vpush {d8,d10,d11}
vpop {d8,d10,d11}

vpush and vpop do not allow gaps in the register list.
rdar://8728956

llvm-svn: 121197
2010-12-07 23:08:38 +00:00
Devang Patel c24048a718 If dbg_declare() or dbg_value() is not lowered by isel, then emit a DEBUG message instead of creating a DBG_VALUE for an undefined value in reg0.
llvm-svn: 121059
2010-12-06 22:39:26 +00:00
Evan Cheng 62c7b5bf76 Making use of VFP / NEON floating point multiply-accumulate / subtraction is
difficult on current ARM implementations for a few reasons.
1. Even though a single vmla has latency that is one cycle shorter than a pair
   of vmul + vadd, a RAW hazard during the first (4? on Cortex-A8) can cause
   an additional pipeline stall. So it's frequently better to simply codegen
   vmul + vadd.
2. A vmla followed by a vmul, vmadd, or vsub causes the second fp instruction to
   stall for 4 cycles. We need to schedule them apart.
3. A vmla followed by a vmla is a special case. Obviously, issuing back-to-back RAW
   vmla + vmla is very bad. But this isn't ideal either:
     vmul
     vadd
     vmla
   Instead, we want to expand the second vmla:
     vmla
     vmul
     vadd
   Even with the 4 cycle vmul stall, the second sequence is still 2 cycles
   faster.

Up to now, isel simply avoids codegen'ing fp vmla / vmls. This works well enough
but it isn't the optimal solution. This patch attempts to make it possible to
use vmla / vmls in cases where it is profitable.

A. Add missing isel predicates which cause vmla to be codegen'ed.
B. Make sure the fmul in (fadd (fmul)) has a single use. We don't want to
   compute a fmul and a fmla.
C. Add additional isel checks for vmla, avoid cases where vmla is feeding into
   fp instructions (except for the #3 exceptional case).
D. Add ARM hazard recognizer to model the vmla / vmls hazards.
E. Add a special pre-regalloc case to expand vmla / vmls when it's likely the
   vmla / vmls will trigger one of the special hazards.

Work in progress, only A+B are enabled.

llvm-svn: 120960
2010-12-05 22:04:16 +00:00
Jason W Kim 29805961d8 ARM/MC/ELF relocation "hello world" for movw/movt.
Lifted adjustFixupValue() from Darwin for sharing with ELF.
Test added.
TODO:
  refactor ELFObjectWriter::RecordRelocation more.
  Possibly share more code with Darwin?
  Lots more relocations...

llvm-svn: 120534
2010-12-01 02:40:06 +00:00
Evan Cheng d4b0873c06 Enable sibling call optimization of libcalls which are expanded during
legalization time. Since at legalization time there is no mapping from
SDNode back to the corresponding LLVM instruction and the return
SDNode is target specific, this requires a target hook to check for
eligibility. Only x86 and ARM support this form of sibcall optimization
right now.
rdar://8707777

llvm-svn: 120501
2010-11-30 23:55:39 +00:00
Bob Wilson 431ac4ef50 Add support for NEON VLD3-dup instructions.
The encoding for alignment in VLD4-dup instructions is still a work in progress.

llvm-svn: 120356
2010-11-30 00:00:35 +00:00
Evan Cheng 9a133f623c Mark Darwin call instructions as using "r7" to prevent the frame-register
assignment instructions from being moved below / above calls.
rdar://8690640

llvm-svn: 120339
2010-11-29 22:43:27 +00:00
Benjamin Kramer a22f0ce1a3 Add missing colon.
llvm-svn: 120336
2010-11-29 22:39:38 +00:00
Bob Wilson 77ab165afe Add support for NEON VLD3-dup instructions.
llvm-svn: 120312
2010-11-29 19:35:29 +00:00
Bob Wilson 2d790df105 Add support for NEON VLD2-dup instructions.
llvm-svn: 120236
2010-11-28 06:51:26 +00:00
Bob Wilson c92eea0175 Add NEON VLD1-dup instructions (load 1 element to all lanes).
llvm-svn: 120194
2010-11-27 06:35:16 +00:00
Bob Wilson d7d2cf7842 Recognize sign/zero-extended constant BUILD_VECTORs for VMULL operations.
We need to check if the individual vector elements are sign/zero-extended
values.  For now this only handles constant values.  Radar 8687140.

llvm-svn: 120034
2010-11-23 19:38:38 +00:00
Evan Cheng eb56dca4fd Fix epilogue codegen to avoid leaving the stack pointer in an invalid
state. Previously Thumb2 would restore sp from fp like this:
mov sp, r7
sub sp, #4
If an interrupt is taken after the 'mov' but before the 'sub', callee-saved
registers might be clobbered by the interrupt handler. Instead, try
restoring directly from sp:
add sp, #4
Or, if necessary (with VLA, etc.) use a scratch register to compute sp and
then restore it:
sub.w r4, r7, #8
mov sp, r4
rdar://8465407

llvm-svn: 119977
2010-11-22 18:12:04 +00:00
Tanya Lattner cd68095650 Fix bug in DAGCombiner for ARM that was trying to do a ShiftCombine on illegal types (vector should be split first).
Added test case.

llvm-svn: 119749
2010-11-18 22:06:46 +00:00
Eric Christopher b006fc9c07 Rewrite stack callee saved spills and restores to use push/pop instructions.
Remove movePastCSLoadStoreOps and associated code for simple pointer
increments. Update routines that depended upon other opcodes for save/restore.

Adjust all testcases accordingly.

llvm-svn: 119725
2010-11-18 19:40:05 +00:00
Dale Johannesen 0659c8f157 These tests are looking for library function names that
appear to differ on Linux.  Try to make them pass on Linux.
Would be good for a Linux person to review this.

llvm-svn: 119572
2010-11-17 21:57:32 +00:00
Bob Wilson 881b45ccdf Change ARMGlobalMerge to keep BSS globals in separate pools.
This completes the fixes for Radar 8673120.

llvm-svn: 119566
2010-11-17 21:25:39 +00:00
Bob Wilson 4c8ab19c22 Fix ARMGlobalMerge pass to check if globals are entirely within range.
It is generally not sufficient to check if the starting offset is in range
of the maximum offset that can be efficiently used for the target.

llvm-svn: 119565
2010-11-17 21:25:36 +00:00
Bob Wilson 59182fb4b5 Change the symbol for merged globals from "merged" to "_MergedGlobals".
This makes it more clear that the symbol is an internal, compiler-generated
name and gives a little more description about its contents.

llvm-svn: 119564
2010-11-17 21:25:33 +00:00
Bob Wilson f796d4b469 Fix the ARMGlobalMerge pass to look at variable sizes instead of pointer sizes.
It was mistakenly looking at the pointer type when checking for the size of
global variables.  This is a partial fix for Radar 8673120.

llvm-svn: 119563
2010-11-17 21:25:27 +00:00
Evan Cheng 7f8ab6ee8b Remove ARM isel hacks that fold large immediates into a pair of add, sub, and,
and xor. The 32-bit move immediates can be hoisted out of loops by machine
LICM but the isel hacks were preventing them.

Instead, let peephole optimization pass recognize registers that are defined by
immediates and the ARM target hook will fold the immediates in.

Other changes include 1) do not fold and / xor into cmp to isel TST / TEQ
instructions if there are multiple uses. This happens when the 'and' is live
out; machine sink would have sunk the computation, and that ends up pessimizing
code. The peephole pass would recognize situations where the 'and' can be
toggled to define CPSR and eliminate the comparison anyway.

2) Move peephole pass to after machine LICM, sink, and CSE to avoid blocking
important optimizations.

rdar://8663787, rdar://8241368

llvm-svn: 119548
2010-11-17 20:13:28 +00:00
Jakob Stoklund Olesen e2b8858611 Fix PR8612 in the standard spiller, take two.
The live range of a register defined by an early clobber starts at the use slot,
not the def slot.

Except when it is an early clobber tied to a use operand. Then it starts at the
def slot like a standard def.

llvm-svn: 119305
2010-11-16 00:40:59 +00:00
Jakob Stoklund Olesen 97154f67d9 Revert "Fix PR8612 in the standard spiller as well."
This reverts r119183, which broke the buildbots.

llvm-svn: 119270
2010-11-15 21:51:51 +00:00
Eric Christopher 964943780b Recommit this change and remove the failing part of the test - it didn't
pass in the first place and was masked by earlier failures not warning
and aborting the block.

llvm-svn: 119184
2010-11-15 21:11:06 +00:00
Jakob Stoklund Olesen 97825bcbfd Fix PR8612 in the standard spiller as well.
The live range of a register defined by an early clobber starts at the use slot,
not the def slot.

llvm-svn: 119183
2010-11-15 20:55:53 +00:00
Jakob Stoklund Olesen ddf25c341c When spilling a register defined by an early clobber, make sure that the new
live ranges for the spill register are also defined at the use slot instead of
the normal def slot.

This fixes PR8612 for the inline spiller. A use was being allocated to the same
register as a spilled early clobber def.

This problem exists in all the spillers. A fix for the standard spiller is
forthcoming.

llvm-svn: 119182
2010-11-15 20:55:49 +00:00
Evan Cheng 2bcb8daa44 Add conditional move of large immediate.
llvm-svn: 118968
2010-11-13 02:25:14 +00:00
Evan Cheng 8ce967e393 Fix an obvious typo which inverted an immediate.
llvm-svn: 118951
2010-11-13 00:27:47 +00:00
Eric Christopher a08ccc8cb9 This should be still failing, but is. Disable it with the
forget-me-stick for now.

llvm-svn: 118950
2010-11-13 00:25:06 +00:00
Evan Cheng 0fc8084a64 Add conditional mvn instructions.
llvm-svn: 118935
2010-11-12 22:42:47 +00:00
Evan Cheng 2d59ee34f1 Add some missing isel predicates on def : pat patterns to avoid generating VFP vmla / vmls (they cause stalls). Disabling them in isel is probably not the right solution; I'll look into a proper solution next.
llvm-svn: 118922
2010-11-12 20:32:20 +00:00
Owen Anderson c7baee31ad Add support for ARM's specialized vector-compare-against-zero instructions.
llvm-svn: 118453
2010-11-08 23:21:22 +00:00
Dale Johannesen 0ef474730f Revert 118422 in search of bot verdancy.
llvm-svn: 118429
2010-11-08 19:17:22 +00:00
Jason W Kim f3e224f830 Support -mcpu=cortex-a8 in ARM attributes - Has Fixme. 1 Test modified.
llvm-svn: 118422
2010-11-08 17:58:07 +00:00
Owen Anderson 30c4892ea5 Add codegen and encoding support for the immediate form of vbic.
llvm-svn: 118291
2010-11-05 19:27:46 +00:00
Evan Cheng 21acf9fb38 Fix @llvm.prefetch isel. Selecting between pld / pldw using the first immediate rw. There is currently no intrinsic that matches to pli.
llvm-svn: 118237
2010-11-04 05:19:35 +00:00
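
For context, the "first immediate rw" mentioned above is the read/write flag of the prefetch intrinsic; in C it is reachable through Clang/GCC's __builtin_prefetch, which Clang lowers to @llvm.prefetch (illustrative snippet):

    void touch(const int *p, int *q) {
        __builtin_prefetch(p, 0 /* read  */, 3);  /* rw = 0 selects pld                 */
        __builtin_prefetch(q, 1 /* write */, 3);  /* rw = 1 selects pldw (v7 + MP ext.) */
    }
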
Owen Anderson bc9b31c493 Covert VORRIMM to be produced via early target-specific DAG combining, rather than legalization.
This is both the conceptually correct place for it, as well as allowing it to be more aggressive.

llvm-svn: 118204
2010-11-03 23:15:26 +00:00
Owen Anderson 0747307049 Add support for code generation of the one register with immediate form of vorr.
We could be more aggressive about making this work for a larger range of constants,
but this seems like a good start.

llvm-svn: 118201
2010-11-03 22:44:51 +00:00
Evan Cheng 3ad8df65c5 Fix test.
llvm-svn: 118187
2010-11-03 18:21:33 +00:00
Bob Wilson 7d0ac84abd Add codegen patterns for VST1-lane instructions. Radar 8599955.
llvm-svn: 118176
2010-11-03 16:24:53 +00:00
Bob Wilson ceb49296ef Check for extractelement with a variable operand for the element number.
For NEON we had been assuming this was always an immediate constant.

llvm-svn: 118175
2010-11-03 16:24:50 +00:00
Evan Cheng 8740ee3637 Fix preload instruction isel. Only v7 supports pli, and only v7 with mp extension supports pldw. Add subtarget attribute to denote mp extension support and legalize illegal ones to nothing.
llvm-svn: 118160
2010-11-03 06:34:55 +00:00
Evan Cheng 6f36042557 Add support to match @llvm.prefetch to pld / pldw / pli. rdar://8601536.
llvm-svn: 118152
2010-11-03 05:14:24 +00:00
Evan Cheng debf9c502a Two sets of changes. Sorry they are intermingled.
1. Fix pre-ra scheduler so it doesn't try to push instructions above calls to
   "optimize for latency". Call instructions don't have the right latency and
   this is more likely to introduce spills.
2. Fix if-converter cost function. For ARM, it should use instruction latencies,
   not # of micro-ops since multi-latency instructions are completely executed
   even when the predicate is false. Also, some instructions will be "slower"
   when they are predicated due to the register def becoming an implicit input.
   rdar://8598427

llvm-svn: 118135
2010-11-03 00:45:17 +00:00
John Thompson beffa5bef1 Inline asm mult-alt constraint tests.
llvm-svn: 118107
2010-11-02 23:01:44 +00:00
Jim Grosbach 0b7fda23cc Revert r114340 (improvements in Darwin function prologue/epilogue), as it broke
assumptions about stack layout. Specifically, LR must be saved next to FP.

llvm-svn: 118026
2010-11-02 17:35:25 +00:00
Bob Wilson dd9fbaa9c0 Add support for alignment operands on VLD1-lane instructions.
This is another part of the fix for Radar 8599955.

llvm-svn: 117976
2010-11-01 23:40:51 +00:00
Bob Wilson 7e57573844 Add VLD1-lane testcases for quad-register types.
llvm-svn: 117975
2010-11-01 23:40:46 +00:00
Bob Wilson dc44990c7d Add NEON VLD1-lane instructions. Partial fix for Radar 8599955.
llvm-svn: 117964
2010-11-01 22:04:05 +00:00
Bill Wendling c6627eec13 When we look at instructions to convert to setting the 's' flag, we need to look
at more than those which define CPSR. You can have this situation:

(1)  subs  ...
(2)  sub   r6, r5, r4
(3)  movge ...
(4)  cmp   r6, 0
(5)  movge ...

We cannot convert (2) to "subs" because (3) is using the CPSR set by
(1). There's an analogous situation here:

(1)  sub   r1, r2, r3
(2)  sub   r4, r5, r6
(3)  cmp   r4, ...
(5)  movge ...
(6)  cmp   r1, ...
(7)  movge ...

We cannot convert (1) to "subs" because of the intervening use of CPSR.

llvm-svn: 117950
2010-11-01 20:41:43 +00:00
Bob Wilson 44be217af1 NEON does not support truncating vector stores. Radar 8598391.
llvm-svn: 117940
2010-11-01 18:31:39 +00:00
Bill Wendling 359dd0c6bd More tests to XFAIL. The arm-and-tst-peephole.ll test passes even when the
peephole optimizer is disabled. That's not good at all.

llvm-svn: 117905
2010-11-01 05:59:43 +00:00
Bill Wendling cd4750cb4d Disable because peephole is disabled.
llvm-svn: 117903
2010-11-01 05:48:44 +00:00
Evan Cheng 2b3f25e031 Teach machine cse to eliminate instructions with multiple physreg uses and defs. rdar://8610857.
llvm-svn: 117745
2010-10-29 23:36:03 +00:00
Bob Wilson 08882be86c Remove DAG combiner patch to fold vector splats. Instcombiner does it now.
llvm-svn: 117720
2010-10-29 22:03:02 +00:00
Bob Wilson f63da12be9 Teach the DAG combiner to fold a splat of a splat. Radar 8597790.
Also do some minor refactoring to reduce indentation.

llvm-svn: 117558
2010-10-28 17:06:14 +00:00
Evan Cheng ff310737e5 Re-commit 117518 and 117519 now that ARM MC test failures are out of the way.
llvm-svn: 117531
2010-10-28 06:47:08 +00:00
Evan Cheng e2c211c1b9 Revert 117518 and 117519 for now. They changed scheduling and cause MC tests to fail. Ugh.
llvm-svn: 117520
2010-10-28 02:00:25 +00:00
Evan Cheng ff1c862f8e - Assign load / store with shifter op address modes the right itinerary classes.
- For now, loads of [r, r] addressing mode are the same as the
  [r, r lsl/lsr/asr #] variants. ARMBaseInstrInfo::getOperandLatency() should
  identify the former case and reduce the output latency by 1.
- Also identify [r, r << 2] case. This special form of shifter addressing mode
  is "free".

llvm-svn: 117519
2010-10-28 01:49:06 +00:00
Evan Cheng 59bbc545e0 Shifter ops are not always free. Do not fold them (especially to form
complex load / store addressing mode) when they have higher cost and
when they have more than one use.

llvm-svn: 117509
2010-10-27 23:41:30 +00:00
Bob Wilson c7334a146e SelectionDAG shuffle nodes do not allow operands with different numbers of
elements than the result vector type.  So, when an instruction like:

%8 = shufflevector <2 x float> %4, <2 x float> %7, <4 x i32> <i32 1, i32 0, i32 3, i32 2>

is translated to a DAG, each operand is changed to a concat_vectors node that appends 2 undef elements.  That is:

shuffle [a,b], [c,d] is changed to:
shuffle [a,b,u,u], [c,d,u,u]

That's probably the right thing for x86 but for NEON, we'd much rather have:

shuffle [a,b,c,d], undef

Teach the DAG combiner how to do that transformation for ARM.  Radar 8597007.

llvm-svn: 117482
2010-10-27 20:38:28 +00:00
Jim Grosbach 3fe94f1e9f FileCheck'ize
llvm-svn: 117401
2010-10-26 21:26:47 +00:00
Bob Wilson e1961fe289 When the "true" and "false" blocks of a diamond if-conversion are the same,
do not double-count the duplicate instructions by counting once from the
beginning and again from the end.  Keep track of where the duplicates from
the beginning ended and don't go past that point when counting duplicates
at the end.  Radar 8589805.

This change causes one of the MC/ARM/simple-fp-encoding tests to produce
different (better!) code without the vmovne instruction being tested.
I changed the test to produce vmovne and vmoveq instructions but moving
between register files in the opposite direction.  That's not quite the same
but predicated versions of those instructions weren't being tested before,
so at least the test coverage is not any worse, just different.

llvm-svn: 117333
2010-10-26 00:02:24 +00:00
Rafael Espindola 0ed1543d4e Add support for emitting ARM file attributes.
llvm-svn: 117275
2010-10-25 17:50:35 +00:00
Jim Grosbach 4663dc72e2 tidy up
llvm-svn: 117185
2010-10-22 23:46:04 +00:00
Jim Grosbach 3565b0a2e2 Remove duplicate test.
llvm-svn: 117158
2010-10-22 22:04:28 +00:00
Jim Grosbach 6d993fa1a6 tidy up.
llvm-svn: 117157
2010-10-22 22:01:56 +00:00
Jim Grosbach db7415075d FileCheck-ize a few tests.
llvm-svn: 117156
2010-10-22 21:55:03 +00:00
Andrew Trick f4ebec03e0 putback r116983 and fix simple-fp-encoding.ll tests
llvm-svn: 116992
2010-10-21 03:40:16 +00:00
Owen Anderson 9e00f27e14 Revert r116983, which is breaking all the buildbots.
llvm-svn: 116987
2010-10-21 03:11:16 +00:00
Evan Cheng 15c2ac90ec Add missing scheduling itineraries for transfers between core registers and VFP registers.
llvm-svn: 116983
2010-10-21 01:12:00 +00:00
Evan Cheng 63c7608c34 Re-enable register pressure aware machine licm with fixes. Hoist() may have
erased the instruction during LICM so UpdateRegPressureAfter() should not
reference it afterwards.

llvm-svn: 116845
2010-10-19 18:58:51 +00:00
Daniel Dunbar 418204e523 Revert r116781 "- Add a hook for target to determine whether an instruction def
is", which breaks some nightly tests.

llvm-svn: 116816
2010-10-19 17:14:24 +00:00
Evan Cheng 8249dfe6ce - Add a hook for target to determine whether an instruction def is
"long latency" enough to hoist even if it may increase spilling. Reloading
  a value from spill slot is often cheaper than performing an expensive
  computation in the loop. For X86, that means machine LICM will hoist
  SQRT, DIV, etc. ARM will be somewhat aggressive with VFP and NEON
  instructions.
- Enable register pressure aware machine LICM by default.

llvm-svn: 116781
2010-10-19 00:55:07 +00:00
Bob Wilson b6d61dc291 Support alignment for NEON vld-lane and vst-lane instructions.
llvm-svn: 116776
2010-10-19 00:16:32 +00:00
Eric Christopher 7b92c2a9a0 Revert r116220 - thus turning arm fast isel back on by default.
llvm-svn: 116762
2010-10-18 22:53:53 +00:00
Bob Wilson 59351844e1 ARM instructions that are both predicated and set the condition codes
have been printed with the "S" modifier after the predicate.  With ARM's
unified syntax, they are supposed to go in the other order.  We fixed this
for Thumb when we switched to unified syntax but missed changing it for
ARM.  Apparently we don't generate these instructions often because no one
noticed until now.  Thanks to Bill Wendling for the testcase!

llvm-svn: 116563
2010-10-15 03:23:44 +00:00
Jim Grosbach 8b6a9c1574 Refactor the MOVsr[al]_flag and RRX pseudo-instructions to really be pseudos
and let the ARMExpandPseudoInsts pass fix them up into the real (MOVs)
instruction form.

llvm-svn: 116534
2010-10-14 22:57:13 +00:00
Jim Grosbach 062749cb25 Tweak the ARM backend to use the RRX mnemonic instead of the 'mov a, b, rrx'
pseudonym.

llvm-svn: 116512
2010-10-14 20:43:44 +00:00
Eric Christopher e2a0b6841a Found a bug turning this on by default. Disable again for now.
llvm-svn: 116220
2010-10-11 20:26:21 +00:00
Eric Christopher 6002b3b3e1 Remove now non-existent option.
llvm-svn: 116219
2010-10-11 20:21:21 +00:00
Evan Cheng 05f13e94bf Correct some load / store instruction itinerary mistakes:
1. Cortex-A8 load / store multiples can only issue on ALU0.
2. Eliminate A8_Issue; A8_LSPipe will correctly limit the load / store issues.
3. Correctly model all vld1 and vld2 variants.

llvm-svn: 116134
2010-10-09 01:03:04 +00:00
Bill Wendling 748265b0da Simplify test and move into a generic "crash" ll file.
llvm-svn: 116130
2010-10-09 00:29:04 +00:00
Bill Wendling 59ebe44049 Check to make sure that the iterator isn't at the beginning of the basic block
before decrementing. <rdar://problem/8529919>

llvm-svn: 116126
2010-10-09 00:03:48 +00:00
Bob Wilson 056b694de1 Change register allocation order for ARM VFP and NEON registers to put the
callee-saved registers at the end of the lists.  Also prefer to avoid using
the low registers that are in register subclasses required by certain
instructions, so that those registers will more likely be available when needed.
This change makes a huge improvement in spilling in some cases.  Thanks to
Jakob for helping me realize the problem.

Most of this patch is fixing the testsuite.  There are quite a few places
where we're checking for specific registers.  I changed those to wildcards
in places where that doesn't weaken the tests.  The spill-q.ll and
thumb2-spill-q.ll tests stopped spilling with this change, so I added a bunch
of live values to force spills on those tests.

llvm-svn: 116055
2010-10-08 06:15:13 +00:00
Jim Grosbach 742adc328a Allow use of the 16-bit literal move instruction in CMOVs for ARM mode.
llvm-svn: 115884
2010-10-07 00:42:42 +00:00
Evan Cheng 49d4c0bd18 - Add TargetInstrInfo::getOperandLatency() to compute operand latencies. This
allow targets to correctly compute latency for cases where static scheduling
  itineraries aren't sufficient, e.g. variable_ops instructions such as
  ARM::ldm.
  This also allows targets without scheduling itineraries to compute operand
  latencies. e.g. X86 can return (approximated) latencies for high latency
  instructions such as division.
- Compute operand latencies for those defined by load multiple instructions,
  e.g. ldm and those used by store multiple instructions, e.g. stm.

llvm-svn: 115755
2010-10-06 06:27:31 +00:00
Jakob Stoklund Olesen eb12f49fb7 Try again to disable critical edge splitting in CodeGenPrepare.
The bug that broke i386 linux has been fixed in r115191.

llvm-svn: 115204
2010-09-30 20:51:52 +00:00
Jason W Kim 645f6c2bef Tiny patch for proof-of-concept cleanup of ARMAsmPrinter::EmitStartOfAsmFile()
Small test for sanity check of resulting ARM .s file.
Tested against -r115129.

llvm-svn: 115133
2010-09-30 02:45:56 +00:00
Bob Wilson 97bf273870 Increase ARM APCS preferred alignment for i64 and f64 from 32 bits to 64 bits.
LDM/STM instructions can run one cycle faster on some ARM processors if the
memory address is 64-bit aligned.  Radar 8489376.

llvm-svn: 115047
2010-09-29 17:54:10 +00:00
Gabor Greif c2eb72dc2a do not compare actual branch labels; this may fix llvm-gcc-x86_64-darwin10-cross-mingw32 buildbot too
llvm-svn: 115034
2010-09-29 10:45:43 +00:00
Gabor Greif d36e3e8850 improve heuristics to find the 'and' corresponding to 'tst' to also catch opportunities on thumb2
added some doxygen on the way

llvm-svn: 115033
2010-09-29 10:12:08 +00:00
Owen Anderson a3181e2d79 Add a subtarget hook for reporting the misprediction penalty. Use this to provide more precise
cost modeling for if-conversion.  Now if only we had a way to estimate the misprediction probability.

Adjust CodeGen/ARM/ifcvt10.ll.  The pipeline on Cortex-A8 is long enough that it is still profitable
to predicate an ldm, but the shorter pipeline on Cortex-A9 makes it unprofitable.

llvm-svn: 114995
2010-09-28 21:57:50 +00:00
Anton Korobeynikov 81bdc93bbb Use proper libcall names & condcodes while compiling for ARM EABI.
Patch by Evzen Muller!

llvm-svn: 114991
2010-09-28 21:39:26 +00:00
Owen Anderson 88af7d00fc Part one of switching to using a more sane heuristic for determining if-conversion profitability.
Rather than having arbitrary cutoffs, actually try to cost model the conversion.

For now, the constants are tuned to more or less match our existing behavior, but these will be
changed to reflect realistic values as this work proceeds.

llvm-svn: 114973
2010-09-28 18:32:13 +00:00
Bob Wilson 3dc97324c1 Add a command line option "-arm-strict-align" to disallow unaligned memory
accesses for ARM targets that would otherwise allow it.  Radar 8465431.

llvm-svn: 114941
2010-09-28 04:09:35 +00:00
Jakob Stoklund Olesen 415a7a6fec Revert "Disable codegen prepare critical edge splitting. Machine instruction passes now"
This reverts revision 114633. It was breaking llvm-gcc-i386-linux-selfhost.

It seems there is a downstream bug that is exposed by
-cgp-critical-edge-splitting=0. When that bug is fixed, this patch can go back
in.

Note that the changes to tailcallfp2.ll are not reverted. They were good and
required.

llvm-svn: 114859
2010-09-27 18:43:48 +00:00
Jakob Stoklund Olesen 4f3443e74d Explicitly disable CGP critical edge splitting for this test so it won't break
when the splitting is temporarily re-enabled.

llvm-svn: 114858
2010-09-27 18:43:43 +00:00
Jakob Stoklund Olesen f2a279b902 Don't depend on basic block numbering.
llvm-svn: 114857
2010-09-27 18:43:40 +00:00
Evan Cheng dbcc4b4d4d Enable code placement optimization pass for ARM.
llvm-svn: 114746
2010-09-24 19:07:23 +00:00
Bob Wilson 7fbbe9a43a Set alignment operand for NEON VST instructions.
llvm-svn: 114709
2010-09-23 23:42:37 +00:00
Bob Wilson 9eeb890172 Set alignment operand for NEON VLD instructions.
llvm-svn: 114696
2010-09-23 21:43:54 +00:00
Evan Cheng 794aaa79e2 Disable codegen prepare critical edge splitting. Machine instruction passes now
break critical edges on demand.

llvm-svn: 114633
2010-09-23 06:55:34 +00:00
Evan Cheng d757c88bba OptimizeCompareInstr should avoid iterating past the beginning of the MBB when the 'and' instruction is after the comparison.
llvm-svn: 114506
2010-09-21 23:49:07 +00:00
Jim Grosbach 94dfd6fc4f Simplify ARM callee-saved register handling by removing the distinction
between the high and low registers for prologue/epilogue code. This was
a Darwin-only thing that wasn't providing a realistic benefit anymore.
Combining the save areas simplifies the compiler code and results in better
ARM/Thumb2 codegen.

For example, previously we would generate code like:
        push    {r4, r5, r6, r7, lr}
        add     r7, sp, #12
        stmdb   sp!, {r8, r10, r11}
With this change, we combine the register saves and generate:
        push    {r4, r5, r6, r7, r8, r10, r11, lr}
        add     r7, sp, #12

rdar://8445635

llvm-svn: 114340
2010-09-20 19:32:20 +00:00
Bob Wilson cb6db98897 Add target-specific DAG combiner for BUILD_VECTOR and VMOVRRD. An i64
value should be in GPRs when it's going to be used as a scalar, and we use
VMOVRRD to make that happen, but if the value is converted back to a vector
we need to fold to a simple bit_convert.  Radar 8407927.

llvm-svn: 114233
2010-09-17 22:59:05 +00:00
Jim Grosbach 7a6c37d3e7 Teach the (non-MC) instruction printer to use the canonical names for push/pop
and shift instructions on ARM. Update the tests to match.

llvm-svn: 114230
2010-09-17 22:36:38 +00:00
Jim Grosbach 6d800f88da Update tests to handle MC-inst instruction printing of shift operations. The
legacy asm printer uses instructions of the form, "mov r0, r0, lsl #3", while
the MC-instruction printer uses the form "lsl r0, r0, #3". The latter mnemonic
is correct and preferred according to the ARM documentation (A8.6.98). The former
are pseudo-instructions for the latter.

llvm-svn: 114221
2010-09-17 21:58:46 +00:00
Jim Grosbach 4a5e54021a FileCheck-ize
llvm-svn: 114218
2010-09-17 21:46:16 +00:00
Jim Grosbach 20da4e360b Move thumb2 tests to the thumb2 directory
llvm-svn: 114206
2010-09-17 20:34:09 +00:00
Jim Grosbach 9b0cd20f72 tweak test to check instructions rather than relying on the comment string
llvm-svn: 114204
2010-09-17 20:27:26 +00:00
Jim Grosbach f3ceecec7e tweak test to check instructions rather than relying on the comment string
llvm-svn: 114200
2010-09-17 20:21:03 +00:00
Jim Grosbach c18a460adc tweak test to check instructions rather than relying on the comment string
llvm-svn: 114199
2010-09-17 20:17:41 +00:00
Bob Wilson 660d7ecf32 Reapply Gabor's 113839, 113840, and 113876 with a fix for a problem
encountered while building llvm-gcc for arm.  This is probably the same issue
that the ppc buildbot hit. llvm::prior works on a MachineBasicBlock::iterator,
not a plain MachineInstr.

llvm-svn: 113983
2010-09-15 17:12:08 +00:00
Gabor Greif 9ae4b271f2 the darwin9-powerpc buildbot keeps consistently crashing,
backing out the following to get it back to green,
so I can investigate in peace:

svn merge -c -113840  llvm/test/CodeGen/ARM/arm-and-tst-peephole.ll
svn merge -c -113876 -c -113839 llvm/lib/Target/ARM/ARMBaseInstrInfo.cpp

llvm-svn: 113980
2010-09-15 16:53:07 +00:00
Gabor Greif 00e34f4b32 forgot the testcase change for r113839
llvm-svn: 113840
2010-09-14 09:30:17 +00:00
Gabor Greif 5dbe800203 test for and-tst peephole optimization
documents the status-quo with its opportunities

llvm-svn: 113838
2010-09-14 08:50:43 +00:00
Owen Anderson c237a849e3 Re-apply r113679 (reverted in r113720), which added a pair of new instcombine transforms
to expose greater opportunities for store narrowing in codegen.  This patch fixes a potential
infinite loop in instcombine caused by one of the introduced transforms being overly aggressive.

llvm-svn: 113763
2010-09-13 17:59:27 +00:00
Eric Christopher 26abd3e0c2 Revert 113679, it was causing an infinite loop in a testcase that I've sent
on to Owen.

llvm-svn: 113720
2010-09-12 06:09:23 +00:00
Evan Cheng 1d6aa46cd7 Fix test so it passes on non-Darwin hosts.
llvm-svn: 113577
2010-09-10 06:20:01 +00:00
Bob Wilson 8617234658 Fix merging base-updates for VLDM/VSTM: Before I switched these instructions
to use AddrMode4, there was a count of the registers stored in one of the
operands.  I changed that to just count the operands but forgot to adjust for
the size of D registers.  This was noticed by Evan as a performance problem
but it is a potential correctness bug as well, since it is possible that this
could merge a base update with a non-matching immediate.

llvm-svn: 113576
2010-09-10 05:15:04 +00:00
Evan Cheng bf4070756f Teach if-converter to be more careful with predicating instructions that would
take multiple cycles to decode.
For the current if-converter clients (actually only ARM), the instructions that
are predicated on false are not nops. They would still take machine cycles to
decode. Micro-coded instructions such as LDM / STM can potentially take multiple
cycles to decode. If-converter should not treat them as non-micro-coded
simple instructions.

llvm-svn: 113570
2010-09-10 01:29:16 +00:00
Eric Christopher ca2ec95154 Remove ssp from this test.
llvm-svn: 113392
2010-09-08 19:32:34 +00:00
Bob Wilson f65c9ef720 Replace NEON vabdl, vaba, and vabal intrinsics with combinations of the
vabd intrinsic and add and/or zext operations.  In the case of vaba, this
also avoids the need for a DAG combine pattern to combine vabd with add.
Update tests.  Auto-upgrade the old intrinsics.

llvm-svn: 112941
2010-09-03 01:35:08 +00:00
Sandeep Patel 0ca17f7e8a Fix an unnecessary XFAIL
llvm-svn: 112853
2010-09-02 20:19:24 +00:00
Jim Grosbach 66c681a644 Now that register allocation properly considers reserved regs, simplify the
ARM register class allocation order functions to take advantage of that.

llvm-svn: 112841
2010-09-02 18:14:29 +00:00
Bob Wilson 75a6408f88 Convert VLD1 and VLD2 instructions to use pseudo-instructions until
after regalloc.

llvm-svn: 112825
2010-09-02 16:00:54 +00:00
Bob Wilson 38ab35a911 Remove NEON vmull, vmlal, and vmlsl intrinsics, replacing them with multiply,
add, and subtract operations with zero-extended or sign-extended vectors.
Update tests.  Add auto-upgrade support for the old intrinsics.

llvm-svn: 112773
2010-09-01 23:50:19 +00:00
Chris Lattner 39eccb4754 temporarily revert r112664, it is causing a decoding conflict, and
the testcases should be merged.

llvm-svn: 112711
2010-09-01 16:00:50 +00:00
Bill Wendling 6789f8b6ae We have a chance for an optimization. Consider this code:
int x(int t) {
  if (t & 256)
    return -26;
  return 0;
}

We generate this:

     tst.w   r0, #256
     mvn     r0, #25
     it      eq
     moveq   r0, #0

while gcc generates this:

     ands    r0, r0, #256
     it      ne
     mvnne   r0, #25
     bx      lr

Scandalous really!

During ISel time, we can look for this particular pattern. One where we have a
"MOVCC" that uses the flag off of a CMPZ that itself is comparing an AND
instruction to 0. Something like this (greatly simplified):

  %r0 = ISD::AND ...
  ARMISD::CMPZ %r0, 0         @ sets [CPSR]
  %r0 = ARMISD::MOVCC 0, -26  @ reads [CPSR]

All we have to do is convert the "ISD::AND" into an "ARM::ANDS" that sets [CPSR]
when it's zero. The zero value will already be in the %r0 register and we only
need to change it if the AND wasn't zero. Easy!

llvm-svn: 112664
2010-08-31 22:41:22 +00:00
Anton Korobeynikov 3a1d87a7ba Fix broken test
llvm-svn: 112555
2010-08-30 23:41:49 +00:00
Bob Wilson 4cd8a126c3 Remove NEON vmovn intrinsic, replacing it with vector truncate operations.
Auto-upgrade the old intrinsic and update tests.

llvm-svn: 112507
2010-08-30 20:02:30 +00:00
Duncan Sands 68c30907cc Correct bogus module triple specifications.
llvm-svn: 112469
2010-08-30 10:48:29 +00:00
Bob Wilson d0c054886c Remove NEON vaddl, vaddw, vsubl, and vsubw intrinsics. Instead, use llvm
IR add/sub operations with one or both operands sign- or zero-extended.
Auto-upgrade the old intrinsics.

llvm-svn: 112416
2010-08-29 05:57:34 +00:00
Bob Wilson 13ce07fa92 Change ARM VFP VLDM/VSTM instructions to use addressing mode #4, just like
all the other LDM/STM instructions.  This fixes asm printer crashes when
compiling with -O0.  I've changed one of the NEON tests (vst3.ll) to run
with -O0 to check this in the future.

Prior to this change VLDM/VSTM used addressing mode #5, but not really.
The offset field was used to hold a count of the number of registers being
loaded or stored, and the AM5 opcode field was expanded to specify the IA
or DB mode, instead of the standard ADD/SUB specifier.  Much of the backend
was not aware of these special cases.  The crashes occurred when rewriting
a frameindex caused the AM5 offset field to be changed so that it did not
have a valid submode.  I don't know exactly what changed to expose this now.
Maybe we've never done much with -O0 and NEON.  Regardless, there's no longer
any reason to keep a count of the VLDM/VSTM registers, so we can use
addressing mode #4 and clean things up in a lot of places.

llvm-svn: 112322
2010-08-27 23:18:17 +00:00
Bob Wilson edf722add3 Add alignment arguments to all the NEON load/store intrinsics.
Update all the tests using those intrinsics and add support for
auto-upgrading bitcode files with the old versions of the intrinsics.

llvm-svn: 112271
2010-08-27 17:13:24 +00:00
Bob Wilson 4629f423f8 Revert svn 107892 (with changes to work with trunk). It caused a crash if
a VLD result was not used (Radar 8355607).  It should also fix pr7988, but
I haven't verified that yet.

llvm-svn: 112118
2010-08-26 00:13:36 +00:00
Eric Christopher 6b1533a1a9 Add another basic test cribbed from the x86 fast-isel tests.
llvm-svn: 112036
2010-08-25 07:57:29 +00:00
Eric Christopher 37d547aee6 Run this on thumb and arm.
llvm-svn: 112035
2010-08-25 07:53:15 +00:00
Eric Christopher e58c03698e Make this testcase actually execute with fast-isel on arm.
llvm-svn: 112033
2010-08-25 07:47:00 +00:00
Bob Wilson be745d8c00 Replace some NEON vmovl intrinsics that I missed earlier.
llvm-svn: 111696
2010-08-20 23:22:43 +00:00
Bob Wilson 9a511c07e4 Replace the arm.neon.vmovls and vmovlu intrinsics with vector sign-extend and
zero-extend operations.

llvm-svn: 111614
2010-08-20 04:54:02 +00:00
Dan Gohman 82656fb0e1 When sending stats output to stdout for grepping, don't emit normal
output to standard output also.

llvm-svn: 111435
2010-08-18 22:22:44 +00:00
Bob Wilson fb7eaff759 Expand ZERO_EXTEND operations for NEON vector types.
Testcase from Nick Lewycky.

llvm-svn: 111341
2010-08-18 01:45:52 +00:00
Bob Wilson 942b10f511 Change ARM PKHTB and PKHBT instructions to use a shift_imm operand to avoid
printing "lsl #0".  This fixes the remaining parts of pr7792.  Make
corresponding changes for encoding/decoding these instructions.

llvm-svn: 111251
2010-08-17 17:23:19 +00:00
Bob Wilson 411dfad981 Allow more cases of undef shuffle indices and add tests for them.
llvm-svn: 111226
2010-08-17 05:54:34 +00:00
Evan Cheng f259efde47 PHI elimination should not break back edges. It can cause some significant code placement issues. rdar://8263994
good:
LBB0_2:
  mov     r2, r0
  . . .
  mov     r1, r2
  bne     LBB0_2

bad:
LBB0_2:
  mov     r2, r0
  . . .
@ BB#3:
  mov     r1, r2
  b       LBB0_2

llvm-svn: 111221
2010-08-17 01:20:36 +00:00
Bob Wilson eee4824f74 Add a testcase for svn 111208.
llvm-svn: 111212
2010-08-16 23:44:29 +00:00
Bob Wilson 804f6159f1 Generalize a pattern for PKHTB: an SRL of 16-31 bits will guarantee
that the high halfword is zero.  The shift need not be exactly 16 bits.

llvm-svn: 111196
2010-08-16 22:26:55 +00:00
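
A small C model of the observation (hypothetical helper, illustration only): for any shift amount n in 16-31, y >> n already has a zero high halfword, so it can feed the bottom-halfword slot of the pack directly, with no extra masking.

    #include <stdint.h>

    static uint32_t pkhtb_like(uint32_t x, uint32_t y, unsigned n) {  /* n in 16..31 */
        return (x & 0xFFFF0000u) | (y >> n);  /* (y >> n) < 0x10000, high halfword is zero */
    }
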
Bob Wilson 8f553757c4 Convert a test to use FileCheck.
llvm-svn: 111153
2010-08-16 17:05:27 +00:00
Bob Wilson 3c9ed76ba5 Temporarily disable tail calls on ARM to work around some linker problems.
llvm-svn: 111050
2010-08-13 22:43:33 +00:00
Bill Wendling 6a98131468 Consider this code snippet:
float t1(int argc) {
  return (argc == 1123) ? 1.234f : 2.38213f;
}

We would generate truly awful code on ARM (those with a weak stomach should look
away):

_t1:
  movw   r1, #1123
  movs   r2, #1
  movs   r3, #0
  cmp    r0, r1
  mov.w  r0, #0
  it     eq
  moveq  r0, r2
  movs   r1, #4
  cmp    r0, #0
  it     ne
  movne  r3, r1
  adr    r0, #LCPI1_0
  ldr    r0, [r0, r3]
  bx     lr

The problem was that legalization was creating a cascade of SELECT_CC nodes
for the comparison of "argc == 1123" which was fed into a SELECT node for the ?:
statement which was itself converted to a SELECT_CC node. This is because the
ARM back-end doesn't have custom lowering for SELECT nodes, so it used the
default "Expand".

I added a fairly simple "LowerSELECT" to the ARM back-end. It takes care of this
testcase, but can obviously be expanded to include more cases.

Now we generate this, which looks optimal to me:

_t1:
  movw   r1, #1123
  movs   r2, #0
  cmp    r0, r1
  adr    r0, #LCPI0_0
  it     eq
  moveq  r2, #4
  ldr    r0, [r0, r2]
  bx     lr
  .align  2
LCPI0_0:
  .long   1075344593  @ float 2.382130e+00
  .long   1067316150  @ float 1.234000e+00

llvm-svn: 110799
2010-08-11 08:43:16 +00:00
Evan Cheng 5190f09291 Report an error if codegen tries to instantiate an ARM target when the CPU does not support it, e.g. cortex-m* processors.
llvm-svn: 110798
2010-08-11 07:17:46 +00:00
Bill Wendling 79937dfc5b Update test to match output of optimize compares for ARM.
llvm-svn: 110765
2010-08-11 01:05:02 +00:00
Bill Wendling 871d4e1170 The optimize comparisons pass removes the "cmp" instruction this is checking for.
llvm-svn: 110739
2010-08-10 22:16:05 +00:00
Rafael Espindola 027d5bcf89 Fix the EABI calling convention when a 64-bit value shadows r3.
Without this what was happening was:

* R3 is not marked as "used"
* ARM backend thinks it has to save it to the stack because of vaarg
* Offset computation correctly ignores it
* Offsets are wrong

llvm-svn: 110446
2010-08-06 15:35:32 +00:00
Bill Wendling 26feb849a4 Testcase for r110248.
llvm-svn: 110249
2010-08-04 21:56:30 +00:00
Bob Wilson 79daf7e0ae Combine NEON VABD (absolute difference) intrinsics with ADDs to make VABA
(absolute difference with accumulate) intrinsics.  Radar 8228576.

llvm-svn: 110170
2010-08-04 00:12:08 +00:00
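A small C sketch of the pattern the combine targets, written with the standard arm_neon.h intrinsics (the function name is hypothetical): a VABD result feeding an ADD, which can be folded into a single VABA:

#include <arm_neon.h>

/* acc + |a - b| per lane: the vabd feeding the add is the shape that can
   be combined into one accumulate instruction. */
uint8x16_t accumulate_diff(uint8x16_t acc, uint8x16_t a, uint8x16_t b) {
  return vaddq_u8(acc, vabdq_u8(a, b));
}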
Anton Korobeynikov 6bcea068db Currently the EH lowering code expects typeinfo to be a global only.
This assumption is not satisfied due to global merging.
Work around the issue by temporarily disabling merging of const globals.
Also, ignore LLVM "special" globals. This fixes PR7716

llvm-svn: 109423
2010-07-26 18:45:39 +00:00
Evan Cheng df907f4594 - Allow the target to specify when register pressure is "too high". In most cases,
  it's too late to start backing off aggressive latency scheduling when most
  of the registers are in use, so the threshold should be a bit tighter.
- Correctly handle live-outs and extract_subreg, etc.
- Enable register pressure aware scheduling by default for hybrid scheduler.
  For ARM, this is almost always a win on # of instructions. It's runtime
  neutral for most of the tests. But for some kernels with high register
  pressure it can be a huge win. e.g. 464.h264ref reduced number of spills by
  54 and sped up by 20%.

llvm-svn: 109279
2010-07-23 22:39:59 +00:00
Evan Cheng 285903853f More register pressure aware scheduling work.
llvm-svn: 109064
2010-07-21 23:53:58 +00:00
Eric Christopher 84bdfd80df Baby steps towards ARM fast-isel.
llvm-svn: 109047
2010-07-21 22:26:11 +00:00
Rafael Espindola 4277e14dc4 Fix calling convention on ARM if vfp2+ is enabled.
llvm-svn: 109009
2010-07-21 11:38:30 +00:00
Jim Grosbach b97e2bbe32 Add combiner patterns to more effectively utilize the BFI (bitfield insert)
instruction for non-constant operands. This includes the case referenced
in the README.txt regarding a bitfield copy.

llvm-svn: 108608
2010-07-17 03:30:54 +00:00
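A hedged C sketch of a bitfield copy of the sort referenced above (struct and field names are hypothetical); copying one bitfield into another is a masked insert with a non-constant source:

struct flags {
  unsigned a : 5;
  unsigned b : 7;
  unsigned rest : 20;
};

/* Copying a bitfield is an insert of a non-constant value into a field,
   which is the kind of operation BFI can cover instead of shift/and/or
   sequences. */
void copy_field(struct flags *dst, const struct flags *src) {
  dst->b = src->b;
}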
Jim Grosbach 11013eda5a Add basic support to code-gen the ARM/Thumb2 bit-field insert (BFI) instruction
and a combine pattern to use it for setting a bit-field to a constant
value. More to come for non-constant stores.

llvm-svn: 108570
2010-07-16 23:05:05 +00:00
Evan Cheng 55f0c6b9fc Split -enable-finite-only-fp-math into two options:
-enable-no-nans-fp-math and -enable-no-infs-fp-math. All of the current codegen
fp math optimizations only care whether the fp arithmetic's arguments and
results can never be NaN.

llvm-svn: 108465
2010-07-15 22:07:12 +00:00
Jim Grosbach a90af1ba38 Improve 64-bit subtraction of immediates when parts of the immediate can fit
in the literal field of an instruction. E.g.,
long long foo(long long a) {
  return a - 734439407618LL;
}

rdar://7038284

llvm-svn: 108339
2010-07-14 17:45:16 +00:00
Bob Wilson bad47f62f6 Add support for NEON VMVN immediate instructions.
llvm-svn: 108324
2010-07-14 06:31:50 +00:00
Bob Wilson 103a0dcfe1 Add an ARM-specific DAG combining to avoid redundant VDUPLANE nodes.
Radar 7373643.

llvm-svn: 108303
2010-07-14 01:22:12 +00:00
Bob Wilson a3f1901531 Use a target-specific VMOVIMM DAG node instead of BUILD_VECTOR to represent
NEON VMOV-immediate instructions.  This simplifies some things.

llvm-svn: 108275
2010-07-13 21:16:48 +00:00
Evan Cheng 0cc4ad983d Extend the r107852 optimization which turns some fp compares into code sequences using only i32 operations. It now optimizes some f64 compares when the fp compare is exceptionally slow (e.g. cortex-a8). It also catches comparisons against 0.0.
llvm-svn: 108258
2010-07-13 19:27:42 +00:00
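A hedged C sketch of one case the extension can now catch, assuming a double loaded from memory and compared for equality against 0.0 (the function name is hypothetical):

/* On cores where VFP compares are exceptionally slow (e.g. cortex-a8), an
   equality test of a loaded double against 0.0 is among the compares the
   optimization may now turn into plain i32 operations. */
int is_zero(const double *p) {
  return *p == 0.0;
}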
Rafael Espindola a76eccf815 Fix va_arg for doubles. With this patch VAARG nodes always contain the
correct alignment information, which simplifies ExpandRes_VAARG a bit.

The patch introduces new alignment information to TargetLoweringInfo. This is
needed since the two natural candidates cannot be used:

* The 's' in target data: If this is set to the minimal alignment of any
  argument, getCallFrameTypeAlignment would return 4 for doubles on ARM for
  example.
* The getTransientStackAlignment method. It is possible for an architecture to
  have arguments less aligned than the alignment we maintain for the stack pointer.

llvm-svn: 108072
2010-07-11 04:01:49 +00:00
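A minimal C sketch of the kind of variadic code affected (the function name is hypothetical); each va_arg of a double relies on the VAARG node carrying the 8-byte alignment of double:

#include <stdarg.h>

/* Sum the first n double arguments.  Reading each double with va_arg only
   works if the VAARG lowering knows the argument's 8-byte alignment. */
double sum_doubles(int n, ...) {
  va_list ap;
  va_start(ap, n);
  double total = 0.0;
  for (int i = 0; i < n; ++i)
    total += va_arg(ap, double);
  va_end(ap);
  return total;
}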
Jim Grosbach 2a5725b1a3 In the presence of variable sized objects, allocate an emergency spill slot.
rdar://8131327

llvm-svn: 108008
2010-07-09 20:27:06 +00:00
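A hedged C sketch of a frame with a variable sized object (names are hypothetical, and n is assumed to be positive); with a dynamic frame, some locals end up at offsets that may not fit in an instruction immediate, so the scavenger may need an emergency spill slot to materialize them:

/* The variable-length array makes the frame size dynamic, and the large
   fixed array pushes other objects far from the frame pointer, so large
   offsets may have to be built in a scratch register.  Assumes n > 0. */
int sum_first(int n, const int *src) {
  int vla[n];
  int big[2048];
  for (int i = 0; i < n; ++i)
    vla[i] = src[i];
  for (int i = 0; i < 2048; ++i)
    big[i] = i;
  return vla[0] + big[0];
}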
Jakob Stoklund Olesen a57965827f Fix test to be less sensitive to regalloc accidents
llvm-svn: 107951
2010-07-09 01:32:11 +00:00
Bob Wilson 88a4e6dc0e Print "dregpair" NEON operands with a space between them, for readability and
consistency with other instructions that have lists of register operands.

llvm-svn: 107944
2010-07-09 00:47:20 +00:00
Bob Wilson 21eed476e8 Reenable DAG combining for vector shuffles. It looks like it was temporarily
disabled and then never turned back on again.  Adjust some tests, one because
this change avoids an unnecessary instruction, and the other to make it
continue testing what it was intended to test.

llvm-svn: 107941
2010-07-09 00:38:12 +00:00
Evan Cheng 0f54854a1d Check for FiniteOnlyFPMath as well.
llvm-svn: 107904
2010-07-08 20:12:24 +00:00
Evan Cheng be1f7a931e r107852 is only safe with -enable-unsafe-fp-math to account for +0.0 == -0.0.
llvm-svn: 107856
2010-07-08 06:01:49 +00:00
Evan Cheng 25f9364cbd Optimize some vfp comparisons to integer ones. This patch implements the simplest case when the following conditions are met:
1. The arguments are f32.
2. The arguments are loads and they have no uses other than the comparison.
3. The comparison code is EQ or NE.

e.g.
        vldr.32 s0, [r1]
        vldr.32 s1, [r0]
        vcmpe.f32       s1, s0
        vmrs    apsr_nzcv, fpscr
	beq     LBB0_2
=>
        ldr     r1, [r1]
        ldr     r0, [r0]
        cmp     r0, r1
        beq     LBB0_2

More complicated cases will be implemented in subsequent patches.

llvm-svn: 107852
2010-07-08 02:08:50 +00:00
Dale Johannesen e2289285ae Changes to ARM tail calls, mostly cosmetic.
Add explicit testcases for tail calls within the same module.
Duplicate some code to humor those who think .w doesn't apply on ARM.
Leave this disabled on Thumb1, and add some comments explaining why it's hard
and won't gain much.

llvm-svn: 107851
2010-07-08 01:18:23 +00:00
Rafael Espindola 7c510aa7bc Don't create neon moves in CopyRegToReg. NEONMoveFixPass will do the conversion
if profitable.

llvm-svn: 107673
2010-07-06 16:24:34 +00:00
Bob Wilson 771d04b969 Fix incorrect asm-printing of some NEON immediates. Fix weak testcase so
that it checks the immediate values, not just the instruction opcodes.
Radar 8110263.

llvm-svn: 107487
2010-07-02 17:23:44 +00:00
Bill Wendling 03bcd6ecc8 Implement the "linker_private_weak" linkage type. This will be used for
Objective-C metadata types which should be marked as "weak", but which the
linker will remove upon final linkage. However, this linkage isn't specific to
Objective-C.

For example, the "objc_msgSend_fixup_alloc" symbol is defined like this:

      .globl l_objc_msgSend_fixup_alloc
      .weak_definition l_objc_msgSend_fixup_alloc
      .section __DATA, __objc_msgrefs, coalesced
      .align 3
l_objc_msgSend_fixup_alloc:
       .quad   _objc_msgSend_fixup
       .quad   L_OBJC_METH_VAR_NAME_1

This is different from the "linker_private" linkage type, because it can't have
the metadata defined with ".weak_definition".

Currently only supported on Darwin platforms.

llvm-svn: 107433
2010-07-01 21:55:59 +00:00
Jakob Stoklund Olesen dadea5b178 Fix the handling of partial redefines in the fast register allocator.
A partial redefine needs to be treated like a tied operand, and the register
must be reloaded while processing use operands.

This fixes a bug where partially redefined registers were processed as normal
defs with a reload added. The reload could clobber another use operand if it was
a kill that allowed register reuse.

llvm-svn: 107193
2010-06-29 19:15:30 +00:00
Bob Wilson d91d5bfc95 Fix a register scavenger crash when dealing with undefined subregs.
The LowerSubregs pass needs to preserve implicit def operands attached to
EXTRACT_SUBREG instructions when it replaces those instructions with copies.

llvm-svn: 107189
2010-06-29 18:42:49 +00:00
Rafael Espindola 38a7d7cbc3 Add a VT argument to getMinimalPhysRegClass and replace the copy related uses
of getPhysicalRegisterRegClass with it.

If we want to make a copy (or estimate its cost), it is better to use the
smallest class as more efficient operations might be possible.

llvm-svn: 107140
2010-06-29 14:02:34 +00:00
Bob Wilson 269a89fd3a Unlike other targets, ARM now uses BUILD_VECTORs post-legalization so they
can't be changed arbitrarily by the DAGCombiner without checking if it is
running after legalization.

llvm-svn: 107097
2010-06-28 23:40:25 +00:00
Rafael Espindola 2041abd958 When splitting a VAARG, remember its alignment.
This produces terrible but correct code.

llvm-svn: 106952
2010-06-26 18:22:20 +00:00
Daniel Dunbar acbdf53db4 Thumb2ITBlockPass: Fix a possible dereference of an invalid iterator. This was
introduced in r106343, but only showed up recently (with a particular compiler &
linker combination) because of the particular check, and because we have no
builtin checking for dereferencing the end of an array, which is truly
unfortunate.

llvm-svn: 106908
2010-06-25 23:14:54 +00:00
Evan Cheng 02b184de5b Change if-conversion block size limit checks to add some flexibility.
llvm-svn: 106901
2010-06-25 22:42:03 +00:00
Dan Gohman 9a2f0473b2 Teach EmitLiveInCopies to omit copies for unused virtual registers,
and to clean up unused incoming physregs from the live-in list.

llvm-svn: 106805
2010-06-24 22:23:02 +00:00
Bill Wendling 2d3c490026 It's possible that a flag is added to the SDNode that points back to the
original SDNode. This is badness. Also, this function allows one SDNode to point
multiple flags to another SDNode. Badness as well.

llvm-svn: 106793
2010-06-24 22:00:37 +00:00
Jakob Stoklund Olesen 45230239e4 Replace a big gob of old coalescer logic with the new CoalescerPair class.
CoalescerPair can determine if a copy can be coalesced, and which register gets
merged away. The old logic in SimpleRegisterCoalescing had evolved into
something a bit too convoluted.

This second attempt fixes some crashes that only occurred on Linux.

llvm-svn: 106769
2010-06-24 18:15:01 +00:00
Dan Gohman 463f26b4be Eliminate the other half of the BRCOND optimization, and update
as many tests as possible.

llvm-svn: 106749
2010-06-24 15:24:03 +00:00
Jakob Stoklund Olesen dbb58d2974 Revert "Replace a big gob of old coalescer logic with the new CoalescerPair class."
Whiny buildbots.

llvm-svn: 106710
2010-06-24 00:52:22 +00:00
Jakob Stoklund Olesen f38e6720cc Replace a big gob of old coalescer logic with the new CoalescerPair class.
CoalescerPair can determine if a copy can be coalesced, and which register gets
merged away. The old logic in SimpleRegisterCoalescing had evolved into
something a bit too convoluted.

llvm-svn: 106701
2010-06-24 00:12:39 +00:00
Bill Wendling f470747a36 We are missing opportunities to use ldm. Take code like this:
void t(int *cp0, int *cp1, int *dp, int fmd) {
  int c0, c1, d0, d1, d2, d3;
  c0 = (*cp0++ & 0xffff) | ((*cp1++ << 16) & 0xffff0000);
  c1 = (*cp0++ & 0xffff) | ((*cp1++ << 16) & 0xffff0000);
  /* ... */
}

It code-gens into something pretty bad. But with this change (analogous to the
X86 back-end), it will use ldm and generate fewer instructions.

llvm-svn: 106693
2010-06-23 23:00:16 +00:00
Dale Johannesen fc40f0a1ab Reinstate correct test, remove the real invalidated test.
llvm-svn: 106664
2010-06-23 18:56:06 +00:00
Dale Johannesen 6effb503f5 Remove tests invalidated by previous checkin.
llvm-svn: 106663
2010-06-23 18:53:12 +00:00
Bob Wilson c5d712232d Thumb1 functions using @llvm.returnaddress were not saving the incoming LR.
Radar 8031193.

llvm-svn: 106582
2010-06-22 22:04:24 +00:00
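A minimal C sketch of code that lowers to @llvm.returnaddress (the function name is hypothetical); on Thumb1 the incoming LR has to be saved for this to produce the right value:

/* __builtin_return_address(0) lowers to @llvm.returnaddress, which reads
   the incoming LR, so the prologue must preserve it. */
void *caller_address(void) {
  return __builtin_return_address(0);
}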
Evan Cheng 1fb4de8ec5 Fix PR7421: bug in kill transferring logic. It was ignoring loads / stores which have already been processed.
llvm-svn: 106481
2010-06-21 21:21:14 +00:00
Dale Johannesen dd471bbb10 Add missing FileCheck call.
llvm-svn: 106443
2010-06-21 18:46:08 +00:00
Dale Johannesen d5c58b76ab Fix PR 7433. Silly typo in non-Darwin ARM tail call
handling, plus correct R9 handling in that mode.

llvm-svn: 106434
2010-06-21 18:21:49 +00:00
Evan Cheng f3c01f3ef6 Disable sibcall optimization for Thumb1 for now since Thumb1RegisterInfo::emitEpilogue is not expecting them.
llvm-svn: 106368
2010-06-19 01:01:32 +00:00
Evan Cheng 2d51c7c592 Allow ARM if-converter to be run after post allocation scheduling.
- This fixed a number of bugs in if-converter, tail merging, and post-allocation
  scheduler. If-converter now runs branch folding / tail merging first to
  maximize if-conversion opportunities.
- Also changed the t2IT instruction slightly. It now defines the ITSTATE
  register which is read by instructions in the IT block.
- Added Thumb2 specific hazard recognizer to ensure the scheduler doesn't
  change the instruction ordering in the IT block (since IT mask has been
  finalized). It also ensures no other instructions can be scheduled between
  instructions in the IT block.

This is not yet enabled.

llvm-svn: 106344
2010-06-18 23:09:54 +00:00
Evan Cheng cf9e8a987f Fix an inverted condition.
llvm-svn: 106330
2010-06-18 22:17:13 +00:00
Jakob Stoklund Olesen 22a212f97c When using ADDri to get the address of a stack object, 255 is a conservative
limit on the offset that can be materialized without using the register
scavenger.

llvm-svn: 106312
2010-06-18 20:59:25 +00:00
Dale Johannesen c1570dda5c Enable tail calls on ARM by default, with some
basic tests.

This has been well tested on Darwin but not elsewhere.
It should work provided the linker correctly resolves
  B.W  <label in other function>
which it has not seen before, at least from llvm-based
compilers.  I'm leaving the arm-tail-calls switch in
until I see whether there are any problems because of that;
it might need to be disabled for some environments.

llvm-svn: 106299
2010-06-18 19:00:18 +00:00
Jakob Stoklund Olesen b9f91667e1 Treat the ARM inline asm {cc} constraint as a physreg (%CPSR), just like X86
does for {flags}. If we create virtual registers of the CCR class, RegAllocFast
may try to spill them, and we can't do that.

llvm-svn: 106289
2010-06-18 16:49:33 +00:00
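A small C sketch using GCC-style inline asm (the function name and asm body are illustrative); the "cc" clobber is the front-end spelling that reaches the back-end as the {cc} constraint on the condition codes:

/* The "cc" clobber tells the compiler the flags are modified by the asm.
   Modeling that as the physical CPSR register avoids creating virtual CCR
   registers that RegAllocFast cannot spill. */
unsigned add_with_flags(unsigned x, unsigned y) {
  unsigned r;
  __asm__("adds %0, %1, %2" : "=r"(r) : "r"(x), "r"(y) : "cc");
  return r;
}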
Rafael Espindola 29dda21e96 Remove arm_apcscc from the test files. It is the default and doing this
matches what llvm-gcc and clang now produce.

llvm-svn: 106221
2010-06-17 15:18:27 +00:00
Jakob Stoklund Olesen ec2e964fd6 Remove the local register allocator.
Please use the fast allocator instead.

llvm-svn: 106051
2010-06-15 21:58:33 +00:00
Rafael Espindola ae591be4e9 Set the mtriple in some tests so that they use AAPCS.
llvm-svn: 106041
2010-06-15 20:42:00 +00:00
Rafael Espindola 5a24a56e1e Remove the arm_aapcscc marker from the tests. It is the default
for the linux targets.

llvm-svn: 106029
2010-06-15 19:04:29 +00:00
Bob Wilson a55b8877e6 Generalize the pre-coalescing of extract_subregs feeding reg_sequences,
replacing the overly conservative checks that I had introduced recently to
deal with correctness issues.  This makes a pretty noticeable difference
in our testcases where reg_sequences are used.  I've updated one test to
check that we no longer emit the unnecessary subreg moves.

llvm-svn: 105991
2010-06-15 05:56:31 +00:00
Bob Wilson f07d33d8f1 Add a missing bitcast. This code used to only handle conversions between
i64 and f64 types, but now it also handles Neon vector types, so the f64 result
of VMOVDRR may need to be converted to a Neon type.  Radar 8084742.

llvm-svn: 105845
2010-06-11 22:45:25 +00:00
Evan Cheng a03e6f85fe Re-apply 105308 with fix.
llvm-svn: 105502
2010-06-04 23:28:13 +00:00
Dale Johannesen 065d6fd537 More tail call removal.
llvm-svn: 105485
2010-06-04 21:14:24 +00:00
Dale Johannesen b3780b1103 Remove more tail calls.
llvm-svn: 105450
2010-06-04 01:01:24 +00:00
Dale Johannesen e7b392dca9 Remove a tail call, and move some CHECKs to the
functions where they belong.

llvm-svn: 105449
2010-06-04 01:01:04 +00:00
Bob Wilson 30093b5d8b Revert 105308.
llvm-svn: 105399
2010-06-03 18:28:31 +00:00
Evan Cheng a2da22734f Enable machine cse of instructions which define physical registers.
llvm-svn: 105308
2010-06-02 01:08:27 +00:00
Evan Cheng cc2efe11db Fix some latency computation bugs: if the use is not a machine opcode do not just return zero.
llvm-svn: 105061
2010-05-28 23:26:21 +00:00
Jakob Stoklund Olesen b613ae2c89 Add a -regalloc=default option that chooses a register allocator based on the -O
optimization level.

This only really affects llc for now because both the llvm-gcc and clang front
ends override the default register allocator. I intend to remove that code later.

llvm-svn: 104904
2010-05-27 23:57:25 +00:00
Evan Cheng 3d3ee87d4e llvm can't correctly support 'H', 'Q' and 'R' modifiers. Just mark them as an error.
llvm-svn: 104891
2010-05-27 22:08:38 +00:00
Evan Cheng 755d45be43 LR is in GPR, not tGPR even in Thumb1 mode.
llvm-svn: 104518
2010-05-24 18:00:18 +00:00
Evan Cheng 168ced94d8 Implement @llvm.returnaddress. rdar://8015977.
llvm-svn: 104421
2010-05-22 01:47:14 +00:00
Bob Wilson 91fdf68516 Recognize more BUILD_VECTORs and VECTOR_SHUFFLEs that can be implemented by
copying VFP subregs.  This exposed a bunch of dead code in the *spill-q.ll
tests, so I tweaked those tests to keep that code from being optimized away.
Radar 7872877.

llvm-svn: 104415
2010-05-22 00:23:12 +00:00
Bob Wilson 51d9ee3ff6 Change CodeGen/ARM/2009-11-02-NegativeLane.ll to use 16-bit vector elements
so that it will continue to test what it was meant to test when I commit a
separate change for better support of BUILD_VECTOR and VECTOR_SHUFFLE for Neon.
Fix a DAG combiner crash exposed by this test change.

llvm-svn: 104380
2010-05-21 21:05:32 +00:00
Jakob Stoklund Olesen a648c6a757 Teach VirtRegRewriter to handle spilling in instructions that have multiple
definitions of the virtual register.

This happens when spilling the registers produced by REG_SEQUENCE:

%reg1047:5<def>, %reg1047:6<def>, %reg1047:7<def> = VLD3d8 %reg1033, 0, pred:14, pred:%reg0

The rewriter would spill the register multiple times; dead store elimination
tried to keep up, but ended up cutting the branch it was sitting on.

llvm-svn: 104321
2010-05-21 16:36:13 +00:00
Evan Cheng 34c260458a Change ARM scheduling default to list-hybrid if the target supports floating point instructions (and is not using soft float).
llvm-svn: 104307
2010-05-21 00:43:17 +00:00
Dan Gohman ee2fea3cd7 When canonicalizing icmp operand order to put the loop invariant
operand on the left, the interesting operand is on the right. This
fixes a bug where LSR was failing to recognize ICmpZero uses,
which led it to be unable to reverse the induction variable in the
attached testcase.

Delete test/CodeGen/X86/stack-color-with-reg-2.ll, because its test
is extremely fragile and hard to meaningfully update.

llvm-svn: 104262
2010-05-20 19:26:52 +00:00
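A hedged C sketch of the shape of loop involved (names are hypothetical); with the loop-invariant bound recognized on the correct side of the compare, LSR can rewrite the exit test as a count down to zero:

/* LSR can turn the "i < n" exit test into a counter that is decremented
   to zero, so the backward branch becomes a subtract-and-compare-with-zero. */
void scale(float *a, float s, int n) {
  for (int i = 0; i < n; ++i)
    a[i] *= s;
}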
Bob Wilson 5954994bba Handle Neon v2f64 and v2i64 vector shuffles as register copies.
This fixes the remaining issue with pr7167.

llvm-svn: 104257
2010-05-20 18:39:53 +00:00
Dan Gohman 20fab456da Teach LSR how to cope better with unrolled loops on targets where
the addressing modes don't make this trivially easy. This allows
it to avoid falling into the less precise heuristics in more
cases.

llvm-svn: 104186
2010-05-19 23:43:12 +00:00