Commit Graph

490 Commits

Author SHA1 Message Date
Dale Johannesen ff37675c72 Fix crash introduced in 116852. 8573915.
llvm-svn: 116955
2010-10-20 22:03:37 +00:00
Jim Grosbach bbdc5d2ef9 Add a pre-dispatch SjLj EH hook on the unwind edge for targets to do any
setup they require. Use this for ARM/Darwin to rematerialize the base
pointer from the frame pointer when required. rdar://8564268

llvm-svn: 116879
2010-10-19 23:27:08 +00:00
Dale Johannesen 710a2d9d46 Enable using vdup for vector constants which are splats of
integers by default, and remove the controlling flag, now
that LICM will hoist such vdup's.  8003375.

llvm-svn: 116852
2010-10-19 20:00:17 +00:00
Jim Grosbach 2d00b1b2e5 Don't mark argument value stores as immutable, as otherwise the post-RA
scheduler may reorder loads from them before the stores and other such
badness. PR8347. Patch by David Meyer

llvm-svn: 116602
2010-10-15 18:34:47 +00:00
Bob Wilson 3b1db392fc Remove unused ARMISD::AND selection DAG node.
llvm-svn: 116566
2010-10-15 04:34:40 +00:00
Anton Korobeynikov 81bdc93bbb Use proper libcall names & condcodes while compiling for ARM EABI.
Patch by Evzen Muller!

llvm-svn: 114991
2010-09-28 21:39:26 +00:00
Bob Wilson 3dc97324c1 Add a command line option "-arm-strict-align" to disallow unaligned memory
accesses for ARM targets that would otherwise allow them.  Radar 8465431.

llvm-svn: 114941
2010-09-28 04:09:35 +00:00
Evan Cheng dbcc4b4d4d Enable code placement optimization pass for ARM.
llvm-svn: 114746
2010-09-24 19:07:23 +00:00
Jim Grosbach 85dcd3d0f4 Add support for ELF PLT references for ARM MC asm printing. Adding a
new VariantKind to the MCSymbolExpr seems like overkill, but I'm not sure
there's a more straightforward way to get the printing difference captured.
(i.e., x86 uses @PLT, ARM uses (PLT)).

llvm-svn: 114613
2010-09-22 23:27:36 +00:00
Bob Wilson 463a05342a Change VDUPLANE DAG combiner to just return the result instead of calling
CombineTo to avoid putting the result on the worklist.  I don't think it makes
much difference for now, but it might help someday as we add more DAG
combine optimizations.

llvm-svn: 114595
2010-09-22 22:27:30 +00:00
Bob Wilson 2280674fa9 Combine both VMOVDRR(VMOVRRD) and VMOVRRD(VMOVDRR), instead of just doing one
of those.  Refactor to share code for handling BUILD_VECTOR(VMOVRRD).
I don't have a testcase that exercises this, but it seems like an obvious
good thing to do.

llvm-svn: 114589
2010-09-22 22:09:21 +00:00
Owen Anderson 61158f98ab Enable target-specific mul-lowering on ARM, even at -Os. Remove a test that this makes
irrelevant, but add a new test for the new, improved functionality.

llvm-svn: 114494
2010-09-21 22:51:46 +00:00
Chris Lattner 886250c8f0 convert a couple more places to use the new getStore()
llvm-svn: 114463
2010-09-21 18:51:21 +00:00
Bob Wilson 5549d496dd Define the TargetLowering::getTgtMemIntrinsic hook for ARM so that NEON load
and store intrinsics are represented with MemIntrinsicSDNodes.

llvm-svn: 114454
2010-09-21 17:56:22 +00:00
Chris Lattner 7727d05dbb convert the targets off the non-MachinePointerInfo version of getLoad.
llvm-svn: 114410
2010-09-21 06:44:06 +00:00
Chris Lattner 2510de2bea reimplement memcpy/memmove/memset lowering to use MachinePointerInfo
instead of srcvalue/offset pairs.  This corrects SV info for mem
operations whose size is > 32 bits.

llvm-svn: 114401
2010-09-21 05:40:29 +00:00
Bob Wilson cb6db98897 Add target-specific DAG combiner for BUILD_VECTOR and VMOVRRD. An i64
value should be in GPRs when it's going to be used as a scalar, and we use
VMOVRRD to make that happen, but if the value is converted back to a vector
we need to fold to a simple bit_convert.  Radar 8407927.

llvm-svn: 114233
2010-09-17 22:59:05 +00:00
Eric Christopher 1c06917f15 Split out some of the calling convention bits so that they can be
used for fast-isel.

llvm-svn: 113652
2010-09-10 22:42:06 +00:00
Evan Cheng bf4070756f Teach if-converter to be more careful with predicating instructions that would
take multiple cycles to decode.
For the current if-converter clients (actually only ARM), the instructions that
are predicated on false are not nops. They would still take machine cycles to
decode. Micro-coded instructions such as LDM / STM can potentially take multiple
cycles to decode. The if-converter should not treat them as simple,
non-micro-coded instructions.

llvm-svn: 113570
2010-09-10 01:29:16 +00:00
Jim Grosbach 535d3b4e09 remove trailing whitespace
llvm-svn: 113338
2010-09-08 03:54:02 +00:00
Bob Wilson f65c9ef720 Replace NEON vabdl, vaba, and vabal intrinsics with combinations of the
vabd intrinsic and add and/or zext operations.  In the case of vaba, this
also avoids the need for a DAG combine pattern to combine vabd with add.
Update tests.  Auto-upgrade the old intrinsics.

llvm-svn: 112941
2010-09-03 01:35:08 +00:00
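A minimal scalar model of the rewrite described above, assuming illustrative lane widths and hypothetical helper names (the real change operates on whole NEON vectors and LLVM intrinsics, not C scalars):

#include <stdint.h>

/* Absolute difference of two unsigned 8-bit lanes -- the plain vabd result. */
static uint8_t vabd_u8_lane(uint8_t a, uint8_t b) {
    return (uint8_t)(a > b ? a - b : b - a);
}

/* One lane of vabal: accumulate plus the zero-extended absolute difference,
   i.e. add(acc, zext(vabd(a, b))), so no dedicated widening intrinsic is needed. */
static uint16_t vabal_u8_lane(uint16_t acc, uint8_t a, uint8_t b) {
    return (uint16_t)(acc + (uint16_t)vabd_u8_lane(a, b));
}
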
Bob Wilson 38ab35a911 Remove NEON vmull, vmlal, and vmlsl intrinsics, replacing them with multiply,
add, and subtract operations with zero-extended or sign-extended vectors.
Update tests.  Add auto-upgrade support for the old intrinsics.

llvm-svn: 112773
2010-09-01 23:50:19 +00:00
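The same idea for the widening multiplies above, again as a hedged scalar sketch with hypothetical names: vmull is a multiply of sign-extended (or zero-extended) operands, and vmlal is that multiply followed by an ordinary add.

#include <stdint.h>

/* One lane of vmull.s16: sign-extend both operands, then multiply,
   i.e. mul(sext(a), sext(b)). */
static int32_t vmull_s16_lane(int16_t a, int16_t b) {
    return (int32_t)a * (int32_t)b;
}

/* One lane of vmlal.s16: the widening multiply plus an accumulate.
   Unsigned arithmetic models the wrapping behaviour of the lane. */
static int32_t vmlal_s16_lane(int32_t acc, int16_t a, int16_t b) {
    return (int32_t)((uint32_t)acc + (uint32_t)vmull_s16_lane(a, b));
}
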
Bill Wendling 0a65116cce Create an ARMISD::AND node. This node is exactly like the "ARM::AND" node, but
it sets the CPSR register.

llvm-svn: 112393
2010-08-29 03:02:11 +00:00
Daniel Dunbar a54a1b0edf ARM/Thumb2: Fix a misselect in getARMCmp, when attempting to adjust a signed
comparison that would overflow.
 - The other under/overflow cases can't actually happen because the immediates
   which would trigger them are legal (so we don't enter this code), but
   adjusted the style to make it clear the transform is always valid.

llvm-svn: 112053
2010-08-25 16:58:05 +00:00
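A hedged sketch of the overflow condition behind the misselect above, with hypothetical helper names; the real getARMCmp adjusts the immediate and condition code together, which is not modeled here:

#include <limits.h>
#include <stdbool.h>
#include <stdint.h>

/* Rewriting a signed "x < C" as "x <= C - 1" (or "x >= C" as "x > C - 1")
   is only valid when C - 1 does not wrap. */
static bool can_decrement_signed_imm(int32_t C) {
    return C != INT32_MIN;
}

/* Likewise, "x <= C" -> "x < C + 1" is only valid when C + 1 does not wrap. */
static bool can_increment_signed_imm(int32_t C) {
    return C != INT32_MAX;
}
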
Bob Wilson 9a511c07e4 Replace the arm.neon.vmovls and vmovlu intrinsics with vector sign-extend and
zero-extend operations.

llvm-svn: 111614
2010-08-20 04:54:02 +00:00
Bob Wilson fb7eaff759 Expand ZERO_EXTEND operations for NEON vector types.
Testcase from Nick Lewycky.

llvm-svn: 111341
2010-08-18 01:45:52 +00:00
Bob Wilson 411dfad981 Allow more cases of undef shuffle indices and add tests for them.
llvm-svn: 111226
2010-08-17 05:54:34 +00:00
Bob Wilson c350e7a509 Ignore undef shuffle indices when checking for a VTRN shuffle. Radar 8290937.
llvm-svn: 111208
2010-08-16 23:37:17 +00:00
Bob Wilson 3c9ed76ba5 Temporarily disable tail calls on ARM to work around some linker problems.
llvm-svn: 111050
2010-08-13 22:43:33 +00:00
Jim Grosbach 4d5dc3e7e5 Cortex-M4 has floating-point support, but only single precision.
llvm-svn: 110810
2010-08-11 15:44:15 +00:00
Bill Wendling 6a98131468 Consider this code snippet:
float t1(int argc) {
  return (argc == 1123) ? 1.234f : 2.38213f;
}

We would generate truly awful code on ARM (those with a weak stomach should look
away):

_t1:
  movw   r1, #1123
  movs   r2, #1
  movs   r3, #0
  cmp    r0, r1
  mov.w  r0, #0
  it     eq
  moveq  r0, r2
  movs   r1, #4
  cmp    r0, #0
  it     ne
  movne  r3, r1
  adr    r0, #LCPI1_0
  ldr    r0, [r0, r3]
  bx     lr

The problem was that legalization was creating a cascade of SELECT_CC nodes
for the comparison of "argc == 1123", which was fed into a SELECT node for the ?:
statement which was itself converted to a SELECT_CC node. This is because the
ARM back-end doesn't have custom lowering for SELECT nodes, so it used the
default "Expand".

I added a fairly simple "LowerSELECT" to the ARM back-end. It takes care of this
testcase, but can obviously be expanded to include more cases.

Now we generate this, which looks optimal to me:

_t1:
  movw   r1, #1123
  movs   r2, #0
  cmp    r0, r1
  adr    r0, #LCPI0_0
  it     eq
  moveq  r2, #4
  ldr    r0, [r0, r2]
  bx     lr
  .align  2
LCPI0_0:
  .long   1075344593  @ float 2.382130e+00
  .long   1067316150  @ float 1.234000e+00

llvm-svn: 110799
2010-08-11 08:43:16 +00:00
Evan Cheng 6e809de90c - Add subtarget feature -mattr=+db which determines whether an ARM CPU has the
memory and synchronization barrier dmb and dsb instructions.
- Change instruction names to something more sensible (matching the names of the
  actual instructions).
- Added tests for memory barrier codegen.

llvm-svn: 110785
2010-08-11 06:22:01 +00:00
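As a usage-level illustration of the barriers above (an assumption for the example, not part of the patch): a full barrier written with the __sync_synchronize built-in is the kind of source construct that ends up as a dmb on cores with the +db feature.

/* Publish data, then set a flag; the barrier keeps the store to *data
   ordered before the store to *flag. */
void publish(int *data, volatile int *flag, int value) {
    *data = value;
    __sync_synchronize();   /* GCC/Clang built-in full memory barrier */
    *flag = 1;
}
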
Evan Cheng fa16acae44 Delete some unused instructions.
llvm-svn: 110710
2010-08-10 19:36:22 +00:00
Evan Cheng 3f251fb26e Re-apply r110655 with fixes. Epilogue must restore sp from fp if the function stack frame has a var-sized object.
Also added a test case to check for the added benefit of this patch: it's optimizing away the unnecessary restore of sp from fp for some non-leaf functions.

llvm-svn: 110707
2010-08-10 19:30:19 +00:00
Daniel Dunbar 0dd47bfca3 Revert r110655, "Fix ARM hasFP() semantics. It should return true whenever FP
register is", it breaks a couple test-suite tests.

llvm-svn: 110701
2010-08-10 18:32:02 +00:00
Evan Cheng 8d5d1c1331 Fix ARM hasFP() semantics. It should return true whenever FP register is
reserved, not available for general allocation. This eliminates all the
extra checks for Darwin.

This change also fixes the use of FP to access frame indices in leaf
functions and cleans up some confusing code in epilogue emission.

llvm-svn: 110655
2010-08-10 06:26:49 +00:00
Dale Johannesen 21f13209f8 Remove switch for disabling ARM tail calls. They
seem to be working correctly.  No functional change.

llvm-svn: 110226
2010-08-04 18:07:17 +00:00
Bob Wilson 79daf7e0ae Combine NEON VABD (absolute difference) intrinsics with ADDs to make VABA
(absolute difference with accumulate) intrinsics.  Radar 8228576.

llvm-svn: 110170
2010-08-04 00:12:08 +00:00
Nate Begeman b69b182191 Add support for getting & setting the FPSCR application register on ARM when VFP is enabled.
Add support for using the FPSCR in conjunction with the vcvtr instruction, for controlling fp to int rounding.
Add support for the FLT_ROUNDS_ node now that the FPSCR is exposed.

llvm-svn: 110152
2010-08-03 21:31:55 +00:00
Bob Wilson 728eb292eb Refactor ARM-specific DAG combining in preparation for adding some more
transformations.

llvm-svn: 109800
2010-07-29 20:34:14 +00:00
Dale Johannesen 2bff50546c Implement vector constants which are splats of
integers with mov + vdup.  8003375.  This is
currently disabled by default because LICM will
not hoist a VDUP, so it pessimizes the code if
the construct occurs inside a loop (8248029).

llvm-svn: 109799
2010-07-29 20:10:08 +00:00
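Purely to show what "splat of an integer" means at the source level (using the GCC/Clang vector extension as an assumed illustration): a splat constant repeats one scalar across every lane, which is what a mov + vdup pair materializes in a register.

typedef int v4si __attribute__((vector_size(16)));

v4si make_splat(void) {
    v4si splat = {7, 7, 7, 7};   /* the same scalar in all four lanes */
    return splat;
}
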
Anton Korobeynikov 19edda0323 Hook in GlobalMerge pass
llvm-svn: 109359
2010-07-24 21:52:08 +00:00
Jim Grosbach 0acbcb1a60 Use the appropriate register class for an i32 when adding ARM::LR to the
function live-in set. This will give us tGPR for Thumb1 and GPR otherwise,
so the copy will be spillable. rdar://8224931

llvm-svn: 109293
2010-07-23 23:50:35 +00:00
Evan Cheng df907f4594 - Allow the target to specify when register pressure is "too high". In most
  cases, it's too late to start backing off aggressive latency scheduling when
  most of the registers are in use, so the threshold should be a bit tighter.
- Correctly handle live-outs, extract_subreg, etc.
- Enable register pressure aware scheduling by default for the hybrid scheduler.
  For ARM, this is almost always a win on # of instructions. It's runtime
  neutral for most of the tests, but for some kernels with high register
  pressure it can be a huge win; e.g., 464.h264ref reduced its number of spills
  by 54 and sped up by 20%.

llvm-svn: 109279
2010-07-23 22:39:59 +00:00
Chandler Carruth a1d7516cb7 Mark an assert-only variable as used.
llvm-svn: 109091
2010-07-22 08:02:25 +00:00
Evan Cheng 285903853f More register pressure aware scheduling work.
llvm-svn: 109064
2010-07-21 23:53:58 +00:00
Eric Christopher 84bdfd80df Baby steps towards ARM fast-isel.
llvm-svn: 109047
2010-07-21 22:26:11 +00:00
Rafael Espindola 4277e14dc4 Fix calling convention on ARM if vfp2+ is enabled.
llvm-svn: 109009
2010-07-21 11:38:30 +00:00
Evan Cheng a77f3d3b37 Teach the bottom-up pre-RA scheduler to track register pressure. Work in progress.
llvm-svn: 108991
2010-07-21 06:09:07 +00:00
Jim Grosbach 9c7708cc1b Removed unused code.
llvm-svn: 108841
2010-07-20 14:51:32 +00:00