Commit Graph

3757 Commits

Author SHA1 Message Date
NAKAMURA Takumi 54ce546865 test/twoaddr-coalesce: Do not use @main.
Win32 codegen emits an implicit call to __main in @main, causing the test to fail.

llvm-svn: 112801
2010-09-02 03:45:51 +00:00
Bob Wilson 38ab35a911 Remove NEON vmull, vmlal, and vmlsl intrinsics, replacing them with multiply,
add, and subtract operations with zero-extended or sign-extended vectors.
Update tests.  Add auto-upgrade support for the old intrinsics.
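
For reference, a minimal hand-written IR sketch (function name hypothetical, not a
test from the tree) of the form a widening multiply takes after this change; the
backend is expected to match the extended operands and the mul back to vmull:

define <4 x i32> @vmull_s16_sketch(<4 x i16> %a, <4 x i16> %b) nounwind {
  ; sketch only: sign-extend both operands, then multiply in the wide type
  %ea = sext <4 x i16> %a to <4 x i32>
  %eb = sext <4 x i16> %b to <4 x i32>
  %m = mul <4 x i32> %ea, %eb
  ret <4 x i32> %m
}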

llvm-svn: 112773
2010-09-01 23:50:19 +00:00
Bruno Cardoso Lopes fea81b4831 Using target-specific nodes for shuffle nodes makes the mask
check stricter, breaking some cases not checked in the
testsuite, but also exposes some foldings not done before,
as in this example:

  movaps  (%rdi), %xmm0
  movaps  (%rax), %xmm1
  movaps  %xmm0, %xmm2
  movss %xmm1, %xmm2
  shufps  $36, %xmm2, %xmm0

now is generated as:

  movaps  (%rdi), %xmm0
  movaps  %xmm0, %xmm1
  movlps  (%rax), %xmm1
  shufps  $36, %xmm1, %xmm0

llvm-svn: 112753
2010-09-01 22:33:20 +00:00
Jakob Stoklund Olesen 4b6fd48bba Teach RemoveCopyByCommutingDef to check all aliases, not just subregisters.
This caused a miscompilation in WebKit where %RAX had conflicting defs when
RemoveCopyByCommutingDef was commuting a %EAX use.

llvm-svn: 112751
2010-09-01 22:15:35 +00:00
Chris Lattner 39eccb4754 temporarily revert r112664, it is causing a decoding conflict, and
the testcases should be merged.

llvm-svn: 112711
2010-09-01 16:00:50 +00:00
Dan Gohman 110ed64fbb Revert 112442 and 112440 until the compile time problems introduced
by 112440 are resolved.

llvm-svn: 112692
2010-09-01 01:45:53 +00:00
Bill Wendling 6789f8b6ae We have a chance for an optimization. Consider this code:
int x(int t) {
  if (t & 256)
    return -26;
  return 0;
}

We generate this:

     tst.w   r0, #256
     mvn     r0, #25
     it      eq
     moveq   r0, #0

while gcc generates this:

     ands    r0, r0, #256
     it      ne
     mvnne   r0, #25
     bx      lr

Scandalous really!

During ISel time, we can look for this particular pattern. One where we have a
"MOVCC" that uses the flag off of a CMPZ that itself is comparing an AND
instruction to 0. Something like this (greatly simplified):

  %r0 = ISD::AND ...
  ARMISD::CMPZ %r0, 0         @ sets [CPSR]
  %r0 = ARMISD::MOVCC 0, -26  @ reads [CPSR]

All we have to do is convert the "ISD::AND" into an "ARM::ANDS" that sets [CPSR]
when it's zero. The zero value will already be in the %r0 register and we only
need to change it if the AND wasn't zero. Easy!
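
For completeness, the C function above written directly in IR (a hand-written
sketch, not a test from the tree); the and/icmp/select chain below is what becomes
the ISD::AND, ARMISD::CMPZ and ARMISD::MOVCC nodes during ISel:

define i32 @x(i32 %t) nounwind {
entry:
  ; sketch only: select 0 when the tested bit is clear, -26 otherwise
  %and = and i32 %t, 256
  %cmp = icmp eq i32 %and, 0
  %ret = select i1 %cmp, i32 0, i32 -26
  ret i32 %ret
}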

llvm-svn: 112664
2010-08-31 22:41:22 +00:00
Jim Grosbach ad9b6de3b6 Update test for 112609
llvm-svn: 112610
2010-08-31 17:58:47 +00:00
Anton Korobeynikov 3a1d87a7ba Fix broken test
llvm-svn: 112555
2010-08-30 23:41:49 +00:00
Bob Wilson 4cd8a126c3 Remove NEON vmovn intrinsic, replacing it with vector truncate operations.
Auto-upgrade the old intrinsic and update tests.
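
A minimal hand-written sketch (not a test from the tree) of the replacement form;
the plain vector truncate below is what the backend is expected to select as vmovn:

define <4 x i16> @vmovn_sketch(<4 x i32> %a) nounwind {
  ; sketch only: narrow each i32 lane to i16
  %t = trunc <4 x i32> %a to <4 x i16>
  ret <4 x i16> %t
}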

llvm-svn: 112507
2010-08-30 20:02:30 +00:00
Chris Lattner 34bfab0ad5 two changes:
1) nuke ConstDataCoalSection, which is dead.
2) revise my previous patch for rdar://8018335,
  which was completely wrong.  Specifically, it doesn't 
  make sense to mark __TEXT,__const_coal as PURE_INSTRUCTIONS,
  because it is for readonly data.  templates (it turns out)
  go to const_coal_nt.  The real fix for rdar://8018335 was
  to give ConstTextCoalSection a section kind of ReadOnly 
  instead of Text.

llvm-svn: 112496
2010-08-30 18:12:35 +00:00
Duncan Sands 68c30907cc Correct bogus module triple specifications.
llvm-svn: 112469
2010-08-30 10:48:29 +00:00
Dan Gohman 3a08ed7904 Make IVUsers iterative instead of recursive.
This has the side effect of reversing the order of most of
IVUser's results.

llvm-svn: 112442
2010-08-29 16:40:03 +00:00
Dan Gohman 6665550bca Make this test less dependent on register allocation choices.
llvm-svn: 112426
2010-08-29 14:49:42 +00:00
Kalle Raiskila 1e616572d9 Fix lowering of INSERT_VECTOR_ELT in SPU.
The IDX was treated as a byte index, not an element index.

llvm-svn: 112422
2010-08-29 12:41:50 +00:00
Bob Wilson d0c054886c Remove NEON vaddl, vaddw, vsubl, and vsubw intrinsics. Instead, use llvm
IR add/sub operations with one or both operands sign- or zero-extended.
Auto-upgrade the old intrinsics.
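
A minimal hand-written sketch (not a test from the tree) of the vaddw-style case,
where only one operand is extended before the plain IR add:

define <8 x i16> @vaddw_sketch(<8 x i16> %a, <8 x i8> %b) nounwind {
  ; sketch only: zero-extend the narrow operand, then add in the wide type
  %eb = zext <8 x i8> %b to <8 x i16>
  %s = add <8 x i16> %a, %eb
  ret <8 x i16> %s
}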

llvm-svn: 112416
2010-08-29 05:57:34 +00:00
Chris Lattner c2887bc283 merge a bunch of shuffle tests into sse2.ll
llvm-svn: 112398
2010-08-29 03:19:04 +00:00
Chris Lattner b1ff978406 add some nounwind's
llvm-svn: 112396
2010-08-29 03:07:47 +00:00
Chris Lattner 94656b1c8c fix the buildvector->insertp[sd] logic to not always create a redundant
insertp[sd] $0, which is a noop.  Before:

_f32:                                   ## @f32
	pshufd	$1, %xmm1, %xmm2
	pshufd	$1, %xmm0, %xmm3
	addss	%xmm2, %xmm3
	addss	%xmm1, %xmm0
                                        ## kill: XMM0<def> XMM0<kill> XMM0<def>
	insertps	$0, %xmm0, %xmm0
	insertps	$16, %xmm3, %xmm0
	ret

after:

_f32:                                   ## @f32
	movdqa	%xmm0, %xmm2
	addss	%xmm1, %xmm2
	pshufd	$1, %xmm1, %xmm1
	pshufd	$1, %xmm0, %xmm3
	addss	%xmm1, %xmm3
	movdqa	%xmm2, %xmm0
	insertps	$16, %xmm3, %xmm0
	ret

The extra movs are due to a random (poor) scheduling decision.

llvm-svn: 112379
2010-08-28 17:59:08 +00:00
Chris Lattner bcb6090ad0 fix the BuildVector -> unpcklps logic to not do pointless shuffles
when the top elements of a vector are undefined.  This happens all
the time for X86-64 ABI stuff because only the low 2 elements of
a 4 element vector are defined.  For example, on:

_Complex float f32(_Complex float A, _Complex float B) {
  return A+B;
}

We used to produce (with SSE2, SSE4.1+ uses insertps):

_f32:                                   ## @f32
	movdqa	%xmm0, %xmm2
	addss	%xmm1, %xmm2
	pshufd	$16, %xmm2, %xmm2
	pshufd	$1, %xmm1, %xmm1
	pshufd	$1, %xmm0, %xmm0
	addss	%xmm1, %xmm0
	pshufd	$16, %xmm0, %xmm1
	movdqa	%xmm2, %xmm0
	unpcklps	%xmm1, %xmm0
	ret

We now produce:

_f32:                                   ## @f32
	movdqa	%xmm0, %xmm2
	addss	%xmm1, %xmm2
	pshufd	$1, %xmm1, %xmm1
	pshufd	$1, %xmm0, %xmm3
	addss	%xmm1, %xmm3
	movaps	%xmm2, %xmm0
	unpcklps	%xmm3, %xmm0
	ret

This implements rdar://8368414

llvm-svn: 112378
2010-08-28 17:28:30 +00:00
Dan Gohman e06905d1f0 Completely disable tail calls when fast-isel is enabled, as fast-isel
doesn't currently support dealing with this.

llvm-svn: 112341
2010-08-28 00:51:03 +00:00
Bob Wilson 13ce07fa92 Change ARM VFP VLDM/VSTM instructions to use addressing mode #4, just like
all the other LDM/STM instructions.  This fixes asm printer crashes when
compiling with -O0.  I've changed one of the NEON tests (vst3.ll) to run
with -O0 to check this in the future.

Prior to this change VLDM/VSTM used addressing mode #5, but not really.
The offset field was used to hold a count of the number of registers being
loaded or stored, and the AM5 opcode field was expanded to specify the IA
or DB mode, instead of the standard ADD/SUB specifier.  Much of the backend
was not aware of these special cases.  The crashes occurred when rewriting
a frameindex caused the AM5 offset field to be changed so that it did not
have a valid submode.  I don't know exactly what changed to expose this now.
Maybe we've never done much with -O0 and NEON.  Regardless, there's no longer
any reason to keep a count of the VLDM/VSTM registers, so we can use
addressing mode #4 and clean things up in a lot of places.

llvm-svn: 112322
2010-08-27 23:18:17 +00:00
Chris Lattner 7413e87b6d get this test passing on linux builders.
llvm-svn: 112280
2010-08-27 18:49:08 +00:00
Bob Wilson edf722add3 Add alignment arguments to all the NEON load/store intrinsics.
Update all the tests using those intrinsics and add support for
auto-upgrading bitcode files with the old versions of the intrinsics.

llvm-svn: 112271
2010-08-27 17:13:24 +00:00
Daniel Dunbar 1844a71e66 X86: Fix an encoding issue with LOCK_ADD64mr, which could lead to very hard to find miscompiles with the integrated assembler.
llvm-svn: 112250
2010-08-27 01:30:14 +00:00
Chris Lattner af23e9a798 Add a hackaround for PR7993 which is causing failures on x86 builders that lack sse2.
llvm-svn: 112175
2010-08-26 06:57:07 +00:00
Chris Lattner 66afba7aa4 I think enough general codegen bugs are fixed to allow this to work
on random hosts, let's see!

llvm-svn: 112172
2010-08-26 05:52:42 +00:00
Chris Lattner eb2cc0ce0e implement SplitVecOp_CONCAT_VECTORS, fixing the included testcase with SSE1.
llvm-svn: 112171
2010-08-26 05:51:22 +00:00
Chris Lattner 825294b85f Make sure this forces the x86 targets
llvm-svn: 112169
2010-08-26 05:25:05 +00:00
Chris Lattner cc60609cb4 fix sse1-only codegen in x86-64 mode, which is something we
apparently try to support.

llvm-svn: 112168
2010-08-26 05:24:29 +00:00
Jim Grosbach 08da771ec3 Enable pre-RA virtual frame base register allocation. rdar://8277890
llvm-svn: 112127
2010-08-26 00:58:06 +00:00
Bob Wilson 4629f423f8 Revert svn 107892 (with changes to work with trunk). It caused a crash if
a VLD result was not used (Radar 8355607).  It should also fix pr7988, but
I haven't verified that yet.

llvm-svn: 112118
2010-08-26 00:13:36 +00:00
Chris Lattner c7fb446a9d temporarily disable this, which started failing on the llvm-i686-linux
builder.  I will investigate tonight.

llvm-svn: 112113
2010-08-25 23:43:14 +00:00
Chris Lattner 75ff053497 Change handling of illegal vector types to widen when possible instead of
expanding: e.g. <2 x float> -> <4 x float> instead of -> 2 floats.  This
affects two places in the code: handling cross block values and handling
function return and arguments.  Since vectors are already widened by 
legalizetypes, this gives us much better code and unblocks x86-64 abi
and SPU abi work.

For example, this (which is a silly example of a cross-block value):
define <4 x float> @test2(<4 x float> %A) nounwind {
 %B = shufflevector <4 x float> %A, <4 x float> undef, <2 x i32> <i32 0, i32 1>
 %C = fadd <2 x float> %B, %B
  br label %BB
BB:
 %D = fadd <2 x float> %C, %C
 %E = shufflevector <2 x float> %D, <2 x float> undef, <4 x i32> <i32 0, i32 1, i32 undef, i32 undef>
 ret <4 x float> %E
}

Now compiles into:

_test2:                                 ## @test2
## BB#0:
 addps %xmm0, %xmm0
 addps %xmm0, %xmm0
 ret

previously it compiled into:

_test2:                                 ## @test2
## BB#0:
 addps %xmm0, %xmm0
 pshufd $1, %xmm0, %xmm1
                                        ## kill: XMM0<def> XMM0<kill> XMM0<def>
 insertps $0, %xmm0, %xmm0
 insertps $16, %xmm1, %xmm0
 addps %xmm0, %xmm0
 ret

This implements rdar://8230384

llvm-svn: 112101
2010-08-25 22:49:25 +00:00
Daniel Dunbar a54a1b0edf ARM/Thumb2: Fix a misselect in getARMCmp, when attempting to adjust a signed
comparison that would overflow.
 - The other under/overflow cases can't actually happen because the immediates
   which would trigger them are legal (so we don't enter this code), but
   adjusted the style to make it clear the transform is always valid.

llvm-svn: 112053
2010-08-25 16:58:05 +00:00
Eric Christopher 6b1533a1a9 Add another basic test cribbed from the x86 fast-isel tests.
llvm-svn: 112036
2010-08-25 07:57:29 +00:00
Eric Christopher 37d547aee6 Run this on thumb and arm.
llvm-svn: 112035
2010-08-25 07:53:15 +00:00
Eric Christopher e58c03698e Make this testcase actually executed with fast-isel on arm.
llvm-svn: 112033
2010-08-25 07:47:00 +00:00
Bruno Cardoso Lopes 0bc919fa35 Convert test to use filecheck and make it more specific
llvm-svn: 112016
2010-08-25 01:47:16 +00:00
Dan Gohman c88fda477a Fix X86's isLegalAddressingMode to recognize that static addresses
need not be RIP-relative in small mode.

llvm-svn: 111917
2010-08-24 15:55:12 +00:00
Kalle Raiskila 7e25bc4145 Fix SPU BE to use all the available return registers.
llc used to assert on the added testcase.

llvm-svn: 111911
2010-08-24 11:50:48 +00:00
Chris Lattner 58bd73a5a7 Add a new llvm.x86.int intrinsic, allowing access to the
x86 int and int3 instructions.  Patch by Peter Housel!

llvm-svn: 111831
2010-08-23 19:39:25 +00:00
Dan Gohman 42ef669d81 Fix x86 fast-isel's cmp+branch folding to avoid folding when the
comparison is in a different basic block from the branch. In such
cases, the comparison's operands may not have initialized virtual
registers available.
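
A hand-written sketch (not the original testcase) of the shape that must not be
folded: the compare lives in one block and the branch that uses it in another.

define i32 @cmp_in_other_block_sketch(i32 %a, i32 %b) nounwind {
entry:
  %c = icmp eq i32 %a, %b             ; compare here...
  br label %next
next:
  br i1 %c, label %yes, label %no     ; ...branch on it here: no cmp+branch folding
yes:
  ret i32 1
no:
  ret i32 0
}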

llvm-svn: 111709
2010-08-21 02:32:36 +00:00
Bob Wilson be745d8c00 Replace some NEON vmovl intrinsic that I missed earlier.
llvm-svn: 111696
2010-08-20 23:22:43 +00:00
Bob Wilson 9a511c07e4 Replace the arm.neon.vmovls and vmovlu intrinsics with vector sign-extend and
zero-extend operations.

llvm-svn: 111614
2010-08-20 04:54:02 +00:00
Evan Cheng 361b9be7c6 It's possible to sink a def if its local uses are PHIs.
llvm-svn: 111537
2010-08-19 18:33:29 +00:00
Dan Gohman 82656fb0e1 When sending stats output to stdout for grepping, don't emit normal
output to standard output also.

llvm-svn: 111435
2010-08-18 22:22:44 +00:00
Dan Gohman 2470818942 When sending stats output to stdout for grepping, don't emit normal
output to standard output also.

llvm-svn: 111401
2010-08-18 20:32:46 +00:00
Kalle Raiskila e60b5161d1 Fix a bug with insertelement on SPU.
The previous algorithm in LowerVECTOR_SHUFFLE 
didn't check all requirements for "monotonic" shuffles.

llvm-svn: 111361
2010-08-18 10:20:29 +00:00
Kalle Raiskila ab49360f59 Remove all traces of v2[i,f]32 on SPU.
The "half vectors" are now widened to full size by the legalizer.
The only exception is in parameter passing, where half vectors are 
expanded. This causes changes to some dejagnu tests.

llvm-svn: 111360
2010-08-18 10:04:39 +00:00
Kalle Raiskila f3984d1ef6 Change SPU C calling convention to match that described in
"SPU Application Binary Interface Specification, v1.9" by
IBM. 
Specifically: use r3-r74 to pass parameters and the return value.

llvm-svn: 111358
2010-08-18 09:50:30 +00:00
Bob Wilson fb7eaff759 Expand ZERO_EXTEND operations for NEON vector types.
Testcase from Nick Lewycky.

llvm-svn: 111341
2010-08-18 01:45:52 +00:00
Dan Gohman ed2b005842 Tweak IVUsers' concept of "interesting" to exclude add recurrences
where the step value is an induction variable from an outer loop, to
avoid trouble trying to re-expand such expressions. This effectively
hides such expressions from indvars and lsr, which prevents them
from getting into trouble.

llvm-svn: 111317
2010-08-17 22:50:37 +00:00
Evan Cheng efdc74ea59 Add nounwind.
llvm-svn: 111312
2010-08-17 22:35:20 +00:00
Dale Johannesen 16f96445c3 Make fast scheduler handle asm clobbers correctly.
PR 7882.  Follows suggestion by Amaury Pouly, thanks.

llvm-svn: 111306
2010-08-17 22:17:24 +00:00
Bob Wilson 942b10f511 Change ARM PKHTB and PKHBT instructions to use a shift_imm operand to avoid
printing "lsl #0".  This fixes the remaining parts of pr7792.  Make
corresponding changes for encoding/decoding these instructions.

llvm-svn: 111251
2010-08-17 17:23:19 +00:00
Bob Wilson 411dfad981 Allow more cases of undef shuffle indices and add tests for them.
llvm-svn: 111226
2010-08-17 05:54:34 +00:00
Evan Cheng f259efde47 PHI elimination should not break the back edge. It can cause some significant code placement issues. rdar://8263994
good:
LBB0_2:
  mov     r2, r0
  . . .
  mov     r1, r2
  bne     LBB0_2

bad:
LBB0_2:
  mov     r2, r0
  . . .
@ BB#3:
  mov     r1, r2
  b       LBB0_2

llvm-svn: 111221
2010-08-17 01:20:36 +00:00
Bob Wilson eee4824f74 Add a testcase for svn 111208.
llvm-svn: 111212
2010-08-16 23:44:29 +00:00
Bob Wilson 804f6159f1 Generalize a pattern for PKHTB: an SRL of 16-31 bits will guarantee
that the high halfword is zero.  The shift need not be exactly 16 bits.

llvm-svn: 111196
2010-08-16 22:26:55 +00:00
Bob Wilson 3fd1e0dcda Convert test to FileCheck.
llvm-svn: 111195
2010-08-16 22:21:13 +00:00
Bob Wilson 8f553757c4 Convert a test to use FileCheck.
llvm-svn: 111153
2010-08-16 17:05:27 +00:00
Benjamin Kramer cbc55d9dc0 Test expects SSE, give him SSE.
llvm-svn: 111115
2010-08-15 23:32:03 +00:00
Benjamin Kramer 4566466b7f Restore arch on these tests, they fail on arm.
llvm-svn: 111109
2010-08-15 20:42:56 +00:00
Dale Johannesen 339423c460 Mark as XFAIL on darwin 8. PR 7886.
llvm-svn: 111108
2010-08-15 19:40:29 +00:00
Bob Wilson 3c9ed76ba5 Temporarily disable tail calls on ARM to work around some linker problems.
llvm-svn: 111050
2010-08-13 22:43:33 +00:00
Dale Johannesen 8d3c89e765 Revert 110491. While not wrong, it was based on a
misanalysis and is undesirable.

llvm-svn: 111028
2010-08-13 18:43:45 +00:00
Bruno Cardoso Lopes 7f704b31a9 - Teach SSEDomainFix to switch between different levels of AVX instructions. Here we guess that AVX will have domain issues, so implement them for consistency; they can be removed later if they turn out to be unnecessary.
- Make foldMemoryOperandImpl aware of 256-bit zero vector folding and support the 128-bit counterparts of AVX too.
- Make sure MOV[AU]PS instructions are only selected when SSE1 is enabled, and duplicate the patterns to match AVX.
- Add a testcase for a simple 128-bit zero vector creation.

llvm-svn: 110946
2010-08-12 20:20:53 +00:00
Bruno Cardoso Lopes 7306c86886 Begin to support some vector operations for AVX 256-bit instructions. The long
term goal here is to be able to match enough of vector_shuffle and build_vector
so all avx intrinsics which aren't mapped to their own built-ins but to
shufflevector calls can be codegen'd. This is the first (baby) step, support
building zeroed vectors.

llvm-svn: 110897
2010-08-12 02:06:36 +00:00
Devang Patel 48595bf2bc This is an x86-only test.
llvm-svn: 110887
2010-08-12 00:17:38 +00:00
Bruno Cardoso Lopes 1675ee7a02 Add testcases for all AVX 256-bit intrinsics added in the last couple days
llvm-svn: 110854
2010-08-11 21:12:09 +00:00
Bruno Cardoso Lopes 29c8818ad9 Reapply r109881 using a more strict command line for llc.
llvm-svn: 110833
2010-08-11 17:39:23 +00:00
Jim Grosbach a5f923b1a1 fix silly typo
llvm-svn: 110831
2010-08-11 17:32:46 +00:00
Jim Grosbach 2bf8bd1e19 Add a target triple, as the runtime library invocation varies a bit by
platform. It's apparently "bl __muldf3" on linux, for example. Since that's
not what we're checking here, it's more robust to just force a triple. We
just want to check that the inline FP instructions are only generated
on cpus that have them.

llvm-svn: 110830
2010-08-11 17:31:12 +00:00
Evan Cheng b0276814d5 Fix test and re-enable it.
llvm-svn: 110829
2010-08-11 17:25:51 +00:00
Dan Gohman 4df4114870 Temporarily disable some failing tests, until they can be
properly investigated.

llvm-svn: 110825
2010-08-11 16:36:07 +00:00
Jim Grosbach 4d5dc3e7e5 cortex m4 has floating point support, but only single precision.
llvm-svn: 110810
2010-08-11 15:44:15 +00:00
Dan Gohman f3d783a6d2 Temporarily disable some failing tests, until they can be
properly investigated.

llvm-svn: 110808
2010-08-11 15:09:00 +00:00
Bill Wendling 6a98131468 Consider this code snippet:
float t1(int argc) {
  return (argc == 1123) ? 1.234f : 2.38213f;
}

We would generate truly awful code on ARM (those with a weak stomach should look
away):

_t1:
  movw   r1, #1123
  movs   r2, #1
  movs   r3, #0
  cmp    r0, r1
  mov.w  r0, #0
  it     eq
  moveq  r0, r2
  movs   r1, #4
  cmp    r0, #0
  it     ne
  movne  r3, r1
  adr    r0, #LCPI1_0
  ldr    r0, [r0, r3]
  bx     lr

The problem was that legalization was creating a cascade of SELECT_CC nodes
for the comparison of "argc == 1123", which was fed into a SELECT node for the ?:
statement which was itself converted to a SELECT_CC node. This is because the
ARM back-end doesn't have custom lowering for SELECT nodes, so it used the
default "Expand".

I added a fairly simple "LowerSELECT" to the ARM back-end. It takes care of this
testcase, but can obviously be expanded to include more cases.

Now we generate this, which looks optimal to me:

_t1:
  movw   r1, #1123
  movs   r2, #0
  cmp    r0, r1
  adr    r0, #LCPI0_0
  it     eq
  moveq  r2, #4
  ldr    r0, [r0, r2]
  bx     lr
  .align  2
LCPI0_0:
  .long   1075344593  @ float 2.382130e+00
  .long   1067316150  @ float 1.234000e+00

llvm-svn: 110799
2010-08-11 08:43:16 +00:00
Evan Cheng 5190f09291 Report an error if codegen tries to instantiate an ARM target when the cpu does not support it, e.g. cortex-m* processors.
llvm-svn: 110798
2010-08-11 07:17:46 +00:00
Evan Cheng 40921a4e62 Add ARM Archv6M and let it imply FeatureDB (having dmb, etc.)
llvm-svn: 110795
2010-08-11 06:51:54 +00:00
Evan Cheng 49e02fc414 Add Cortex-M0 support. It's an ARMv6m device (no ARM mode) with some 32-bit
instructions: dmb, dsb, isb, msr, and mrs.

llvm-svn: 110786
2010-08-11 06:30:38 +00:00
Evan Cheng 6e809de90c - Add subtarget feature -mattr=+db which determines whether an ARM cpu has the
memory and synchronization barrier dmb and dsb instructions.
- Change instruction names to something more sensible (matching the names of the
  actual instructions).
- Added tests for memory barrier codegen.

llvm-svn: 110785
2010-08-11 06:22:01 +00:00
Bill Wendling 79937dfc5b Update test to match output of optimize compares for ARM.
llvm-svn: 110765
2010-08-11 01:05:02 +00:00
Bill Wendling 871d4e1170 The optimize comparisons pass removes the "cmp" instruction this is checking for.
llvm-svn: 110739
2010-08-10 22:16:05 +00:00
Evan Cheng 3f251fb26e Re-apply r110655 with fixes. Epilogue must restore sp from fp if the function stack frame has a var-sized object.
Also added a test case to check for the added benefit of this patch: it's optimizing away the unnecessary restore of sp from fp for some non-leaf functions.

llvm-svn: 110707
2010-08-10 19:30:19 +00:00
Daniel Dunbar 0dd47bfca3 Revert r110655, "Fix ARM hasFP() semantics. It should return true whenever FP
register is", it breaks a couple test-suite tests.

llvm-svn: 110701
2010-08-10 18:32:02 +00:00
Jakob Stoklund Olesen 5730846c2f Fix test for more architectures. Patch by Tobias Grosser.
llvm-svn: 110685
2010-08-10 16:48:24 +00:00
Tobias Grosser fedeff8015 Fix failing testcase.
Those look like typos to me.

llvm-svn: 110664
2010-08-10 09:54:29 +00:00
Devang Patel b219746c80 Handle TAG_constant for integers.
llvm-svn: 110656
2010-08-10 07:11:13 +00:00
Evan Cheng 8d5d1c1331 Fix ARM hasFP() semantics. It should return true whenever FP register is
reserved, not available for general allocation. This eliminates all the
extra checks for Darwin.

This change also fixes the use of FP to access frame indices in leaf
functions and cleaned up some confusing code in epilogue emission.

llvm-svn: 110655
2010-08-10 06:26:49 +00:00
Kalle Raiskila 999da1f3a0 Have SPU handle halfvec stores aligned by 8 bytes.
llvm-svn: 110576
2010-08-09 16:33:00 +00:00
Dale Johannesen a3bd31a923 Use sdmem and sse_load_f64 (etc.) for the vector
form of CMPSD (etc.)  Matching a 128-bit memory
operand is wrong, the instruction uses only 64 bits
(same as ADDSD etc.)  8193553.

llvm-svn: 110491
2010-08-07 00:33:42 +00:00
Rafael Espindola 027d5bcf89 Fix eabi calling convention when a 64-bit value shadows r3.
Without this what was happening was:

* R3 is not marked as "used"
* ARM backend thinks it has to save it to the stack because of vaarg
* Offset computation correctly ignores it
* Offsets are wrong

llvm-svn: 110446
2010-08-06 15:35:32 +00:00
Eric Christopher e1fb772aa5 Add an option to always emit realignment code for a particular module.
llvm-svn: 110404
2010-08-05 23:57:43 +00:00
Devang Patel cc3f3b341d Move x86 specific tests into test/CodeGen/X86.
llvm-svn: 110372
2010-08-05 20:25:37 +00:00
Dan Gohman c53ee449a5 Move x86-specific tests out of test/Transforms/LoopStrengthReduce and
into test/CodeGen/X86, so that they aren't run when the x86 target is
not enabled.

Fix uglygep.ll to not be x86-specific.

llvm-svn: 110343
2010-08-05 17:04:15 +00:00
Daniel Dunbar e62e664656 tests: CodeGen/X86/GC tests require X86.
llvm-svn: 110338
2010-08-05 15:45:33 +00:00
Bill Wendling ca1cb13646 The lower invoke pass needs to have unreachable code elimination run after it
because it could create such things. This fixes a MingW buildbot test failure.

llvm-svn: 110279
2010-08-04 23:36:02 +00:00
Eli Friedman 39d0f57cab PR7814: Truncates cannot be ignored for signed comparisons.
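
A minimal hand-written sketch (not the PR's testcase) of why the truncate matters
for signed compares: for %x = 128 the truncated i8 value is -128, so the compare
below is true, while comparing the original i16 value against 0 would be false.

define i1 @trunc_slt_sketch(i16 %x) nounwind {
  ; sketch only: the icmp tests the sign bit of the low 8 bits, not of %x
  %t = trunc i16 %x to i8
  %c = icmp slt i8 %t, 0
  ret i1 %c
}
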
llvm-svn: 110268
2010-08-04 22:40:58 +00:00
Bill Wendling 26feb849a4 Testcase for r110248.
llvm-svn: 110249
2010-08-04 21:56:30 +00:00
Stuart Hastings cba0d06b7c call-imm.ll test case regex fix. Patch by Dimitry Andric!
llvm-svn: 110199
2010-08-04 15:31:35 +00:00
Kalle Raiskila 8b2f70125f Make SPU backend handle insertelement and
store for "half vectors"

llvm-svn: 110198
2010-08-04 13:59:48 +00:00
Bob Wilson 79daf7e0ae Combine NEON VABD (absolute difference) intrinsics with ADDs to make VABA
(absolute difference with accumulate) intrinsics.  Radar 8228576.

llvm-svn: 110170
2010-08-04 00:12:08 +00:00
Jakob Stoklund Olesen 011ff9bec9 OK, that's it. This test is going away now. But don't worry, I am taking it to a
nice farm in the country where it can play with other tests. And bunnies.

It is not clear what is being tested, and the revision history shows a bunch of
random changes to the expected instruction count. Clearly, we are just fudging
it to pass whenever it fails.

llvm-svn: 110118
2010-08-03 17:21:14 +00:00
Kalle Raiskila 77558b7d13 More SPU v2f32 stuff added: insertelement and shuffle.
llvm-svn: 110038
2010-08-02 11:22:10 +00:00
Kalle Raiskila 68b3886678 Add preliminary v2f32 support for SPU. Like with v2i32, we just
duplicate the instructions and operate on half vectors. 

Also reorder code in SPUInstrInfo.td for better coherency.

llvm-svn: 110037
2010-08-02 10:25:47 +00:00
Kalle Raiskila 622f8eb981 Add preliminary v2i32 support for SPU backend. As there are no
such registers in SPU, this support boils down to "emulating" 
them by duplicating instructions on the general purpose registers. 

This adds the most basic operations on v2i32: passing parameters,
addition, subtraction, multiplication and a few others.

llvm-svn: 110035
2010-08-02 08:54:39 +00:00
Eli Friedman 7595ce05a2 PR7781: Fix incorrect shifting in PPCTargetLowering::LowerBUILD_VECTOR.
llvm-svn: 109998
2010-08-02 00:18:19 +00:00
Eli Friedman 1b2bc1b844 PR7774: Fix undefined shifts in Alpha backend. As a bonus, this actually
improves the generated code in some cases.

llvm-svn: 109985
2010-08-01 21:13:28 +00:00
Bob Wilson 66161f5eb4 Revert new AVX intrinsic tests. They are breaking buildbots and Bruno is
away from a computer now.
--- Reverse-merging r109881 into '.':
D    test/CodeGen/X86/avx-intrinsics-x86.ll
D    test/CodeGen/X86/avx-intrinsics-x86_64.ll

llvm-svn: 109959
2010-07-31 22:36:03 +00:00
Bruno Cardoso Lopes 92941fdb26 A *bunch* of tests for AVX intrinsics
llvm-svn: 109881
2010-07-30 19:57:56 +00:00
Eli Friedman ffe64c06ef Fix for bug reported by Evzen Muller on llvm-commits: make sure to correctly
check the range of the constant when optimizing a comparison between a
constant and a sign_extend_inreg node.
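
A minimal hand-written sketch (not the reported testcase) of the kind of compare
this touches; the shl/ashr pair is the usual IR spelling of sign_extend_inreg, and
the fold has to use the signed range of the narrow type:

define i1 @seir_range_sketch(i32 %x) nounwind {
  ; sketch only: %se is a value sign-extended from i8, so it lies in [-128, 127]
  %lo = shl i32 %x, 24
  %se = ashr i32 %lo, 24
  %c = icmp eq i32 %se, 200    ; 200 is outside that range, so this can fold to false
  ret i1 %c
}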

llvm-svn: 109854
2010-07-30 06:44:31 +00:00
Jim Grosbach d343166a0b Many Thumb2 instructions can reference the full ARM register set (i.e.,
have 4 bits per register in the operand encoding), but have undefined
behavior when the operand value is 13 or 15 (SP and PC, respectively).
The trivial coalescer in linear scan sometimes will merge a copy from
SP into a subsequent instruction which uses the copy, and if that
instruction cannot legally reference SP, we get bad code such as:
  mls r0,r9,r0,sp
instead of:
  mov r2, sp
  mls r0, r9, r0, r2

This patch adds a new register class for use by Thumb2 that excludes
the problematic registers (SP and PC) and is used instead of GPR
for those operands which cannot legally reference PC or SP. The
trivial coalescer explicitly requires that the register class
of the destination for the COPY instruction contain the source
register for the COPY to be considered for coalescing. This prevents
errant instructions like that above.

PR7499

llvm-svn: 109842
2010-07-30 02:41:01 +00:00
Dale Johannesen 2bff50546c Implement vector constants which are splats of
integers with mov + vdup.  8003375.  This is
currently disabled by default because LICM will
not hoist a VDUP, so it pessimizes the code if
the construct occurs inside a loop (8248029).
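
A minimal hand-written sketch (not a test from the tree) of the kind of constant
this covers; 1000 is assumed here not to be encodable as a VMOV immediate, so it
takes a move of the scalar into a core register followed by a vdup:

define <4 x i32> @splat_sketch() nounwind {
  ; sketch only: a splat whose element does not fit a VMOV-immediate encoding
  ret <4 x i32> <i32 1000, i32 1000, i32 1000, i32 1000>
}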

llvm-svn: 109799
2010-07-29 20:10:08 +00:00
Nate Begeman 53afc8f06a Implement a vectorized algorithm for <16 x i8> << <16 x i8>
This is about 4x faster and smaller than the existing scalarization.

llvm-svn: 109566
2010-07-28 00:21:48 +00:00
Nate Begeman 269a6da023 ~40% faster vector shl <4 x i32> on SSE 4.1. Larger improvements for smaller types coming in future patches.
For:

define <2 x i64> @shl(<4 x i32> %r, <4 x i32> %a) nounwind readnone ssp {
entry:
  %shl = shl <4 x i32> %r, %a                     ; <<4 x i32>> [#uses=1]
  %tmp2 = bitcast <4 x i32> %shl to <2 x i64>     ; <<2 x i64>> [#uses=1]
  ret <2 x i64> %tmp2
}

We get:

_shl:                                   ## @shl
	pslld	$23, %xmm1
	paddd	LCPI0_0, %xmm1
	cvttps2dq	%xmm1, %xmm1
	pmulld	%xmm1, %xmm0
	ret

Instead of:

_shl:                                   ## @shl
	pshufd	$3, %xmm0, %xmm2
	movd	%xmm2, %eax
	pshufd	$3, %xmm1, %xmm2
	movd	%xmm2, %ecx
	shll	%cl, %eax
	movd	%eax, %xmm2
	pshufd	$1, %xmm0, %xmm3
	movd	%xmm3, %eax
	pshufd	$1, %xmm1, %xmm3
	movd	%xmm3, %ecx
	shll	%cl, %eax
	movd	%eax, %xmm3
	punpckldq	%xmm2, %xmm3
	movd	%xmm0, %eax
	movd	%xmm1, %ecx
	shll	%cl, %eax
	movd	%eax, %xmm2
	movhlps	%xmm0, %xmm0
	movd	%xmm0, %eax
	movhlps	%xmm1, %xmm1
	movd	%xmm1, %ecx
	shll	%cl, %eax
	movd	%eax, %xmm0
	punpckldq	%xmm0, %xmm2
	movdqa	%xmm2, %xmm0
	punpckldq	%xmm3, %xmm0
	ret

llvm-svn: 109549
2010-07-27 22:37:06 +00:00
Nate Begeman 317b969ac5 Fix a crash in the dag combiner caused by ConstantFoldBIT_CONVERTofBUILD_VECTOR calling itself
recursively and returning a SCALAR_TO_VECTOR node, but assuming the input was always a BUILD_VECTOR.

llvm-svn: 109519
2010-07-27 18:02:18 +00:00
Anton Korobeynikov 6bcea068db Currently the EH lowering code expects typeinfo to be global only.
This assumption is not satisfied due to global merging.
Work around the issue by temporarily disabling merging of const globals.
Also, ignore LLVM "special" globals. This fixes PR7716

llvm-svn: 109423
2010-07-26 18:45:39 +00:00
Evan Cheng df907f4594 - Allow the target to specify when register pressure is "too high". In most cases,
it's too late to start backing off aggressive latency scheduling when most
  of the registers are in use, so the threshold should be a bit tighter.
- Correctly handle live-outs and extract_subreg etc.
- Enable register pressure aware scheduling by default for hybrid scheduler.
  For ARM, this is almost always a win on # of instructions. It's runtime
  neutral for most of the tests. But for some kernels with high register
  pressure it can be a huge win. e.g. 464.h264ref reduced number of spills by
  54 and sped up by 20%.

llvm-svn: 109279
2010-07-23 22:39:59 +00:00
Dan Gohman 55e244698a Use the proper type for shift counts. This fixes a bootstrap error.
llvm-svn: 109265
2010-07-23 21:08:12 +00:00
Dan Gohman 0818684a70 DAGCombine (shl (anyext x), c) to (anyext (shl x, c)) if the high bits
are not demanded. This often allows the anyext to be folded away.

llvm-svn: 109242
2010-07-23 18:03:30 +00:00
Eric Christopher 9a77382685 Custom lower the memory barrier instructions and add support
for lowering without sse2.  Add a couple of new testcases.

Fixes a few libgomp tests and latent bugs.  Remove a few todos.

llvm-svn: 109078
2010-07-22 02:48:34 +00:00
Evan Cheng 285903853f More register pressure aware scheduling work.
llvm-svn: 109064
2010-07-21 23:53:58 +00:00
Eric Christopher 84bdfd80df Baby steps towards ARM fast-isel.
llvm-svn: 109047
2010-07-21 22:26:11 +00:00
Rafael Espindola 4277e14dc4 Fix calling convention on ARM if vfp2+ is enabled.
llvm-svn: 109009
2010-07-21 11:38:30 +00:00
Dan Gohman 625fd2292d Fix SCEV denormalization of expressions where the exit value from
one loop is involved in the increment of an addrec for another
loop. This fixes rdar://8168938.

llvm-svn: 108863
2010-07-20 17:06:20 +00:00
Jim Grosbach badf087e45 update tests for smarter BIC usage
llvm-svn: 108846
2010-07-20 16:16:48 +00:00
Duncan Sands 2e839de377 The same problem was being tracked in PR7652.
llvm-svn: 108843
2010-07-20 15:52:32 +00:00
Bruno Cardoso Lopes 160695fecb Fix PR7174, a couple of Mips fixes:
- Fix a typo for PIC check during jmp table lowering
- Also fix the "first jump table basic block is not
considered only reachable by fall through" problem, use this
ad-hoc solution until I come up with something better.

Patch by stetorvs@gmail.com

llvm-svn: 108820
2010-07-20 08:37:04 +00:00
Bruno Cardoso Lopes ea7863647b Fix Mips PR7473. Patch by stetorvs@gmail.com
llvm-svn: 108816
2010-07-20 07:58:51 +00:00
Dan Gohman b5e918dc05 After a custom inserter, in a block which has constant instructions,
update the current basic block in addition to the current insert
position, so that they remain consistent. This fixes rdar://8204072.

llvm-svn: 108765
2010-07-19 22:48:56 +00:00
Owen Anderson 9c271e2835 Remove r108639 now that it is handled by InstCombine instead.
llvm-svn: 108688
2010-07-19 08:10:24 +00:00
Owen Anderson 41670a11a8 Add a testcase for r108639.
llvm-svn: 108640
2010-07-18 08:57:19 +00:00
Jim Grosbach b97e2bbe32 Add combiner patterns to more effectively utilize the BFI (bitfield insert)
instruction for non-constant operands. This includes the case referenced
in the README.txt regarding a bitfield copy.

llvm-svn: 108608
2010-07-17 03:30:54 +00:00
Jim Grosbach 11013eda5a Add basic support to code-gen the ARM/Thumb2 bit-field insert (BFI) instruction
and a combine pattern to use it for setting a bit-field to a constant
value. More to come for non-constant stores.
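
A minimal hand-written sketch (not a test from the tree) of the and/or shape the
constant-value combine is aimed at:

define i32 @set_bitfield_sketch(i32 %x) nounwind {
  ; sketch only: clear bits [11:8] (mask 0xfffff0ff), then insert the constant 5
  %cleared = and i32 %x, -3841
  %set = or i32 %cleared, 1280
  ret i32 %set
}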

llvm-svn: 108570
2010-07-16 23:05:05 +00:00
Bill Wendling bf8370ff36 Consider this function:
void foo() { __builtin_unreachable(); }

It will output the following on Darwin X86:

_func1:
Leh_func_begin0:
        pushq %rbp
Ltmp0:
        movq %rsp, %rbp
Ltmp1:
Leh_func_end0:

This prolog adds a new Call Frame Information (CFI) row to the FDE with an
address that is not within the address range of the code it describes -- the address is
equal to the end of the function -- and therefore results in an invalid EH
frame. If we emit a nop in this situation, then the CFI row is now within the
address range.

llvm-svn: 108568
2010-07-16 22:51:10 +00:00
Jakob Stoklund Olesen c30b4ddc58 Remove the X86::FP_REG_KILL pseudo-instruction and the X86FloatingPointRegKill
pass that inserted it.

It is no longer necessary to limit the live ranges of FP registers to a single
basic block.

llvm-svn: 108536
2010-07-16 17:41:44 +00:00
Benjamin Kramer 50729ad717 Feed the right output into FileCheck.
llvm-svn: 108523
2010-07-16 10:58:02 +00:00
Jakob Stoklund Olesen 37c42a3d02 Remove many calls to TII::isMoveInstr. Targets should be producing COPY anyway.
TII::isMoveInstr is going to be completely removed.

llvm-svn: 108507
2010-07-16 04:45:42 +00:00
Jakob Stoklund Olesen b1671271ab Add forgotten test case.
llvm-svn: 108506
2010-07-16 04:45:35 +00:00
Dan Gohman 103c4ebea5 Use the source-order scheduler instead of the "fast" scheduler at -O0,
because it's more likely to keep debug line information in its original
order.

llvm-svn: 108496
2010-07-16 02:01:19 +00:00
Dale Johannesen bfd4fd7bb7 The SelectionDAGBuilder's handling of debug info, on rare
occasions, caused code to be generated in a different order.
All cases I've seen involved float softening in the type
legalizer, and this could perhaps be fixed there, but
it's better not to generate things differently in the first
place.  7797940 (6/29/2010..7/15/2010).

llvm-svn: 108484
2010-07-16 00:02:08 +00:00
Bill Wendling 4bda1c8e68 Revert. This isn't the correct way to go.
llvm-svn: 108478
2010-07-15 23:42:21 +00:00
Bill Wendling 973dc3b1d8 Handle code gen for the unreachable instruction if it's the only instruction in
the function. We'll just turn it into a "trap" instruction instead.

The problem with not handling this is that it might generate a prologue without
the equivalent epilogue to go with it:

$ cat t.ll
define void @foo() {
entry:
  unreachable
}
$ llc -o - t.ll -relocation-model=pic -disable-fp-elim -unwind-tables
        .section        __TEXT,__text,regular,pure_instructions
        .globl  _foo
        .align  4, 0x90
_foo:                                   ## @foo
Leh_func_begin0:
## BB#0:                                ## %entry
        pushq   %rbp
Ltmp0:
        movq    %rsp, %rbp
Ltmp1:
Leh_func_end0:
...

The unwind tables then have bad data in them causing all sorts of problems.

Fixes <rdar://problem/8096481>.

llvm-svn: 108473
2010-07-15 23:32:40 +00:00
Evan Cheng 55f0c6b9fc Split -enable-finite-only-fp-math into two options:
-enable-no-nans-fp-math and -enable-no-infs-fp-math. All of the current codegen fp math optimizations only care whether the arguments and results of fp arithmetic can never be NaN.

llvm-svn: 108465
2010-07-15 22:07:12 +00:00
Chris Lattner 60b131654b fix the definitions of ConstTextCoalSection/ConstDataCoalSection
to keep "Text" in sync with the "pure instructions" section attribute.
Lack of this attribute was preventing the assembler from emitting
multibyte noop instructions for templates (and inlines, and other
coalesced stuff) and was causing the assembler to mismatch .o files.

This fixes rdar://8018335

llvm-svn: 108461
2010-07-15 21:22:00 +00:00
Devang Patel df09db62e2 Fix crash reported in PR7653.
llvm-svn: 108441
2010-07-15 18:45:27 +00:00
Dan Gohman 4afd412d6b Watch out for a constant offset cancelling out a base register, forming
a zero. This situation arises in Fortran code with induction variables
that start at 1 instead of 0. This fixes PR7651.

llvm-svn: 108424
2010-07-15 15:14:45 +00:00
Devang Patel 29168baf4b Make it a .ll test case.
llvm-svn: 108370
2010-07-14 23:12:52 +00:00
Jim Grosbach a90af1ba38 Improve 64-bit subtraction of immediates when parts of the immediate can fit
in the literal field of an instruction. E.g.,
long long foo(long long a) {
  return a - 734439407618LL;
}

rdar://7038284

llvm-svn: 108339
2010-07-14 17:45:16 +00:00
Dan Gohman 042523340b Delete fast-isel's trivial load optimization; it breaks debugging because
it can look past points where a debugger might modify user variables.

llvm-svn: 108336
2010-07-14 17:25:37 +00:00
Bob Wilson bb57896f8e Fix test to appease the buildbots.
llvm-svn: 108334
2010-07-14 16:43:47 +00:00
Evan Cheng a8e8874552 Fix for PR7193 was overly conservative. The only case where sibcall callee
address cannot be allocated a register is in 32-bit mode where the first
three arguments are marked inreg. In that case EAX, EDX, and ECX will be
used for argument passing.

This fixes PR7610.

llvm-svn: 108327
2010-07-14 06:44:01 +00:00
Bob Wilson bad47f62f6 Add support for NEON VMVN immediate instructions.
llvm-svn: 108324
2010-07-14 06:31:50 +00:00
Evan Cheng c893115312 Re-enable the test with fix.
llvm-svn: 108319
2010-07-14 05:49:23 +00:00
Chris Lattner 711338fb04 temporarily disable to test to fix buildbots.
llvm-svn: 108310
2010-07-14 02:21:59 +00:00
Evan Cheng d542414945 Teach ProcessImplicitDefs to transform more COPY instructions into IMPLICIT_DEF (and subsequently eliminate them). This allows machine LICM to hoist IMPLICIT_DEF's. PR7620.
llvm-svn: 108304
2010-07-14 01:22:19 +00:00
Bob Wilson 103a0dcfe1 Add an ARM-specific DAG combining to avoid redundant VDUPLANE nodes.
Radar 7373643.

llvm-svn: 108303
2010-07-14 01:22:12 +00:00
Bob Wilson a3f1901531 Use a target-specific VMOVIMM DAG node instead of BUILD_VECTOR to represent
NEON VMOV-immediate instructions.  This simplifies some things.

llvm-svn: 108275
2010-07-13 21:16:48 +00:00
Dale Johannesen caca5488dc In inline asm treat indirect 'X' constraint as 'm'.
This may not be right in all cases, but it's better
than asserting, which is what it did before.  PR 7528.

llvm-svn: 108268
2010-07-13 20:17:05 +00:00
Evan Cheng 0cc4ad983d Extend the r107852 optimization, which turns some fp compares into code sequences using only i32 operations. It now optimizes some f64 compares when fp compare is exceptionally slow (e.g. cortex-a8). It also catches comparisons against 0.0.
llvm-svn: 108258
2010-07-13 19:27:42 +00:00
Evan Cheng f43961007c -enable-unsafe-fp-math should not imply -enable-finite-only-fp-math.
llvm-svn: 108254
2010-07-13 18:46:14 +00:00
Dale Johannesen f241d4626c Fix PR number.
llvm-svn: 108251
2010-07-13 18:14:47 +00:00
Dan Gohman 51e6d9bbf6 Apply the SSE dependence idiom for SSE unary operations to
SD instructions too, in addition to SS instructions. And
add a comment about it.

llvm-svn: 108191
2010-07-12 20:46:04 +00:00
Jakob Stoklund Olesen c4227f1362 Remove TargetInstrInfo::copyRegToReg entirely.
Targets must now implement TargetInstrInfo::copyPhysReg instead. There is no
longer a default implementation forwarding to copyRegToReg.

llvm-svn: 108095
2010-07-11 17:01:17 +00:00
Rafael Espindola a76eccf815 Fix va_arg for doubles. With this patch VAARG nodes always contain the
correct alignment information, which simplifies ExpandRes_VAARG a bit.

The patch introduces a new alignment information to TargetLoweringInfo. This is
needed since the two natural candidates cannot be used:

* The 's' in target data: If this is set to the minimal alignment of any
  argument, getCallFrameTypeAlignment would return 4 for doubles on ARM for
  example.
* The getTransientStackAlignment method. It is possible for an architecture to
  have arguments less aligned than the alignment we maintain for the stack pointer.

llvm-svn: 108072
2010-07-11 04:01:49 +00:00
Dan Gohman 79be2b9be5 Fix this test.
llvm-svn: 108059
2010-07-10 22:42:12 +00:00
Jakob Stoklund Olesen c4b3bcc051 FileCheckize inline asm FP stack tests
llvm-svn: 108046
2010-07-10 16:30:25 +00:00
Dan Gohman d7b5ce3312 Reapply bottom-up fast-isel, with several fixes for x86-32:
- Check getBytesToPopOnReturn().
 - Eschew ST0 and ST1 for return values.
 - Fix the PIC base register initialization so that it doesn't ever
   fail to end up at the top of the entry block.

llvm-svn: 108039
2010-07-10 09:00:22 +00:00
Jakob Stoklund Olesen 51702ec46b Fix a few tests
llvm-svn: 108011
2010-07-09 20:43:09 +00:00
Jim Grosbach 2a5725b1a3 In the presence of variable sized objects, allocate an emergency spill slot.
rdar://8131327

llvm-svn: 108008
2010-07-09 20:27:06 +00:00
Dan Gohman ea9ae3e6ed Add a target triple.
llvm-svn: 108003
2010-07-09 19:17:36 +00:00
Dan Gohman 7929c448fc Fix MachineLICM to actually visit inner loops.
llvm-svn: 108001
2010-07-09 18:49:45 +00:00
Bob Wilson 6586e9b203 --- Reverse-merging r107947 into '.':
U    utils/TableGen/FastISelEmitter.cpp
--- Reverse-merging r107943 into '.':
U    test/CodeGen/X86/fast-isel.ll
U    test/CodeGen/X86/fast-isel-loads.ll
U    include/llvm/Target/TargetLowering.h
U    include/llvm/Support/PassNameParser.h
U    include/llvm/CodeGen/FunctionLoweringInfo.h
U    include/llvm/CodeGen/CallingConvLower.h
U    include/llvm/CodeGen/FastISel.h
U    include/llvm/CodeGen/SelectionDAGISel.h
U    lib/CodeGen/LLVMTargetMachine.cpp
U    lib/CodeGen/CallingConvLower.cpp
U    lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
U    lib/CodeGen/SelectionDAG/FunctionLoweringInfo.cpp
U    lib/CodeGen/SelectionDAG/FastISel.cpp
U    lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp
U    lib/CodeGen/SelectionDAG/ScheduleDAGSDNodes.cpp
U    lib/CodeGen/SelectionDAG/InstrEmitter.cpp
U    lib/CodeGen/SelectionDAG/TargetLowering.cpp
U    lib/Target/XCore/XCoreISelLowering.cpp
U    lib/Target/XCore/XCoreISelLowering.h
U    lib/Target/X86/X86ISelLowering.cpp
U    lib/Target/X86/X86FastISel.cpp
U    lib/Target/X86/X86ISelLowering.h

llvm-svn: 107987
2010-07-09 16:37:18 +00:00
Jakob Stoklund Olesen a57965827f Fix test to be less sensitive to regalloc accidents
llvm-svn: 107951
2010-07-09 01:32:11 +00:00
Bob Wilson 88a4e6dc0e Print "dregpair" NEON operands with a space between them, for readability and
consistency with other instructions that have lists of register operands.

llvm-svn: 107944
2010-07-09 00:47:20 +00:00
Dan Gohman 0b5aa1cdd3 Re-apply bottom-up fast-isel, with fixes. Be very careful to avoid emitting
a DBG_VALUE after a terminator, or emitting any instructions before an EH_LABEL.

llvm-svn: 107943
2010-07-09 00:39:23 +00:00
Bob Wilson 21eed476e8 Reenable DAG combining for vector shuffles. It looks like it was temporarily
disabled and then never turned back on again.  Adjust some tests, one because
this change avoids an unnecessary instruction, and the other to make it
continue testing what it was intended to test.

llvm-svn: 107941
2010-07-09 00:38:12 +00:00
Bill Wendling a992445ff2 Extension of r107506. Make sure that we don't mark a function as having a call
if the inline ASM doesn't need a stack frame.

llvm-svn: 107922
2010-07-08 22:38:02 +00:00
Evan Cheng 0f54854a1d Check for FiniteOnlyFPMath as well.
llvm-svn: 107904
2010-07-08 20:12:24 +00:00
Eric Christopher e796253217 Slightly rework the custom patterns for x86-64 tpoff codegen and
correct the testcase to use valid assembly.

Needs more tests.

llvm-svn: 107860
2010-07-08 07:36:46 +00:00
Evan Cheng be1f7a931e r107852 is only safe with -enable-unsafe-fp-math to account for +0.0 == -0.0.
llvm-svn: 107856
2010-07-08 06:01:49 +00:00
Evan Cheng 25f9364cbd Optimize some vfp comparisons to integer ones. This patch implements the simplest case when the following conditions are met:
1. The arguments are f32.
2. The arguments are loads and they have no uses other than the comparison.
3. The comparison code is EQ or NE.

e.g.
        vldr.32 s0, [r1]
        vldr.32 s1, [r0]
        vcmpe.f32       s1, s0
        vmrs    apsr_nzcv, fpscr
	beq     LBB0_2
=>
        ldr     r1, [r1]
        ldr     r0, [r0]
        cmp     r0, r1
        beq     LBB0_2

More complicated cases will be implemented in subsequent patches.

llvm-svn: 107852
2010-07-08 02:08:50 +00:00
Dale Johannesen e2289285ae Changes to ARM tail calls, mostly cosmetic.
Add explicit testcases for tail calls within the same module.
Duplicate some code to humor those who think .w doesn't apply on ARM.
Leave this disabled on Thumb1, and add some comments explaining why it's hard
and won't gain much.

llvm-svn: 107851
2010-07-08 01:18:23 +00:00
Dan Gohman e75704369d Revert 107840 107839 107813 107804 107800 107797 107791.
Debug info intrinsics win for now.

llvm-svn: 107850
2010-07-08 01:00:56 +00:00
Jakob Stoklund Olesen ddaf0099a5 Allow copies between GR8_ABCD_L and GR8_ABCD_H.
This fixes PR7540.

llvm-svn: 107809
2010-07-07 20:33:27 +00:00
Dan Gohman e7ccc51cc1 Implement bottom-up fast-isel. This has the advantage of not requiring
a separate DCE pass over MachineInstrs.

llvm-svn: 107804
2010-07-07 19:20:32 +00:00
Dan Gohman 2d4d01d0de Add X86FastISel support for return statements. This entails refactoring
a bunch of stuff, to allow the target-independent calling convention
logic to be employed.

llvm-svn: 107800
2010-07-07 18:32:53 +00:00
Dale Johannesen ce65663330 Accept RIP-relative symbols with 'i' constraint, and
print the (%rip) only if the 'a' modifier is present.
PR 7528.

llvm-svn: 107727
2010-07-06 23:27:00 +00:00
Dale Johannesen 6f01541ae6 Make test not hang waiting for input.
llvm-svn: 107721
2010-07-06 23:06:58 +00:00
Jakob Stoklund Olesen a64c0a3d22 Be more forgiving when calculating alias interference for physreg coalescing.
It is OK for an alias live range to overlap if there is a copy to or from the
physical register. CoalescerPair can work out if the copy is coalescable
independently of the alias.

This means that we can join with the actual destination interval instead of
using the getOrigDstReg() hack. It is no longer necessary to merge clobber
ranges into subregisters.

llvm-svn: 107695
2010-07-06 20:31:51 +00:00
Devang Patel 23a7593534 Fix PR7545 crash.
llvm-svn: 107678
2010-07-06 18:18:32 +00:00
Rafael Espindola 7c510aa7bc Don't create neon moves in CopyRegToReg. NEONMoveFixPass will do the conversion
if profitable.

llvm-svn: 107673
2010-07-06 16:24:34 +00:00
Eric Christopher 8f06b4a294 Remove mistakenly added test.
llvm-svn: 107641
2010-07-06 05:20:13 +00:00
Eric Christopher 2ad0c779c3 Fix up -fstack-protector on linux to use the segment
registers.  Split out testcases per architecture and os
now.

Patch from Nelson Elhage.

llvm-svn: 107640
2010-07-06 05:18:56 +00:00
Chris Lattner 60db4557cd another v2f32 case, in this case showing poor codegen.
llvm-svn: 107614
2010-07-05 05:52:56 +00:00
Chris Lattner 431e81f2fb fix test on non-x86 hosts.
llvm-svn: 107608
2010-07-05 03:56:55 +00:00
Chris Lattner 45cc4d74a3 Just rip v2f32 support completely out of the X86 backend. In
the example in the testcase, we now generate:

_test1:                                 ## @test1
	movss	4(%esp), %xmm0
	addss	8(%esp), %xmm0
	movl	12(%esp), %eax
	movss	%xmm0, (%eax)
	ret

instead of:

_test1:                                                     ## @test1
	subl	$20, %esp
	movl	24(%esp), %eax
	movq	%mm0, (%esp)
	movq	%mm0, 8(%esp)
	movss	(%esp), %xmm0
	addss	12(%esp), %xmm0
	movss	%xmm0, (%eax)
	addl	$20, %esp
	ret

v2f32 support did not work reliably because most of the X86
backend didn't know it was legal.  It was apparently only added
to support returning source-level v2f32 values in MMX registers
in x86-32 mode.  If ABI compatibility is important on this
GCC-extended-vector type for some reason, then the frontend
should generate IR that returns v2i32 instead of v2f32.  However,
we generally don't try very hard to be abi compatible on gcc
extended vectors. 

llvm-svn: 107601
2010-07-04 23:07:25 +00:00
Chris Lattner 681b926d54 fix PR7518 - terrible codegen of <2 x float>, by only marking
v2f32 as legal in 32-bit mode.  It is just as terrible there,
but I just care about x86-64 and no one claims it is valuable
in 64-bit mode.

llvm-svn: 107600
2010-07-04 22:57:10 +00:00
Evan Cheng 0ce84486c3 - Two-address pass should not assume unfolding is always successful.
- X86 unfolding should check if the instructions being unfolded have memoperands.
  If there are no memoperands, then it must assume conservative alignment. If this
  would introduce an expensive sse unaligned load / store, then unfoldMemoryOperand
  etc. should not unfold the instruction.

llvm-svn: 107509
2010-07-02 20:36:18 +00:00
Dale Johannesen 4d887f7ca7 Propagate the AlignStack bit in InlineAsm's to the
PrologEpilog code, and use it to determine whether
the asm forces stack alignment or not.  gcc consistently
does not do this for GCC-style asms; Apple gcc inconsistently
sometimes does it for asm blocks.  There is no
convenient place to put a bit in either the SDNode or
the MachineInstr form, so I've added an extra operand
to each; unlovely, but it does allow for expansion for
more bits, should we need it.  PR 5125.  Some
existing testcases are affected.
The operand lists of the SDNode and MachineInstr forms
are indexed with awesome mnemonics, like "2"; I may
fix this someday, but not now.  I'm not making it any
worse.  If anyone is inspired I think you can find all
the right places from this patch.

llvm-svn: 107506
2010-07-02 20:16:09 +00:00
Bob Wilson 771d04b969 Fix incorrect asm-printing of some NEON immediates. Fix weak testcase so
that it checks the immediate values, not just the instructions opcodes.
Radar 8110263.

llvm-svn: 107487
2010-07-02 17:23:44 +00:00
Bob Wilson 8a99b730a9 ARM function alignments were off by a power of two. svn 83242 changed
getFunctionAlignment and the corresponding use of that value in the ARM
asm printer, but now we're using the standard asm printer.  The result of
this was that function alignments were dropped completely for Thumb functions.
Radar 8143571.

llvm-svn: 107435
2010-07-01 22:26:26 +00:00
Bill Wendling 03bcd6ecc8 Implement the "linker_private_weak" linkage type. This will be used for
Objective-C metadata types which should be marked as "weak", but which the
linker will remove upon final linkage. However, this linkage isn't specific to
Objective-C.

For example, the "objc_msgSend_fixup_alloc" symbol is defined like this:

      .globl l_objc_msgSend_fixup_alloc
      .weak_definition l_objc_msgSend_fixup_alloc
      .section __DATA, __objc_msgrefs, coalesced
      .align 3
l_objc_msgSend_fixup_alloc:
       .quad   _objc_msgSend_fixup
       .quad   L_OBJC_METH_VAR_NAME_1

This is different from the "linker_private" linkage type, because it can't have
the metadata defined with ".weak_definition".

Currently only supported on Darwin platforms.

llvm-svn: 107433
2010-07-01 21:55:59 +00:00
Dan Gohman d2965c10a1 Temporarily disable on-demand fast-isel.
llvm-svn: 107393
2010-07-01 12:15:30 +00:00
Dan Gohman aef3d140b7 Teach fast-isel to avoid loading a value from memory when it's already
available in a register. This is pretty primitive, but it reduces the
number of instructions in common testcases by 4%.
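
A minimal hand-written sketch (in the IR syntax of the time, not a test from the
tree) of one case this can cover: the second load of %p can simply reuse the
register that already holds %a instead of issuing another load.

define i32 @reuse_load_sketch(i32* %p) nounwind {
  %a = load i32* %p
  %b = load i32* %p
  %s = add i32 %a, %b
  ret i32 %s
}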

llvm-svn: 107380
2010-07-01 03:49:38 +00:00
Dan Gohman 722f5fc567 Enable on-demand fast-isel.
llvm-svn: 107377
2010-07-01 02:58:57 +00:00
Dan Gohman 7937d5606d Teach X86FastISel to fold constant offsets and scaled indices in
the same address.
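
A hand-written sketch (in the IR syntax of the time, not a test from the tree) of
an address with both pieces; the point of the change is that fast-isel can now put
the constant displacement and the scaled index into a single addressing mode:

define i32 @fold_addr_sketch(i32* %p, i64 %i) nounwind {
  ; sketch only: scaled index (%i * 4) plus a constant offset (+12)
  %a = getelementptr i32* %p, i64 %i
  %b = getelementptr i32* %a, i64 3
  %v = load i32* %b
  ret i32 %v
}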

llvm-svn: 107373
2010-07-01 02:27:15 +00:00
Jakob Stoklund Olesen dadea5b178 Fix the handling of partial redefines in the fast register allocator.
A partial redefine needs to be treated like a tied operand, and the register
must be reloaded while processing use operands.

This fixes a bug where partially redefined registers were processed as normal
defs with a reload added. The reload could clobber another use operand if it was
a kill that allowed register reuse.

llvm-svn: 107193
2010-06-29 19:15:30 +00:00
Bob Wilson d91d5bfc95 Fix a register scavenger crash when dealing with undefined subregs.
The LowerSubregs pass needs to preserve implicit def operands attached to
EXTRACT_SUBREG instructions when it replaces those instructions with copies.

llvm-svn: 107189
2010-06-29 18:42:49 +00:00
Rafael Espindola 38a7d7cbc3 Add a VT argument to getMinimalPhysRegClass and replace the copy related uses
of getPhysicalRegisterRegClass with it.

If we want to make a copy (or estimate its cost), it is better to use the
smallest class as more efficient operations might be possible.

llvm-svn: 107140
2010-06-29 14:02:34 +00:00
Evan Cheng b59dd8f10a PR7503: uxtb16 is not available for ARMv7-M. Patch by Brian G. Lucas.
llvm-svn: 107122
2010-06-29 05:38:36 +00:00
Bob Wilson 1e5da550e5 Reapply my if-conversion cleanup from svn r106939 with fixes.
There are 2 changes relative to the previous version of the patch:

1) For the "simple" if-conversion case, there's no need to worry about
RemoveExtraEdges not handling an unanalyzable branch.  Predicated terminators
are ignored in this context, so RemoveExtraEdges does the right thing.
This might break someday if we ever treat indirect branches (BRIND) as
predicable, but for now, I just removed this part of the patch, because
in the case where we do not add an unconditional branch, we rely on keeping
the fall-through edge to CvtBBI (which is empty after this transformation).

The change relative to the previous patch is:

@@ -1036,10 +1036,6 @@
     IterIfcvt = false;
   }
 
-  // RemoveExtraEdges won't work if the block has an unanalyzable branch,
-  // which is typically the case for IfConvertSimple, so explicitly remove
-  // CvtBBI as a successor.
-  BBI.BB->removeSuccessor(CvtBBI->BB);
   RemoveExtraEdges(BBI);
 
   // Update block info. BB can be iteratively if-converted.


2) My patch exposed a bug in the code for merging the tail of a "diamond",
which had previously never been exercised.  The code was simply checking that
the tail had a single predecessor, but there was a case in
MultiSource/Benchmarks/VersaBench/dbms where that single predecessor was
neither edge of the diamond.  I added the following change to check for
that:

@@ -1276,7 +1276,18 @@
   // tail, add a unconditional branch to it.
   if (TailBB) {
     BBInfo TailBBI = BBAnalysis[TailBB->getNumber()];
-    if (TailBB->pred_size() == 1 && !TailBBI.HasFallThrough) {
+    bool CanMergeTail = !TailBBI.HasFallThrough;
+    // There may still be a fall-through edge from BBI1 or BBI2 to TailBB;
+    // check if there are any other predecessors besides those.
+    unsigned NumPreds = TailBB->pred_size();
+    if (NumPreds > 1)
+      CanMergeTail = false;
+    else if (NumPreds == 1 && CanMergeTail) {
+      MachineBasicBlock::pred_iterator PI = TailBB->pred_begin();
+      if (*PI != BBI1->BB && *PI != BBI2->BB)
+        CanMergeTail = false;
+    }
+    if (CanMergeTail) {
       MergeBlocks(BBI, TailBBI);
       TailBBI.IsDone = true;
     } else {

With these fixes, I was able to run all the SingleSource and MultiSource
tests successfully.

llvm-svn: 107110
2010-06-29 00:55:23 +00:00
Bob Wilson 269a89fd3a Unlike other targets, ARM now uses BUILD_VECTORs post-legalization so they
can't be changed arbitrarily by the DAGCombiner without checking if it is
running after legalization.

llvm-svn: 107097
2010-06-28 23:40:25 +00:00
Dale Johannesen 17feb07c53 In asm's, output operands with matching input constraints
have to be registers, per gcc documentation.  This affects
the logic for determining what "g" should lower to.  PR 7393.
A couple of existing testcases are affected.
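
As a hedged example of a matching constraint (mine, x86 AT&T syntax
assumed, not taken from the affected testcases): operand 1 below is tied
to output operand 0, and the point of the commit is that such a tied
pair must be lowered as a register even where "g" alone could have
chosen memory:

int increment(int x) {
  /* "0" ties the input to output operand 0; the pair must live in the
     same place, which per the gcc docs means a register.             */
  __asm__("addl $1, %0" : "=g"(x) : "0"(x));
  return x;
}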

llvm-svn: 107079
2010-06-28 22:09:45 +00:00
Jakob Stoklund Olesen fde9c348e9 Don't write temporary files in test directory
llvm-svn: 107049
2010-06-28 20:01:15 +00:00
Jakob Stoklund Olesen 0117091c16 Add a triple so test runs on Linux as well.
llvm-svn: 107045
2010-06-28 19:31:15 +00:00
Jakob Stoklund Olesen 0d94d7af78 Add more special treatment for inline asm in RegAllocFast.
When an instruction has tied operands and physreg defines, we must take extra
care that the tied operands conflict with neither physreg defs nor uses.

This special treatment applies to inline asm and to instructions that combine
tied operands or early clobbers with physreg defines.

This fixes PR7509.
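
A hedged x86-64 illustration of the shape involved (my own example, not
the PR7509 test): the "+r" operand is a tied use/def pair, while "=d"
pins a def to the physical register RDX, so the allocator must keep the
tied pair and any live values out of RDX across the asm:

unsigned long add_with_scratch(unsigned long a, unsigned long b) {
  unsigned long scratch;
  /* %0 is the tied read/write operand, %1 is a physreg def (RDX),
     %2 is an ordinary register input.                             */
  __asm__("addq %2, %0\n\t"
          "movq $0, %1"
          : "+r"(a), "=d"(scratch)
          : "r"(b));
  (void)scratch;
  return a;
}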

llvm-svn: 107043
2010-06-28 18:34:34 +00:00
Benjamin Kramer 3bbc52ce3e Fix some tests that didn't test anything.
llvm-svn: 106954
2010-06-26 20:05:06 +00:00
Rafael Espindola 2041abd958 When splitting a VAARG, remember its alignment.
This produces terrible but correct code.
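
A hedged sketch of the situation (my example, not the testcase): when a
64-bit va_arg is legalized on a 32-bit target, the VAARG node is split
into two 32-bit pieces, and the split has to carry over the alignment of
the original argument (8 bytes on many 32-bit ABIs) so both halves are
read from a properly aligned slot:

#include <stdarg.h>

/* On a 32-bit target the long long va_arg below gets split in two; the
   alignment of the original i64 must survive the split.               */
long long sum64(int n, ...) {
  va_list ap;
  long long s = 0;
  va_start(ap, n);
  for (int i = 0; i < n; ++i)
    s += va_arg(ap, long long);
  va_end(ap);
  return s;
}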

llvm-svn: 106952
2010-06-26 18:22:20 +00:00
Bob Wilson 418e64a385 Revert my if-conversion cleanup since it caused a bunch of nightly test
regressions.

--- Reverse-merging r106939 into '.':
U    test/CodeGen/Thumb2/thumb2-ifcvt3.ll
U    lib/CodeGen/IfConversion.cpp

llvm-svn: 106951
2010-06-26 17:47:06 +00:00
Eli Friedman b9bdc5a52d Remove bogus test.
llvm-svn: 106941
2010-06-26 04:59:56 +00:00
Bob Wilson c72da6bb56 Clean up some problems with extra CFG edges being introduced during
if-conversion.  The RemoveExtraEdges function doesn't work for blocks that
end with unanalyzable branches, so in those cases, the "extra" edges must
be explicitly removed.  The CopyAndPredicateBlock and MergeBlocks methods
can also avoid copying successor edges due to branches that have already
been removed.  The latter case is especially helpful when MergeBlocks is
called for handling "diamond" if-conversions, where otherwise you can end
up with some weird intermediate states in the CFG.  Unfortunately I've
been unable to find cases where this cleanup actually makes a significant
difference in the code.  There is one test where we manage to remove an
empty block at the end of a function.  Radar 6911268.

llvm-svn: 106939
2010-06-26 04:27:33 +00:00
Jakob Stoklund Olesen d7d0d4e882 When creating X86 MUL8 and DIV8 instructions, make sure we don't produce
CopyFromReg nodes for aliasing registers (AX and AL). This confuses the fast
register allocator.

Instead of CopyFromReg(AL), use ExtractSubReg(CopyFromReg(AX), sub_8bit).

This fixes PR7312.
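
For illustration (a hedged sketch of the kind of source that reaches this
path, not the PR7312 test): the x86 8-bit divide reads AX and leaves the
quotient in AL and the remainder in AH, so AL should be extracted as a
subregister of the AX copy rather than copied from an aliasing register:

/* Codegen for the 8-bit division below uses DIV8, whose result lives in
   AL/AH inside AX; only one CopyFromReg (of AX) should be emitted.     */
unsigned char quot(unsigned char a, unsigned char b) {
  return a / b;
}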

llvm-svn: 106934
2010-06-26 00:39:23 +00:00
Daniel Dunbar acbdf53db4 Thumb2ITBlockPass: Fix a possible dereference of an invalid iterator. This was
introduced in r106343 but only showed up recently (with a particular compiler and
linker combination) because of the specific check involved, and because we have no
built-in checking for dereferencing past the end of an array, which is truly
unfortunate.

llvm-svn: 106908
2010-06-25 23:14:54 +00:00
Evan Cheng 02b184de5b Change if-conversion block size limit checks to add some flexibility.
llvm-svn: 106901
2010-06-25 22:42:03 +00:00
Dale Johannesen ce97d55ad9 The hasMemory argument is irrelevant to how the argument
for an "i" constraint should get lowered; PR 6309.  While
hasMemory was passed around a lot, this was the only
place it was used, so it now goes away from a lot of other
places.
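
For reference, a minimal hedged example of an "i" constraint (my own):
the operand must be an integer constant known before register allocation,
so whether the surrounding asm happens to touch memory cannot change how
it is lowered:

void emit_tag(void) {
  /* "i" demands a compile-time integer constant; the %c0 modifier prints
     it bare, and .long simply embeds it in the output.  No memory
     operand is involved at any point.                                  */
  __asm__ volatile(".long %c0" : : "i"(42));
}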

llvm-svn: 106893
2010-06-25 21:55:36 +00:00
Dan Gohman 8de1fe3ccf pcmpeqd and friends are Commutable.
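
A hedged aside on what commutability buys here, using the SSE2 intrinsic
(my own example): lane-wise equality is symmetric, so the backend may
swap the operands of pcmpeqd, which can avoid a copy when one source
already sits in the register chosen as the destination:

#include <emmintrin.h>

/* _mm_cmpeq_epi32(a, b) and _mm_cmpeq_epi32(b, a) yield the same mask,
   so the corresponding pcmpeqd may be commuted freely.               */
__m128i eq32(__m128i a, __m128i b) {
  return _mm_cmpeq_epi32(a, b);
}
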
llvm-svn: 106886
2010-06-25 21:05:35 +00:00
Bill Wendling e41e40f689 - Reapply r106066 now that the bzip2 build regression has been fixed.
- 2010-06-25-CoalescerSubRegDefDead.ll is the testcase for r106878.

llvm-svn: 106880
2010-06-25 20:48:10 +00:00
Dan Gohman 600658a4ba Don't write an output file to cwd, and put an rdar prefix on
an rdar number.

llvm-svn: 106810
2010-06-24 23:45:15 +00:00
Dan Gohman 9a2f0473b2 Teach EmitLiveInCopies to omit copies for unused virtual registers,
and to clean up unused incoming physregs from the live-in list.

llvm-svn: 106805
2010-06-24 22:23:02 +00:00
Bill Wendling 2d3c490026 It's possible that a flag is added to the SDNode that points back to the
original SDNode. This is badness. Also, this function allows one SDNode to point
multiple flags to another SDNode. Badness as well.

llvm-svn: 106793
2010-06-24 22:00:37 +00:00
Dale Johannesen 5ad5226c58 Disallow matching "i" constraint to symbol addresses when
address requires a register or secondary load to compute
(most PIC modes).  This improves "g" constraint handling.  8015842.

The test from 2007 is attempting to test the fix for PR1761,
but since -relocation-model=static doesn't work on Darwin
x86-64, it was not testing what it was supposed to be testing
and was passing erroneously.  Fixed to use Linux x86-64.
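
A hedged sketch of the pattern in question (mine, not the 2007 test): the
address of a global only matches "i" when it is a link-time constant,
which holds for the static relocation model but not for most PIC modes,
where computing the address needs a register or an extra load:

extern int counter;

void record(void) {
  /* Accepted with -relocation-model=static, where &counter is a
     link-time constant; under most PIC models "i" cannot match and the
     operand must be rejected or lowered through a register instead.   */
  __asm__ volatile("" : : "i"(&counter));
}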

llvm-svn: 106779
2010-06-24 20:14:51 +00:00
Jakob Stoklund Olesen 45230239e4 Replace a big gob of old coalescer logic with the new CoalescerPair class.
CoalescerPair can determine if a copy can be coalesced, and which register gets
merged away. The old logic in SimpleRegisterCoalescing had evolved into
something a bit too convoluted.

This second attempt fixes some crashes that only occurred on Linux.

llvm-svn: 106769
2010-06-24 18:15:01 +00:00
Bob Wilson 279e55fb2e PR7458: Try commuting Thumb2 instruction operands to put them into 2-address
form so they can be narrowed to 16-bit instructions.

llvm-svn: 106762
2010-06-24 16:50:20 +00:00
Dan Gohman 463f26b4be Eliminate the other half of the BRCOND optimization, and update
as many tests as possible.

llvm-svn: 106749
2010-06-24 15:24:03 +00:00
Dan Gohman df6b33e778 Eliminate the first half of the optimization which eliminates BRCOND
when the condition is constant. This optimization shouldn't be
necessary, because codegen shouldn't be able to find dead control
paths that the IR-level optimizer can't find. And it's undesirable,
because it encourages bugpoint to leave "br i1 false" branches
in its output. And it wasn't updating the CFG.

I updated all the tests I could, but some tests are too reduced
and I wasn't able to meaningfully preserve them.

llvm-svn: 106748
2010-06-24 15:04:11 +00:00
Dan Gohman 600f62b3ba Reapply r106634, now that the bug it exposed is fixed.
llvm-svn: 106746
2010-06-24 14:30:44 +00:00
Dan Gohman 0695e09b09 Optimize the "bit test" code path for switch lowering in the
case where the bit mask has exactly one bit.

llvm-svn: 106716
2010-06-24 02:06:24 +00:00
Jakob Stoklund Olesen dbb58d2974 Revert "Replace a big gob of old coalescer logic with the new CoalescerPair class."
Whiny buildbots.

llvm-svn: 106710
2010-06-24 00:52:22 +00:00
Jakob Stoklund Olesen f38e6720cc Replace a big gob of old coalescer logic with the new CoalescerPair class.
CoalescerPair can determine if a copy can be coalesced, and which register gets
merged away. The old logic in SimpleRegisterCoalescing had evolved into
something a bit too convoluted.

llvm-svn: 106701
2010-06-24 00:12:39 +00:00
Bill Wendling f470747a36 We are missing opportunities to use ldm. Take code like this:
void t(int *cp0, int *cp1, int *dp, int fmd) {
  int c0, c1, d0, d1, d2, d3;
  c0 = (*cp0++ & 0xffff) | ((*cp1++ << 16) & 0xffff0000);
  c1 = (*cp0++ & 0xffff) | ((*cp1++ << 16) & 0xffff0000);
  /* ... */
}

This code gens into something pretty bad, but with this change (analogous to the
X86 back-end) it will use ldm and generate fewer instructions.

llvm-svn: 106693
2010-06-23 23:00:16 +00:00
Dale Johannesen fc40f0a1ab Reinstate correct test, remove the real invalidated test.
llvm-svn: 106664
2010-06-23 18:56:06 +00:00
Dale Johannesen 6effb503f5 Remove tests invalidated by previous checkin.
llvm-svn: 106663
2010-06-23 18:53:12 +00:00
Bill Wendling a136521a17 MorphNodeTo doesn't preserve the memory operands. Because we're morphing a node
into the same node, but with different non-memory operands, we need to replace
the memory operands after it's finished morphing.

llvm-svn: 106643
2010-06-23 18:16:24 +00:00
Daniel Dunbar 4df321b7ad Revert r106263, "Fold the ShrinkDemandedOps pass into the regular DAGCombiner pass,"... it was causing both 'file' (with clang) and 176.gcc (with llvm-gcc) to be miscompiled.
llvm-svn: 106634
2010-06-23 17:09:26 +00:00
Daniel Dunbar ef5a4383ad Revert r106066, "Create a more targeted fix for not sinking instructions into a range where it"... it causes bzip2 to be miscompiled by Clang.
Conflicts:

	lib/CodeGen/MachineSink.cpp

llvm-svn: 106614
2010-06-23 00:48:25 +00:00
Dan Gohman f1cf963c64 Loosen up this test so that it doesn't depend as much on register
allocation details.

llvm-svn: 106599
2010-06-22 23:32:47 +00:00
Dan Gohman 1081f1a0f5 Fix OptimizeMax to handle an odd case where one of the max operands
is another max which folds. This fixes PR7454.
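
A heavily hedged illustration of the folding involved (my own reduction,
not the PR7454 test): when one operand of a max is itself a max sharing
an operand, the inner max folds away, and OptimizeMax has to cope with
seeing the already-folded form:

static unsigned umax(unsigned a, unsigned b) { return a > b ? a : b; }

/* umax(n, umax(n, m)) simplifies to umax(n, m), so an expression built
   this way may reach OptimizeMax with the nesting already gone.       */
unsigned bound(unsigned n, unsigned m) {
  return umax(n, umax(n, m));
}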

llvm-svn: 106594
2010-06-22 23:07:13 +00:00