Commit Graph

429 Commits

Author SHA1 Message Date
Evan Cheng c415c5be49 During vector shuffle lowering, we sometimes commute a vector shuffle to try
to match MOVL (movss, movsd, etc.). Don't forget to commute it back and try
unpck* and shufp* if that doesn't pan out.

llvm-svn: 31186
2006-10-25 21:49:50 +00:00
Evan Cheng 798b306311 Remove -disable-x86-shuffle-opti
llvm-svn: 31183
2006-10-25 20:48:19 +00:00
Chris Lattner c0fb567e23 Implement branch analysis/xform hooks required by the branch folding pass.
llvm-svn: 31065
2006-10-20 17:42:20 +00:00
Evan Cheng 949bcc94ea Avoid getting into an infinite loop when -disable-x86-shuffle-opti is specified.
llvm-svn: 30974
2006-10-16 06:36:00 +00:00
Evan Cheng ab51cf2e78 Merge ISD::TRUNCSTORE to ISD::STORE. Switch to using StoreSDNode.
llvm-svn: 30945
2006-10-13 21:14:26 +00:00
Evan Cheng 694810c227 Some X86ISD::CMP nodes were created with the wrong ValueTypes.
llvm-svn: 30913
2006-10-12 19:12:56 +00:00
Evan Cheng e646abb7b6 Don't convert to MOVLP if using shufps etc. may allow load folding.
llvm-svn: 30847
2006-10-09 21:39:25 +00:00
Evan Cheng e71fe34d75 Reflects ISD::LOAD / ISD::LOADX / LoadSDNode changes.
llvm-svn: 30844
2006-10-09 20:57:25 +00:00
Evan Cheng df9ac47e5e Make use of getStore().
llvm-svn: 30759
2006-10-05 23:01:46 +00:00
Chris Lattner f2ef243580 Lower some min/max idioms to minss/maxss when unsafe fp math is enabled.
llvm-svn: 30748
2006-10-05 04:11:26 +00:00
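For reference, the idiom being lowered looks like this in C (a hypothetical sketch, not code from the commit); with unsafe FP math enabled, each compare-and-select can become a single SSE instruction:

/* Hypothetical illustration: under unsafe FP math, each of these
   compare+select idioms can lower to one SSE instruction. */
float my_min(float a, float b) { return a < b ? a : b; } /* minss */
float my_max(float a, float b) { return a > b ? a : b; } /* maxss */
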
Evan Cheng 8c5766ef3f Added option -disable-x86-shuffle-opti to disable X86 specific vector shuffle optimizations.
llvm-svn: 30723
2006-10-04 18:33:38 +00:00
Chris Lattner 9259b1efb6 Pattern match min/max nodes when we have sse. This implements
CodeGen/X86/scalar_sse_minmax.ll

llvm-svn: 30719
2006-10-04 06:57:07 +00:00
Evan Cheng 5d9fd977d3 Combine ISD::EXTLOAD, ISD::SEXTLOAD, ISD::ZEXTLOAD into ISD::LOADX. Add an
extra operand to LOADX to specify the exact value extension type.

llvm-svn: 30714
2006-10-04 00:56:09 +00:00
Chris Lattner f598d73142 Fix PR933 and CodeGen/X86/2006-10-02-BoolRetCrash.ll
llvm-svn: 30703
2006-10-03 17:18:42 +00:00
Chris Lattner fc36039f86 silence warnings in release build
llvm-svn: 30631
2006-09-27 18:29:38 +00:00
Chris Lattner 104aa5dbc1 Various random and minor code cleanups.
llvm-svn: 30608
2006-09-26 03:57:53 +00:00
Nick Lewycky c68bbef874 Fix compile error.
llvm-svn: 30553
2006-09-21 02:08:31 +00:00
Anton Korobeynikov 3c5b3df6a0 Adding code generation for StdCall & FastCall calling conventions
llvm-svn: 30549
2006-09-20 22:03:51 +00:00
Anton Korobeynikov 6f7072c66a Added some eye-candy for Subtarget type checking
Added X86 StdCall & FastCall calling conventions. Codegen will follow.

llvm-svn: 30446
2006-09-17 20:25:45 +00:00
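A sketch of the two conventions at the C level, using the GCC attribute spellings on a 32-bit x86 target (hypothetical example; the commits above only add the backend support):

/* stdcall: the callee pops its arguments off the stack.
   fastcall: the first arguments travel in ECX/EDX; callee pops the rest. */
int __attribute__((stdcall))  sum2(int a, int b)        { return a + b; }
int __attribute__((fastcall)) sum3(int a, int b, int c) { return a + b + c; }
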
Anton Korobeynikov 0ab01ff6e2 Small fixes for supporting dll* linkage types
llvm-svn: 30441
2006-09-17 13:06:18 +00:00
Anton Korobeynikov d61d39ec53 Adding dllimport, dllexport and external weak linkage types.
DLL* linkages got full (I hope) code generation support in the C backend &
both x86 assembler backends.
External weak linkage is added for future use; we don't provide any code
generation, etc. support for it.

llvm-svn: 30374
2006-09-14 18:23:27 +00:00
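At the C level the new linkage types correspond to declarations like these (an illustrative sketch using the usual Windows spellings, valid only on Windows-style targets):

__declspec(dllexport) int exported_counter;         /* dllexport linkage */
__declspec(dllimport) extern int imported_counter;  /* dllimport linkage */
__declspec(dllexport) int bump(void) { return ++exported_counter; }
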
Chris Lattner 971e33930d Turn X < 0 -> TEST X,X; js
llvm-svn: 30294
2006-09-13 17:04:54 +00:00
Chris Lattner 0c9ae46c5f The sense of this branch was inverted :(
llvm-svn: 30293
2006-09-13 16:56:12 +00:00
Chris Lattner 7a627676be Compile X > -1 -> test X,X; js dest
This implements CodeGen/X86/jump_sign.ll.

llvm-svn: 30283
2006-09-13 03:22:10 +00:00
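Both sign-check commits above exploit the same fact: comparing against 0 or -1 only inspects the sign bit, so a cmpl against an immediate can become a test of the register against itself. A hypothetical sketch:

/* Only the sign bit of x matters in either comparison. */
extern void hit(void);
void check(int x) {
  if (x < 0)  hit();  /* X < 0  -> test x,x ; js  */
  if (x > -1) hit();  /* X > -1 -> test x,x ; branch on sign clear */
}
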
Evan Cheng 9a083a4121 Reflects MachineConstantPoolEntry changes.
llvm-svn: 30279
2006-09-12 21:04:05 +00:00
Evan Cheng 4259a0f654 X86ISD::CMP now produces a chain as well as a flag. Make that the chain
operand of a conditional branch to allow load folding into CMP / TEST
instructions.

llvm-svn: 30241
2006-09-11 02:19:56 +00:00
Evan Cheng 11b0a5dbd4 Committing X86-64 support.
llvm-svn: 30177
2006-09-08 06:48:29 +00:00
Evan Cheng 89c5d04b9b - Identify a vector_shuffle that can be turned into an undef, e.g.
shuffle V1, <undef>, <undef, undef, 4, 5>
- Fix some suspicious logic in LowerVectorShuffle that caused suboptimal
  code by failing to identify MOVL (move to the lowest element of a
  vector).

llvm-svn: 30171
2006-09-08 01:50:06 +00:00
Chris Lattner dc4ff5311f Eliminate X86ISD::TEST, using X86ISD::CMP instead. Match X86ISD::CMP patterns
using test, which provides nice simplifications like:

-       movl %edi, %ecx
-       andl $2, %ecx
-       cmpl $0, %ecx
+       testl $2, %edi
        je LBB1_11      #cond_next90

There are a couple of DAG ISel emitter deficiencies that this exposes; they
will be handled later.

llvm-svn: 30156
2006-09-07 20:33:45 +00:00
Chris Lattner 162f2d5d4c Revert this patch, the front-end has been fixed to make it unnecessary.
llvm-svn: 29752
2006-08-17 18:43:24 +00:00
Chris Lattner dfb3f0591d 'g' is handled by the front-end.
llvm-svn: 29751
2006-08-17 18:12:28 +00:00
Andrew Lenharth 4a063c5ffb Fix handling of 'g'. Closes 883
llvm-svn: 29750
2006-08-17 17:50:12 +00:00
Andrew Lenharth 1c3210d08d Add the 'c' constraint as needed by the linux kernel
llvm-svn: 29747
2006-08-17 16:07:50 +00:00
Andrew Lenharth fc60fb974c Add support for S and D constraints, as needed to compile the linux kernel.
llvm-svn: 29746
2006-08-17 15:35:43 +00:00
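A sketch of the kernel-style inline asm these constraints enable on 32-bit x86 (hypothetical helper; "S" pins an operand to %esi, "D" to %edi, and "c", from the commit above, to %ecx):

static inline void copy_dwords(void *dst, const void *src, unsigned long n) {
  /* rep movsl copies n dwords from (%esi) to (%edi), counting down %ecx. */
  __asm__ volatile("rep movsl"
                   : "+D"(dst), "+S"(src), "+c"(n)
                   :
                   : "memory");
}
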
Chris Lattner ed728e8dc9 Eliminate use of getNode that takes a vector.
llvm-svn: 29614
2006-08-11 17:38:39 +00:00
Evan Cheng bd1c5a8fb8 Match tablegen changes.
llvm-svn: 29604
2006-08-11 09:08:15 +00:00
Evan Cheng 5c68bba085 Convert more calls of getNode() that takes a vector to pass in the start of an array.
llvm-svn: 29601
2006-08-11 07:35:45 +00:00
Chris Lattner c24a1d3093 Start eliminating temporary vectors used to create DAG nodes. Instead, pass
in the start of an array and a count of operands where applicable.  In many
cases, the number of operands is known, so this static array can be allocated
on the stack, avoiding the heap.  In many other cases, a SmallVector can be
used, which has the same benefit in the common cases.

I updated a lot of code calling getNode that takes a vector, but ran out of
time.  The rest of the code should be updated, and these methods should be
removed.

We should also do the same thing to eliminate the methods that take a
vector of MVT::ValueTypes.

It would be extra nice to convert the DAG ISel emitter to avoid creating
vectors for operands when calling getTargetNode.

llvm-svn: 29566
2006-08-08 02:23:42 +00:00
Chris Lattner 524129dd64 Fix PR850 and CodeGen/X86/2006-07-31-SingleRegClass.ll.
The CFE refers to all single-register constraints (like "A") by their 16-bit
name, even though the 8 or 32-bit version of the register may be needed.
The X86 backend should realize what is going on and redecode the name back
to its proper form.

llvm-svn: 29420
2006-07-31 23:26:50 +00:00
Chris Lattner 9e56e5c003 Rename RelocModel::PIC to PIC_, to avoid conflicts with -DPIC.
llvm-svn: 29307
2006-07-26 21:12:04 +00:00
Evan Cheng 74065bedf2 This opt is now handled in DAG combine.
llvm-svn: 29243
2006-07-21 08:26:46 +00:00
Evan Cheng 4cf0238720 A splat of an all-zeros or all-ones vector constant is the vector constant itself.
llvm-svn: 29234
2006-07-20 23:09:47 +00:00
Chris Lattner c8db10725b Add information preventing several register class constraints from working.
This implements PR828 and CodeGen/X86/2006-07-12-InlineAsmQConstraint.ll

llvm-svn: 29118
2006-07-12 16:59:49 +00:00
Chris Lattner 298ef37e02 Implement the inline asm 'A' constraint. This implements PR825 and
CodeGen/X86/2006-07-10-InlineAsmAConstraint.ll

llvm-svn: 29101
2006-07-11 02:54:03 +00:00
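GCC defines "A" on 32-bit x86 as the %edx:%eax register pair; the classic use, shown here as a hedged sketch, is reading the timestamp counter:

static inline unsigned long long read_tsc(void) {
  unsigned long long t;
  /* rdtsc leaves its 64-bit result in edx:eax, the pair that "A" names. */
  __asm__ volatile("rdtsc" : "=A"(t));
  return t;
}
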
Evan Cheng 79cf9a5342 Fixed stack objects do not specify alignments, but their offsets are known.
Use that information when doing the transformation to merge multiple loads
into a 128-bit load.

llvm-svn: 29090
2006-07-10 21:37:44 +00:00
Chris Lattner 9aabc1e16f Mark internal function static
llvm-svn: 29085
2006-07-10 19:53:12 +00:00
Evan Cheng 5987cfb7b1 X86 target specific DAG combine: turn build_vector (load x), (load x+4),
(load x+8), (load x+12), <0, 1, 2, 3> into a single 128-bit load (aligned and
unaligned).

e.g.

__m128 test(float a, float b, float c, float d) {
  return _mm_set_ps(d, c, b, a);
}

_test:
        movups 4(%esp), %xmm0
        ret

llvm-svn: 29042
2006-07-07 08:33:52 +00:00
Evan Cheng 0261242aa6 Reorg. No functionality change.
llvm-svn: 28999
2006-07-05 22:17:51 +00:00
Evan Cheng 38c5aee959 Simplify X86CompilationCallback: always align to 16-byte boundary; don't save EAX/EDX if unnecessary.
llvm-svn: 28910
2006-06-24 08:36:10 +00:00
Evan Cheng de7156f12c Type of vector extract / insert index operand should be iPTR.
llvm-svn: 28796
2006-06-15 08:14:54 +00:00
Evan Cheng ca25486603 Add argument registers to the end of call operand list (partial fix).
llvm-svn: 28783
2006-06-14 18:17:40 +00:00
Evan Cheng 0e14a56d35 Minor compilation speed improvement.
llvm-svn: 28736
2006-06-09 06:24:42 +00:00
Evan Cheng dc614c193e Added X86FunctionInfo subclass of MachineFunction to record whether the
function that is being lowered is forced to use FP. Currently this is only
true for main() / Cygwin.

llvm-svn: 28703
2006-06-06 23:30:24 +00:00
Evan Cheng 2b2c1be49c Typos
llvm-svn: 28617
2006-06-01 05:53:27 +00:00
Evan Cheng 2489ccdd90 Remove a warning
llvm-svn: 28607
2006-06-01 00:30:39 +00:00
Evan Cheng 550cb663e8 Remove dead code.
llvm-svn: 28581
2006-05-31 00:50:42 +00:00
Evan Cheng a3add0fea8 Change the RET node to include signedness information for the return values, i.e.
RET chain, value1, sign1, value2, sign2, ...

llvm-svn: 28510
2006-05-26 23:10:12 +00:00
Evan Cheng b92f418408 Vector arguments must be passed in a memory location aligned on a 16-byte boundary.
llvm-svn: 28505
2006-05-26 20:37:47 +00:00
Evan Cheng bfb5ea6875 Mac OS X ABI document lied. The first four XMM registers are used to pass
vector arguments, not three.

llvm-svn: 28504
2006-05-26 19:22:06 +00:00
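An illustrative sketch of the corrected convention (hypothetical function): the four vector parameters below arrive in %xmm0-%xmm3, and a fifth would go to memory at a 16-byte-aligned slot as the commit just above describes.

#include <xmmintrin.h>

/* a..d are passed in %xmm0-%xmm3 under the corrected Mac OS X ABI. */
__m128 sum4(__m128 a, __m128 b, __m128 c, __m128 d) {
  return _mm_add_ps(_mm_add_ps(a, b), _mm_add_ps(c, d));
}
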
Evan Cheng a01e799927 Minor update to make the code more clear
llvm-svn: 28499
2006-05-26 18:39:59 +00:00
Evan Cheng cbfb3d07e0 Update more comments.
llvm-svn: 28498
2006-05-26 18:37:16 +00:00
Evan Cheng 763f9b00f0 Fix some comments.
llvm-svn: 28497
2006-05-26 18:25:43 +00:00
Evan Cheng 83dc51d7ff No need to handle illegal types.
llvm-svn: 28496
2006-05-26 18:22:49 +00:00
Evan Cheng 8aca43e8da Consistency
llvm-svn: 28488
2006-05-25 23:31:23 +00:00
Evan Cheng 0421aca87a Some clean up.
llvm-svn: 28483
2006-05-25 22:38:31 +00:00
Evan Cheng 29f805ec65 Remove some dead code.
llvm-svn: 28481
2006-05-25 22:25:52 +00:00
Evan Cheng 5ee96893ae Build breakage.
llvm-svn: 28475
2006-05-25 18:56:34 +00:00
Evan Cheng 2a33094284 Switch X86 over to a call-selection model where the lowering code creates
the copyto/fromregs instead of making the X86ISD::CALL selection code create
them.

llvm-svn: 28463
2006-05-25 00:59:30 +00:00
Chris Lattner a58f559848 Fix file header comment
llvm-svn: 28441
2006-05-23 23:20:42 +00:00
Evan Cheng 7068a93cae Better way to check for vararg.
llvm-svn: 28440
2006-05-23 21:08:24 +00:00
Evan Cheng 17e734f0a6 Remove PreprocessCCCArguments and PreprocessFastCCArguments now that
FORMAL_ARGUMENTS nodes include a token operand.

llvm-svn: 28439
2006-05-23 21:06:34 +00:00
Chris Lattner 8be5be817c Implement an annoying part of the Darwin/X86 ABI: the callee of a
struct-return function pops the hidden struct pointer if present, not the caller.

For example, in this testcase:

struct X { int D, E, F, G; };
struct X bar() {
  struct X a;
  a.D = 0;
  a.E = 1;
  a.F = 2;
  a.G = 3;
  return a;
}
void foo(struct X *P) {
  *P = bar();
}

We used to emit:

_foo:
        subl $28, %esp
        movl 32(%esp), %eax
        movl %eax, (%esp)
        call _bar
        addl $28, %esp
        ret
_bar:
        movl 4(%esp), %eax
        movl $0, (%eax)
        movl $1, 4(%eax)
        movl $2, 8(%eax)
        movl $3, 12(%eax)
        ret

This is correct on Linux/X86 but not Darwin/X86.  With this patch, we now
emit:

_foo:
        subl $28, %esp
        movl 32(%esp), %eax
        movl %eax, (%esp)
        call _bar
***     addl $24, %esp
        ret
_bar:
        movl 4(%esp), %eax
        movl $0, (%eax)
        movl $1, 4(%eax)
        movl $2, 8(%eax)
        movl $3, 12(%eax)
***     ret $4

For the record, GCC emits (which is functionally equivalent to our new code):

_bar:
        movl    4(%esp), %eax
        movl    $3, 12(%eax)
        movl    $2, 8(%eax)
        movl    $1, 4(%eax)
        movl    $0, (%eax)
        ret     $4
_foo:
        pushl   %esi
        subl    $40, %esp
        movl    48(%esp), %esi
        leal    16(%esp), %eax
        movl    %eax, (%esp)
        call    _bar
        subl    $4, %esp
        movl    16(%esp), %eax
        movl    %eax, (%esi)
        movl    20(%esp), %eax
        movl    %eax, 4(%esi)
        movl    24(%esp), %eax
        movl    %eax, 8(%esi)
        movl    28(%esp), %eax
        movl    %eax, 12(%esi)
        addl    $40, %esp
        popl    %esi
        ret

This fixes SingleSource/Benchmarks/CoyoteBench/fftbench with LLC and the
JIT, and fixes the X86-backend portion of PR729.  The CBE still needs to
be updated.

llvm-svn: 28438
2006-05-23 18:50:38 +00:00
Chris Lattner 01dd6df5f3 CSRet allows varargs
llvm-svn: 28409
2006-05-19 21:34:04 +00:00
Evan Cheng 8c6b234ce8 Should pass by reference.
llvm-svn: 28357
2006-05-17 19:07:40 +00:00
Chris Lattner c7df70db57 Implement the custom lowering hook right, returning values for all of the
arguments at once.

llvm-svn: 28327
2006-05-16 17:14:26 +00:00
Chris Lattner 7b8b8bbbf9 Fix a bug I introduced yesterday, which broke functions with *no* arguments.
llvm-svn: 28326
2006-05-16 17:08:35 +00:00
Evan Cheng 9fee442e63 X86 integer register classes naming changes. Make them consistent with FP, vector classes.
llvm-svn: 28324
2006-05-16 07:21:53 +00:00
Chris Lattner 3d82699605 Add a chain to FORMAL_ARGUMENTS. This is a minimal port of the X86 backend,
it doesn't currently use/maintain the chain properly.  Also, make the
X86ISelLowering.cpp file 80-col clean.

llvm-svn: 28320
2006-05-16 06:45:34 +00:00
Chris Lattner 22f95b74ba Dead variable
llvm-svn: 28265
2006-05-12 21:12:22 +00:00
Chris Lattner 6d4a2dc4ad Teach the X86 backend about non-i32 inline asm register classes.
llvm-svn: 28139
2006-05-06 00:29:37 +00:00
Chris Lattner 44a73e9fa5 Teach the code generator to use cvtss2sd as extload f32 -> f64
llvm-svn: 28131
2006-05-05 21:35:18 +00:00
Owen Anderson 20a631fde7 Refactor TargetMachine, pushing handling of TargetData into the target-specific subclasses. This has one caller-visible change: getTargetData() now returns a pointer instead of a reference.
This fixes PR 759.

llvm-svn: 28074
2006-05-03 01:29:57 +00:00
Evan Cheng 88decded82 Initial caller side support (for CCC only, not FastCC) of 128-bit vector
passing by value.

llvm-svn: 28015
2006-04-28 21:29:37 +00:00
Evan Cheng 3cd4362ade Implement four-wide shuffle with 2 shufps if no more than two elements come
from each vector. e.g.
        shuffle(G1, G2, 7, 1, 5, 2)
==>
        movaps _G2, %xmm0
        shufps $151, _G1, %xmm0
        shufps $216, %xmm0, %xmm0

llvm-svn: 28011
2006-04-28 07:03:38 +00:00
Evan Cheng d43c5c6046 TargetLowering::LowerArguments should return a VBIT_CONVERT of
FORMAL_ARGUMENTS SDOperand in the return result vector.

llvm-svn: 28009
2006-04-28 05:25:15 +00:00
Evan Cheng f4f3f0d25f Make x86 isel lowering produce tailcall nodes. They are matched to normal
calls for now.

Patch contributed by Alexander Friedman.

llvm-svn: 27994
2006-04-27 08:40:39 +00:00
Evan Cheng 89001ad729 Support for passing 128-bit vector arguments via XMM registers.
llvm-svn: 27992
2006-04-27 08:31:10 +00:00
Evan Cheng a0374e1bed Oops
llvm-svn: 27989
2006-04-27 05:44:50 +00:00
Evan Cheng 24eb3f4765 Bug fix: not updating NumIntRegs.
llvm-svn: 27988
2006-04-27 05:35:28 +00:00
Evan Cheng 48940d16b2 - Clean up formal argument lowering code. Prepare for vector pass by value work.
- Fixed vararg support.

llvm-svn: 27985
2006-04-27 01:32:22 +00:00
Evan Cheng 1c39903297 Fix fastcc failures.
llvm-svn: 27980
2006-04-26 18:21:31 +00:00
Evan Cheng e0bcfbe811 Switch over to the FORMAL_ARGUMENTS mechanism to lower call arguments.
llvm-svn: 27975
2006-04-26 01:20:17 +00:00
Evan Cheng a9467aab0a Separate LowerOperation() into multiple functions, one per opcode.
llvm-svn: 27972
2006-04-25 20:13:52 +00:00
Evan Cheng 5c2bfb069e Special-case handling of two-wide build_vector(0, x).
llvm-svn: 27961
2006-04-24 22:58:52 +00:00
Evan Cheng b0461080e4 A little bit more build_vector enhancement for v8i16 cases.
llvm-svn: 27959
2006-04-24 18:01:45 +00:00
Evan Cheng b4f31dd1a8 MOVL shuffle (i.e. movd or movss / movsd from memory) of undef, V2 == V2
llvm-svn: 27953
2006-04-23 06:35:19 +00:00
Nate Begeman 4ca2ea5b43 JumpTable support! What this represents is working asm and jit support for
x86 and ppc for 100% dense switch statements when relocations are non-PIC.
This support will be extended and enhanced in the coming days to support
PIC, and less dense forms of jump tables.

llvm-svn: 27947
2006-04-22 18:53:45 +00:00
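A 100% dense switch of the kind that now becomes a jump table (hypothetical example):

/* Cases 0..3 with no holes: the selector can index a table of block
   addresses instead of emitting a compare-and-branch chain. */
int classify(int x) {
  switch (x) {
  case 0: return 10;
  case 1: return 21;
  case 2: return 32;
  case 3: return 43;
  default: return -1;
  }
}
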
Evan Cheng e728efdfce Don't do all the lowering stuff for 2-wide build_vectors. Also, a minor optimization for shuffles of undef.
llvm-svn: 27946
2006-04-22 08:34:05 +00:00
Evan Cheng 16ef94f4e8 Fix a performance regression. Use {p}shuf* when there are only two distinct elements in a build_vector.
llvm-svn: 27945
2006-04-22 06:21:46 +00:00
Evan Cheng 14215c36b6 Revamp build_vector lowering to take advantage of movss and movd instructions.
movd always clears the top 96 bits, and movss does so when it's loading the
value from memory.
The net result is that codegen for 4-wide shuffles is much improved. It is
near optimal if one or more elements are zero. e.g.

__m128i test(int a, int b) {
  return _mm_set_epi32(0, 0, b, a);
}

compiles to

_test:
	movd 8(%esp), %xmm1
	movd 4(%esp), %xmm0
	punpckldq %xmm1, %xmm0
	ret

compare to gcc:

_test:
	subl	$12, %esp
	movd	20(%esp), %xmm0
	movd	16(%esp), %xmm1
	punpckldq	%xmm0, %xmm1
	movq	%xmm1, %xmm0
	movhps	LC0, %xmm0
	addl	$12, %esp
	ret

or icc:

_test:
        movd      4(%esp), %xmm0                                #5.10
        movd      8(%esp), %xmm3                                #5.10
        xorl      %eax, %eax                                    #5.10
        movd      %eax, %xmm1                                   #5.10
        punpckldq %xmm1, %xmm0                                  #5.10
        movd      %eax, %xmm2                                   #5.10
        punpckldq %xmm2, %xmm3                                  #5.10
        punpckldq %xmm3, %xmm0                                  #5.10
        ret                                                     #5.10

There is still room for improvement, for example the FP variant of the above example:

__m128 test(float a, float b) {
  return _mm_set_ps(0.0, 0.0, b, a);
}

_test:
	movss 8(%esp), %xmm1
	movss 4(%esp), %xmm0
	unpcklps %xmm1, %xmm0
	xorps %xmm1, %xmm1
	movlhps %xmm1, %xmm0
	ret

The xorps and movlhps are unnecessary. This will require post-legalizer optimization to handle.

llvm-svn: 27939
2006-04-21 23:03:30 +00:00
Evan Cheng e8b5180044 Now generating perfect (I think) code for "vector set" with a single non-zero
scalar value.

e.g.
        _mm_set_epi32(0, a, 0, 0);
==>
	movd 4(%esp), %xmm0
	pshufd $69, %xmm0, %xmm0

        _mm_set_epi8(0, 0, 0, 0, 0, a, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0);
==>
	movzbw 4(%esp), %ax
	movzwl %ax, %eax
	pxor %xmm0, %xmm0
	pinsrw $5, %eax, %xmm0

llvm-svn: 27923
2006-04-21 01:05:10 +00:00
Evan Cheng 60f0b8998e - Added support to turn "vector clear elements", e.g. pand V, <-1, -1, 0, -1>,
into a vector shuffle.
- VECTOR_SHUFFLE lowering change in preparation for more efficient codegen
of vector shuffle with zero (or any splat) vector.

llvm-svn: 27875
2006-04-20 08:58:49 +00:00
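The "vector clear elements" pattern as a hedged intrinsics sketch: an AND whose mask lanes are all-ones or all-zeros only drops whole elements, so it is equivalent to a shuffle with a zero vector.

#include <emmintrin.h>

/* Mask <-1, -1, 0, -1> as in the commit message: lane 2 is cleared,
   lanes 0, 1, 3 pass through, i.e. shuffle(v, zero, <0, 1, 6, 3>). */
__m128i clear_lane2(__m128i v) {
  const __m128i mask = _mm_set_epi32(-1, 0, -1, -1);
  return _mm_and_si128(v, mask);
}
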
Evan Cheng 15c264b753 Handle v2i64 BUILD_VECTOR custom lowering correctly. v2i64 is a legal type,
but i64 is not. If possible, change an i64 op to an f64 (e.g. load, constant)
and then cast it back.

llvm-svn: 27849
2006-04-20 00:11:39 +00:00
Evan Cheng 4a1b0d3292 isSplatMask() bug: the first element can be undef.
llvm-svn: 27847
2006-04-19 23:28:59 +00:00
Evan Cheng a3caaee503 - Added support to do an arbitrary 4-wide shuffle with no more than three
instructions.
- Fixed a commute vector_shuffle bug.

llvm-svn: 27845
2006-04-19 22:48:17 +00:00
Evan Cheng 7855e4d032 Commute vector_shuffle to match more movlhps, movlp{s|d} cases.
llvm-svn: 27840
2006-04-19 20:35:22 +00:00
Evan Cheng 5421206c4b Use movss to insert_vector_elt(v, s, 0).
llvm-svn: 27782
2006-04-17 22:45:49 +00:00
Evan Cheng 6e5e205841 Use two pinsrw ops to insert an element into a v4i32 / v4f32 vector.
llvm-svn: 27779
2006-04-17 22:04:06 +00:00
Evan Cheng 5022b3426e Implement v8i16, v16i8 splat using unpckl + pshufd.
llvm-svn: 27768
2006-04-17 20:43:08 +00:00
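The splat itself, written with the intrinsic (illustrative only); the unpckl + pshufd pair is what the lowering now produces for it:

#include <emmintrin.h>

/* Broadcast one 16-bit value to all eight lanes: roughly punpcklwd to
   pair the word with itself, then pshufd to replicate the dword. */
__m128i splat_i16(short s) { return _mm_set1_epi16(s); }
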
Chris Lattner c070c621ac implement returns of a vector, testcase here: CodeGen/X86/vec_return.ll
llvm-svn: 27767
2006-04-17 20:32:50 +00:00
Evan Cheng b3b41c4f3d FP SETOLT, SETOLT, SETUGE, SETUGT conditions were implemented incorrectly
llvm-svn: 27755
2006-04-17 07:24:10 +00:00
Evan Cheng 6222cf2a36 Silly bug
llvm-svn: 27719
2006-04-15 05:37:34 +00:00
Evan Cheng 65bb720a8b Do not use movs{h|l}dup for a shuffle with a single non-undef node.
llvm-svn: 27718
2006-04-15 03:13:24 +00:00
Evan Cheng 5d247f81c1 Last few SSE3 intrinsics.
llvm-svn: 27711
2006-04-14 21:59:03 +00:00
Evan Cheng e4f97ccf7f X86 SSE2 supports v8i16 multiplication
llvm-svn: 27644
2006-04-13 05:10:25 +00:00
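Through the intrinsics, the newly supported v8i16 multiply is simply pmullw (illustrative sketch):

#include <emmintrin.h>

/* Lane-wise 16-bit multiply keeping the low 16 bits of each product. */
__m128i mul_v8i16(__m128i a, __m128i b) { return _mm_mullo_epi16(a, b); }
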
Evan Cheng 92232307d0 All "integer" logical ops (pand, por, pxor) are now promoted to v2i64.
Clean up and fix various logical ops issues.

llvm-svn: 27633
2006-04-12 21:21:57 +00:00
Evan Cheng e2157c6e41 Promote v4i32, v8i16, v16i8 load to v2i64 load.
llvm-svn: 27612
2006-04-12 17:12:36 +00:00
Evan Cheng 12ba3e23d0 Added support for _mm_move_ss and _mm_move_sd.
llvm-svn: 27575
2006-04-11 00:19:04 +00:00
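What the two intrinsics compute, as a sketch of their semantics (not of the lowering): the result takes its lowest element from the second operand and all remaining elements from the first.

#include <emmintrin.h>

__m128  move_ss_demo(__m128 a, __m128 b)   { return _mm_move_ss(a, b); }
__m128d move_sd_demo(__m128d a, __m128d b) { return _mm_move_sd(a, b); }
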
Evan Cheng 617a6a812e Conditional move of vector types.
llvm-svn: 27556
2006-04-10 07:23:14 +00:00
Evan Cheng ac847268c5 Code clean up.
llvm-svn: 27501
2006-04-07 21:53:05 +00:00
Evan Cheng c995b45f67 - movlp{s|d} and movhp{s|d} support.
- Normalize shuffle nodes so the result vector's lower-half elements come
  from the first vector and the rest come from the second vector. (Except
  for the exceptions :-).
- Other minor fixes.

llvm-svn: 27474
2006-04-06 23:23:56 +00:00
Evan Cheng 780382946e Support for comi / ucomi intrinsics.
llvm-svn: 27444
2006-04-05 23:38:46 +00:00
Evan Cheng f3b52c84ea Handle canonical form of e.g.
vector_shuffle v1, v1, <0, 4, 1, 5, 2, 6, 3, 7>

This is turned into
vector_shuffle v1, <undef>, <0, 0, 1, 1, 2, 2, 3, 3>
by dag combiner.

It would match a {p}unpckl on x86.

llvm-svn: 27437
2006-04-05 07:20:06 +00:00
Evan Cheng 6d196db40d Bogus assert
llvm-svn: 27434
2006-04-05 06:11:20 +00:00
Evan Cheng 2cf4232ced Fallthrough to expand if a VECTOR_SHUFFLE cannot be custom lowered.
llvm-svn: 27433
2006-04-05 06:09:26 +00:00
Evan Cheng 59a6355e82 Handle v8i16 shuffle that must be broken into a pair of pshufhw / pshuflw.
llvm-svn: 27427
2006-04-05 01:47:37 +00:00
Evan Cheng b64827e662 Use movlpd to store the lower f64 extracted from a v2f64.
Use movhpd to store the upper f64 extracted from a v2f64.

llvm-svn: 27382
2006-04-03 22:30:54 +00:00
Evan Cheng ebf1006d16 - More efficient extract_vector_elt with shuffle and movss, movsd, movd, etc.
- Some bug fixes and naming inconsistency fixes.

llvm-svn: 27377
2006-04-03 20:53:28 +00:00
Evan Cheng 5fd7c69473 Use an X86 target-specific node, X86ISD::PINSRW, instead of a malformed
INSERT_VECTOR_ELT to insert a 16-bit value into a 128-bit vector.

llvm-svn: 27314
2006-03-31 21:55:24 +00:00
Evan Cheng cbffa4656b Add support to use pextrw and pinsrw to extract and insert a word element
from a 128-bit vector.

llvm-svn: 27304
2006-03-31 19:22:53 +00:00
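Through the SSE2 intrinsics, word extract/insert looks like this (illustrative; the lane index must be a compile-time constant):

#include <emmintrin.h>

int extract_word3(__m128i v) {
  return _mm_extract_epi16(v, 3);    /* pextrw $3, %xmm, %r32 */
}
__m128i insert_word3(__m128i v, int s) {
  return _mm_insert_epi16(v, s, 3);  /* pinsrw $3, %r32, %xmm */
}
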
Evan Cheng 1b0d294de0 Expand all INSERT_VECTOR_ELT (obviously bad) for now.
llvm-svn: 27275
2006-03-31 01:30:39 +00:00
Evan Cheng d9d0bbb5ac Typo
llvm-svn: 27272
2006-03-31 00:33:57 +00:00
Evan Cheng 99d7205fba Ok for vector_shuffle mask to contain undef elements.
llvm-svn: 27271
2006-03-31 00:30:29 +00:00
Evan Cheng 7e2ff11a42 Make sure all possible shuffles are matched.
Use pshufd, pshufhw, and pshuflw to shuffle v4f32 if shufps doesn't match.
Use shufps to shuffle v4f32 if pshufd, pshufhw, and pshuflw don't match.

llvm-svn: 27259
2006-03-30 19:54:57 +00:00
Evan Cheng b7fedffc78 - Added some SSE2 128-bit packed integer ops.
- Added SSE2 128-bit integer pack with signed saturation ops.
- Added pshufhw and pshuflw ops.

llvm-svn: 27252
2006-03-29 23:07:14 +00:00
Evan Cheng acc336475e Need to special case splat after all. Make the second operand of splat
vector_shuffle undef.

llvm-svn: 27250
2006-03-29 19:02:40 +00:00
Evan Cheng 500ec16578 - More shuffle related bug fixes.
- Whenever possible use ops of the right packed types for vector shuffles /
  splats.

llvm-svn: 27246
2006-03-29 03:04:49 +00:00
Evan Cheng da59b0d2a8 - Only use pshufd for v4i32 vector shuffles.
- Other shuffle related fixes.

llvm-svn: 27244
2006-03-29 01:30:51 +00:00
Evan Cheng 8160fd3d42 Fixing buggy code.
llvm-svn: 27239
2006-03-28 23:41:33 +00:00
Jim Laskey 457e54efc1 Added missing paren on behalf of Ramana Radhakrishnan.
llvm-svn: 27223
2006-03-28 10:17:11 +00:00
Evan Cheng 21e5476deb Missed X86::isUNPCKHMask
llvm-svn: 27222
2006-03-28 08:27:15 +00:00
Evan Cheng 1a194a5264 * Prefer using operations of matching types, e.g. unpcklpd rather than movlhps.
* Bug fixes.

llvm-svn: 27218
2006-03-28 06:50:32 +00:00
Evan Cheng 2bc3280659 - Clean up / consolidate various shuffle masks.
- Some misc. bug fixes.
- Use MOVHPDrm to load from m64 into the upper half of an XMM register.

llvm-svn: 27210
2006-03-28 02:43:26 +00:00
Evan Cheng 5df75889db Model unpack lower and interleave as vector_shuffle so we can lower the
intrinsics as such.

llvm-svn: 27200
2006-03-28 00:39:58 +00:00
Evan Cheng 9b9cc4fb39 Use pcmpeq to generate vector of all ones.
llvm-svn: 27167
2006-03-27 07:00:16 +00:00
Nate Begeman ed728c1291 SelectionDAGISel can now natively handle Switch instructions, in the same
manner that the LowerSwitch LLVM to LLVM pass does: emitting a binary
search tree of basic blocks.  The new approach has several advantages:
it is faster, it generates significantly smaller code in many cases, and
it paves the way for implementing dense switch tables as a jump table by
handling switches directly in the instruction selector.

This functionality is currently only enabled on x86, but should be safe for
every target.  In anticipation of making it the default, the cfg is now
properly updated in the x86, ppc, and sparc select lowering code.

llvm-svn: 27156
2006-03-27 01:32:24 +00:00
Evan Cheng ed6184aef2 Remove X86::isZeroVector; use ISD::isBuildVectorAllZeros instead. Some fixes / cleanups.
llvm-svn: 27150
2006-03-26 09:53:12 +00:00
Evan Cheng 2bc0941e2a Build arbitrary vector with more than 2 distinct scalar elements with a
series of unpack and interleave ops.

llvm-svn: 27119
2006-03-25 09:37:23 +00:00
Evan Cheng 6f7d31ea50 Added 128-bit packed integer subtraction.
llvm-svn: 27096
2006-03-25 01:33:37 +00:00
Evan Cheng e7ee6a5e32 Support for scalar to vector with zero extension.
llvm-svn: 27091
2006-03-24 23:15:12 +00:00
Evan Cheng 082c8785ef Handle BUILD_VECTOR with all zero elements.
llvm-svn: 27056
2006-03-24 07:29:27 +00:00
Chris Lattner f5efddf80b Gabor points out that we can't spell. :)
llvm-svn: 27049
2006-03-24 07:12:19 +00:00
Evan Cheng a91d8a5b43 All v2f64 shuffle cases can be handled.
llvm-svn: 27044
2006-03-24 06:40:32 +00:00
Evan Cheng 2595a687da More efficient v2f64 shuffle using movlhps, movhlps, unpckhpd, and unpcklpd.
llvm-svn: 27040
2006-03-24 02:58:06 +00:00
Evan Cheng d27fb3e85e Handle more shuffle cases with SHUFP* instructions.
llvm-svn: 27024
2006-03-24 01:18:28 +00:00
Evan Cheng f842ea57bb Typo
llvm-svn: 26997
2006-03-23 20:26:04 +00:00
Evan Cheng b9b0550dc6 Add 128-bit integer vector load and add (for testing).
llvm-svn: 26967
2006-03-23 01:57:24 +00:00
Evan Cheng 021bb7c956 Added a ValueType operand to isShuffleMaskLegal(). For now, x86 will not do
64-bit vector shuffle.

llvm-svn: 26964
2006-03-22 22:07:06 +00:00
Evan Cheng bc04722860 Some clean up.
llvm-svn: 26957
2006-03-22 19:22:18 +00:00
Evan Cheng d4e1557941 - Supposedly movlhps is faster / better than unpcklpd.
- Don't forget pshufd is only available with sse2.

llvm-svn: 26956
2006-03-22 19:16:21 +00:00
Evan Cheng 68ad48bd1a - Implement X86ISelLowering::isShuffleMaskLegal(). We currently only support
splat and PSHUFD cases.
- Clean up shuffle / splat matching code.

llvm-svn: 26954
2006-03-22 18:59:22 +00:00
Evan Cheng 8fdbdf20cd - VECTOR_SHUFFLE of v4i32 / v4f32 with an undef second vector always matches
PSHUFD. We can make the mask entries that point into the undef vector point
  anywhere we want.
- Change some names to appease Chris.

llvm-svn: 26951
2006-03-22 08:01:21 +00:00
Chris Lattner f5e36c8bc0 fix a warning
llvm-svn: 26941
2006-03-22 04:18:34 +00:00
Evan Cheng d097e67544 Some splat and shuffle support.
llvm-svn: 26940
2006-03-22 02:53:00 +00:00
Evan Cheng d5e905d762 - Use movaps to store 128-bit vector integers.
- Each scalar to vector of v8i16 and v16i8 is an any_extend followed by a movd.

llvm-svn: 26932
2006-03-21 23:01:21 +00:00
Chris Lattner 00f4683bf6 These targets don't support EXTRACT_VECTOR_ELT, though in time X86 will.
llvm-svn: 26930
2006-03-21 20:51:05 +00:00
Chris Lattner 80b6bd2746 Add a build_vector node
llvm-svn: 26895
2006-03-20 06:18:01 +00:00
Chris Lattner f7b6e7212f rename these nodes
llvm-svn: 26848
2006-03-19 01:13:28 +00:00
Evan Cheng b09a56f3a4 Darwin should use _setjmp/_longjmp instead of setjmp/longjmp.
llvm-svn: 26833
2006-03-17 20:31:41 +00:00
Chris Lattner 388fc4d9fb Disable x86 fastcc from passing args in registers
llvm-svn: 26824
2006-03-17 17:27:47 +00:00
Chris Lattner 43798850f9 Parameterize the number of integer arguments to pass in registers
llvm-svn: 26818
2006-03-17 05:10:20 +00:00
Nate Begeman bb01d4f272 Remove BRTWOWAY*
Make the PPC backend not dependent on BRTWOWAY_CC and make the branch
selector smarter about the code it generates, fixing a case in the
readme.

llvm-svn: 26814
2006-03-17 01:40:33 +00:00
Evan Cheng f75555feb9 Bug fix: condition inverted.
llvm-svn: 26804
2006-03-16 22:02:48 +00:00
Evan Cheng 20931a798e Added a way for TargetLowering to specify what values can be used as the
scale component of the target addressing mode.

llvm-svn: 26802
2006-03-16 21:47:42 +00:00
Evan Cheng af598d2461 Add LSR hooks.
llvm-svn: 26740
2006-03-13 23:18:16 +00:00
Evan Cheng adc7093fc1 Use rep/stosl; and $count, 3; rep/stosb for memset with a 4-byte aligned
destination and a variable value.
Similarly for memcpy.

llvm-svn: 26603
2006-03-07 23:29:39 +00:00
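Illustrative C for the case this covers (a sketch assuming a plain libc call site): a 4-byte-aligned destination and a variable count, where the lowering emits rep/stosl for the whole dwords and rep/stosb for the 0-3 trailing bytes.

#include <string.h>

/* n/4 dwords via rep/stosl, then (n & 3) trailing bytes via rep/stosb;
   the destination is 4-byte aligned because p is an int pointer. */
void zero_buffer(int *p, unsigned n) {
  memset(p, 0, n);
}
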
Evan Cheng 30d7b70b73 Enable Dwarf debugging info.
llvm-svn: 26581
2006-03-07 02:02:57 +00:00
Chris Lattner 9c7f50376a Copysign needs to be expanded everywhere. Note that Alpha and IA64 should
implement copysign as a native op if they have it.

llvm-svn: 26541
2006-03-05 05:08:37 +00:00
Evan Cheng 6dc73297c3 MEMSET / MEMCPY lowering bugs: we can't issue a single WORD / DWORD version of
rep/stos and rep/mov if the count is not a constant. We could do
  rep/stosl; and $count, 3; rep/stosb
For now, I will lower them to memset / memcpy calls. We will revisit this
after a bit of experimentation.

Also need to take care of the trailing bytes even if the count is a constant.
Since the max. number of trailing bytes is 3, we will simply issue loads /
stores.

llvm-svn: 26517
2006-03-04 02:48:56 +00:00
Evan Cheng 084a102b17 Typo
llvm-svn: 26512
2006-03-04 01:12:00 +00:00
Chris Lattner ad3c974a77 remove the read/write port/io intrinsics.
llvm-svn: 26479
2006-03-03 00:19:58 +00:00
Evan Cheng 1926427351 Vector op lowering.
llvm-svn: 26438
2006-03-01 01:11:20 +00:00
Evan Cheng 994700101e Added a comment about the need for X86ISD::Wrapper.
llvm-svn: 26372
2006-02-25 09:55:19 +00:00
Evan Cheng e0ed6ec13f - Clean up the lowering and selection code of ConstantPool, GlobalAddress,
and ExternalSymbol.
- Use C++ code (rather than tblgen'd selection code) to match the
  above-mentioned leaf nodes. Do not mutate the nodes and do not record the
  selection in CodeGenMap. These nodes should be safe to duplicate. This is
  a performance win.

llvm-svn: 26335
2006-02-23 20:41:18 +00:00
Evan Cheng 1f342c2884 PIC-related bug fixes.
1. Various asm printer bugs.
2. Lowering bug. Now TargetGlobalAddress is wrapped in X86ISD::TGAWrapper.

llvm-svn: 26324
2006-02-23 02:43:52 +00:00
Evan Cheng 73136dfecc - Added option -relocation-model to set relocation model. Valid values include static, pic,
dynamic-no-pic, and default.
PPC and x86 default is dynamic-no-pic for Darwin, pic for others.
- Removed options -enable-pic and -ppc-static.

llvm-svn: 26315
2006-02-22 20:19:42 +00:00
Evan Cheng 9e252e3bcf Added MMX, SSE1, and SSE2 vector instructions and some simple patterns.
Fixed some existing bugs (wrong predicates, prefixes) at the same time.

llvm-svn: 26310
2006-02-22 02:26:30 +00:00
Chris Lattner 7ad77dfc2a split register class handling from explicit physreg handling.
llvm-svn: 26308
2006-02-22 00:56:39 +00:00
Chris Lattner 7bb4696dc3 Updates to match change of getRegForInlineAsmConstraint prototype
llvm-svn: 26305
2006-02-21 23:11:00 +00:00
Evan Cheng d13778eb30 If SSE3 is available, promote FP_TO_UINT i32 to FP_TO_SINT i64 to take
advantage of fisttpll.

llvm-svn: 26288
2006-02-18 07:26:17 +00:00
Evan Cheng 5588de9415 x86 / Darwin PIC support.
llvm-svn: 26273
2006-02-18 00:15:05 +00:00
Chris Lattner 07a2677e43 unbreak the build
llvm-svn: 26260
2006-02-17 07:09:27 +00:00
Evan Cheng 593bea73ba Unbreak x86 be
llvm-svn: 26259
2006-02-17 07:01:52 +00:00
Nate Begeman 5965bd19f8 kill ADD_PARTS & SUB_PARTS and replace them with fancy new ADDC, ADDE, SUBC
and SUBE nodes that actually expose what's going on and allow for
significant simplifications in the targets.

llvm-svn: 26255
2006-02-17 05:43:56 +00:00
Nate Begeman 7e5496d5fe Kill the x86 pattern isel. boom.
llvm-svn: 26246
2006-02-17 00:03:04 +00:00
Nate Begeman 8a77efe4f7 Rework the SelectionDAG-based implementations of SimplifyDemandedBits
and ComputeMaskedBits to match the new improved versions in instcombine.
Tested against all of multisource/benchmarks on ppc.

llvm-svn: 26238
2006-02-16 21:11:51 +00:00
Evan Cheng 03c1e6f48e A bit more memset / memcpy optimization.
Turns them into calls to memset / memcpy if 1) the buffer(s) are not DWORD
aligned, or 2) the size is not known to be greater than or equal to some
minimum value (currently 128).

llvm-svn: 26224
2006-02-16 00:21:07 +00:00
Evan Cheng 4b40a42653 Rename maxStoresPerMemSet to maxStoresPerMemset, etc.
llvm-svn: 26174
2006-02-14 08:38:30 +00:00
Evan Cheng 6a37456d73 Set maxStoresPerMemSet to 16. Ditto for maxStoresPerMemCpy and
maxStoresPerMemMove, although the last one is not used.

llvm-svn: 26172
2006-02-14 08:25:08 +00:00
Chris Lattner 62c3484e43 Switch targets over to using SelectionDAG::getCALLSEQ_START to create
CALLSEQ_START nodes.

llvm-svn: 26143
2006-02-13 09:00:43 +00:00