Commit Graph

174 Commits

Mon P Wang 9c2d26d208 Added support for SELECT on v8i8 and v4i16 for X86 (MMX)
Added support for TRUNC from v8i16 to v8i8 for X86 (MMX)

llvm-svn: 60916
2008-12-12 01:25:51 +00:00
Evan Cheng 1339e72d97 Use mmx (punpckldq VR64, (mmx_v_set0)) to clear high 32-bits of a VR64 register.
llvm-svn: 60499
2008-12-03 19:38:05 +00:00
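To illustrate the trick described in the commit above: punpckldq interleaves the low doublewords of its operands, so pairing a value with zero keeps the low 32 bits and clears the high 32 bits. A minimal C sketch using the standard MMX intrinsics (the wrapper name is invented):

#include <mmintrin.h>

/* punpckldq keeps bits [31:0] of v and fills bits [63:32] with zeros. */
static __m64 clear_high32(__m64 v)
{
    return _mm_unpacklo_pi32(v, _mm_setzero_si64());
}
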
Dan Gohman 69cc2cbbff Rename isSimpleLoad to canFoldAsLoad, to better reflect its meaning.
llvm-svn: 60487
2008-12-03 18:15:48 +00:00
Evan Cheng 27889ab29f Add more vector move low and zero-extend patterns.
llvm-svn: 58752
2008-11-05 06:04:51 +00:00
Bill Wendling 76105a4a4f Make "movdq2q" and "movq2dq" dependent upon having SSE2 because they use the
SSE2 registers as well as the MMX registers.

llvm-svn: 55436
2008-08-27 21:32:04 +00:00
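The dependency described above is visible at the C level as well: the MMX/XMM move intrinsics live in the SSE2 header. A hedged sketch (standard intrinsics, invented wrapper names):

#include <emmintrin.h>   /* SSE2 */

/* movdq2q: copy the low 64 bits of an XMM register into an MMX register. */
static __m64 xmm_to_mmx(__m128i x)
{
    return _mm_movepi64_pi64(x);
}

/* movq2dq: copy an MMX register into the low 64 bits of an XMM register,
   zeroing the upper 64 bits. */
static __m128i mmx_to_xmm(__m64 m)
{
    return _mm_movpi64_epi64(m);
}
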
Bill Wendling 6cfd3830fb Never mind. This broke the bootstrap (?!).
llvm-svn: 55318
2008-08-25 18:32:39 +00:00
Bill Wendling dd6759aea7 MOVDQ2Q and MOVQ2DQ use SSE2. We should conditionalize the use of these
instructions on having SSE2.

llvm-svn: 55317
2008-08-25 18:20:52 +00:00
Anton Korobeynikov 31099519d0 Provide a 64-bit variant of the mmx.maskmovq intrinsic lowering.
Is there a way to avoid the explicit target check?

llvm-svn: 55238
2008-08-23 15:53:19 +00:00
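For reference, the intrinsic being lowered corresponds to the maskmovq instruction, which is reachable from C as shown below (a sketch; the wrapper name is invented):

#include <xmmintrin.h>

/* maskmovq: store only the bytes of 'data' whose corresponding mask byte
   has its high bit set, to the address in 'dst'. */
static void masked_store(__m64 data, __m64 mask, char *dst)
{
    _mm_maskmove_si64(data, mask, dst);
}
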
Nate Begeman 628ab8c673 Remove dead PatLeaf; there are a number of issues around MMX movl that need to be fixed.
llvm-svn: 54026
2008-07-25 17:25:04 +00:00
Dale Johannesen e5f4ffbdf1 Add v2f32 (MMX) type to X86. Support is primitive:
load, store, call, return, bitcast.  This is enough to
make call and return work.

llvm-svn: 52691
2008-06-24 22:01:44 +00:00
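The sort of source this targets can be sketched with GCC's generic vector extension (an assumed use case, not a test from the commit): a 64-bit vector of two floats that is loaded, passed by value, and returned.

/* A 64-bit vector of two floats, corresponding to the new v2f32 type. */
typedef float v2f32 __attribute__((vector_size(8)));

v2f32 passthrough(v2f32 x);        /* call + return of a v2f32 value */

v2f32 load_and_call(const v2f32 *p)
{
    v2f32 v = *p;                  /* load */
    return passthrough(v);         /* call, return */
}
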
Evan Cheng 5e28227dbd Implement vector shift up / down and insert zero with ps{rl}lq / ps{rl}ldq.
llvm-svn: 51667
2008-05-29 08:22:04 +00:00
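Roughly, these are the per-lane bit shifts versus the whole-register byte shifts; the corresponding SSE2 intrinsics are shown below (invented helper names, illustrative shift counts):

#include <emmintrin.h>

/* psllq / psrlq: shift each 64-bit lane left or right by a bit count. */
static __m128i shift_lanes(__m128i v)
{
    return _mm_or_si128(_mm_slli_epi64(v, 8), _mm_srli_epi64(v, 8));
}

/* pslldq / psrldq: shift the whole 128-bit register by a byte count,
   inserting zeros. */
static __m128i shift_bytes(__m128i v)
{
    return _mm_or_si128(_mm_slli_si128(v, 4), _mm_srli_si128(v, 4));
}
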
Evan Cheng 961339bbdb Handle a few more cases of folding load i64 into xmm and zero top bits.
Note, some of the code will be moved into target independent part of DAG combiner in a subsequent patch.

llvm-svn: 50918
2008-05-09 21:53:03 +00:00
Evan Cheng 78af38c392 Handle vector moves / loads which zero the destination register's top bits (i.e. movd, movq, movss (addr), movsd (addr)) with an X86-specific dag combine.
llvm-svn: 50838
2008-05-08 00:57:18 +00:00
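The moves in question load a scalar into a vector register and zero the remaining bits; for example, with the standard SSE2 intrinsics (invented wrapper names):

#include <emmintrin.h>

/* movq (load form): load 64 bits into the low half of an XMM register and
   zero the upper 64 bits. */
static __m128i load64_zero_top(const __m128i *p)
{
    return _mm_loadl_epi64(p);
}

/* movd: move a 32-bit integer into the low element and zero the rest. */
static __m128i int_to_vec(int x)
{
    return _mm_cvtsi32_si128(x);
}
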
Evan Cheng cdf22f2953 Add separate intrinsics for MMX / SSE shifts with i32 integer operands. This allows us to simplify the horribly complicated matching code.
llvm-svn: 50601
2008-05-03 00:52:09 +00:00
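The distinction is between shift builtins that take a plain integer count and the older forms that take the count in another vector register; roughly (invented wrapper names):

#include <mmintrin.h>

/* Count supplied as a plain i32 (the form the new intrinsics match). */
static __m64 shift_by_int(__m64 v)
{
    return _mm_slli_pi16(v, 3);
}

/* Count supplied in an MMX register (the older form). */
static __m64 shift_by_vector(__m64 v, __m64 count)
{
    return _mm_sll_pi16(v, count);
}
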
Evan Cheng 5ba02020e6 Fix illegal MMX_MOVDQ2Qrr pattern. vector_extract result must be a scalar value.
llvm-svn: 50291
2008-04-25 20:12:46 +00:00
Evan Cheng ccde6dd016 Special handling for MMX values being passed in either GPR64 or lower 64-bits of XMM registers.
llvm-svn: 50289
2008-04-25 19:11:04 +00:00
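A function like the following exercises the calling-convention decision described above on x86-64: the __m64 argument and return value travel either in a 64-bit GPR or in the low 64 bits of an XMM register (an illustrative sketch, not a test from the commit).

#include <mmintrin.h>

/* Passing and returning __m64 by value triggers the special x86-64 handling. */
__m64 forward_mmx(__m64 v)
{
    return _mm_add_pi16(v, v);   /* paddw on the MMX value */
}
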
Evan Cheng 6d653b58f9 Fix MMX_MOVQ2DQrr pattern. It's illegal to do a bitconvert from a smaller type to a larger one.
llvm-svn: 50278
2008-04-25 18:19:54 +00:00
Dan Gohman db08f5218e Fix the encoding of the MMX movd that moves from MMX to 64-bit GPR.
llvm-svn: 50053
2008-04-21 19:52:29 +00:00
Dan Gohman 01a5d36d9d Add movd instructions to move from MMX registers
to 64-bit GPR registers on x86-64.

llvm-svn: 49757
2008-04-15 23:55:07 +00:00
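At the C level this corresponds to converting an __m64 to a 64-bit integer; a minimal sketch (invented wrapper name):

#include <mmintrin.h>

/* movd/movq from an MMX register to a 64-bit general-purpose register. */
static long long mmx_to_int64(__m64 v)
{
    return _mm_cvtm64_si64(v);   /* available only when targeting x86-64 */
}
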
Evan Cheng 92b4488202 Undo 48570. Correctly match mmx shift instructions with an immediate operand.
llvm-svn: 48627
2008-03-21 00:40:09 +00:00
Evan Cheng bbba76fc99 Add intrinsics to match mmx shift builtins with an immediate operand.
llvm-svn: 48569
2008-03-19 23:38:52 +00:00
Evan Cheng 0e7b00d79f Replace all target specific implicit def instructions with a target independent one: TargetInstrInfo::IMPLICIT_DEF.
llvm-svn: 48380
2008-03-15 00:03:38 +00:00
Evan Cheng 99ee78ef63 Clean up my own mess.
X86 lowering normalizes vector 0 to v4i32. However, DAGCombine can fold (sub x, x) -> 0 after legalization, so it can create a zero vector of a type that's not expected (e.g. v8i16). We don't want to disable the optimization, since leaving a (sub x, x) is really bad. Add isel patterns for other types of vector 0 to ensure correctness. This is highly unlikely to happen other than in bugpoint-reduced test cases.

llvm-svn: 48279
2008-03-12 07:02:50 +00:00
Anders Carlsson 17df4cd397 Use the correct instruction encodings for the 64-bit MMX movd.
llvm-svn: 47740
2008-02-29 01:35:12 +00:00
Evan Cheng 6200c225e0 - When the DAG combiner is folding a bit convert into a BUILD_VECTOR, it should check whether it's essentially a SCALAR_TO_VECTOR. Avoid turning (v8i16) <10, u, u, u> into <10, 0, u, u, u, u, u, u>. Instead, simply convert it to a SCALAR_TO_VECTOR of the proper type.
- X86 now normalizes SCALAR_TO_VECTOR to (BIT_CONVERT (v4i32 SCALAR_TO_VECTOR)). Get rid of X86ISD::S2VEC.

llvm-svn: 47290
2008-02-18 23:04:32 +00:00
Chris Lattner 317332fc2a Start inferring side effect information more aggressively, and fix many bugs in the
x86 backend where instructions were not marked maystore/mayload, and perf issues where
instructions were not marked neverHasSideEffects.  It would be really nice if we could
write patterns for copy instructions.

I have audited all the x86 instructions down to MOVDQAmr.  The flags on others and on
other targets are probably not right in all cases, but no clients that are enabled by
default currently use this info.

llvm-svn: 45829
2008-01-10 07:59:24 +00:00
Chris Lattner aca7ca3730 remove explicit sets of 'neverHasSideEffects' that can now be
inferred from the instr patterns.

llvm-svn: 45824
2008-01-10 05:45:39 +00:00
Chris Lattner a4ce4f6987 rename isLoad -> isSimpleLoad due to Evan's desire to have such a predicate.
llvm-svn: 45667
2008-01-06 23:38:27 +00:00
Chris Lattner f3ebc3f3d2 Remove attribution from file headers, per discussion on llvmdev.
llvm-svn: 45418
2007-12-29 20:36:04 +00:00
Bill Wendling b3d85a5d4b Add "mayHaveSideEffects" and "neverHasSideEffects" flags to some instructions. I
based which flag to set on whether it was already marked as
"isRematerializable". If there was a further check to determine if it's "really"
rematerializable, then I marked it as "mayHaveSideEffects" and created a check
in the X86 back-end similar to the remat one.

llvm-svn: 45132
2007-12-17 23:07:56 +00:00
Evan Cheng 6e68381e02 Implicit def instructions, e.g. X86::IMPLICIT_DEF_GR32, are always re-materializable and they should not be spilled.
llvm-svn: 44960
2007-12-12 23:12:09 +00:00
Chris Lattner 5728bdd4db Fix a long standing deficiency in the X86 backend: we would
sometimes emit "zero" and "all one" vectors multiple times,
for example:

_test2:
	pcmpeqd	%mm0, %mm0
	movq	%mm0, _M1
	pcmpeqd	%mm0, %mm0
	movq	%mm0, _M2
	ret

instead of:

_test2:
	pcmpeqd	%mm0, %mm0
	movq	%mm0, _M1
	movq	%mm0, _M2
	ret

This patch fixes this by always arranging for zero/one vectors
to be defined as v4i32 or v2i32 (SSE/MMX) instead of letting them be
any random type.  This ensures they get trivially CSE'd on the dag.
This fix is also important for LegalizeDAGTypes, as it gets unhappy
when the x86 backend wants BUILD_VECTOR(i64 0) to be legal even when
'i64' isn't legal.

This patch makes the following changes:

1) X86TargetLowering::LowerBUILD_VECTOR now lowers 0/1 vectors into
   their canonical types.
2) The now-dead patterns are removed from the SSE/MMX .td files.
3) All the patterns in the .td file that referred to immAllOnesV or
   immAllZerosV in the wrong form now use *_bc to match them with a
   bitcast wrapped around them.
4) X86DAGToDAGISel::SelectScalarSSELoad is generalized to handle 
   bitcast'd zero vectors, which actually simplifies the code.
5) getShuffleVectorZeroOrUndef is updated to generate a shuffle that
   is legal, instead of generating one that is illegal and expecting
   a later legalize pass to clean it up.
6) isZeroShuffle is generalized to handle bitcast of zeros.
7) several other minor tweaks.

This patch is definite goodness, but has the potential to cause random
code quality regressions.  Please be on the lookout for these and let 
me know if they happen.

llvm-svn: 44310
2007-11-25 00:24:49 +00:00
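For reference, source along these lines reproduces the _test2 case shown above; pcmpeqd is how an all-ones MMX vector is materialized, and before the patch each store re-materialized it (a sketch; the global names match the assembly above):

#include <mmintrin.h>

__m64 M1, M2;

void test2(void)
{
    M1 = _mm_set1_pi32(-1);   /* all-ones vector, materialized via pcmpeqd */
    M2 = _mm_set1_pi32(-1);   /* should reuse %mm0 rather than repeat pcmpeqd */
    _mm_empty();
}
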
Evan Cheng 3e18e504ae Remove (somewhat confusing) Imp<> helper, use let Defs = [], Uses = [] instead.
llvm-svn: 41863
2007-09-11 19:55:27 +00:00
Evan Cheng c2081fe573 Mark load instructions with isLoad = 1.
llvm-svn: 41595
2007-08-30 05:49:43 +00:00
Dan Gohman fa3eeeedc0 Mark the SSE and MMX load instructions that
X86InstrInfo::isReallyTriviallyReMaterializable knows how to handle
with the isReMaterializable flag so that it is given a chance to handle
them. Without hoisting constant-pool loads from loops this isn't very
visible, though it does keep CodeGen/X86/constant-pool-remat-0.ll from
making a copy of the constant pool on the stack.

llvm-svn: 40736
2007-08-02 14:27:55 +00:00
Dan Gohman 54ec4bfa5f Change the x86 assembly output to use tab characters to separate the
mnemonics from their operands instead of single spaces. This makes the
assembly output a little more consistent with various other compilers
(e.g. GCC), and slightly easier to read. Also, update the regression
tests accordingly.

llvm-svn: 40648
2007-07-31 20:11:57 +00:00
Evan Cheng 12c6be84ff Redo and generalize previously removed opt for pinsrw: (vextract (v4i32 bc (v4f32 s2v (f32 load ))), 0) -> (i32 load )
llvm-svn: 40628
2007-07-31 08:04:03 +00:00
Evan Cheng 94b5a80b93 Change instruction description to split OperandList into OutOperandList and
InOperandList. This gives one piece of important information: # of results
produced by an instruction.
An example of the change:
def ADD32rr  : I<0x01, MRMDestReg, (ops GR32:$dst, GR32:$src1, GR32:$src2),
                 "add{l} {$src2, $dst|$dst, $src2}",
                 [(set GR32:$dst, (add GR32:$src1, GR32:$src2))]>;
=>
def ADD32rr  : I<0x01, MRMDestReg, (outs GR32:$dst), (ins GR32:$src1, GR32:$src2),
                 "add{l} {$src2, $dst|$dst, $src2}",
                 [(set GR32:$dst, (add GR32:$src1, GR32:$src2))]>;

llvm-svn: 40033
2007-07-19 01:14:50 +00:00
Bill Wendling 8590f920c7 Support generation of GR64 to MMX code in the JIT.
llvm-svn: 37866
2007-07-04 01:29:22 +00:00
Bill Wendling 3053244b27 Allow a GR64 to be moved into an MMX register via the "movd" instruction.
Still need to have JIT generate this code.

llvm-svn: 37863
2007-07-04 00:19:54 +00:00
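The C-level counterpart is converting a 64-bit integer into an __m64; a minimal sketch (invented wrapper name):

#include <mmintrin.h>

/* movd/movq from a 64-bit general-purpose register into an MMX register. */
static __m64 int64_to_mmx(long long x)
{
    return _mm_cvtsi64_m64(x);   /* available only when targeting x86-64 */
}
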
Dan Gohman e8c1e428f2 Revert the earlier change that removed the M_REMATERIALIZABLE machine
instruction flag, and use the flag along with a virtual member function
hook for targets to override if there are instructions that are only
trivially rematerializable with specific operands (i.e. constant pool
loads).

llvm-svn: 37728
2007-06-26 00:48:07 +00:00
Dan Gohman 9e82064924 Replace M_REMATERIALIZIBLE and the newly-added isOtherReMaterializableLoad
with a general target hook to identify rematerializable instructions. Some
instructions are only rematerializable with specific operands, such as loads
from constant pools, while others are always rematerializable. This hook
allows both to be identified as being rematerializable with the same
mechanism.

llvm-svn: 37644
2007-06-19 01:48:05 +00:00
Chris Lattner 888653cdba implement the missing maskmovq mmx intrinsic that akor hit.
llvm-svn: 37100
2007-05-16 06:08:17 +00:00
Bill Wendling 5c7f25632e Add the final MMX instructions. Correct a few wrong patterns.
llvm-svn: 36405
2007-04-24 21:18:37 +00:00
Bill Wendling ac5b650a54 Adding more MMX instructions.
llvm-svn: 35638
2007-04-03 23:48:32 +00:00
Bill Wendling e7b2a864f2 Add FEMMS and ADDQ. Rename MMX recipes to prepend MMX_ to them.
llvm-svn: 35616
2007-04-03 06:00:37 +00:00
Bill Wendling ad2db4a4cf Unbreak mmx arithmetic. It was barfing trying to do v8i8 arithmetic.
llvm-svn: 35392
2007-03-28 00:57:11 +00:00
Bill Wendling 999c77f89c Add the "unpack low packed data" instructions. This should be the last of
the MMX instructions that are needed...

llvm-svn: 35389
2007-03-27 21:20:36 +00:00
Bill Wendling 6dff51ae65 Fix so that pandn is emitted instead of an xor/and combo. Add integer
comparison operators.

llvm-svn: 35385
2007-03-27 20:22:40 +00:00
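Illustrative uses of both changes via the standard MMX intrinsics (invented wrapper names): andnot should now select to a single pandn, and the comparisons map to pcmpgt/pcmpeq.

#include <mmintrin.h>

/* (~a) & b: one pandn instead of a pxor/pand pair. */
static __m64 andnot64(__m64 a, __m64 b)
{
    return _mm_andnot_si64(a, b);
}

/* a >= b element-wise on 16-bit lanes, built from pcmpgtw and pcmpeqw. */
static __m64 cmp_ge16(__m64 a, __m64 b)
{
    return _mm_or_si64(_mm_cmpgt_pi16(a, b), _mm_cmpeq_pi16(a, b));
}
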
Bill Wendling 98d2104c6f Add support for the v1i64 type. This makes better code for this:
#include <mmintrin.h>

extern __m64 C;

void baz(__v2si *A, __v2si *B)
{
  *A = C;
  _mm_empty();
}

We get this:

_baz:
        call "L1$pb"
"L1$pb":
        popl %eax
        movl L_C$non_lazy_ptr-"L1$pb"(%eax), %eax
        movq (%eax), %mm0
        movl 4(%esp), %eax
        movq %mm0, (%eax)
        emms
        ret

GCC gives us this:

_baz:
        pushl   %ebx
        call    L3
"L00000000001$pb":
L3:
        popl    %ebx
        subl    $8, %esp
        movl    L_C$non_lazy_ptr-"L00000000001$pb"(%ebx), %eax
        movl    (%eax), %edx
        movl    4(%eax), %ecx
        movl    16(%esp), %eax
        movl    %edx, (%eax)
        movl    %ecx, 4(%eax)
        emms
        addl    $8, %esp
        popl    %ebx
        ret

llvm-svn: 35351
2007-03-26 07:53:08 +00:00
Bill Wendling 871c77cda1 PR1260:
Add final support to get the QT example to compile.

llvm-svn: 35290
2007-03-23 22:35:46 +00:00
Bill Wendling 7c17fbc5b7 We generate a shufflevector instruction, so we don't need the builtin
intrinsic.

llvm-svn: 35269
2007-03-22 20:29:26 +00:00
Bill Wendling d551a18783 Added support for MMX shift and unpack instructions.
llvm-svn: 35266
2007-03-22 18:42:45 +00:00
Bill Wendling 144b8bbf17 And now support for MMX logical operations.
llvm-svn: 35125
2007-03-16 09:44:46 +00:00
Bill Wendling e31034125c Multiplication support for MMX.
llvm-svn: 35118
2007-03-15 21:24:36 +00:00
Bill Wendling e9b81f5366 Adding more arithmetic operators to MMX. This is an almost exact copy of
the addition. Please let me know if you have suggestions.

llvm-svn: 35055
2007-03-10 09:57:05 +00:00
Bill Wendling 6092ce25cf Added "padd*" support for MMX. Added MMX move stuff to X86InstrInfo so that
moves, loads, etc. are recognized.

llvm-svn: 35031
2007-03-08 22:09:11 +00:00
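The padd* family covers byte, word, and doubleword element widths; for example (standard intrinsic, invented wrapper name):

#include <mmintrin.h>

/* paddw: element-wise 16-bit addition; _mm_add_pi8 and _mm_add_pi32 are the
   byte and doubleword variants (paddb / paddd). */
static __m64 add_words(__m64 a, __m64 b)
{
    return _mm_add_pi16(a, b);
}
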
Bill Wendling 6d8211c0ac Remove useless pattern fragments.
llvm-svn: 35009
2007-03-07 18:23:09 +00:00
Bill Wendling 97905b4027 Properly support v8i8 and v4i16 types. It now converts them to v2i32 for
loads and stores.

llvm-svn: 35002
2007-03-07 05:43:18 +00:00
Bill Wendling bbd25984b7 Add LOAD/STORE support for MMX.
llvm-svn: 34978
2007-03-06 18:53:42 +00:00
Bill Wendling b1c86b49ea Add the emms intrinsic for MMX support.
llvm-svn: 34938
2007-03-05 23:09:45 +00:00
Evan Cheng 02d8836cd5 INC / DEC instructions have shorter code size than ADD32ri8, etc.
llvm-svn: 29194
2006-07-19 00:27:29 +00:00
Evan Cheng 9fee442e63 X86 integer register classes naming changes. Make them consistent with FP, vector classes.
llvm-svn: 28324
2006-05-16 07:21:53 +00:00
Evan Cheng c88afc36a9 SSE / SSE2 conversion intrinsics.
llvm-svn: 27637
2006-04-12 23:42:44 +00:00
Evan Cheng 09a956271a movnt* and maskmovdqu intrinsics
llvm-svn: 27587
2006-04-11 06:57:30 +00:00
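These map to the non-temporal store and masked-store intrinsics; a sketch using the standard names (the wrapper is invented):

#include <emmintrin.h>

/* movntdq: non-temporal (cache-bypassing) store of a whole XMM register;
   maskmovdqu: byte-masked store controlled by the high bit of each mask byte. */
static void stream_and_mask(__m128i v, __m128i mask, __m128i *dst, char *p)
{
    _mm_stream_si128(dst, v);
    _mm_maskmoveu_si128(v, mask, p);
}
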
Evan Cheng 1aaa7280cd Instruction encoding bug
llvm-svn: 27102
2006-03-25 06:00:03 +00:00
Evan Cheng 8e481df625 Added CVTTPS2PI.
llvm-svn: 27095
2006-03-25 01:31:59 +00:00
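cvttps2pi converts, with truncation, the low two floats of an XMM register into two 32-bit integers in an MMX register; in C (invented wrapper name):

#include <xmmintrin.h>

/* cvttps2pi: truncating convert of the two low floats to packed i32. */
static __m64 trunc_to_ints(__m128 v)
{
    return _mm_cvttps_pi32(v);
}
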
Evan Cheng baea59c61c Didn't mean to check this in. No MMX support yet.
llvm-svn: 26933
2006-03-21 23:04:23 +00:00
Evan Cheng d5e905d762 - Use movaps to store 128-bit vector integers.
- Each scalar to vector v8i16 and v16i8 is an any_extend followed by a movd.

llvm-svn: 26932
2006-03-21 23:01:21 +00:00
Evan Cheng 1208d9179a - Remove scalar to vector pseudo ops. They are just wrong.
- Handle FR32 to VR128:v4f32 and FR64 to VR128:v2f64 with aliases of MOVAPS
and MOVAPD. Mark them as move instructions and *hope* they will be deleted.

llvm-svn: 26919
2006-03-21 07:09:35 +00:00
Evan Cheng e4d1416239 x86 ISD::SCALAR_TO_VECTOR support.
llvm-svn: 26911
2006-03-21 00:33:35 +00:00
Evan Cheng e6448448c2 Move a few things around.
llvm-svn: 26893
2006-03-20 06:04:52 +00:00
Evan Cheng d58478161f One more round of reorg so sabre doesn't freak out. :-)
llvm-svn: 26303
2006-02-21 20:00:20 +00:00
Evan Cheng 6e595b9fd8 Split instruction info into multiple files, one for each of x87, MMX, and SSE.
llvm-svn: 26300
2006-02-21 19:13:53 +00:00