Commit Graph

210 Commits

Author SHA1 Message Date
Evan Cheng 483e1ce16e Add implicit def of EFLAGS on those instructions that may modify flags.
llvm-svn: 41962
2007-09-14 21:48:26 +00:00
Evan Cheng 3e18e504ae Remove (somewhat confusing) Imp<> helper, use let Defs = [], Uses = [] instead.
llvm-svn: 41863
2007-09-11 19:55:27 +00:00
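
A minimal sketch of the let-based form described in the two commits above, reusing the ADD32rr definition quoted under r40033 further down (illustrative only):

let Defs = [EFLAGS] in   // the instruction implicitly defines (clobbers) EFLAGS
def ADD32rr  : I<0x01, MRMDestReg, (outs GR32:$dst), (ins GR32:$src1, GR32:$src2),
                 "add{l} {$src2, $dst|$dst, $src2}",
                 [(set GR32:$dst, (add GR32:$src1, GR32:$src2))]>;
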
Dan Gohman a95cbb0007 Avoid storing and reloading zeros and other constants from stack slots
by flagging the associated instructions as being trivially rematerializable.

llvm-svn: 41775
2007-09-07 21:32:51 +00:00
Evan Cheng c2081fe573 Mark load instructions with isLoad = 1.
llvm-svn: 41595
2007-08-30 05:49:43 +00:00
Bill Wendling cdbd82ee37 64-bit SSSE3 ops that use MMX registers don't require 16-byte alignment.
Make a 'memop' pattern just for them.

llvm-svn: 41017
2007-08-11 09:52:53 +00:00
Bill Wendling 7014615087 For kicks, I thought it would be fun to use the correct opcode.
llvm-svn: 40985
2007-08-10 09:00:17 +00:00
Bill Wendling 2377206923 Adding SSSE3 intrinsics.
llvm-svn: 40982
2007-08-10 06:22:27 +00:00
Dan Gohman 8932bff7fe Fix the alignment requirements of several unpck and shuf instructions.
Generalize isPSHUFDMask and add a unary SHUFPD pattern so that SHUFPD's
memory operand alignment can be tested as well, with a fix to avoid
breaking MMX's use of isPSHUFDMask.

llvm-svn: 40756
2007-08-02 21:17:01 +00:00
Dan Gohman 4d436e2b7d Fix pastos in vector arithmetic intrinsics.
llvm-svn: 40754
2007-08-02 21:06:40 +00:00
Dan Gohman fa3eeeedc0 Mark the SSE and MMX load instructions that
X86InstrInfo::isReallyTriviallyReMaterializable knows how to handle
with the isReMaterializable flag so that it is given a chance to handle
them. Without hoisting constant-pool loads from loops this isn't very
visible, though it does keep CodeGen/X86/constant-pool-remat-0.ll from
making a copy of the constant pool on the stack.

llvm-svn: 40736
2007-08-02 14:27:55 +00:00
Evan Cheng da549ece5c Missing Requires.
llvm-svn: 40691
2007-08-01 21:42:24 +00:00
Dan Gohman 54ec4bfa5f Change the x86 assembly output to use tab characters to separate the
mnemonics from their operands instead of single spaces. This makes the
assembly output a little more consistent with various other compilers
(e.g. GCC), and slightly easier to read. Also, update the regression
tests accordingly.

llvm-svn: 40648
2007-07-31 20:11:57 +00:00
Evan Cheng 12c6be84ff Redo and generalize previously removed opt for pinsrw: (vextract (v4i32 bc (v4f32 s2v (f32 load ))), 0) -> (i32 load )
llvm-svn: 40628
2007-07-31 08:04:03 +00:00
Dan Gohman 4788552deb Re-apply 40504, but with a fix for the segfault it caused in oggenc:
Make the alignedload and alignedstore patterns always require 16-byte
alignment. This way when they are used in the "Fs" instructions, in which
a vector instruction is used for a scalar purpose, they can still require
the full vector alignment. And add a regression test for this.

llvm-svn: 40555
2007-07-27 17:16:43 +00:00
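
A rough sketch of what an alignment-checking load fragment like the alignedload mentioned above can look like (the predicate code and the v4f32 wrapper are illustrative; the in-tree definitions may differ):

def alignedload : PatFrag<(ops node:$ptr), (load node:$ptr), [{
  // Only match loads whose recorded alignment is at least 16 bytes.
  return cast<LoadSDNode>(N)->getAlignment() >= 16;
}]>;
def alignedloadv4f32 : PatFrag<(ops node:$ptr),
                               (v4f32 (alignedload node:$ptr))>;
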
Evan Cheng 931de40afa Reverting 40504 for now. It's breaking oggenc.
llvm-svn: 40547
2007-07-27 01:37:47 +00:00
Dan Gohman cecd4b3793 Fix a whitespace difference between CMPSSrr and CMPSDrr.
llvm-svn: 40528
2007-07-26 15:11:50 +00:00
Dan Gohman 8455bd3fae Remove X86ISD::LOAD_PACK and X86ISD::LOAD_UA and associated code from the
x86 target, replacing them with the new alignment attributes on memory
references.

llvm-svn: 40504
2007-07-26 00:31:09 +00:00
Evan Cheng 8fefeffb37 Because we promote SSE logical ops and loads to v2i64, we often end up generating
code that crosses integer / floating point domains (e.g. generating pxor / pand for
logical ops on floating point values, movdqa to load / store floating point SSE
values). Given that, it's better to use movaps instead of movdqa and movups
instead of movdqu. They have the same latency but the "aps" variants are one
byte shorter.
If the domain crossing problem is a real performance issue, then we will have to
fix it with dynamic programming based isel.

llvm-svn: 40076
2007-07-20 00:27:43 +00:00
Evan Cheng 7ca3555bfa Fix patterns so we isel the xorps, etc. for floating point logical SSE ops. The DAG combiner may fold away the (bit_convert (load)).
llvm-svn: 40070
2007-07-19 23:34:10 +00:00
Evan Cheng 94b5a80b93 Change instruction description to split OperandList into OutOperandList and
InOperandList. This gives one piece of important information: # of results
produced by an instruction.
An example of the change:
def ADD32rr  : I<0x01, MRMDestReg, (ops GR32:$dst, GR32:$src1, GR32:$src2),
                 "add{l} {$src2, $dst|$dst, $src2}",
                 [(set GR32:$dst, (add GR32:$src1, GR32:$src2))]>;
=>
def ADD32rr  : I<0x01, MRMDestReg, (outs GR32:$dst), (ins GR32:$src1, GR32:$src2),
                 "add{l} {$src2, $dst|$dst, $src2}",
                 [(set GR32:$dst, (add GR32:$src1, GR32:$src2))]>;

llvm-svn: 40033
2007-07-19 01:14:50 +00:00
Dan Gohman 776962a97a Implement initial memory alignment awareness for SSE instructions. Vector loads
and stores that have a specified alignment of less than 16 bytes now use
instructions that support misaligned memory references.

llvm-svn: 40015
2007-07-18 20:23:34 +00:00
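
To illustrate the idea (heavily simplified, written with the later outs/ins syntax from r40033; the actual instruction definitions may differ), the aligned and misaligned load forms can simply key on different load fragments:

def MOVAPSrm : PSI<0x28, MRMSrcMem, (outs VR128:$dst), (ins f128mem:$src),
                   "movaps {$src, $dst|$dst, $src}",   // requires 16-byte alignment
                   [(set VR128:$dst, (alignedloadv4f32 addr:$src))]>;
def MOVUPSrm : PSI<0x10, MRMSrcMem, (outs VR128:$dst), (ins f128mem:$src),
                   "movups {$src, $dst|$dst, $src}",   // tolerates misaligned references
                   [(set VR128:$dst, (loadv4f32 addr:$src))]>;
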
Dan Gohman 57111e7a60 Define non-intrinsic instructions for vector min, max, sqrt, rsqrt, and rcp,
in addition to the intrinsic forms. Add spill-folding entries for these new
instructions, and for the scalar min and max intrinsic instructions which
were missing. And add some preliminary ISelLowering code for using the new
non-intrinsic vector sqrt instruction, and fneg and fabs.

llvm-svn: 38478
2007-07-10 00:05:58 +00:00
Dale Johannesen a2b3c175db Fix for PR 1505 (and 1489). Rewrite X87 register
model to include f32 variants.  Some factoring
improvements forthcoming.

llvm-svn: 37847
2007-07-03 00:53:03 +00:00
Dan Gohman e8c1e428f2 Revert the earlier change that removed the M_REMATERIALIZABLE machine
instruction flag, and use the flag along with a virtual member function
hook for targets to override if there are instructions that are only
trivially rematerializable with specific operands (i.e. constant pool
loads).

llvm-svn: 37728
2007-06-26 00:48:07 +00:00
Dan Gohman 2e84e3f7b7 Make minor adjustments to whitespace and comments to reduce differences
between SSE1 instructions and their respective SSE2 analogues.

llvm-svn: 37718
2007-06-25 15:44:19 +00:00
Dan Gohman 33209bd6b8 Fix loadv2i32 to be loadv4i32, though it isn't actually used anywhere yet.
llvm-svn: 37717
2007-06-25 15:19:03 +00:00
Dan Gohman 9e82064924 Replace M_REMATERIALIZIBLE and the newly-added isOtherReMaterializableLoad
with a general target hook to identify rematerializable instructions. Some
instructions are only rematerializable with specific operands, such as loads
from constant pools, while others are always rematerializable. This hook
allows both to be identified as being rematerializable with the same
mechanism.

llvm-svn: 37644
2007-06-19 01:48:05 +00:00
Evan Cheng 632c3f01ed Added missing patterns for UNPCKH* and PUNPCKH*.
llvm-svn: 37172
2007-05-17 18:44:37 +00:00
Bill Wendling b5ce7c5466 Non-algorithmic change. Moved definitions around into separate sections
for SSE1, SSE2, SSE3, and SSSE3.

llvm-svn: 36656
2007-05-02 23:11:52 +00:00
Dan Gohman 29845cd40d Fix the spelling of the prefetchnta instruction.
llvm-svn: 36256
2007-04-18 14:09:14 +00:00
Bill Wendling f099841573 Add support for our first SSSE3 instruction "pmulhrsw".
llvm-svn: 35869
2007-04-10 22:10:25 +00:00
Evan Cheng 61eee86487 Mark re-materializable instructions.
llvm-svn: 35230
2007-03-21 00:16:56 +00:00
Chris Lattner d647f92a14 add missing braces
llvm-svn: 34905
2007-03-04 06:13:52 +00:00
Evan Cheng b68a774fd9 How the heck did I forget patterns for llvm.x86.sse2.cmp.sd?
llvm-svn: 34434
2007-02-20 00:39:09 +00:00
Evan Cheng 82241c86e9 - Fix FCOPYSIGN custom lowering bug: clear the sign bit of operand 0 first before
or'ing in the sign bit of operand 1.
- Tweak: rather than left-shift the sign bit, fp_extend operand 1 first
  before taking its sign bit if its type is smaller than that of operand 0.

llvm-svn: 32932
2007-01-05 21:37:56 +00:00
Evan Cheng 4363e884c0 With SSE2, expand FCOPYSIGN to a series of SSE bitwise operations.
llvm-svn: 32900
2007-01-05 07:55:56 +00:00
Evan Cheng fccea9b2a9 - Rename MOVDSS2DIrr to MOVSS2DIrr for consistency's sake.
- Add MOVDI2SSrm and MOVSS2DImr to fold load / store for i32 <-> f32 bit_convert
  patterns.

llvm-svn: 32582
2006-12-14 19:43:11 +00:00
Chris Lattner c20b7e878a If we have ScalarSSE, we can select bitconvert into single instructions.
This compiles bitcast.ll:test3/test4 into:

_test3:
        movd %xmm0, %eax
        ret
_test4:
        movd %edi, %xmm0
        ret

llvm-svn: 32230
2006-12-05 18:45:06 +00:00
Evan Cheng 572dc9cb4e Correct instructions for moving data between GR64 and SSE registers; also correct load i64 / store i64 from v2i64.
llvm-svn: 31795
2006-11-16 23:33:25 +00:00
Evan Cheng 49683ba236 Don't dag combine floating point select to max and min intrinsics. Those
take v4f32 / v2f64 operands and may end up causing larger spills / restores.
Added X86 specific nodes X86ISD::FMAX, X86ISD::FMIN instead.

This fixes PR996.

llvm-svn: 31645
2006-11-10 21:43:37 +00:00
Evan Cheng 922e191116 Fixed a bug which causes x86 to incorrectly match
shuffle v, undef, <2, ?, 3, ?>
to movhlps.
It should match to unpckhps instead.

Added proper matching code for
shuffle v, undef, <2, 3, 2, 3>

llvm-svn: 31519
2006-11-07 22:14:24 +00:00
Chris Lattner 9ac6442db6 remove dead/redundant vars
llvm-svn: 31435
2006-11-03 23:48:56 +00:00
Evan Cheng 94e5bc9e83 Fix ldmxcsr JIT encoding.
llvm-svn: 31343
2006-11-01 06:53:52 +00:00
Evan Cheng e056dd5928 Fixed a significant bug where unpcklpd is incorrectly used to extract element 1 from a v2f64 value.
llvm-svn: 31228
2006-10-27 21:08:32 +00:00
Evan Cheng 7e065ff7e4 X86ISD::PEXTRW 3rd operand type is always target pointer type.
llvm-svn: 31185
2006-10-25 21:35:05 +00:00
Evan Cheng 4090dc4703 ComplexPatterns sse_load_f32 and sse_load_f64 return in / out chain operands.
llvm-svn: 30892
2006-10-11 21:06:01 +00:00
Evan Cheng 57ccb6d372 Don't go too crazy with these AddComplexity values. Try matching shufps with load
folding first.

llvm-svn: 30848
2006-10-09 21:42:15 +00:00
Evan Cheng e71fe34d75 Reflects ISD::LOAD / ISD::LOADX / LoadSDNode changes.
llvm-svn: 30844
2006-10-09 20:57:25 +00:00
Chris Lattner 398195ebbe completely disable folding of loads into scalar SSE instructions and provide
a framework for doing it right.  This fixes
CodeGen/X86/2006-10-07-ScalarSSEMiscompile.ll.

Once X86DAGToDAGISel::SelectScalarSSELoad is implemented right, this task
will be done.

llvm-svn: 30817
2006-10-07 21:55:32 +00:00
Chris Lattner 942009fee5 convert packed FP add/sub/mul/div to use a multiclass.
llvm-svn: 30815
2006-10-07 21:17:13 +00:00