Commit Graph

2005 Commits

Author SHA1 Message Date
Jeff Cohen 06041abeb6 Make external globals public; other minor cleanup.
llvm-svn: 28096
2006-05-04 16:20:22 +00:00
Jeff Cohen f812a4fa75 Make Intel syntax the default when LLVM is built with VC++.
llvm-svn: 28095
2006-05-04 16:19:27 +00:00
Chris Lattner ee64b6b40f Remove a bunch more dead V9 specific stuff
llvm-svn: 28094
2006-05-04 01:26:39 +00:00
Chris Lattner 940cc978ef Remove a bunch more SparcV9 specific stuff
llvm-svn: 28093
2006-05-04 01:15:02 +00:00
Chris Lattner 6e663f1c1e Remove some more V9-specific stuff.
llvm-svn: 28092
2006-05-04 00:49:59 +00:00
Chris Lattner 9f6639b64d Remove some more unused stuff from MachineInstr that was leftover from V9.
llvm-svn: 28091
2006-05-04 00:44:25 +00:00
Chris Lattner 2aef59f123 Simplify handling of relocations
llvm-svn: 28090
2006-05-04 00:42:08 +00:00
Evan Cheng 8b1cde2bbe Use movsd to shuffle in the lowest two elements of a v4f32 / v4i32 vector when
movlps cannot be used (e.g. when load from m64 has multiple uses).

llvm-svn: 28089
2006-05-03 20:32:03 +00:00
Chris Lattner e3a9c70ba0 Change from using MachineRelocation ctors to using static methods
in MachineRelocation to create Relocations.

llvm-svn: 28088
2006-05-03 20:30:20 +00:00
Chris Lattner 9e68942d78 inline a simple method
llvm-svn: 28083
2006-05-03 17:21:32 +00:00
Chris Lattner 1d8ee1fc80 Suck block address tracking out of targets into the JIT Emitter. This
simplifies the MachineCodeEmitter interface just a little bit and makes
BasicBlocks work like constant pools and jump tables.

llvm-svn: 28082
2006-05-03 17:10:41 +00:00
Nate Begeman 43b1ed7e3d Teach the x86 jit how to handle jump tables not directly used by a jump
instruction.

llvm-svn: 28080
2006-05-03 04:52:47 +00:00
Owen Anderson 20a631fde7 Refactor TargetMachine, pushing handling of TargetData into the target-specific subclasses. This has one caller-visible change: getTargetData() now returns a pointer instead of a reference.
This fixes PR 759.

llvm-svn: 28074
2006-05-03 01:29:57 +00:00
Chris Lattner d8b192ba3b Change the BasicBlockAddrs map to be a vector, indexed by MBB number.
llvm-svn: 28069
2006-05-03 00:32:55 +00:00
Chris Lattner b8065a9a3a Several related changes:
1. Change several methods in the MachineCodeEmitter class to be pure virtual.
2. Suck emitConstantPool/initJumpTableInfo into startFunction, removing them
   from the MachineCodeEmitter interface, and reducing the amount of target-
   specific code.
3. Change the JITEmitter so that it allocates constant pools and jump tables
   *right* next to the functions that they belong to, instead of in a separate
   pool of memory.  This makes all memory for a function be contiguous, and
   means the JITEmitter only tracks one block of memory now.

llvm-svn: 28065
2006-05-02 23:22:24 +00:00
Nate Begeman 233391f5f5 Remove some stuff from the README
llvm-svn: 28063
2006-05-02 22:43:31 +00:00
Chris Lattner e1c96369e2 Fix a purely hypothetical problem (for now): emitWord emits in the host
byte format.  This doesn't work when using the code emitter in a cross target
environment.  Since the code emitter is only really used by the JIT, this
isn't a current problem, but if we ever start emitting .o files, it would be.

llvm-svn: 28060
2006-05-02 19:14:47 +00:00
Chris Lattner c9aa3715e8 Refactor the machine code emitter interface to pull the pointers for the current
code emission location into the base class, instead of being in the derived classes.

This change means that low-level methods like emitByte/emitWord are no longer
virtual (yaay for speed), and we now have a framework to support growable code
segments.  This implements feature request #1 of PR469.

llvm-svn: 28059
2006-05-02 18:27:26 +00:00
Nate Begeman 287dc5be0d Hooray, everyone now uses the same printBasicBlockLabel implementation
llvm-svn: 28056
2006-05-02 17:34:51 +00:00
Chris Lattner 5bc9c583e3 There is no reason to use a virtual method to store this word.
llvm-svn: 28053
2006-05-02 17:16:20 +00:00
Nate Begeman b9d4f8324d Extend printBasicBlockLabel a bit so that it can be used to print all
basic block labels, consolidating the code to do so in one place for each
target.

llvm-svn: 28050
2006-05-02 05:37:32 +00:00
Jeff Cohen 470f431f44 De-virtualize SwitchSection.
llvm-svn: 28047
2006-05-02 03:58:45 +00:00
Jeff Cohen f34ddb1e0d De-virtualize EmitZeroes.
llvm-svn: 28046
2006-05-02 03:46:13 +00:00
Jeff Cohen bfe9ffb449 Finish support for Microsoft ML/MASM. May still be a few rough edges.
llvm-svn: 28045
2006-05-02 03:11:50 +00:00
Jeff Cohen 24a62a9bc1 Make Intel syntax mode friendlier to Microsoft ML assembler (still needs more work).
llvm-svn: 28044
2006-05-02 01:16:28 +00:00
Chris Lattner 563f0417d2 Remove %'s from register names when in intel mode.
llvm-svn: 28027
2006-05-01 05:53:50 +00:00
Jeff Cohen 71c2e0f262 Mingw32 patches supplied by Anton Korobeynikov.
llvm-svn: 28023
2006-04-29 18:41:44 +00:00
Evan Cheng d369603df9 I can't spell: Register, not Regsiter.
llvm-svn: 28021
2006-04-28 23:19:39 +00:00
Evan Cheng b244b80172 Implemented x86 inline asm b, h, w, k modifiers.
llvm-svn: 28020
2006-04-28 23:11:40 +00:00
Evan Cheng 88decded82 Initial caller side support (for CCC only, not FastCC) of 128-bit vector
passing by value.

llvm-svn: 28015
2006-04-28 21:29:37 +00:00
Evan Cheng 68a44dc445 Bare-bone X86 inline asm printer support.
llvm-svn: 28014
2006-04-28 21:19:05 +00:00
Evan Cheng 3cd4362ade Implement four-wide shuffle with 2 shufps if no more than two elements come
from each vector. e.g.
        shuffle(G1, G2, 7, 1, 5, 2)
==>
        movaps _G2, %xmm0
        shufps $151, _G1, %xmm0
        shufps $216, %xmm0, %xmm0

llvm-svn: 28011
2006-04-28 07:03:38 +00:00
Evan Cheng d43c5c6046 TargetLowering::LowerArguments should return a VBIT_CONVERT of
FORMAL_ARGUMENTS SDOperand in the return result vector.

llvm-svn: 28009
2006-04-28 05:25:15 +00:00
Evan Cheng f0157cb0bc Use movaps instead of movapd for spill / restore.
llvm-svn: 28005
2006-04-28 02:23:35 +00:00
Chris Lattner b209131b56 Add a note
llvm-svn: 27998
2006-04-27 21:40:57 +00:00
Evan Cheng f4f3f0d25f Make x86 isel lowering produce tailcall nodes. They are matched to normal calls
for now.

Patch contributed by Alexander Friedman.

llvm-svn: 27994
2006-04-27 08:40:39 +00:00
Evan Cheng ec04a37edd A couple of new entries.
llvm-svn: 27993
2006-04-27 08:31:33 +00:00
Evan Cheng 89001ad729 Support for passing 128-bit vector arguments via XMM registers.
llvm-svn: 27992
2006-04-27 08:31:10 +00:00
Evan Cheng a0374e1bed Oops
llvm-svn: 27989
2006-04-27 05:44:50 +00:00
Evan Cheng 24eb3f4765 Bug fix: not updating NumIntRegs.
llvm-svn: 27988
2006-04-27 05:35:28 +00:00
Evan Cheng 48940d16b2 - Clean up formal argument lowering code. Prepare for vector pass by value work.
- Fixed vararg support.

llvm-svn: 27985
2006-04-27 01:32:22 +00:00
Evan Cheng 1c39903297 Fix fastcc failures.
llvm-svn: 27980
2006-04-26 18:21:31 +00:00
Evan Cheng e0bcfbe811 Switching over FORMAL_ARGUMENTS mechanism to lower call arguments.
llvm-svn: 27975
2006-04-26 01:20:17 +00:00
Nate Begeman 4530327c04 Keep the stack frame on darwin 16-byte aligned. This fixes many JIT
failures.

llvm-svn: 27973
2006-04-25 20:54:26 +00:00
Evan Cheng a9467aab0a Separate LowerOperation() into multiple functions, one per opcode.
llvm-svn: 27972
2006-04-25 20:13:52 +00:00
Evan Cheng 4cc3e0b05f Fix a typo.
llvm-svn: 27968
2006-04-25 17:48:41 +00:00
Evan Cheng fb46b2bf5d Explicitly specify result type for def : Pat<> patterns (if it produces a vector
result). Otherwise tblgen will pick the default (v16i8 for 128-bit vector).

llvm-svn: 27965
2006-04-25 00:50:01 +00:00
Evan Cheng 25b09295f8 Added X86 SSE2 intrinsics which can be represented as vector_shuffles. This is
a temporary workaround for the 2-wide vector_shuffle problem (i.e. its mask
would have type v2i32 which is not legal).

llvm-svn: 27964
2006-04-24 23:34:56 +00:00
Evan Cheng d03631ee76 Add a new entry.
llvm-svn: 27963
2006-04-24 23:30:10 +00:00
Evan Cheng 5c2bfb069e Special case handling two wide build_vector(0, x).
llvm-svn: 27961
2006-04-24 22:58:52 +00:00
Evan Cheng 63bd4d3730 Some missing movlps, movhps, movlpd, and movhpd patterns.
llvm-svn: 27960
2006-04-24 21:58:20 +00:00
Evan Cheng b0461080e4 A little bit more build_vector enhancement for v8i16 cases.
llvm-svn: 27959
2006-04-24 18:01:45 +00:00
Evan Cheng 2f9b0bcbd5 Remove a completed entry.
llvm-svn: 27958
2006-04-24 17:38:16 +00:00
Evan Cheng ab0ee6340c MakeMIInst() should handle jump table index operands.
llvm-svn: 27955
2006-04-24 05:37:35 +00:00
Chris Lattner f110527a29 Add a note
llvm-svn: 27954
2006-04-23 19:47:09 +00:00
Evan Cheng b4f31dd1a8 MOVL shuffle (i.e. movd or movss / movsd from memory) of undef, V2 == V2
llvm-svn: 27953
2006-04-23 06:35:19 +00:00
Nate Begeman 9f0b13c885 Optimized stores to the constant pool, while cool, are unnecessary.
llvm-svn: 27948
2006-04-22 22:31:45 +00:00
Nate Begeman 4ca2ea5b43 JumpTable support! What this represents is working asm and jit support for
x86 and ppc for 100% dense switch statements when relocations are non-PIC.
This support will be extended and enhanced in the coming days to support
PIC, and less dense forms of jump tables.

llvm-svn: 27947
2006-04-22 18:53:45 +00:00
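
For illustration only, a minimal C sketch (not part of the commit) of the kind of 100% dense switch that this support can now lower to a jump table when relocations are non-PIC:

/* Cases 0..4 with no gaps are fully dense, so the backend can emit an
   indirect jump through a table instead of a compare-and-branch chain. */
int classify(int x) {
  switch (x) {
    case 0: return 10;
    case 1: return 20;
    case 2: return 30;
    case 3: return 40;
    case 4: return 50;
    default: return -1;
  }
}
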
Evan Cheng e728efdfce Don't do all the lowering stuff for 2-wide build_vector's. Also, minor optimization for shuffle of undef.
llvm-svn: 27946
2006-04-22 08:34:05 +00:00
Evan Cheng 16ef94f4e8 Fix a performance regression. Use {p}shuf* when there are only two distinct elements in a build_vector.
llvm-svn: 27945
2006-04-22 06:21:46 +00:00
Evan Cheng 14215c36b6 Revamp build_vector lowering to take advantage of movss and movd instructions.
movd always clears the top 96 bits and movss does so when it's loading the
value from memory.
The net result is that codegen for 4-wide shuffles is much improved. It is near
optimal if one or more elements are zero, e.g.

__m128i test(int a, int b) {
  return _mm_set_epi32(0, 0, b, a);
}

compiles to

_test:
	movd 8(%esp), %xmm1
	movd 4(%esp), %xmm0
	punpckldq %xmm1, %xmm0
	ret

compare to gcc:

_test:
	subl	$12, %esp
	movd	20(%esp), %xmm0
	movd	16(%esp), %xmm1
	punpckldq	%xmm0, %xmm1
	movq	%xmm1, %xmm0
	movhps	LC0, %xmm0
	addl	$12, %esp
	ret

or icc:

_test:
        movd      4(%esp), %xmm0                                #5.10
        movd      8(%esp), %xmm3                                #5.10
        xorl      %eax, %eax                                    #5.10
        movd      %eax, %xmm1                                   #5.10
        punpckldq %xmm1, %xmm0                                  #5.10
        movd      %eax, %xmm2                                   #5.10
        punpckldq %xmm2, %xmm3                                  #5.10
        punpckldq %xmm3, %xmm0                                  #5.10
        ret                                                     #5.10

There is still room for improvement; for example, the FP variant of the above example:

__m128 test(float a, float b) {
  return _mm_set_ps(0.0, 0.0, b, a);
}

_test:
	movss 8(%esp), %xmm1
	movss 4(%esp), %xmm0
	unpcklps %xmm1, %xmm0
	xorps %xmm1, %xmm1
	movlhps %xmm1, %xmm0
	ret

The xorps and movlhps are unnecessary. This will require post legalizer optimization to handle.

llvm-svn: 27939
2006-04-21 23:03:30 +00:00
Chris Lattner 3e62d4b289 fix thinko
llvm-svn: 27935
2006-04-21 21:05:22 +00:00
Chris Lattner e1f9ab7d53 add some low-prio notes
llvm-svn: 27934
2006-04-21 21:03:21 +00:00
Evan Cheng e8b5180044 Now generating perfect (I think) code for "vector set" with a single non-zero
scalar value.

e.g.
        _mm_set_epi32(0, a, 0, 0);
==>
	movd 4(%esp), %xmm0
	pshufd $69, %xmm0, %xmm0

        _mm_set_epi8(0, 0, 0, 0, 0, a, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0);
==>
	movzbw 4(%esp), %ax
	movzwl %ax, %eax
	pxor %xmm0, %xmm0
	pinsrw $5, %eax, %xmm0

llvm-svn: 27923
2006-04-21 01:05:10 +00:00
Evan Cheng 60f0b8998e - Added support to turn "vector clear elements", e.g. pand V, <-1, -1, 0, -1>
into a vector shuffle.
- VECTOR_SHUFFLE lowering change in preparation for more efficient codegen
of vector shuffle with zero (or any splat) vector.

llvm-svn: 27875
2006-04-20 08:58:49 +00:00
Evan Cheng 15c264b753 Handle v2i64 BUILD_VECTOR custom lowering correctly. v2i64 is a legal type,
but i64 is not. If possible, change an i64 op to an f64 (e.g. load, constant)
and then cast it back.

llvm-svn: 27849
2006-04-20 00:11:39 +00:00
Evan Cheng 4a1b0d3292 isSplatMask() bug: first element can be an undef.
llvm-svn: 27847
2006-04-19 23:28:59 +00:00
Evan Cheng a3caaee503 - Added support to do arbitrary 4-wide shuffles with no more than three
instructions.
- Fixed a commute vector_shuffle bug.

llvm-svn: 27845
2006-04-19 22:48:17 +00:00
Evan Cheng 6d5297dac3 Prefer {p}unpack* and mov*dup over {p}shuf* as well.
llvm-svn: 27844
2006-04-19 21:15:24 +00:00
Evan Cheng b416a25174 - Renamed AddedCost to AddedComplexity.
- Added more movhlps and movlhps patterns.

llvm-svn: 27842
2006-04-19 20:37:34 +00:00
Evan Cheng 7855e4d032 Commute vector_shuffle to match more movlhps, movlp{s|d} cases.
llvm-svn: 27840
2006-04-19 20:35:22 +00:00
Evan Cheng cc7abc6c38 More mov{h|l}p{d|s} patterns.
llvm-svn: 27836
2006-04-19 18:20:17 +00:00
Evan Cheng aeb09ccdd3 - More mov{h|l}ps patterns.
- Increase cost (complexity) of patterns which match mov{h|l}ps ops. These
  are preferred over shufps in most cases.

llvm-svn: 27835
2006-04-19 18:11:52 +00:00
Chris Lattner bfab82817a Add a note.
llvm-svn: 27827
2006-04-19 05:53:27 +00:00
Evan Cheng 3823aa1d0f - PEXTRW cannot take a memory location as its first source operand.
- PINSRWrmi encoding bug.

llvm-svn: 27818
2006-04-18 21:59:43 +00:00
Evan Cheng 43f4ef4ffb SHUFP{S|D}, PSHUF* encoding bugs. Left out the mask immediate operand.
llvm-svn: 27817
2006-04-18 21:56:36 +00:00
Evan Cheng a179ea631d Name change for clarity's sake
llvm-svn: 27816
2006-04-18 21:55:35 +00:00
Evan Cheng 09e36ef710 Encoding bug: CMPPSrmi, CMPPDrmi dropped operand 2 (condition immediate).
llvm-svn: 27815
2006-04-18 21:31:08 +00:00
Evan Cheng d799d680f4 Name change for clarity's sake
llvm-svn: 27814
2006-04-18 21:29:50 +00:00
Evan Cheng 0ee281f37c Left a pattern out
llvm-svn: 27813
2006-04-18 21:29:08 +00:00
Evan Cheng e2d25a1a50 Fixed an encoding bug: movd from XMM to R32.
llvm-svn: 27807
2006-04-18 18:19:00 +00:00
Chris Lattner bfc2c68386 Teach the codegen about instructions used for SSE spill code, allowing it
to optimize cases where it has to spill a lot

llvm-svn: 27801
2006-04-18 16:44:51 +00:00
Evan Cheng 4d36a36900 Correct comments
llvm-svn: 27790
2006-04-18 03:45:01 +00:00
Evan Cheng 0ef233509b Another entry
llvm-svn: 27786
2006-04-18 01:22:57 +00:00
Evan Cheng e008bd3d27 Another entry.
llvm-svn: 27784
2006-04-18 00:21:01 +00:00
Evan Cheng 5421206c4b Use movss to insert_vector_elt(v, s, 0).
llvm-svn: 27782
2006-04-17 22:45:49 +00:00
Evan Cheng 6e5e205841 Use two pinsrw to insert an element into v4i32 / v4f32 vector.
llvm-svn: 27779
2006-04-17 22:04:06 +00:00
Evan Cheng 22c06f054b Encoding bug
llvm-svn: 27773
2006-04-17 21:33:57 +00:00
Evan Cheng 5022b3426e Implement v8i16, v16i8 splat using unpckl + pshufd.
llvm-svn: 27768
2006-04-17 20:43:08 +00:00
Chris Lattner c070c621ac implement returns of a vector, testcase here: CodeGen/X86/vec_return.ll
llvm-svn: 27767
2006-04-17 20:32:50 +00:00
Evan Cheng bf0d13c54f Incorrect foldMemoryOperand entries
llvm-svn: 27763
2006-04-17 18:06:12 +00:00
Evan Cheng 5112b5c544 Errors in patterns preventing load folding
llvm-svn: 27762
2006-04-17 18:05:01 +00:00
Evan Cheng b3b41c4f3d FP SETOLT, SETOLT, SETUGE, SETUGT conditions were implemented incorrectly
llvm-svn: 27755
2006-04-17 07:24:10 +00:00
Evan Cheng 20712deecb movduprm, movshduprm bugs
llvm-svn: 27734
2006-04-16 18:11:28 +00:00
Evan Cheng 3064f9aaa6 Encoding bugs
llvm-svn: 27733
2006-04-16 07:02:22 +00:00
Evan Cheng 685ddd8152 Can't fold loads into alias vector SSE ops used for scalar operation. The load
address has to be 16-byte aligned but the values aren't spilled to 128-bit
locations.

llvm-svn: 27732
2006-04-16 06:58:19 +00:00
Evan Cheng 8f1d801389 More encoding bugs
llvm-svn: 27722
2006-04-15 06:10:09 +00:00
Evan Cheng 91944e8699 pslldrm, psrawrm, etc. encoding bug
llvm-svn: 27721
2006-04-15 05:59:08 +00:00
Evan Cheng 1220b31a31 hsubp{s|d} encoding bug
llvm-svn: 27720
2006-04-15 05:52:42 +00:00
Evan Cheng 6222cf2a36 Silly bug
llvm-svn: 27719
2006-04-15 05:37:34 +00:00
Evan Cheng 65bb720a8b Do not use movs{h|l}dup for a shuffle with a single non-undef node.
llvm-svn: 27718
2006-04-15 03:13:24 +00:00
Evan Cheng 0ba896c75b Added SSE (and other) entries to foldMemoryOperand().
llvm-svn: 27716
2006-04-14 23:33:27 +00:00
Evan Cheng 00a5b3d9d3 Some clean up
llvm-svn: 27715
2006-04-14 23:32:40 +00:00
Evan Cheng 5d247f81c1 Last few SSE3 intrinsics.
llvm-svn: 27711
2006-04-14 21:59:03 +00:00
Evan Cheng 3bd605397b Misc. SSE2 intrinsics: clflush, lfence, mfence
llvm-svn: 27699
2006-04-14 07:43:12 +00:00
Evan Cheng e349d01acf We were not adjusting the frame size to ensure proper alignment when alloca /
vla are present in the function. This causes a crash when a leaf function
allocates space on the stack used to store / load with 128-bit SSE
instructions.

llvm-svn: 27698
2006-04-14 07:26:43 +00:00
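
A hedged reproducer sketch (an assumption, not code from the commit): a leaf function that mixes a VLA with a 16-byte-aligned SSE store, the combination described above that crashed before the frame size was rounded up for alignment.

#include <xmmintrin.h>

float sum4(int n) {
  float buf[4] __attribute__((aligned(16)));  /* needs a 16-byte-aligned slot */
  float vla[n];                               /* dynamic (alloca-style) stack space */
  _mm_store_ps(buf, _mm_set_ps1(1.0f));       /* movaps faults if buf ends up misaligned */
  vla[0] = buf[0];
  return buf[0] + buf[1] + buf[2] + buf[3] + vla[0];
}
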
Evan Cheng 8d76f3922b New entry
llvm-svn: 27697
2006-04-14 07:24:04 +00:00
Evan Cheng eb0063a34f pcmpeq* and pcmpgt* intrinsics.
llvm-svn: 27685
2006-04-14 01:39:53 +00:00
Evan Cheng 16287444ff psll*, psrl*, and psra* intrinsics.
llvm-svn: 27684
2006-04-14 00:14:05 +00:00
Evan Cheng a84319719c Doh. PANDrm, etc. are not commutable.
llvm-svn: 27668
2006-04-13 18:11:28 +00:00
Reid Spencer 9857229aba Add the README files to the distribution.
llvm-svn: 27651
2006-04-13 06:39:24 +00:00
Evan Cheng ed3996743f psad, pmax, pmin intrinsics.
llvm-svn: 27647
2006-04-13 06:11:45 +00:00
Evan Cheng 58dad55959 Various SSE2 packed integer intrinsics: pmulhuw, pavgw, etc.
llvm-svn: 27645
2006-04-13 05:24:54 +00:00
Evan Cheng e4f97ccf7f X86 SSE2 supports v8i16 multiplication
llvm-svn: 27644
2006-04-13 05:10:25 +00:00
Evan Cheng d2eb662415 Update
llvm-svn: 27643
2006-04-13 05:09:45 +00:00
Evan Cheng b3fe00bdc6 padds{b|w}, paddus{b|w}, psubs{b|w}, psubus{b|w} intrinsics.
llvm-svn: 27639
2006-04-13 00:43:35 +00:00
Evan Cheng 0aab735a1a Naming inconsistency.
llvm-svn: 27638
2006-04-13 00:00:23 +00:00
Evan Cheng c88afc36a9 SSE / SSE2 conversion intrinsics.
llvm-svn: 27637
2006-04-12 23:42:44 +00:00
Evan Cheng 92232307d0 All "integer" logical ops (pand, por, pxor) are now promoted to v2i64.
Clean up and fix various logical ops issues.

llvm-svn: 27633
2006-04-12 21:21:57 +00:00
Evan Cheng e2157c6e41 Promote v4i32, v8i16, v16i8 load to v2i64 load.
llvm-svn: 27612
2006-04-12 17:12:36 +00:00
Evan Cheng 29be057d92 Various SSE2 conversion intrinsics
llvm-svn: 27603
2006-04-12 05:20:24 +00:00
Evan Cheng 70c74a3ced Added __builtin_ia32_storelv4si, __builtin_ia32_movqv4si,
__builtin_ia32_loadlv4si, __builtin_ia32_loaddqu, __builtin_ia32_storedqu.

llvm-svn: 27599
2006-04-11 22:28:25 +00:00
Evan Cheng 6b60357f4a gcc lowers SSE prefetch into the generic prefetch intrinsic. Need to add support
later.

llvm-svn: 27591
2006-04-11 18:04:57 +00:00
Evan Cheng 6ea715af28 Misc. intrinsics.
llvm-svn: 27590
2006-04-11 17:35:57 +00:00
Evan Cheng 09a956271a movnt* and maskmovdqu intrinsics
llvm-svn: 27587
2006-04-11 06:57:30 +00:00
Evan Cheng 12ba3e23d0 Added support for _mm_move_ss and _mm_move_sd.
llvm-svn: 27575
2006-04-11 00:19:04 +00:00
Evan Cheng f8ac02283c Remove some bogus patterns; clean up.
llvm-svn: 27569
2006-04-10 22:35:16 +00:00
Chris Lattner d99f57c1e1 add a note
llvm-svn: 27567
2006-04-10 21:51:03 +00:00
Evan Cheng 051de9a82b Remove an entry that is now done.
llvm-svn: 27565
2006-04-10 21:42:57 +00:00
Evan Cheng 76112c3cb8 Added some missing shuffle patterns.
llvm-svn: 27564
2006-04-10 21:42:19 +00:00
Evan Cheng 664fcba5fa Correct an entry
llvm-svn: 27563
2006-04-10 21:41:39 +00:00
Evan Cheng 395fa3d2a6 movups / movupd
llvm-svn: 27562
2006-04-10 21:11:06 +00:00
Evan Cheng 617a6a812e Conditional move of vector types.
llvm-svn: 27556
2006-04-10 07:23:14 +00:00
Evan Cheng 014849e121 New entries
llvm-svn: 27555
2006-04-10 07:22:03 +00:00
Evan Cheng c9ed8e4c1a Use movaps to do VR128 reg-to-reg copies for now. It's shorter and available for SSE1.
llvm-svn: 27554
2006-04-10 07:21:31 +00:00
Nate Begeman 3f9c17906f Disable switch lowering for targets based on the selection dag isel,
letting the code generator handle them directly.

llvm-svn: 27539
2006-04-08 19:46:55 +00:00
Evan Cheng 0df9c9f57d ldmxcsr and stmxcsr.
llvm-svn: 27506
2006-04-08 00:47:44 +00:00
Evan Cheng ac847268c5 Code clean up.
llvm-svn: 27501
2006-04-07 21:53:05 +00:00
Evan Cheng aa18a52545 Added patterns for MOVHPSmr and MOVLPSmr.
llvm-svn: 27497
2006-04-07 21:20:58 +00:00
Evan Cheng 748e573ce5 Keep track of a Mac OS X / x86 ABI bug.
llvm-svn: 27496
2006-04-07 21:19:53 +00:00
Jim Laskey c0d6518f27 Make sure that debug labels are defined within the same section and after the
entry point of a function.

llvm-svn: 27494
2006-04-07 20:44:42 +00:00
Jim Laskey 2d7298c362 Foundation for call frame information.
llvm-svn: 27491
2006-04-07 16:34:46 +00:00
Evan Cheng d8e1a01be6 A MOVPS2SSmr, i.e. _mm_store_ss, encoding bug.
Also MOVPDI2DIrr.

llvm-svn: 27476
2006-04-06 23:53:29 +00:00
Evan Cheng c995b45f67 - movlp{s|d} and movhp{s|d} support.
- Normalize shuffle nodes so result vector lower half elements come from the
  first vector, the rest come from the second vector. (Except for the
  exceptions :-).
- Other minor fixes.

llvm-svn: 27474
2006-04-06 23:23:56 +00:00
Evan Cheng acf8b3c828 New entries.
llvm-svn: 27473
2006-04-06 23:21:24 +00:00
Evan Cheng 695e45c252 POR encoded as PAND, yikes.
llvm-svn: 27446
2006-04-06 01:49:20 +00:00
Evan Cheng dddb688a40 An entry about comi / ucomi intrinsics.
llvm-svn: 27445
2006-04-05 23:46:04 +00:00
Evan Cheng 780382946e Support for comi / ucomi intrinsics.
llvm-svn: 27444
2006-04-05 23:38:46 +00:00
Evan Cheng f3b52c84ea Handle canonical form of e.g.
vector_shuffle v1, v1, <0, 4, 1, 5, 2, 6, 3, 7>

This is turned into
vector_shuffle v1, <undef>, <0, 0, 1, 1, 2, 2, 3, 3>
by dag combiner.

It would match a {p}unpckl on x86.

llvm-svn: 27437
2006-04-05 07:20:06 +00:00
Evan Cheng 6d196db40d Bogus assert
llvm-svn: 27434
2006-04-05 06:11:20 +00:00
Evan Cheng 2cf4232ced Fall through to expand if a VECTOR_SHUFFLE cannot be custom lowered.
llvm-svn: 27433
2006-04-05 06:09:26 +00:00
Evan Cheng 59a6355e82 Handle v8i16 shuffle that must be broken into a pair of pshufhw / pshuflw.
llvm-svn: 27427
2006-04-05 01:47:37 +00:00
Evan Cheng 011c23d9d3 Added pslldq and psrldq.
llvm-svn: 27412
2006-04-04 21:49:39 +00:00
Evan Cheng 8f3b6b8d8a Minor fixes + naming changes.
llvm-svn: 27410
2006-04-04 19:12:30 +00:00
Evan Cheng 802b35c339 PSHUF* encoding bugs.
llvm-svn: 27405
2006-04-04 18:40:36 +00:00
Evan Cheng e91e3bd874 cmpps / cmppd encoding bug
llvm-svn: 27393
2006-04-04 03:04:07 +00:00
Evan Cheng dd2eb27d6d Compact some intrinsic definitions.
llvm-svn: 27388
2006-04-04 00:10:53 +00:00
Evan Cheng 0ef83c83e1 Some SSE1 intrinsics: min, max, sqrt, etc.
llvm-svn: 27384
2006-04-03 23:49:17 +00:00
Evan Cheng b64827e662 Use movlpd to: store lower f64 extracted from v2f64.
Use movhpd to: store upper f64 extracted from v2f64.

llvm-svn: 27382
2006-04-03 22:30:54 +00:00
Evan Cheng ebf1006d16 - More efficient extract_vector_elt with shuffle and movss, movsd, movd, etc.
- Some bug fixes and naming inconsistency fixes.

llvm-svn: 27377
2006-04-03 20:53:28 +00:00
Evan Cheng 5fd7c69473 Use a X86 target specific node X86ISD::PINSRW instead of a mal-formed
INSERT_VECTOR_ELT to insert a 16-bit value in a 128-bit vector.

llvm-svn: 27314
2006-03-31 21:55:24 +00:00
Evan Cheng 747e29ef0b Added support for SSE3 horizontal ops: haddp{s|d} and hsub{s|d}.
llvm-svn: 27310
2006-03-31 21:29:33 +00:00
Evan Cheng cbffa4656b Add support to use pextrw and pinsrw to extract and insert a word element
from a 128-bit vector.

llvm-svn: 27304
2006-03-31 19:22:53 +00:00
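
As a source-level illustration (not from the commit), the SSE2 intrinsics below are the sort of word extract / insert that now maps onto pextrw and pinsrw:

#include <emmintrin.h>

int get_word3(__m128i v) {
  return _mm_extract_epi16(v, 3);    /* pextrw $3: read 16-bit element 3 */
}

__m128i set_word3(__m128i v, int s) {
  return _mm_insert_epi16(v, s, 3);  /* pinsrw $3: overwrite 16-bit element 3 */
}
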
Evan Cheng 1b0d294de0 Expand all INSERT_VECTOR_ELT (obviously bad) for now.
llvm-svn: 27275
2006-03-31 01:30:39 +00:00
Evan Cheng d9d0bbb5ac Typo
llvm-svn: 27272
2006-03-31 00:33:57 +00:00
Evan Cheng 99d7205fba Ok for vector_shuffle mask to contain undef elements.
llvm-svn: 27271
2006-03-31 00:30:29 +00:00
Evan Cheng 7e2ff11a42 Make sure all possible shuffles are matched.
Use pshufd, pshufhw, and pshuflw to shuffle v4f32 if shufps doesn't match.
Use shufps to shuffle v4f32 if pshufd, pshufhw, and pshuflw don't match.

llvm-svn: 27259
2006-03-30 19:54:57 +00:00
Evan Cheng dd487d865b More logical ops patterns
llvm-svn: 27257
2006-03-30 07:33:32 +00:00
Evan Cheng c58ef7deeb Add support for _mm_cmp{cc}_ss and _mm_cmp{cc}_ps intrinsics
llvm-svn: 27256
2006-03-30 06:21:22 +00:00
Evan Cheng 593310016d Add 128-bit pmovmskb intrinsic support.
llvm-svn: 27255
2006-03-30 00:33:26 +00:00
Evan Cheng c5cf9bba05 Change SSE pack operation definitions to fit what the intrinsics expect.
For example, packsswb actually creates a v16i8 from a pair of v8i16, but the
intrinsic specification forces the output type to match the operands.

llvm-svn: 27254
2006-03-29 23:53:14 +00:00
Evan Cheng b7fedffc78 - Added some SSE2 128-bit packed integer ops.
- Added SSE2 128-bit integer pack with signed saturation ops.
- Added pshufhw and pshuflw ops.

llvm-svn: 27252
2006-03-29 23:07:14 +00:00
Evan Cheng acc336475e Need to special case splat after all. Make the second operand of splat
vector_shuffle undef.

llvm-svn: 27250
2006-03-29 19:02:40 +00:00
Evan Cheng 3cf95747c7 Floating point logical operation patterns should match bit_convert. Or else
integer vector logical operations would match andp{s|d} instead of pand.

llvm-svn: 27248
2006-03-29 18:47:40 +00:00
Evan Cheng 500ec16578 - More shuffle related bug fixes.
- Whenever possible use ops of the right packed types for vector shuffles /
  splats.

llvm-svn: 27246
2006-03-29 03:04:49 +00:00
Evan Cheng 3a1c4e75de Another entry about shuffles.
llvm-svn: 27245
2006-03-29 03:03:46 +00:00
Evan Cheng da59b0d2a8 - Only use pshufd for v4i32 vector shuffles.
- Other shuffle related fixes.

llvm-svn: 27244
2006-03-29 01:30:51 +00:00
Evan Cheng 38b34296d0 Added aliases to scalar SSE instructions, e.g. addss, to match x86 intrinsics.
The source operand type is v4sf, with the upper bits passed through.
Added matching code for these.

llvm-svn: 27240
2006-03-28 23:51:43 +00:00
Evan Cheng 8160fd3d42 Fixing buggy code.
llvm-svn: 27239
2006-03-28 23:41:33 +00:00
Jim Laskey d1aa1638c6 Expose base register for DwarfWriter. Refactor code accordingly.
llvm-svn: 27225
2006-03-28 13:48:33 +00:00
Jim Laskey 457e54efc1 Added missing paren on behalf of Ramana Radhakrishnan.
llvm-svn: 27223
2006-03-28 10:17:11 +00:00
Evan Cheng 21e5476deb Missed X86::isUNPCKHMask
llvm-svn: 27222
2006-03-28 08:27:15 +00:00
Evan Cheng be2d9a0e99 movlps and movlpd should be modeled as two address code.
llvm-svn: 27221
2006-03-28 07:01:28 +00:00
Evan Cheng dc57ae0711 Update
llvm-svn: 27220
2006-03-28 06:55:45 +00:00
Evan Cheng 4e7374ff8a Typo
llvm-svn: 27219
2006-03-28 06:53:49 +00:00
Evan Cheng 1a194a5264 * Prefer using operations of matching types, e.g. unpcklpd rather than movlhps.
* Bug fixes.

llvm-svn: 27218
2006-03-28 06:50:32 +00:00
Evan Cheng 08b473c619 Added a couple of entries about movhps and movlhps.
llvm-svn: 27212
2006-03-28 02:49:12 +00:00
Evan Cheng 3765fadef6 All unpack cases are now being handled.
llvm-svn: 27211
2006-03-28 02:44:05 +00:00
Evan Cheng 2bc3280659 - Clean up / consolidate various shuffle masks.
- Some misc. bug fixes.
- Use MOVHPDrm to load from m64 to the upper half of an XMM register.

llvm-svn: 27210
2006-03-28 02:43:26 +00:00
Evan Cheng 5df75889db Model unpack lower and interleave as vector_shuffle so we can lower the
intrinsics as such.

llvm-svn: 27200
2006-03-28 00:39:58 +00:00
Jim Laskey fa53b276d0 Translate llvm target registers to dwarf register numbers properly.
llvm-svn: 27180
2006-03-27 20:18:45 +00:00
Chris Lattner 018e17c8de unbreak the build
llvm-svn: 27174
2006-03-27 16:52:45 +00:00
Evan Cheng 9b9cc4fb39 Use pcmpeq to generate vector of all ones.
llvm-svn: 27167
2006-03-27 07:00:16 +00:00
Nate Begeman ed728c1291 SelectionDAGISel can now natively handle Switch instructions, in the same
manner that the LowerSwitch LLVM to LLVM pass does: emitting a binary
search tree of basic blocks.  The new approach has several advantages:
it is faster, it generates significantly smaller code in many cases, and
it paves the way for implementing dense switch tables as a jump table by
handling switches directly in the instruction selector.

This functionality is currently only enabled on x86, but should be safe for
every target.  In anticipation of making it the default, the cfg is now
properly updated in the x86, ppc, and sparc select lowering code.

llvm-svn: 27156
2006-03-27 01:32:24 +00:00
Nate Begeman 68cc9d4540 Readme note
llvm-svn: 27152
2006-03-26 19:19:27 +00:00
Evan Cheng ed6184aef2 Remove X86:isZeroVector, use ISD::isBuildVectorAllZeros instead; some fixes / cleanups
llvm-svn: 27150
2006-03-26 09:53:12 +00:00
Evan Cheng 3e4d38eea5 Added missing (any_extend (load ...)) patterns.
llvm-svn: 27120
2006-03-25 09:45:48 +00:00
Evan Cheng 2bc0941e2a Build arbitrary vector with more than 2 distinct scalar elements with a
series of unpack and interleave ops.

llvm-svn: 27119
2006-03-25 09:37:23 +00:00
Chris Lattner 5d70a7c4a5 #include Intrinsics.h into all dag isels
llvm-svn: 27109
2006-03-25 06:47:10 +00:00
Evan Cheng 79e500ec74 Added SSE cacheability ops
llvm-svn: 27103
2006-03-25 06:03:26 +00:00
Evan Cheng 1aaa7280cd Instruction encoding bug
llvm-svn: 27102
2006-03-25 06:00:03 +00:00
Evan Cheng 6f7d31ea50 Added 128-bit packed integer subtraction.
llvm-svn: 27096
2006-03-25 01:33:37 +00:00
Evan Cheng 8e481df625 Added CVTTPS2PI.
llvm-svn: 27095
2006-03-25 01:31:59 +00:00
Evan Cheng 980c4d5b46 Added CVTSS2SI.
llvm-svn: 27094
2006-03-25 01:00:18 +00:00
Evan Cheng e7ee6a5e32 Support for scalar to vector with zero extension.
llvm-svn: 27091
2006-03-24 23:15:12 +00:00
Evan Cheng 2f0277bf48 Added LDMXCSR
llvm-svn: 27087
2006-03-24 22:28:37 +00:00
Chris Lattner 97599f1211 plug the intrinsics into the patterns for movmsk*
llvm-svn: 27083
2006-03-24 21:49:18 +00:00
Jim Laskey f0729b4067 Add dwarf register numbering to register data.
llvm-svn: 27081
2006-03-24 21:15:58 +00:00
Evan Cheng 082c8785ef Handle BUILD_VECTOR with all zero elements.
llvm-svn: 27056
2006-03-24 07:29:27 +00:00
Chris Lattner f5efddf80b Gabor points out that we can't spell. :)
llvm-svn: 27049
2006-03-24 07:12:19 +00:00
Evan Cheng a91d8a5b43 All v2f64 shuffle cases can be handled.
llvm-svn: 27044
2006-03-24 06:40:32 +00:00
Evan Cheng 2595a687da More efficient v2f64 shuffle using movlhps, movhlps, unpckhpd, and unpcklpd.
llvm-svn: 27040
2006-03-24 02:58:06 +00:00
Evan Cheng 6afb3c2de7 A new entry
llvm-svn: 27039
2006-03-24 02:57:03 +00:00
Evan Cheng d27fb3e85e Handle more shuffle cases with SHUFP* instructions.
llvm-svn: 27024
2006-03-24 01:18:28 +00:00
Evan Cheng f842ea57bb Typo
llvm-svn: 26997
2006-03-23 20:26:04 +00:00
Jim Laskey 3c43609f1f Add support to locate local variables in frames (early version.)
llvm-svn: 26994
2006-03-23 18:12:57 +00:00
Jim Laskey cf0166fbeb Change interface to DwarfWriter.
llvm-svn: 26991
2006-03-23 18:09:44 +00:00
Chris Lattner ce0206e119 Fix the encodings of these new instructions, hopefully fixing the JIT
failures from last night

llvm-svn: 26981
2006-03-23 16:13:50 +00:00
Evan Cheng 82ed4a42f9 Following icc's lead: use movdqa to load / store 128-bit integer vectors
llvm-svn: 26980
2006-03-23 07:44:07 +00:00
Chris Lattner 6f95ab7abb Eliminate IntrinsicLowering from TargetMachine.
Make the CBE and V9 backends create their own, since they're the only ones that use it.

llvm-svn: 26974
2006-03-23 05:43:16 +00:00
Evan Cheng 7055878170 Add v4i32 <-> v4f32 bitconvert patterns.
llvm-svn: 26969
2006-03-23 02:36:37 +00:00
Evan Cheng b9b0550dc6 Add 128-bit integer vector load and add (for testing).
llvm-svn: 26967
2006-03-23 01:57:24 +00:00
Nate Begeman fb6e02931c Add support for 8 bit immediates with 16/32 bit cmp instructions
llvm-svn: 26966
2006-03-23 01:29:48 +00:00
Evan Cheng 021bb7c956 Added a ValueType operand to isShuffleMaskLegal(). For now, x86 will not do
64-bit vector shuffle.

llvm-svn: 26964
2006-03-22 22:07:06 +00:00
Evan Cheng ed794cd27b SHUFP* are two address code.
llvm-svn: 26959
2006-03-22 20:08:18 +00:00
Evan Cheng bc04722860 Some clean up.
llvm-svn: 26957
2006-03-22 19:22:18 +00:00
Evan Cheng d4e1557941 - Supposedly movlhps is faster / better than unpcklpd.
- Don't forget pshufd is only available with sse2.

llvm-svn: 26956
2006-03-22 19:16:21 +00:00
Evan Cheng 68ad48bd1a - Implement X86ISelLowering::isShuffleMaskLegal(). We currently only support
splat and PSHUFD cases.
- Clean up shuffle / splat matching code.

llvm-svn: 26954
2006-03-22 18:59:22 +00:00
Evan Cheng 8fdbdf20cd - VECTOR_SHUFFLE of v4i32 / v4f32 with undef second vector always matches
PSHUFD. We can make permute entries which point to the undef point to
  anything we want.
- Change some names to appease Chris.

llvm-svn: 26951
2006-03-22 08:01:21 +00:00
Evan Cheng 3617caf526 Fix PSHUF* and SHUF* jit code emission problems
llvm-svn: 26949
2006-03-22 07:10:28 +00:00
Chris Lattner f5e36c8bc0 fix a warning
llvm-svn: 26941
2006-03-22 04:18:34 +00:00
Evan Cheng d097e67544 Some splat and shuffle support.
llvm-svn: 26940
2006-03-22 02:53:00 +00:00
Evan Cheng b1d3c64d1f Add a couple more pseudo instructions.
llvm-svn: 26939
2006-03-22 02:52:03 +00:00
Evan Cheng baea59c61c Didn't mean to check this in. No MMX support yet.
llvm-svn: 26933
2006-03-21 23:04:23 +00:00
Evan Cheng d5e905d762 - Use movaps to store 128-bit vector integers.
- Each scalar to vector v8i16 and v16i8 is an any_extend followed by a movd.

llvm-svn: 26932
2006-03-21 23:01:21 +00:00
Chris Lattner 00f4683bf6 These targets don't support EXTRACT_VECTOR_ELT, though, in time, X86 will.
llvm-svn: 26930
2006-03-21 20:51:05 +00:00
Evan Cheng 2d819f5fa4 Combine 2 entries
llvm-svn: 26921
2006-03-21 07:18:26 +00:00
Evan Cheng aeebc96099 Add a note about x86 register coalescing
llvm-svn: 26920
2006-03-21 07:12:57 +00:00
Evan Cheng 1208d9179a - Remove scalar to vector pseudo ops. They are just wrong.
- Handle FR32 to VR128:v4f32 and FR64 to VR128:v2f64 with aliases of MOVAPS
and MOVAPD. Mark them as move instructions and *hope* they will be deleted.

llvm-svn: 26919
2006-03-21 07:09:35 +00:00
Evan Cheng e4d1416239 x86 ISD::SCALAR_TO_VECTOR support.
llvm-svn: 26911
2006-03-21 00:33:35 +00:00
Evan Cheng fb872b41c0 Junk unused vector register classes.
llvm-svn: 26910
2006-03-21 00:30:59 +00:00
Chris Lattner 80b6bd2746 Add a build_vector node
llvm-svn: 26895
2006-03-20 06:18:01 +00:00
Evan Cheng e6448448c2 Move a few things around.
llvm-svn: 26893
2006-03-20 06:04:52 +00:00
Chris Lattner d16f6fdd49 add a note with a testcase
llvm-svn: 26877
2006-03-19 22:27:41 +00:00
Evan Cheng f7c2e3628b Vector undef's
llvm-svn: 26870
2006-03-19 09:38:54 +00:00
Evan Cheng 5111c81a3c Turning on LSR by default
llvm-svn: 26861
2006-03-19 06:08:49 +00:00
Evan Cheng 66a9c0dea7 Remember which tests are hurt by LSR.
llvm-svn: 26860
2006-03-19 06:08:11 +00:00
Chris Lattner f7b6e7212f rename these nodes
llvm-svn: 26848
2006-03-19 01:13:28 +00:00
Evan Cheng 9bf978dc20 Use the generic vector register classes VR64 / VR128 rather than V4F32,
V8I16, etc.

llvm-svn: 26838
2006-03-18 01:23:20 +00:00
Evan Cheng b09a56f3a4 Darwin should use _setjmp/_longjmp instead of setjmp/longjmp.
llvm-svn: 26833
2006-03-17 20:31:41 +00:00
Evan Cheng 4f674921d6 Move some pattern fragments to the right files.
llvm-svn: 26831
2006-03-17 19:55:52 +00:00
Chris Lattner 388fc4d9fb Disable x86 fastcc from passing args in registers
llvm-svn: 26824
2006-03-17 17:27:47 +00:00
Chris Lattner 43798850f9 Parameterize the number of integer arguments to pass in registers
llvm-svn: 26818
2006-03-17 05:10:20 +00:00
Evan Cheng bfc2e97383 Also fold MOV8r0, MOV16r0, MOV32r0 + store to MOV8mi, MOV16mi, and MOV32mi.
llvm-svn: 26817
2006-03-17 02:36:22 +00:00
Evan Cheng aca7915b70 Add some missing entries to X86RegisterInfo::foldMemoryOperand(). e.g.
ADD32ri8.

llvm-svn: 26816
2006-03-17 02:25:01 +00:00
Evan Cheng 27750f3287 - Nuke 16-bit SBB instructions. We'll never use them.
- Nuke a bogus comment.

llvm-svn: 26815
2006-03-17 02:24:04 +00:00
Nate Begeman bb01d4f272 Remove BRTWOWAY*
Make the PPC backend not dependent on BRTWOWAY_CC and make the branch
selector smarter about the code it generates, fixing a case in the
readme.

llvm-svn: 26814
2006-03-17 01:40:33 +00:00
Evan Cheng c11fcceec5 A new entry.
llvm-svn: 26810
2006-03-16 22:44:22 +00:00
Evan Cheng f75555feb9 Bug fix: condition inverted.
llvm-svn: 26804
2006-03-16 22:02:48 +00:00
Evan Cheng 20931a798e Added a way for TargetLowering to specify what values can be used as the
scale component of the target addressing mode.

llvm-svn: 26802
2006-03-16 21:47:42 +00:00
Evan Cheng 2dd2c652b2 Added getTargetLowering() to TargetMachine. Refactored targets to support this.
llvm-svn: 26742
2006-03-13 23:20:37 +00:00
Evan Cheng af598d2461 Add LSR hooks.
llvm-svn: 26740
2006-03-13 23:18:16 +00:00
Evan Cheng 306c13a8fb Add option -enable-x86-lsr to enable x86 loop strength reduction pass.
llvm-svn: 26665
2006-03-09 21:51:28 +00:00
Chris Lattner 920e661e50 a couple of miscellaneous things.
llvm-svn: 26625
2006-03-09 01:39:46 +00:00
Evan Cheng 70b25efa57 X86ISD::REP_STOS and X86ISD::REP_MOVS now produces a flag.
llvm-svn: 26604
2006-03-07 23:34:23 +00:00
Evan Cheng adc7093fc1 Use rep/stosl; and $count, 0x3; rep/stosb for memset with a 4-byte aligned dest
and variable value.
Similarly for memcpy.

llvm-svn: 26603
2006-03-07 23:29:39 +00:00
Jim Laskey 313570fb17 Use "llvm.metadata" section for debug globals. Filter out these globals in the
asm printer.

llvm-svn: 26599
2006-03-07 22:00:35 +00:00
Evan Cheng a4a4ceb478 - Emit subsections_via_symbols for Darwin.
- Conditionalize Dwarf debugging output (Darwin only for now).

llvm-svn: 26582
2006-03-07 02:23:26 +00:00
Evan Cheng 30d7b70b73 Enable Dwarf debugging info.
llvm-svn: 26581
2006-03-07 02:02:57 +00:00
Chris Lattner 9c7f50376a Copysign needs to be expanded everywhere. Note that Alpha and IA64 should
implement copysign as a native op if they have it.

llvm-svn: 26541
2006-03-05 05:08:37 +00:00
Chris Lattner c2dd7aae71 add a note for something evan noticed
llvm-svn: 26539
2006-03-05 01:15:18 +00:00
Evan Cheng c66fd44541 Add an entry
llvm-svn: 26520
2006-03-04 07:49:50 +00:00
Evan Cheng 6dc73297c3 MEMSET / MEMCPY lowering bugs: we can't issue a single WORD / DWORD version of
rep/stos and rep/mov if the count is not a constant. We could do
  rep/stosl; and $count, 3; rep/stosb
For now, I will lower them to memset / memcpy calls. We will revisit this after
a little bit of experimentation.

Also need to take care of the trailing bytes even if the count is a constant.
Since the max. number of trailing bytes is 3, we will simply issue loads /
stores.

llvm-svn: 26517
2006-03-04 02:48:56 +00:00
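
A rough C sketch of the count split described above (illustrative only, not the backend code): the dword-wide part corresponds to rep/stosl and the 0-3 trailing bytes to rep/stosb.

#include <stdint.h>
#include <string.h>

void fill(uint8_t *dst, uint8_t val, size_t count) {
  uint32_t word = val * 0x01010101u;   /* replicate the byte into a dword */
  size_t dwords = count >> 2;          /* rep/stosl iteration count */
  size_t tail = count & 3;             /* and $count, 3; rep/stosb count */
  for (size_t i = 0; i < dwords; ++i)
    memcpy(dst + 4 * i, &word, 4);     /* one dword store per iteration */
  for (size_t i = 0; i < tail; ++i)
    dst[4 * dwords + i] = val;         /* at most 3 trailing byte stores */
}
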
Evan Cheng 084a102b17 Typo
llvm-svn: 26512
2006-03-04 01:12:00 +00:00
Chris Lattner ad3c974a77 remove the read/write port/io intrinsics.
llvm-svn: 26479
2006-03-03 00:19:58 +00:00
Evan Cheng 1926427351 Vector op lowering.
llvm-svn: 26438
2006-03-01 01:11:20 +00:00
Evan Cheng 0e69f45b07 Another entry.
llvm-svn: 26430
2006-02-28 23:38:49 +00:00
Evan Cheng 990c3602bd Don't match x << 1 to LEAL. It's better to emit x + x.
llvm-svn: 26429
2006-02-28 21:13:57 +00:00
Evan Cheng 877ab55e06 ConstantPoolIndex is now the displacement portion of the address (rather
than base).

llvm-svn: 26382
2006-02-26 09:12:34 +00:00
Evan Cheng 75b8783aaf Fixed ConstantPoolIndex operand asm print bug. This fixed 2005-07-17-INT-To-FP
and 2005-05-12-Int64ToFP.

llvm-svn: 26380
2006-02-26 08:28:12 +00:00
Evan Cheng 77d86ff8fc * Cleaned up addressing mode matching code.
* Cleaned up and tweaked LEA cost analysis code. Removed some hacks.
* Handle ADD $X, c to MOV32ri $X+c. These patterns cannot be autogen'd and
  they need to be matched before LEA.

llvm-svn: 26376
2006-02-25 10:09:08 +00:00
Evan Cheng 1c557bfeb5 Updates.
llvm-svn: 26375
2006-02-25 10:04:07 +00:00
Evan Cheng 1fac3b3360 * Allow mul, shl nodes to be codegen'd as LEA (if appropriate).
* Add patterns to handle GlobalAddress, ConstantPool, etc.
  MOV32ri to materialize these nodes in registers.
  ADD32ri to handle %reg + GA, etc.
  MOV32mi to handle store GA, etc. to memory.

llvm-svn: 26374
2006-02-25 10:02:21 +00:00
Evan Cheng e4a8b74e4f ConstantPoolIndex is now the displacement field of addressing mode.
llvm-svn: 26373
2006-02-25 09:56:50 +00:00
Evan Cheng 994700101e Added a comment about the need for X86ISD::Wrapper.
llvm-svn: 26372
2006-02-25 09:55:19 +00:00
Evan Cheng ed169db8a5 Added an offset field to ConstantPoolSDNode.
llvm-svn: 26371
2006-02-25 09:54:52 +00:00
Evan Cheng 42d5ac557c Fix an obvious bug exposed when we are doing
ADD X, 4
==>
MOV32ri $X+4, ...

llvm-svn: 26366
2006-02-25 01:37:02 +00:00
Evan Cheng e0ed6ec13f - Clean up the lowering and selection code of ConstantPool, GlobalAddress,
and ExternalSymbol.
- Use C++ code (rather than tblgen'd selection code) to match the
  above-mentioned leaf nodes. Do not mutate the nodes and do not record the
  selection in CodeGenMap. These nodes should be safe to duplicate. This is
  a performance win.

llvm-svn: 26335
2006-02-23 20:41:18 +00:00
Chris Lattner 16f08f53b1 "." isn't enough to get a private label on linux, use ".L".
llvm-svn: 26327
2006-02-23 05:25:02 +00:00
Chris Lattner 2bacf981bf add a small and simple case.
llvm-svn: 26326
2006-02-23 05:17:43 +00:00
Evan Cheng f4448cee66 A couple of new entries.
llvm-svn: 26325
2006-02-23 02:50:21 +00:00
Evan Cheng 1f342c2884 PIC-related bug fixes.
1. Various asm printer bugs.
2. Lowering bug. Now TargetGlobalAddress is wrapped in X86ISD::TGAWrapper.

llvm-svn: 26324
2006-02-23 02:43:52 +00:00
Evan Cheng 7eabbfd618 X86 codegen tweak to use lea in another case:
Suppose base == %eax and it has multiple uses, then instead of
  movl %eax, %ecx
  addl $8, %ecx
use
  leal 8(%eax), %ecx.

llvm-svn: 26323
2006-02-23 00:13:58 +00:00
Evan Cheng 7714a59d91 Missing .globl for weak / link-once .text symbols.
llvm-svn: 26321
2006-02-22 23:59:57 +00:00
Evan Cheng 73136dfecc - Added option -relocation-model to set relocation model. Valid values include static, pic,
dynamic-no-pic, and default.
PPC and x86 default is dynamic-no-pic for Darwin, pic for others.
- Removed options -enable-pic and -ppc-static.

llvm-svn: 26315
2006-02-22 20:19:42 +00:00
Evan Cheng 9e252e3bcf Added MMX, SSE1, and SSE2 vector instructions and some simple patterns.
Fixed some existing bugs (wrong predicates, prefixes) at the same time.

llvm-svn: 26310
2006-02-22 02:26:30 +00:00
Chris Lattner 7ad77dfc2a split register class handling from explicit physreg handling.
llvm-svn: 26308
2006-02-22 00:56:39 +00:00
Chris Lattner 7bb4696dc3 Updates to match change of getRegForInlineAsmConstraint prototype
llvm-svn: 26305
2006-02-21 23:11:00 +00:00
Evan Cheng d58478161f One more round of reorg so sabre doesn't freak out. :-)
llvm-svn: 26303
2006-02-21 20:00:20 +00:00
Evan Cheng 6fc1162855 A bit more cleaning up.
llvm-svn: 26302
2006-02-21 19:30:30 +00:00