Commit Graph

160 Commits

Author SHA1 Message Date
Evan Cheng e5a94a03e2 Add inc + dec patterns which fold load + stores.
llvm-svn: 24686
2005-12-13 01:02:47 +00:00
Evan Cheng bde9e6fca6 Add neg and not patterns which fold load + stores.
llvm-svn: 24685
2005-12-13 00:54:44 +00:00
Evan Cheng c414d563f0 Missed a couple redundant explicit type casts.
llvm-svn: 24684
2005-12-13 00:25:07 +00:00
Evan Cheng 62e6808aa5 Fix some badly chosen names: i16SExt8 -> i16immSExt8, etc.
llvm-svn: 24683
2005-12-13 00:14:11 +00:00
Evan Cheng 86b2cf22d2 * Split immSExt8 into i16SExt8 and i32SExt8 for i16 and i32 immediate operands.
This enables the removal of some explicit type casts.
* Rename immZExt8 to i16ZExt8 as well.

llvm-svn: 24682
2005-12-13 00:01:09 +00:00
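
As a rough illustration (not taken from the commit itself), per-type sign-extended
immediate predicates of this kind might look like the following TableGen sketch,
using the later i16immSExt8 / i32immSExt8 names; the exact accessor on the
constant node is an assumption:

    // Illustrative sketch only -- not the commit's actual contents.
    def i16immSExt8 : PatLeaf<(i16 imm), [{
      // True if the 16-bit immediate fits in a sign-extended 8-bit field.
      return (int16_t)N->getValue() == (int8_t)N->getValue();
    }]>;
    def i32immSExt8 : PatLeaf<(i32 imm), [{
      // Same check for a 32-bit immediate.
      return (int32_t)N->getValue() == (int8_t)N->getValue();
    }]>;
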
Evan Cheng 3e52756928 Add some integer mul patterns.
llvm-svn: 24681
2005-12-12 23:47:46 +00:00
Evan Cheng af3fe8217a Add some sub patterns.
llvm-svn: 24675
2005-12-12 21:54:05 +00:00
Evan Cheng e80248b378 Add a few more add / store patterns. e.g. ADD32mi8.
llvm-svn: 24670
2005-12-12 19:45:23 +00:00
Evan Cheng 0d6cfee704 * Added X86 store patterns.
* Added X86 dec patterns.

llvm-svn: 24654
2005-12-10 00:48:20 +00:00
Evan Cheng 275a3ed80c Added patterns for ADD8rm, etc. These fold load operands, e.g. addb 4(%esp), %al
llvm-svn: 24648
2005-12-09 22:48:48 +00:00
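
A hedged sketch of what such a load-folding pattern can look like in a .td file
(the register-class, load-fragment, and address-pattern names below -- R8,
loadi8, addr -- are assumptions about the vocabulary of the time, not quoted
from the commit):

    // Illustrative sketch only: select the reg/mem form when one operand is a load.
    def : Pat<(add R8:$src1, (loadi8 addr:$src2)),   // DAG: add reg, (load mem)
              (ADD8rm R8:$src1, addr:$src2)>;        // emits: addb mem, reg
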
Evan Cheng f039648614 Added explicit type field to ComplexPattern.
llvm-svn: 24637
2005-12-08 02:15:07 +00:00
Evan Cheng c9fab31098 * Added intelligence to the X86 LEA addressing mode matching routine so it returns
false if the match is not profitable, e.g. leal 1(%eax), %eax.
* Added patterns for X86 integer loads and LEA32.

llvm-svn: 24635
2005-12-08 02:01:35 +00:00
Evan Cheng c0c190239d Remove unnecessary let hasCtrlDep=1 now it can be inferred.
llvm-svn: 24611
2005-12-05 23:09:43 +00:00
Chris Lattner 3c0b8f577d Several things:
1. Remove redundant type casts now that PR673 is implemented.
2. Implement the OUT*ir instructions correctly.  The port number really
   *is* a 16-bit value, but the patterns should only match if the number
   is 0-255.  Update the patterns so they now match.
3. Fix patterns for shifts to reflect that the shift amount is always an
   i8, not an i16 as they were believed to be before.  This previous fib
   stopped working when we started knowing that CL has type i8.
4. Change use of i16i8imm in SH*ri patterns to all be imm.

llvm-svn: 24599
2005-12-05 02:40:25 +00:00
Evan Cheng 95cb763818 Added isel patterns for RET, JMP, and WRITEPORT.
llvm-svn: 24588
2005-12-04 08:19:43 +00:00
Evan Cheng 4b02426130 Proper support for shifts with register shift value.
llvm-svn: 24559
2005-12-01 00:43:55 +00:00
Nate Begeman 6f8c1ace6e No longer track value types for asm printer operands, and remove them as
an argument to every operand printing function.  Requires some slight
tweaks to x86, the only user.

llvm-svn: 24541
2005-11-30 18:54:35 +00:00
Chris Lattner 9c7af08bc9 Fix a bug in a recent patch that broke shifts
llvm-svn: 24526
2005-11-30 05:11:18 +00:00
Evan Cheng 72ab335858 Add more X86 ISel patterns.
llvm-svn: 24520
2005-11-29 19:38:52 +00:00
Chris Lattner d1061ac8d1 encode rdtsc correctly
llvm-svn: 24435
2005-11-20 22:13:18 +00:00
Andrew Lenharth 0bf68ae434 The second patch of X86 support for read cycle counter.
llvm-svn: 24430
2005-11-20 21:41:10 +00:00
Chris Lattner d7102c4980 Teach the x86 backend about the register constraints of its addressing mode.
Patch by Evan Cheng

llvm-svn: 24423
2005-11-19 07:01:30 +00:00
Chris Lattner 57ce97862d add more patterns, patch by Evan Cheng.
llvm-svn: 24406
2005-11-18 01:04:42 +00:00
Chris Lattner 2bf458af92 Add patterns for some 16-bit immediate instructions, patch contributed by
Evan Cheng.

llvm-svn: 24384
2005-11-17 02:01:55 +00:00
Chris Lattner 5930d3df3d Add patterns for several simple instructions that take i32 immediates.
Patch contributed by Evan Cheng!

llvm-svn: 24382
2005-11-16 22:59:19 +00:00
Nate Begeman 9d7008b08d Properly split f32 and f64 into separate register classes for scalar SSE FP,
fixing a bunch of nasty hackery.

llvm-svn: 23735
2005-10-14 22:06:00 +00:00
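
A minimal sketch of the idea behind that split, assuming the old four-argument
RegisterClass form and these class names (both are assumptions, not the
commit's actual text):

    // Illustrative sketch only: one class per scalar FP type, both carried in XMM regs.
    def FR32 : RegisterClass<"X86", [f32], 32,
                 [XMM0, XMM1, XMM2, XMM3, XMM4, XMM5, XMM6, XMM7]>;
    def FR64 : RegisterClass<"X86", [f64], 64,
                 [XMM0, XMM1, XMM2, XMM3, XMM4, XMM5, XMM6, XMM7]>;
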
Chris Lattner 2e84be22a8 give all operands names
llvm-svn: 23356
2005-09-14 21:10:24 +00:00
Chris Lattner 423d7cbbf8 add a few missing cases
llvm-svn: 22891
2005-08-19 00:41:29 +00:00
Chris Lattner e2967ac53d Give ADJCALLSTACKDOWN/UP the correct operands.
Give a whole bunch of other stuff variable operands, particularly FP.  The
FP stackifier is playing fast and loose with operands here, so we have to
mark them all as variable.  This will have to be fixed before we can dag->dag
the X86 backend.  The solution is for the pre-stackifier and post-stackifier
instructions to all be disjoint.

llvm-svn: 22890
2005-08-19 00:38:22 +00:00
Nate Begeman 8d394eb703 Scalar SSE: load +0.0 -> xorps/xorpd
Scalar SSE: a < b ? c : 0.0 -> cmpss, andps
Scalar SSE: float -> i16 needs to be promoted

llvm-svn: 22637
2005-08-03 23:26:28 +00:00
Nate Begeman a0b5e035ea Get closer to fully working scalar FP in SSE regs. This gets SingleSource
working, as well as Olden/power.

llvm-svn: 22441
2005-07-15 00:38:55 +00:00
Nate Begeman 8a0933608a First round of support for doing scalar FP using the SSE2 ISA extension and
XMM registers.  There are many known deficiencies and fixmes, which will be
addressed ASAP.  The major benefit of this work is that it will allow the
LLVM register allocator to allocate FP registers across basic blocks.

The x86 backend will still default to x87 style FP.  To enable this work,
you must pass -enable-sse-scalar-fp and either -sse2 or -sse3 to llc.

An example before and after would be for:
double foo(double *P) { double Sum = 0; int i; for (i = 0; i < 1000; ++i)
                        Sum += P[i]; return Sum; }

The inner loop looks like the following:
x87:
.LBB_foo_1:     # no_exit
        fldl (%esp)
        faddl (%eax,%ecx,8)
        fstpl (%esp)
        incl %ecx
        cmpl $1000, %ecx
        #FP_REG_KILL
        jne .LBB_foo_1  # no_exit

SSE2:
.LBB_foo_1:     # no_exit
        addsd (%eax,%ecx,8), %xmm0
        incl %ecx
        cmpl $1000, %ecx
        #FP_REG_KILL
        jne .LBB_foo_1  # no_exit

llvm-svn: 22340
2005-07-06 18:59:04 +00:00
Nate Begeman db32921535 Initial set of .td file changes necessary to get scalar fp in xmm registers
working.  The instruction selector changes will hopefully be coming later
this week once they are debugged.  This is necessary to support the Darwin
x86 FP model, and is recommended by Intel as the replacement for x87.  As
a bonus, the register allocator knows how to deal with these registers
across basic blocks, unlike the FP stackifier.  This leads to significantly
better codegen in several cases.

llvm-svn: 22300
2005-06-27 21:20:31 +00:00
Chris Lattner 3f5a98d1f4 Add markers in the asm file for tail calls, add a new ADJSTACKPTRri
sorta-pseudo-instruction

llvm-svn: 22042
2005-05-15 03:10:37 +00:00
Chris Lattner 6b5fa91a63 Yes, calltarget is the operand of the day.
llvm-svn: 22040
2005-05-15 01:10:30 +00:00
Chris Lattner f0649db870 Add some new instructions
llvm-svn: 22036
2005-05-14 23:35:21 +00:00
Chris Lattner 6e4c2302e6 add 'ret imm' instruction
llvm-svn: 21945
2005-05-13 17:56:48 +00:00
Chris Lattner 46b5ca4310 Fix the syntax of the i/o instructions; these are obviously unused.
llvm-svn: 21829
2005-05-09 20:49:20 +00:00
Chris Lattner 61827484c7 Add some new X86 instrs, patch contributed by Morten Ofstad
llvm-svn: 21608
2005-04-28 21:50:05 +00:00
Chris Lattner c21db6b15c add signed versions of the extra precision multiplies
llvm-svn: 21106
2005-04-06 04:19:22 +00:00
Chris Lattner 2d451658a6 add an fabs instr
llvm-svn: 21006
2005-04-02 04:31:56 +00:00
Chris Lattner 0ce80cd542 Fix spelling, patch contributed by Gabor Greif!
llvm-svn: 20343
2005-02-27 06:18:25 +00:00
Chris Lattner 0edf9535b9 Add rotate instructions.
llvm-svn: 19690
2005-01-19 07:50:03 +00:00
Chris Lattner d54845f530 Improve coverage of the X86 instruction set by adding 16-bit shift doubles.
llvm-svn: 19687
2005-01-19 07:31:24 +00:00
Chris Lattner 2947801735 Teach the code generator that shrd/shld is commutable if it has an immediate.
This allows us to generate this:

foo:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %EDX, DWORD PTR [%ESP + 8]
        shld %EDX, %EDX, 2
        shl %EAX, 2
        ret

instead of this:

foo:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %ECX, DWORD PTR [%ESP + 8]
        mov %EDX, %EAX
        shrd %EDX, %ECX, 30
        shl %EAX, 2
        ret

Note the magically transmogrifying immediate.

llvm-svn: 19686
2005-01-19 07:11:01 +00:00
Chris Lattner 5b589ec0c4 Add conditional moves for the parity flag.
llvm-svn: 19437
2005-01-10 22:09:33 +00:00
Chris Lattner d4bb2bbce1 ADC and IMUL are also commutable.
llvm-svn: 19264
2005-01-03 01:27:59 +00:00
Chris Lattner 295e45e60e Two changes here:
1. Add new instructions for checking parity flags: JP, JNP, SETP, SETNP.
2. Set the isCommutable and isPromotableTo3Address bits on several
   instructions.

llvm-svn: 19246
2005-01-02 02:35:46 +00:00
John Criswell 04570265a5 Correct the name of stosd for the AT&T syntax:
It's stosl (l for long == 32 bit).

llvm-svn: 17658
2004-11-10 04:48:15 +00:00
Chris Lattner 93867e516a Remove debugging code, fix encoding problem. This fixes the problems
the JIT had last night.

llvm-svn: 16766
2004-10-06 14:31:50 +00:00