Commit Graph

497 Commits

Author SHA1 Message Date
Arnold Schwaighofer d2c16ff905 Update tailcall code to include inline attribute operand for memcpy.
llvm-svn: 43978
2007-11-10 10:48:01 +00:00
Evan Cheng 797d56ff17 Much improved pic jumptable codegen:
Then:
        call    "L1$pb"
"L1$pb":
        popl    %eax
        ...
LBB1_1: # entry
        imull   $4, %ecx, %ecx
        leal    LJTI1_0-"L1$pb"(%eax), %edx
        addl    LJTI1_0-"L1$pb"(%ecx,%eax), %edx
        jmpl    *%edx

        .align  2
        .set L1_0_set_3,LBB1_3-LJTI1_0
        .set L1_0_set_2,LBB1_2-LJTI1_0
        .set L1_0_set_5,LBB1_5-LJTI1_0
        .set L1_0_set_4,LBB1_4-LJTI1_0
LJTI1_0:
        .long    L1_0_set_3
        .long    L1_0_set_2

Now:
        call    "L1$pb"
"L1$pb":
        popl    %eax
        ...
LBB1_1: # entry
        addl    LJTI1_0-"L1$pb"(%eax,%ecx,4), %eax
        jmpl    *%eax

        .align  2
        .set L1_0_set_3,LBB1_3-"L1$pb"
        .set L1_0_set_2,LBB1_2-"L1$pb"
        .set L1_0_set_5,LBB1_5-"L1$pb"
        .set L1_0_set_4,LBB1_4-"L1$pb"
LJTI1_0:
        .long    L1_0_set_3
        .long    L1_0_set_2

llvm-svn: 43924
2007-11-09 01:32:10 +00:00
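
The win visible in the diff: the jump-table entries are now emitted relative to the picbase label "L1$pb" instead of the table label LJTI1_0, so the separate leal/addl pair collapses into a single addl with a scaled index. For context, a switch of roughly this shape (a hypothetical reconstruction, not taken from the commit) is what gets lowered to such a table:

        // Hypothetical source; case bodies invented for illustration.
        int dispatch(int x) {
            switch (x) {
            case 0: return 30;
            case 1: return 20;
            case 2: return 50;
            case 3: return 40;
            default: return 0;
            }
        }
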
Rafael Espindola fa0df55bdd Move the LowerMEMCPY and LowerMEMCPYCall to a common place.
Thanks for the suggestions Bill :-)

llvm-svn: 43742
2007-11-05 23:12:20 +00:00
Chris Lattner 296160d443 Fix PR1763 by allowing the 'q' constraint to work with 64-bit
regs on x86-64.

llvm-svn: 43669
2007-11-04 06:51:12 +00:00
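
A minimal sketch of the construct PR1763 covers (example invented): with the fix, the 'q' constraint can also be satisfied by 64-bit registers when targeting x86-64.

        // 'q' historically selected one of %a/%b/%c/%d; on x86-64 it may
        // now bind a 64-bit register such as %rax.
        unsigned long mask_low(unsigned long v) {
            asm("andb $0x0f, %b0" : "+q"(v));
            return v;
        }
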
Evan Cheng 2b93a20b09 Unbreak tailcall opt.
llvm-svn: 43646
2007-11-02 17:45:40 +00:00
Evan Cheng e453ff4913 Missing a getNumOperands check.
llvm-svn: 43630
2007-11-02 01:26:22 +00:00
Rafael Espindola 419b6d7ce4 Make ARM and X86 LowerMEMCPY identical by moving the isThumb check into getMaxInlineSizeThreshold
and by restructuring the X86 version.

Now I just have to move this to a common place :-)

llvm-svn: 43554
2007-10-31 14:39:58 +00:00
Rafael Espindola 063f177300 Make the ARM and X86 memcpy expansions more similar to each other.
Now both subtarget define getMaxInlineSizeThreshold and the expansion uses it.

This should not change generated code.

llvm-svn: 43552
2007-10-31 11:52:06 +00:00
Dale Johannesen b066c1f216 Make i64=extract_vector_elt(v2i64) work in 32-bit mode.
llvm-svn: 43535
2007-10-31 00:32:36 +00:00
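
Illustrative only (vector-extension syntax, not from the commit): extracting an i64 lane from a v2i64 on a 32-bit target means the extracted i64 must itself be expanded into two i32 halves, which is the case this fixes.

        typedef long long v2i64 __attribute__((vector_size(16)));

        long long lane0(v2i64 v) {
            return v[0]; // i64 = extract_vector_elt(v2i64, 0)
        }
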
Dale Johannesen 6aa304e529 Add missing MMX PSUBQ.
llvm-svn: 43488
2007-10-30 01:18:38 +00:00
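
The matching intrinsic, for reference (usage example, not from the commit):

        #include <emmintrin.h>

        // _mm_sub_si64 maps to the MMX PSUBQ instruction added here.
        __m64 sub64(__m64 a, __m64 b) {
            return _mm_sub_si64(a, b);
        }
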
Evan Cheng e106e2f142 Enable more fold (sext (load x)) -> (sext (truncate (sextload x)))
transformation. Previously, it was restricted to loads with a single use. Now
the restriction is loosened by allowing setcc uses to be "extended"
(e.g. setcc x, c, eq -> setcc sext(x), sext(c), eq).

llvm-svn: 43465
2007-10-29 19:58:20 +00:00
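
A sketch of the pattern (code invented): the narrow load feeds both a sign extension and a compare; once the compare is rewritten on the extended value, a single sign-extending load suffices.

        long accumulate(short *p, long acc) {
            long x = *p;   // sext(load i16)
            if (*p == 3)   // setcc use of the same load, now "extended"
                acc += x;
            return acc;
        }
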
Evan Cheng 7b3f7feaea Avoid doing something dumb like rewriting using a 64-bit iv in 32-bit mode.
llvm-svn: 43446
2007-10-29 07:57:50 +00:00
Evan Cheng 7f3d02471d Loosen up iv reuse to allow reuse of the same stride but a larger type when truncating from the larger type to the smaller type is free.
e.g.
Turns this loop:
LBB1_1: # entry.bb_crit_edge
        xorl    %ecx, %ecx
        xorw    %dx, %dx
        movw    %dx, %si
LBB1_2: # bb
        movl    L_X$non_lazy_ptr, %edi
        movw    %si, (%edi)
        movl    L_Y$non_lazy_ptr, %edi
        movw    %dx, (%edi)
        addw    $4, %dx
        incw    %si
        incl    %ecx
        cmpl    %eax, %ecx
        jne     LBB1_2  # bb

into

LBB1_1: # entry.bb_crit_edge
        xorl    %ecx, %ecx
        xorw    %dx, %dx
LBB1_2: # bb
        movl    L_X$non_lazy_ptr, %esi
        movw    %cx, (%esi)
        movl    L_Y$non_lazy_ptr, %esi
        movw    %dx, (%esi)
        addw    $4, %dx
        incl    %ecx
        cmpl    %eax, %ecx
        jne     LBB1_2  # bb

llvm-svn: 43375
2007-10-26 01:56:11 +00:00
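
A hypothetical reconstruction of the source loop: the counter is used both at i32 width and, truncated, at i16 width, so the narrow induction variable (%si above) can reuse the wide one once the i32-to-i16 truncation is known to be free.

        extern short X, Y;

        void store_loop(int n) {
            for (int i = 0; i < n; ++i) {
                X = (short)i;       // narrow use of the i32 counter
                Y = (short)(4 * i); // strided use, also truncated
            }
        }
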
Dale Johannesen 8ee70112ea Allow for copysign having f80 second argument.
Fixes 5550319.

llvm-svn: 43205
2007-10-21 01:07:44 +00:00
Rafael Espindola 846c19dd70 Add support for byval functions whose argument is not 32-bit aligned.
To do this it is necessary to add an "always inline" argument to the
memcpy node. For completeness I have also added this argument to the
memmove and memset nodes.  I have also added getMem* functions, because
the extra argument makes it cumbersome to use getNode and because I get
confused by it :-)

llvm-svn: 43172
2007-10-19 10:41:11 +00:00
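
An invented example of the case being fixed: a byval argument whose size is not a multiple of 4 bytes, so the lowered copy cannot be done purely with 32-bit moves and relies on the new always-inline memcpy.

        struct Packet { char tag; char payload[5]; }; // sizeof == 6

        void sink(Packet p);    // Packet is passed byval
        void forward(Packet &p) {
            sink(p);            // lowered to an always-inline memcpy
        }
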
Chris Lattner 12d5da49d3 Change fp to sint legalization on x86-32 to do 2 x i32
loads instead of 1 x i64 loads.  This doesn't change any functionality yet.

llvm-svn: 43068
2007-10-17 06:17:29 +00:00
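
The construct affected, as an invented example: on x86-32 the f64-to-i64 conversion goes through a stack temporary (fistp), and this change sets up reloading the result as two i32 loads rather than one illegal i64 load.

        long long to_i64(double d) {
            return (long long)d;
        }
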
Chris Lattner 693cbeadff fix some funny indentation, add comments.
llvm-svn: 43066
2007-10-17 06:02:13 +00:00
Dale Johannesen e5530a35d4 Check for invalid cc's in f80 select.
llvm-svn: 43033
2007-10-16 18:09:08 +00:00
Arnold Schwaighofer b3d58b98d0 Correction to tail call optimization code. The new return address
was stored to the actual stack slot before the parameters were
lowered to their stack slots. This could cause arguments to be
overwritten by the return address if the called function had fewer
parameters than the caller. The update should remove the
last failing test case of llc-beta: SPASS.

llvm-svn: 43027
2007-10-16 09:05:00 +00:00
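
An invented shape of the failure: the caller has more stack parameters than the callee, so writing the new return address before the outgoing arguments were lowered could clobber them.

        int callee(int a);                       // fewer parameters
        int caller(int a, int b, int c, int d) { // more parameters
            return callee(a);                    // tail call under -tailcallopt
        }
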
Evan Cheng 7bcfd8f880 LowerFP_TO_SINT must not create a stack object if it's not needed.
llvm-svn: 43004
2007-10-15 20:11:21 +00:00
Evan Cheng 4099f4f91a Unbreak x86-64.
llvm-svn: 42962
2007-10-14 10:09:39 +00:00
Arnold Schwaighofer e8d0bf2669 Correcting the corrections. Bad bad baaad emacs!
llvm-svn: 42935
2007-10-12 21:53:12 +00:00
Arnold Schwaighofer 1f0da1fefb Corrected many typing errors, and removed 'nest' parameter handling
for fastcc from X86CallingConv.td.  This means that nested functions
are not supported for calling convention 'fastcc'.

llvm-svn: 42934
2007-10-12 21:30:57 +00:00
Duncan Sands a6286bd502 Due to the new tail call optimization, trampolines can no
longer be created for fastcc functions.

llvm-svn: 42925
2007-10-12 19:37:31 +00:00
Dan Gohman 8d978da3b0 Mark vector ctpop, cttz, and ctlz as Expand on x86.
llvm-svn: 42905
2007-10-12 14:09:42 +00:00
Dan Gohman 482732af9d Set ISD::FPOW to Expand.
llvm-svn: 42881
2007-10-11 23:21:31 +00:00
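
A sketch of the two changes above in TargetLowering terms, as it would sit inside X86TargetLowering's constructor; this uses today's enum spellings, which may differ from the 2007 code:

        for (MVT VT : {MVT::v16i8, MVT::v8i16, MVT::v4i32, MVT::v2i64}) {
            setOperationAction(ISD::CTPOP, VT, Expand);
            setOperationAction(ISD::CTTZ,  VT, Expand);
            setOperationAction(ISD::CTLZ,  VT, Expand);
        }
        setOperationAction(ISD::FPOW, MVT::f32, Expand); // no x86 pow instruction
        setOperationAction(ISD::FPOW, MVT::f64, Expand);
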
Arnold Schwaighofer 9ccea99165 Added tail call optimization to the x86 back end. It can be
enabled by passing -tailcallopt to llc.  The optimization is
performed if the following conditions are satisfied:
* caller/callee are fastcc
* elf/pic is disabled OR
  elf/pic enabled + callee is in module + callee has
  visibility protected or hidden

llvm-svn: 42870
2007-10-11 19:40:01 +00:00
Evan Cheng f5ec10b64c Bug fix. X86 was emitting redundant setcc and test instructions before a conditional move.
llvm-svn: 42774
2007-10-08 22:16:29 +00:00
Dan Gohman a160361c85 Migrate X86 and ARM from using X86ISD::{,I}DIV and ARMISD::MULHILO{U,S} to
use ISD::{S,U}DIVREM and ISD::{S,U}MUL_LOHI. Move the lowering code
associated with these operators into target-independent code in
LegalizeDAG.cpp and TargetLowering.cpp.

llvm-svn: 42762
2007-10-08 18:33:35 +00:00
Evan Cheng 6912b50958 Not needed any more.
llvm-svn: 42623
2007-10-05 01:34:14 +00:00
Evan Cheng 5fb5a1f389 Enabling new condition code modeling scheme.
llvm-svn: 42459
2007-09-29 00:00:36 +00:00
Rafael Espindola 6c04ac1db0 Refactor the memcpy lowering for the x86 target.
The only generated code difference is that now we call memcpy when
the size of the array is unknown. This matches GCC behavior and is
better since the run time value can be arbitrarily large.

llvm-svn: 42433
2007-09-28 12:53:01 +00:00
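
An invented example of the behavioral change: when the size is only known at run time, the lowering now emits a call to the memcpy library function (matching GCC) instead of attempting an inline expansion.

        #include <cstring>

        void copy_n(char *dst, const char *src, std::size_t n) {
            std::memcpy(dst, src, n); // n unknown at compile time -> libcall
        }
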
Dale Johannesen b6d56401aa Enable codegen for long double abs, sin, cos
llvm-svn: 42368
2007-09-26 21:10:55 +00:00
Evan Cheng 9b7f0e6eb4 translateX86CC updates the last two operands.
llvm-svn: 42333
2007-09-26 00:45:55 +00:00
Dan Gohman 31599685c7 When both x/y and x%y are needed (x and y both scalar integer), compute
both results with a single div or idiv instruction. This uses new X86ISD
nodes for DIV and IDIV which are introduced during the legalize phase
so that the SelectionDAG's CSE can automatically eliminate redundant
computations.

llvm-svn: 42308
2007-09-25 18:23:27 +00:00
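
An invented example: both results are needed, so after SelectionDAG CSE a single idiv computes the quotient (in %eax) and the remainder (in %edx).

        void divmod(int x, int y, int &q, int &r) {
            q = x / y; // both operations lowered to one X86ISD node...
            r = x % y; // ...after CSE eliminates the duplicate
        }
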
Dan Gohman 5e1a428344 Move the setOperationAction(ISD::DEBUG_LOC, MVT::Other, Expand) and
the check to see if the assembler supports .loc from X86TargetLowering
into the superclass TargetLowering.

llvm-svn: 42297
2007-09-25 15:10:49 +00:00
Evan Cheng e95f391ef1 Added support for new condition code modeling scheme (i.e. physical register dependency). These are a bunch of instructions that are duplicated so the x86 backend can support both the old and new schemes at the same time. They will be deleted after
all the kinks are worked out.

llvm-svn: 42285
2007-09-25 01:57:46 +00:00
Dan Gohman 1b2156fcae Add support on x86 for having Legalize lower ISD::LOCATION to ISD::DEBUG_LOC
instead of ISD::LABEL with a manual .debug_line entry when the assembler
supports .file and .loc directives.

llvm-svn: 42278
2007-09-24 21:54:14 +00:00
Chris Lattner 5b5484db63 claim that "st" is from the 80-bit register file. This causes x87-using inline
asm to die with:

ScheduleDAG.cpp:269: failed assertion `false && "Couldn't find the register class"'

instead of:
failed assertion `RegMap->getRegClass(VReg) == RC && "Register class of operand and regclass of use don't agree!"'

yay.

llvm-svn: 42259
2007-09-24 05:27:37 +00:00
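
An invented example of x87-using inline asm; the 't' constraint pins the operand to the top of the x87 stack, i.e. the 80-bit register file this commit teaches the register-class lookup about.

        double x87_sqrt(double x) {
            double r;
            asm("fsqrt" : "=t"(r) : "0"(x));
            return r;
        }
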
Dale Johannesen e36c400255 Fix PR 1681. When X86 target uses +sse -sse2,
keep f32 in SSE registers and f64 in x87.  This
is effectively a new codegen mode.
Change addLegalFPImmediate to permit float and
double variants to do different things.
Adjust callers.

llvm-svn: 42246
2007-09-23 14:52:20 +00:00
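
An invented example of the new mode: built with +sse but -sse2, the float add can use SSE (addss) while the double add must stay on the x87 stack.

        float  fadd(float a, float b)   { return a + b; } // SSE registers
        double dadd(double a, double b) { return a + b; } // x87 stack
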
Rafael Espindola 4730c04904 Don't add a default STACK_ALIGN (use the generic ABI alignment).
Implement calls to functions with byval arguments on X86.

llvm-svn: 42192
2007-09-21 15:50:22 +00:00
Rafael Espindola f065f0e2a1 small cleanup: use LowerMemArgument in LowerFastCCArguments also
llvm-svn: 42189
2007-09-21 14:55:38 +00:00
Dale Johannesen 7d67e547b5 More long double fixes. x86_64 should build now.
llvm-svn: 42155
2007-09-19 23:55:34 +00:00
Dan Gohman 863bdc332d Emit integer x<1 as x<=0, as comparisons with zero (now including
64-bit) can use test instead of cmp with an immediate.

llvm-svn: 42026
2007-09-17 14:49:27 +00:00
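
An invented example of the rewrite:

        bool nonpositive(long long x) {
            return x < 1; // emitted as x <= 0: test reg,reg, no cmp $1
        }
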
Dale Johannesen 98d3a08d8f Remove the assumption that FPs are either float or
double from some of the many places in the optimizers
where it appears, and do something reasonable with x86
long double.
Make APInt::dump() public, remove newline, use it to
dump ConstantSDNode's.
Allow APFloats in FoldingSet.
Expand X86 backend handling of long doubles (conversions
to/from int, mostly).

llvm-svn: 41967
2007-09-14 22:26:36 +00:00
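
The conversions being expanded, as an invented example (x86 long double is the 80-bit f80 type):

        long long ld_to_int(long double v) { return (long long)v; }
        long double int_to_ld(long long v) { return (long double)v; }
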
Rafael Espindola 272f7304f0 Add support for functions with byval arguments on x86
llvm-svn: 41953
2007-09-14 15:48:13 +00:00
Dale Johannesen 245dceb06d Add APInt interfaces to APFloat (allows directly
access to bits).  Use them in place of float and
double interfaces where appropriate.
First bits of x86 long double constant handling
(untested, probably does not work).

llvm-svn: 41858
2007-09-11 18:32:33 +00:00
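
A sketch of the new bit-level access, using the spelling in today's tree (the 2007 name may have differed):

        #include "llvm/ADT/APFloat.h"
        #include "llvm/ADT/APInt.h"

        // Read the underlying representation of an APFloat as raw bits.
        llvm::APInt rawBits(const llvm::APFloat &F) {
            return F.bitcastToAPInt();
        }
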
Duncan Sands 86e0119822 Fold the adjust_trampoline intrinsic into
init_trampoline.  There is now only one
trampoline intrinsic.

llvm-svn: 41841
2007-09-11 14:10:23 +00:00
Dale Johannesen bed9dc423c Next round of APFloat changes.
Use APFloat in UpgradeParser and AsmParser.
Change all references to ConstantFP to use the
APFloat interface rather than double.  Remove
the ConstantFP double interfaces.
Use APFloat functions for constant folding arithmetic
and comparisons.
(There are still way too many places APFloat is
just a wrapper around host float/double, but we're
getting there.)

llvm-svn: 41747
2007-09-06 18:13:44 +00:00
Anton Korobeynikov 50ab26e835 Reapply r41578 with proper fix
llvm-svn: 41680
2007-09-03 00:36:06 +00:00