Commit Graph

352 Commits

Author SHA1 Message Date
Chad Rosier 8446ede023 [x86 fast-isel] Per discussion with Eric, add all cases to the switch with verbose
comments.

llvm-svn: 160069
2012-07-11 19:58:38 +00:00
Chad Rosier 43218c59c3 [x86 fast-isel] Rather than call llvm_unreachable(), have fast-isel fall back
to Selection DAG isel.  Patch by Andrew Kaylor <andrew.kaylor@intel.com>.

llvm-svn: 160055
2012-07-11 17:23:17 +00:00
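A minimal sketch of the fallback pattern this change adopts, assuming the fast-isel target hook of the era (the shown cases are illustrative):

    // Returning false from the target hook makes fast-isel punt the
    // instruction to SelectionDAG isel instead of asserting.
    bool X86FastISel::TargetSelectInstruction(const Instruction *I) {
      switch (I->getOpcode()) {
      case Instruction::Load:
        return X86SelectLoad(I);
      // ... other handled opcodes ...
      default:
        break;
      }
      return false; // Unhandled: SelectionDAG isel will select it.
    }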
Jakob Stoklund Olesen d14101e0b9 Make X86 call and return instructions non-variadic.
Function argument and return value registers aren't part of the
encoding, so they should be implicit operands.

llvm-svn: 159728
2012-07-04 23:53:27 +00:00
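A hedged sketch of what "implicit operands" means here, using the MachineInstrBuilder API (CallOpc, GV, and RegArgs are illustrative names from the surrounding call-lowering code):

    // Build the call with its fixed operand list...
    MachineInstrBuilder MIB =
        BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DL, TII.get(CallOpc))
            .addGlobalAddress(GV);
    // ...then attach argument registers as implicit uses, since they
    // are not part of the instruction encoding.
    for (unsigned i = 0, e = RegArgs.size(); i != e; ++i)
      MIB.addReg(RegArgs[i], RegState::Implicit);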
Eli Friedman 315a0c79f3 Simplify code for calling a function where CanLowerReturn fails, fixing a small bug in the process.
llvm-svn: 157446
2012-05-25 00:09:29 +00:00
Chad Rosier 06e34d9220 Typo.
llvm-svn: 156633
2012-05-11 19:43:29 +00:00
Derek Schuff b051adf263 Fix fastcc structure return with fast-isel on x86-32
On x86-32, structure return via sret lets the callee pop the hidden
pointer argument off the stack, which the caller then re-pushes.
However if the calling convention is fastcc, then a register is used
instead, and the caller should not adjust the stack. This is
implemented with a check of IsTailCallConvention in
X86TargetLowering::LowerCall, and is now also checked properly in
X86FastISel::DoSelectCall.

(this time, actually commit what was reviewed!)

llvm-svn: 155825
2012-04-30 16:57:15 +00:00
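A hedged sketch of the check described above, with names approximated from the commit message: the caller re-pushes the 4-byte hidden sret pointer only when the callee pops it, which fastcc callees do not.

    // On x86-32, the callee normally pops the hidden sret pointer, so
    // the caller must re-adjust the stack by 4 bytes afterwards -- but
    // not for fastcc, where the pointer is passed in a register.
    unsigned NumBytesCallee = 0;
    if (!Subtarget->is64Bit() && !IsTailCallConvention(CC) &&
        CS.paramHasAttr(1, Attribute::StructRet))
      NumBytesCallee = 4;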
Derek Schuff a99b168145 Revert r155745
llvm-svn: 155746
2012-04-27 23:37:41 +00:00
Derek Schuff bbf8b83e90 Fix fastcc structure return with fast-isel on x86-32
On x86-32, structure return via sret lets the callee pop the hidden
pointer argument off the stack, which the caller then re-pushes.
However if the calling convention is fastcc, then a register is used
instead, and the caller should not adjust the stack. This is
implemented with a check of IsTailCallConvention in
X86TargetLowering::LowerCall, and is now also checked properly in
X86FastISel::DoSelectCall.

llvm-svn: 155745
2012-04-27 23:27:17 +00:00
Craig Topper abadc660e0 Convert some uses of XXXRegisterClass to &XXXRegClass. No functional change since they are equivalent.
llvm-svn: 155186
2012-04-20 06:31:50 +00:00
Craig Topper f6e7e12f75 Remove unnecessary llvm:: qualifications
llvm-svn: 153500
2012-03-27 07:21:54 +00:00
Craig Topper bef78fc2ee Convert more static tables of registers used by calling convention to uint16_t to reduce space.
llvm-svn: 152538
2012-03-11 07:57:25 +00:00
Craig Topper 760b134ffa Make all pointers to TargetRegisterClass const since they are all pointers to static data that should not be modified.
llvm-svn: 151134
2012-02-22 05:59:10 +00:00
Jakob Stoklund Olesen 97e3115dc2 Use the same CALL instructions for Windows as for everything else.
The different calling conventions and call-preserved registers are
represented with regmask operands that are added dynamically.

llvm-svn: 150708
2012-02-16 17:56:02 +00:00
Jakob Stoklund Olesen 8a450cb2fa Enable register mask operands for x86 calls.
Call instructions no longer have a list of 43 call-clobbered registers.
Instead, they get a single register mask operand with a bit vector of
call-preserved registers.

This saves a lot of memory, 42 x 32 bytes = 1344 bytes per call
instruction, and it speeds up building call instructions because those
43 imp-def operands no longer need to be added to use-def lists. (And
removed and shifted and re-added for every explicit call operand).

Passes like LiveVariables, LiveIntervals, RAGreedy, PEI, and
BranchFolding are significantly faster because they can deal with call
clobbers in bulk.

Overall, clang -O2 is between 0% and 8% faster, uniformly distributed
depending on call density in the compiled code.  Debug builds using
clang -O0 are 0% - 3% faster.

I have verified that this patch doesn't change the assembly generated
for the LLVM nightly test suite when building with -disable-copyprop
and -disable-branch-fold.

Branch folding behaves slightly differently in a few cases because call
instructions have different hash values now.

Copy propagation flushes its data structures when it crosses a register
mask operand. This causes it to leave a few dead copies behind, on the
order of 20 instructions across the entire nightly test suite, including
SPEC. Fixing this properly would require the pass to use different data
structures.

llvm-svn: 150638
2012-02-16 00:02:50 +00:00
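A hedged sketch of how a pass consumes the new representation (isRegMask and clobbersPhysReg are the MachineOperand queries; PhysReg stands for whatever register the pass is tracking):

    // A call now carries a single regmask operand encoding the
    // call-preserved set; anything not preserved is clobbered.
    for (unsigned i = 0, e = MI->getNumOperands(); i != e; ++i) {
      const MachineOperand &MO = MI->getOperand(i);
      if (MO.isRegMask() && MO.clobbersPhysReg(PhysReg)) {
        // PhysReg dies here: invalidate any cached information.
      }
    }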
Chad Rosier f0687634c3 Use a temporary variable, rather than a series of redundant calls.
llvm-svn: 150538
2012-02-15 00:36:26 +00:00
Eli Friedman 32c7c25dcb Support MSVC x86-32 sret convention. PR11688. Patch by Joe Groff.
llvm-svn: 148513
2012-01-20 00:05:46 +00:00
Craig Topper b0c0f72ae6 Remove hasXMM/hasXMMInt functions. Move callers to hasSSE1/hasSSE2. This is the final piece to remove the AVX hack that disabled SSE.
llvm-svn: 147843
2012-01-10 06:54:16 +00:00
Craig Topper 210e4f81b3 Change some places that were checking for AVX OR SSE1/2 to use hasXMM/hasXMMInt instead. Also fix one place that checked SSE3 but accidentally excluded AVX, to use hasSSE3orAVX. This is a step towards removing the AVX hack from X86Subtarget.h.
llvm-svn: 147764
2012-01-09 02:28:15 +00:00
Nick Lewycky 50f02cb21b Move global variables in TargetMachine into new TargetOptions class. As an API
change, now you need a TargetOptions object to create a TargetMachine. Clang
patch to follow.

One small functionality change in PTX. PTX had commented out the machine
verifier parts in its copy of printAndVerify. That now calls the version in
LLVMTargetMachine. Users of PTX who need verification disabled should rely on
not passing the command-line flag to enable it.

llvm-svn: 145714
2011-12-02 22:16:29 +00:00
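A hedged sketch of the new API shape at the time of this change (identifier names illustrative):

    // TargetOptions now travels with TargetMachine construction
    // instead of living in global variables.
    TargetOptions Options;
    Options.NoFramePointerElim = true;  // formerly a global flag
    TargetMachine *TM = TheTarget->createTargetMachine(
        TripleName, CPUStr, FeaturesStr, Options,
        Reloc::Default, CodeModel::Default, CodeGenOpt::Default);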
Jakob Stoklund Olesen bde32d36bb Make X86::FsFLD0SS / FsFLD0SD real pseudo-instructions.
Like V_SET0, these instructions are expanded by ExpandPostRA to xorps /
vxorps so they can participate in execution domain swizzling.

This also makes the AVX variants redundant.

llvm-svn: 145440
2011-11-29 22:27:25 +00:00
Lang Hames 7d2f7b5a33 Teach fast isel about vector stores, and make DoSelectCall return false when it fails to emit a store. This fixes <rdar://problem/10215997>.
llvm-svn: 142432
2011-10-18 22:11:33 +00:00
Nick Lewycky 064c1c0e77 Fix indent in comment.
llvm-svn: 141749
2011-10-12 00:14:12 +00:00
Eli Friedman 87c844cdf8 PR10991: make fast-isel correctly check whether accessing a global through an alias involves thread-local storage. (I'm not entirely sure how this is supposed to work, but this patch makes fast-isel consistent with the normal isel path.)
llvm-svn: 140355
2011-09-22 23:41:28 +00:00
Bruno Cardoso Lopes d893fc92af Teach X86FastISel to use AVX versions of instructions when possible
llvm-svn: 139062
2011-09-03 00:46:42 +00:00
Eli Friedman f3dd6da7a8 Don't fast-isel for atomic load/store; some cases require extra handling missing from fast-isel.
llvm-svn: 139044
2011-09-02 22:33:24 +00:00
Nick Lewycky a530a4d925 Bail from FastISel when we encounter a volatile memset intrinsic. Patch by Ivan
Krasin!

llvm-svn: 136663
2011-08-02 00:40:16 +00:00
Bruno Cardoso Lopes 616fe60548 Teach PreprocessISelDAG to be aware of vector types and to not process them.
llvm-svn: 136653
2011-08-01 21:54:05 +00:00
Chris Lattner 229907cd11 land David Blaikie's patch to de-constify Type, with a few tweaks.
llvm-svn: 135375
2011-07-18 04:54:35 +00:00
Jakob Stoklund Olesen d0e2352b65 Fix a problem with fast-isel return values introduced in r134018.
We would put the return value from long double functions in the wrong
register.

This fixes gcc.c-torture/execute/conversion.c

llvm-svn: 134205
2011-06-30 23:42:18 +00:00
Evan Cheng 194c3dc01f Move CallFrameSetupOpcode and CallFrameDestroyOpcode to TargetInstrInfo.
llvm-svn: 134030
2011-06-28 21:14:33 +00:00
Evan Cheng 6cc775f905 - Rename TargetInstrDesc and TargetOperandInfo to MCInstrDesc and MCOperandInfo and
sink them into the MC layer.
- Add MCInstrInfo, which captures the tablegen-generated static data, and change
TargetInstrInfo so it's based on MCInstrInfo.

llvm-svn: 134021
2011-06-28 19:10:37 +00:00
Jakob Stoklund Olesen 7297e7e223 Clean up the handling of the x87 fp stack to make it more robust.
Drop the FpMov instructions, use plain COPY instead.

Drop the FpSET/GET instructions for accessing fixed stack positions.
Instead use normal COPY to/from ST registers around inline assembly, and
provide a single new FpPOP_RETVAL instruction that can access the return
value(s) from a call. This is still necessary since you cannot tell from
the CALL instruction alone if it returns anything on the FP stack. Teach
fast isel to use this.

This provides a much more robust way of handling fixed stack registers -
we can tolerate arbitrary FP stack instructions inserted around calls
and inline assembly. Live range splitting could sometimes break x87 code
by inserting spill code in unfortunate places.

As a bonus we handle floating point inline assembly correctly now.

llvm-svn: 134018
2011-06-28 18:32:28 +00:00
Evan Cheng 3a0c5e52ff Remove TargetOptions.h dependency from X86Subtarget.
llvm-svn: 133726
2011-06-23 17:54:54 +00:00
Eli Friedman 1735b29196 Make sure to pass OpFlags into MachineInstrBuilder::addExternalSymbol; the
memcpy/memset symbol doesn't get marked up correctly in PIC modes otherwise.
Should fix llvm-x86_64-linux-checks buildbot.  Followup to r132864.

llvm-svn: 132869
2011-06-11 01:55:07 +00:00
Eli Friedman cd2124a3f0 Add full x86 fast-isel support for memcpy and memset.
rdar://9431466

llvm-svn: 132864
2011-06-10 23:39:36 +00:00
Eric Christopher 0713a9d8fc Add a parameter to CCState so that it can access the MachineFunction.
No functional change.

Part of PR6965

llvm-svn: 132763
2011-06-08 23:55:35 +00:00
Eli Friedman c70355195c Rewrite fast-isel integer cast handling to handle more cases, and to be simpler and more consistent.
The practical effects here are that x86-64 fast-isel can now handle trunc from i8 to i1, and ARM fast-isel can handle many more constructs involving integers narrower than 32 bits (including loads, stores, and many integer casts).

rdar://9437928.

llvm-svn: 132099
2011-05-25 23:49:02 +00:00
Eli Friedman 60afcc2a6f Add fast-isel support for byval calls on x86.
llvm-svn: 131764
2011-05-20 22:21:04 +00:00
Eli Friedman 22da799428 Add fast-isel support for zeroext and signext ret instructions on x86.
llvm-svn: 131689
2011-05-19 22:16:13 +00:00
Eli Friedman 6fc94dd687 Revert unintentional commit.
llvm-svn: 131597
2011-05-18 23:13:10 +00:00
Eli Friedman 1754a25977 More instcombine simplifications towards better debug locations.
llvm-svn: 131596
2011-05-18 23:11:30 +00:00
Eli Friedman 7b27942fe7 Add x86 fast-isel for calls returning first-class aggregates. rdar://9435872.
This is r131438 with a couple small fixes.

llvm-svn: 131474
2011-05-17 18:29:03 +00:00
Eli Friedman 7335e8a720 Back out r131444 and r131438; they're breaking nightly tests. I'll look into
it more tomorrow.

llvm-svn: 131451
2011-05-17 02:36:59 +00:00
Eli Friedman 83ba150f3a Add x86 fast-isel for calls returning first-class aggregates. rdar://9435872.
llvm-svn: 131438
2011-05-17 00:13:47 +00:00
Eli Friedman a4d4a0162d Make fast-isel work correctly with uadd.with.overflow intrinsics.
llvm-svn: 131420
2011-05-16 21:06:17 +00:00
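For reference, a hedged sketch (modern IRBuilder spelling) of the construct fast-isel must handle here: the intrinsic yields a {result, overflow} pair consumed via extractvalue.

    // uadd.with.overflow returns {i32 sum, i1 overflow}; both
    // extractvalues must be selected, typically as ADD plus SETO.
    Type *I32 = Builder.getInt32Ty();
    Function *UAddO =
        Intrinsic::getDeclaration(M, Intrinsic::uadd_with_overflow, I32);
    Value *Pair = Builder.CreateCall(UAddO, {A, B});
    Value *Sum = Builder.CreateExtractValue(Pair, 0);
    Value *Overflow = Builder.CreateExtractValue(Pair, 1);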
Eli Friedman 8f1e11cde9 Fix a FIXME by moving the fast-isel implementation of the objectsize intrinsic from the x86 code to the generic code.
llvm-svn: 131332
2011-05-14 00:47:51 +00:00
Eli Friedman f080a57b81 Zap useless code; this hasn't done anything useful since fast-isel switched to being bottom-up (a very long time ago).
llvm-svn: 131329
2011-05-14 00:19:32 +00:00
Eli Friedman 7cd5101ad3 fast-isel sret calls, try 2. We actually do need to do something on x86-32. rdar://problem/9303592.
llvm-svn: 130429
2011-04-28 20:19:12 +00:00
Eli Friedman d5a80ca3c8 Revert r130348; causing buildbot issues on x86-32.
llvm-svn: 130412
2011-04-28 18:06:10 +00:00
Eli Friedman 8bd572fc58 fast-isel sret. We actually don't need to do anything special on x86. :) rdar://problem/9303592.
llvm-svn: 130348
2011-04-27 23:58:52 +00:00
Eli Friedman 406c471b69 Make the fast-isel code for literal 0.0 a bit shorter/faster, since 0.0 is common. rdar://problem/9303592.
llvm-svn: 130338
2011-04-27 22:41:55 +00:00
Eli Friedman bcc6914146 Refactor out code to fast-isel a memcpy operation with a small constant
length.  (I'm planning to use this to implement byval.)

llvm-svn: 130274
2011-04-27 01:45:07 +00:00
Eli Friedman 0eea0293d9 Fix an edge case involving branches in fast-isel on x86.
rdar://problem/9303306.

llvm-svn: 130272
2011-04-27 01:34:27 +00:00
Daniel Dunbar cd01ed5bd6 ADT/Triple: Rename isOSX... methods to isMacOSX for consistency with the OS
triple component.

llvm-svn: 129838
2011-04-20 00:14:25 +00:00
Daniel Dunbar 100455a3c8 Target/X86: Eliminate uses of getDarwinVers().
llvm-svn: 129813
2011-04-19 21:04:12 +00:00
Eli Friedman ee92a6b332 Add support for FastISel'ing varargs calls.
llvm-svn: 129765
2011-04-19 17:22:22 +00:00
Chris Lattner 91328b317b Implement support for x86 fastisel of small fixed-sized memcpys, which are generated
en masse for C++ PODs.  On my C++ test file, this cuts the fast isel rejects by 10x
and shrinks the generated .s file by 5%.

llvm-svn: 129755
2011-04-19 05:52:03 +00:00
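A hedged sketch of the expansion (function and helper names approximate): a small constant-length memcpy becomes widest-fitting load/store pairs instead of a library call.

    bool TryEmitSmallMemcpy(X86AddressMode DestAM, X86AddressMode SrcAM,
                            uint64_t Len) {
      if (Len > 16)
        return false;  // Not small: leave it to the generic path.
      while (Len) {
        // Copy the widest chunk that still fits the remaining length.
        MVT VT = Len >= 8 ? MVT::i64
               : Len >= 4 ? MVT::i32
               : Len >= 2 ? MVT::i16 : MVT::i8;
        unsigned Reg;
        if (!X86FastEmitLoad(VT, SrcAM, Reg) ||
            !X86FastEmitStore(VT, Reg, DestAM))
          return false;
        unsigned Size = VT.getSizeInBits() / 8;
        Len -= Size;
        SrcAM.Disp += Size;
        DestAM.Disp += Size;
      }
      return true;
    }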
Chris Lattner 34a08c2344 tidy up
llvm-svn: 129753
2011-04-19 05:15:59 +00:00
Chris Lattner 5f4b783426 Implement support for fast isel of calls with i1 arguments, even though they are illegal,
when they are a truncate from something else.  This eliminates fully half of all the
fastisel rejections on a test C++ file I'm working with, which should make a substantial
improvement for -O0 compiles of C++ code.

This fixed rdar://9297003 - fast isel bails out on all functions taking bools

llvm-svn: 129752
2011-04-19 05:09:50 +00:00
Chris Lattner d7f7c93914 Handle i1/i8/i16 constant integer arguments to calls by prepromoting them.
Before we would bail out on i1 arguments altogether, now we just bail on
non-constant ones.  Also, we used to emit extraneous code.  e.g. test12 was:

	movb	$0, %al
	movzbl	%al, %edi
	callq	_test12

and test13 was:
	movb	$0, %al
	xorl	%edi, %edi
	movb	%al, 7(%rsp)
	callq	_test13f

Now we get:

	movl	$0, %edi
	callq	_test12
and:
	movl	$0, %edi
	callq	_test13f

llvm-svn: 129751
2011-04-19 04:42:38 +00:00
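A hedged sketch of the prepromotion (surrounding code elided; Flags carries the argument's extension attribute): narrow constant arguments are widened to i32 constants up front, so a single movl materializes each one.

    if (ConstantInt *CI = dyn_cast<ConstantInt>(ArgVal))
      if (CI->getBitWidth() < 32) {
        Type *I32 = Type::getInt32Ty(CI->getContext());
        // Widen per the sext/zext argument attribute.
        ArgVal = Flags.isSExt() ? ConstantExpr::getSExt(CI, I32)
                                : ConstantExpr::getZExt(CI, I32);
      }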
Chris Lattner c59290a34c be layout aware, to produce:
	testb	$1, %al
	je	LBB0_2
## BB#1:                                ## %if.then
	movb	$0, %al

instead of:

	testb	$1, %al
	jne	LBB0_1
	jmp	LBB0_2
LBB0_1:                                 ## %if.then
	movb	$0, %al

how 'bout that.

llvm-svn: 129749
2011-04-19 04:26:32 +00:00
Chris Lattner 2c8a4c3b1b fix rdar://9297006 - fast isel bails out on trunc to i1 -> bools cry,
a common cause of fast isel rejects on c++ code.

llvm-svn: 129748
2011-04-19 04:22:17 +00:00
Chris Lattner b53ccb8e36 1. merge fast-isel-shift-imm.ll into fast-isel-x86-64.ll
2. implement rdar://9289501 - fast isel should fold trivial multiplies to shifts
3. teach tblgen to handle shift immediates whose size differs from the 
   shifted operands, eliminating some code from the X86 fast isel backend.
4. Have FastISel::SelectBinaryOp use (the poorly named) FastEmit_ri_ function
   instead of FastEmit_ri to simplify code.

llvm-svn: 129666
2011-04-17 20:23:29 +00:00
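A hedged sketch of item 2 above (emission details elided; isPowerOf2_64 and Log2_64 are the MathExtras helpers): a multiply by a constant power of two is strength-reduced to a left shift.

    // x * (1 << n)  ==>  x << n
    if (const ConstantInt *CI = dyn_cast<ConstantInt>(I->getOperand(1))) {
      uint64_t Imm = CI->getZExtValue();
      if (isPowerOf2_64(Imm)) {
        unsigned ShiftAmt = Log2_64(Imm);
        // ... emit SHL by ShiftAmt instead of IMUL by Imm ...
      }
    }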
Chris Lattner eb729d48ff fix an x86 fast isel issue where we'd completely give up on folding an address
when we have a global variable base and an index.  Instead, just give up on
folding the global variable.

Before we'd generate:

_test:                                  ## @test
## BB#0:
	movq	_rtx_length@GOTPCREL(%rip), %rax
	leaq	(%rax), %rax
	addq	%rdi, %rax
	movzbl	(%rax), %eax
	ret

now we generate:

_test:                                  ## @test
## BB#0:
	movq	_rtx_length@GOTPCREL(%rip), %rax
	movzbl	(%rax,%rdi), %eax
	ret

The difference is even more significant when there is a scale
involved.

This fixes rdar://9289558 - total fail with addr mode formation at -O0/x86-64

llvm-svn: 129664
2011-04-17 17:47:38 +00:00
Chris Lattner 4832660b4d fix an oversight which caused us to compile the testcase (and other
less trivial things) into a dummy lea.  Before we generated:

_test:                                  ## @test
	movq	_G@GOTPCREL(%rip), %rax
	leaq	(%rax), %rax
	ret

now we produce:

_test:                                  ## @test
	movq	_G@GOTPCREL(%rip), %rax
	ret

This is part of rdar://9289558

llvm-svn: 129662
2011-04-17 17:12:08 +00:00
Chris Lattner 4b026b962a tidy up and reduce indentation.
llvm-svn: 129661
2011-04-17 17:05:12 +00:00
Jay Foad 7c14a558fe Don't include Operator.h from InstrTypes.h.
llvm-svn: 129271
2011-04-11 09:35:34 +00:00
Dan Gohman c1783b31a4 Fix fast-isel address mode folding to avoid folding instructions
outside of the current basic block. This fixes PR9500, rdar://9156159.

llvm-svn: 128041
2011-03-22 00:04:35 +00:00
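A hedged sketch of the rule the fix enforces (FuncInfo is the selector's FunctionLoweringInfo): fold an address-computing instruction only if it is defined in the block currently being selected, since values from other blocks may not have virtual registers assigned yet.

    // Refuse to fold instructions from other basic blocks; their
    // operands' vregs may be uninitialized on this path (PR9500).
    if (I->getParent() != FuncInfo.MBB->getBasicBlock())
      return false;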
NAKAMURA Takumi 860abd0f28 Target/X86/X86FastISel: [PR6275] Fix Win32's dllimport function with fastisel.
"dllimport" function must not be GlobalVariable, but Function. It is enough to check with GlobalValue.
test/CodeGen/X86/dll-linkage.ll is updated to check llc -O0.

llvm-svn: 126110
2011-02-21 04:50:06 +00:00
Chris Lattner 2d186574a6 reapply my fix for PR8961 with a tweak to properly handle
multi-instruction sequences like calls.  Many thanks to Jakob for
finding a testcase.

llvm-svn: 123559
2011-01-16 02:27:38 +00:00
Chris Lattner e93e4f118c revert my fastisel patch again which apparently still gives the
llvm-gcc-i386-linux-selfhost buildbot heartburn...

llvm-svn: 123431
2011-01-14 06:14:33 +00:00
Chris Lattner 5ca1391003 reapply r123414 now that the bots have calmed down and the fix is already in.
llvm-svn: 123427
2011-01-14 04:24:28 +00:00
Chris Lattner 21a64979f1 r123414 broke llvm-gcc bootstrap apparently, revert
llvm-svn: 123422
2011-01-14 02:07:32 +00:00
Chris Lattner 0c34cb429e fix PR8961 - a fast isel miscompilation where we'd insert a new instruction
after sexts generated for addressing that got folded.  Previously we compiled
test5 into:

_test5:                                 ## @test5
## BB#0:
        movq    -8(%rsp), %rax          ## 8-byte Reload
        movq    (%rdi,%rax), %rdi
        addq    %rdx, %rdi
        movslq  %esi, %rax
        movq    %rax, -8(%rsp)          ## 8-byte Spill
        movq    %rdi, %rax
        ret

which is insane and wrong.  Now we produce:

_test5:                                 ## @test5
## BB#0:
	movslq	%esi, %rax
	movq	(%rdi,%rax), %rax
	addq	%rdx, %rax
	ret

llvm-svn: 123414
2011-01-14 00:01:01 +00:00
Evan Cheng 6eb516dbea Do not model all INLINEASM instructions as having unmodeled side effects.
Instead, encode the LLVM IR-level property "HasSideEffects" in an operand (shared
with IsAlignStack). Added MachineInstr::hasUnmodeledSideEffects() to check
the operand when the instruction is an INLINEASM.

This allows memory instructions to be moved around INLINEASM instructions.

llvm-svn: 123044
2011-01-07 23:50:32 +00:00
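A hedged sketch of how a pass consults the new flag before moving memory operations (hasUnmodeledSideEffects is the MachineInstr query named in the message):

    // Only INLINEASMs that actually declare side effects now block
    // code motion of loads and stores around them.
    if (MI->getOpcode() == TargetOpcode::INLINEASM &&
        MI->hasUnmodeledSideEffects())
      return false;  // Unsafe to move memory instructions across MI.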
Benjamin Kramer 3aa955e906 Remove dead code and silence warnings.
llvm-svn: 122957
2011-01-06 13:01:02 +00:00
Chris Lattner 2d7df02670 silence more self assignment warnings.
llvm-svn: 122920
2011-01-05 22:26:52 +00:00
Wesley Peck 527da1b6e2 Renaming ISD::BIT_CONVERT to ISD::BITCAST to better reflect the LLVM IR concept.
llvm-svn: 119990
2010-11-23 03:31:01 +00:00
Dan Gohman aeb5e66772 Reapply r118917. With pseudo-instruction expansion moved to
a different pass, the complicated interaction between cmov expansion
and fast isel is no longer a concern.

llvm-svn: 119400
2010-11-16 22:43:23 +00:00
Dan Gohman 1279fc47f9 Revert r118917, which is implicated in the llvm-gcc-i386-linux-selfhost failure.
llvm-svn: 118954
2010-11-13 00:31:40 +00:00
Dan Gohman 0284c5d0c7 When the definition of an address value is in a different block
from the user of the address, fall back to just using the
address in a register instead of bailing out of fast-isel
altogether.

llvm-svn: 118917
2010-11-12 19:14:00 +00:00
Duncan Sands 71049f78ed In the calling convention logic, ValVT is always a legal type,
and as such can be represented by an MVT - the more complicated
EVT is not needed.  Use MVT for ValVT everywhere.

llvm-svn: 118245
2010-11-04 10:49:57 +00:00
Duncan Sands f5dda01f33 Inside the calling convention logic LocVT is always a simple
value type, so there is no point in passing it around using
an EVT.  Use the simpler MVT everywhere.  Rather than trying
to propagate this information maximally in all the code that
uses the calling convention stuff, I chose to do a mainly
low impact change instead.

llvm-svn: 118167
2010-11-03 11:35:31 +00:00
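A hedged sketch of the distinction being exploited (API spelling of the era): EVT can describe arbitrary, possibly illegal IR types, while MVT is a bare enum of machine value types, which is all the calling-convention code needs.

    EVT VT = TLI.getValueType(ArgTy);  // may be an extended type
    if (VT.isSimple()) {
      MVT LocVT = VT.getSimpleVT();    // plain enum, no IR Type behind it
      // ... calling-convention analysis traffics only in LocVT ...
    }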
Duncan Sands fb0a48ef96 Factorize the duplicated logic for choosing the right argument
calling convention out of the fast and normal ISel files, and
into the calling convention TD file.

llvm-svn: 117856
2010-10-31 13:21:44 +00:00
Duncan Sands fa7e6f2417 Remove CCAssignFnForRet from X86 FastISel in favour of RetCC_X86,
which has the same logic specified in the CallingConv TD file.
This brings FastISel in line with the standard X86 ISel.

llvm-svn: 117855
2010-10-31 13:02:38 +00:00
Eric Christopher 0574cc556a Noticed by inspection when looking for other cmov bits.
llvm-svn: 115100
2010-09-29 23:00:29 +00:00
Dale Johannesen 786874de82 MMX parameters aren't handled here yet.
llvm-svn: 114844
2010-09-27 17:29:47 +00:00
Chris Lattner eeba0c73e5 implement rdar://6653118 - fastisel should fold loads where possible.
Since mem2reg isn't run at -O0, we get a ton of reloads from the stack.
For example, this code:

int foo(int x, int y, int z) {
  return x+y+z;
}

used to compile into:

_foo:                                   ## @foo
	subq	$12, %rsp
	movl	%edi, 8(%rsp)
	movl	%esi, 4(%rsp)
	movl	%edx, (%rsp)
	movl	8(%rsp), %edx
	movl	4(%rsp), %esi
	addl	%edx, %esi
	movl	(%rsp), %edx
	addl	%esi, %edx
	movl	%edx, %eax
	addq	$12, %rsp
	ret

Now we produce:

_foo:                                   ## @foo
	subq	$12, %rsp
	movl	%edi, 8(%rsp)
	movl	%esi, 4(%rsp)
	movl	%edx, (%rsp)
	movl	8(%rsp), %edx
	addl	4(%rsp), %edx    ## Folded load
	addl	(%rsp), %edx     ## Folded load
	movl	%edx, %eax
	addq	$12, %rsp
	ret

Fewer instructions and less register use = faster compiles.

llvm-svn: 113102
2010-09-05 02:18:34 +00:00
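A hedged sketch of the folding condition (FoldLoadIntoUser is a hypothetical helper): a load with a single use in the same block can become a memory operand of its user, as in the "Folded load" lines above.

    if (LI->hasOneUse()) {
      Instruction *UserInst = cast<Instruction>(LI->user_back());
      if (UserInst->getParent() == LI->getParent())
        // e.g. emit addl 4(%rsp), %edx instead of movl + addl.
        FoldLoadIntoUser(LI, UserInst);
    }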
Dan Gohman 42ef669d81 Fix x86 fast-isel's cmp+branch folding to avoid folding when the
comparison is in a different basic block from the branch. In such
cases, the comparison's operands may not have initialized virtual
registers available.

llvm-svn: 111709
2010-08-21 02:32:36 +00:00
Nate Begeman 68a069a188 Make fast isel win64-aware w.r.t. call-clobbered regs
llvm-svn: 109069
2010-07-22 00:09:39 +00:00
Jakob Stoklund Olesen 2c130b8ead Use MI.isCopy.
llvm-svn: 108565
2010-07-16 22:35:34 +00:00
Jakob Stoklund Olesen 8b1bb8cfbd Last COPY conversion.
llvm-svn: 108387
2010-07-14 23:58:21 +00:00
Dan Gohman 1f471435f8 Don't propagate debug locations to instructions for materializing
constants, since they may not be emitted near the other instructions
which get the same line, and this confuses debug info.

llvm-svn: 108302
2010-07-14 01:07:44 +00:00
Dan Gohman 68d7424a65 Don't fast-isel an x87 comparison opcode, as fast-isel doesn't
support branching on x87 comparisons yet. This fixes PR7624.

llvm-svn: 108149
2010-07-12 15:46:30 +00:00
Jakob Stoklund Olesen 4806848799 Avoid SSE instructions in FastIsel when SSE is not available.
llvm-svn: 108091
2010-07-11 16:22:13 +00:00
Jakob Stoklund Olesen 8969657f0c Use COPY in X86FastISel::X86SelectRet.
Don't try a cross-class copy. That is very unlikely anyway since return value
registers are usually register class friendly (%EAX, %XMM0, etc.).

llvm-svn: 108074
2010-07-11 05:17:02 +00:00
Jakob Stoklund Olesen 3bb1267431 Use COPY in FastISel everywhere it is safe and trivial.
The remaining copyRegToReg calls actually check the return value (shock!), so we
cannot trivially replace them with COPY instructions.

llvm-svn: 108069
2010-07-11 03:31:00 +00:00
Dan Gohman d7b5ce3312 Reapply bottom-up fast-isel, with several fixes for x86-32:
- Check getBytesToPopOnReturn().
 - Eschew ST0 and ST1 for return values.
 - Fix the PIC base register initialization so that it doesn't ever
   fail to end up at the top of the entry block.

llvm-svn: 108039
2010-07-10 09:00:22 +00:00
Bob Wilson 6586e9b203 --- Reverse-merging r107947 into '.':
U    utils/TableGen/FastISelEmitter.cpp
--- Reverse-merging r107943 into '.':
U    test/CodeGen/X86/fast-isel.ll
U    test/CodeGen/X86/fast-isel-loads.ll
U    include/llvm/Target/TargetLowering.h
U    include/llvm/Support/PassNameParser.h
U    include/llvm/CodeGen/FunctionLoweringInfo.h
U    include/llvm/CodeGen/CallingConvLower.h
U    include/llvm/CodeGen/FastISel.h
U    include/llvm/CodeGen/SelectionDAGISel.h
U    lib/CodeGen/LLVMTargetMachine.cpp
U    lib/CodeGen/CallingConvLower.cpp
U    lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
U    lib/CodeGen/SelectionDAG/FunctionLoweringInfo.cpp
U    lib/CodeGen/SelectionDAG/FastISel.cpp
U    lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp
U    lib/CodeGen/SelectionDAG/ScheduleDAGSDNodes.cpp
U    lib/CodeGen/SelectionDAG/InstrEmitter.cpp
U    lib/CodeGen/SelectionDAG/TargetLowering.cpp
U    lib/Target/XCore/XCoreISelLowering.cpp
U    lib/Target/XCore/XCoreISelLowering.h
U    lib/Target/X86/X86ISelLowering.cpp
U    lib/Target/X86/X86FastISel.cpp
U    lib/Target/X86/X86ISelLowering.h

llvm-svn: 107987
2010-07-09 16:37:18 +00:00
Dan Gohman 0b5aa1cdd3 Re-apply bottom-up fast-isel, with fixes. Be very careful to avoid emitting
a DBG_VALUE after a terminator, or emitting any instructions before an EH_LABEL.

llvm-svn: 107943
2010-07-09 00:39:23 +00:00