Commit Graph

543 Commits

Author SHA1 Message Date
Rafael Espindola 9277379fc0 remove unused arguments.
llvm-svn: 68109
2009-03-31 16:16:57 +00:00
Evan Cheng 885bc6de52 X86 address mode isel tweak. If the base of the address is also used by a CopyToReg (i.e. it's likely live-out), do not fold the sub-expressions into the addressing mode, to avoid computing the address twice. The CopyToReg use will be isel'ed to an LEA; reuse it for the address instead.
This is not yet enabled.

llvm-svn: 68082
2009-03-31 01:13:53 +00:00
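A rough standalone sketch of the heuristic described in the commit above, using hypothetical types rather than the actual LLVM classes: if the base value also feeds a CopyToReg (so it is likely live-out and will become an LEA anyway), decline to fold its sub-expression into the addressing mode.

    #include <vector>

    struct Node {                      // hypothetical stand-in for an SDNode
        bool IsCopyToReg = false;
        std::vector<Node*> Users;
    };

    // Decide whether Base's computation should be folded into the address.
    // If Base also feeds a CopyToReg, selecting it separately (as an LEA) and
    // reusing that register as the base avoids computing the address twice.
    bool shouldFoldIntoAddress(const Node &Base) {
        for (const Node *U : Base.Users)
            if (U->IsCopyToReg)
                return false;          // leave it to the LEA; reuse the register
        return true;
    }
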
Evan Cheng a84a318873 When optimizing a mul by immediate into two, the resulting muls should get an x86-specific node to keep the DAG combiner from hacking on them further.
llvm-svn: 68066
2009-03-30 21:36:47 +00:00
Rafael Espindola 1f11c3c36f Use array_lengthof
llvm-svn: 67950
2009-03-28 19:02:18 +00:00
Rafael Espindola 227815437a Use less hard coded constants to make the code less brittle.
llvm-svn: 67846
2009-03-27 15:45:05 +00:00
Dan Gohman 2293eb6037 Don't forego folding of loads into 64-bit adds when the other
operand is a signed 32-bit immediate. Unlike with the 8-bit
signed immediate case, it isn't actually smaller to fold a
32-bit signed immediate instead of a load. In fact, it's
larger in the case of 32-bit unsigned immediates, because
they can be materialized with movl instead of movq.

llvm-svn: 67001
2009-03-14 02:07:16 +00:00
Dan Gohman a1d92423cf Enhance address-mode folding of ISD::ADD to handle cases where the
operands can't both be fully folded at the same time. For example,
in the included testcase, a global variable is being added with
an add of two values. The global variable wants RIP-relative
addressing, so it can't share the address with another base
register, but it's still possible to fold the initial add.

llvm-svn: 66865
2009-03-13 02:25:09 +00:00
Dale Johannesen 9bba902c83 Remove non-DebugLoc versions of BuildMI from X86.
There were some that might even matter in X86FastISel.

llvm-svn: 64437
2009-02-13 02:33:27 +00:00
Chris Lattner aed3a4215b fix the X86 backend to just drop llvm.declare nodes for VLAs instead of
leaving them in the DAG and then getting selection errors.  This is a 
fix for PR3538.

llvm-svn: 64382
2009-02-12 17:33:11 +00:00
Dale Johannesen 9c310711bb Use getDebugLoc forwarder instead of getNode()->getDebugLoc.
No functional change.

llvm-svn: 64026
2009-02-07 19:59:05 +00:00
Dan Gohman 4e3e3deed3 Refactor some repeated logic into a separate function.
llvm-svn: 63989
2009-02-07 00:43:41 +00:00
Dale Johannesen 9f3f72f144 Get rid of one more non-DebugLoc getNode and
its corresponding getTargetNode.  Lots of
caller changes.

llvm-svn: 63904
2009-02-06 01:31:28 +00:00
Dale Johannesen bbf13f54e0 Patch up omissions in DebugLoc propagation.
llvm-svn: 63693
2009-02-04 00:33:20 +00:00
Dale Johannesen 14f2d9dcbd DebugLoc propagation
llvm-svn: 63664
2009-02-03 21:48:12 +00:00
Dan Gohman f77f0ce21a Simplify findNonImmUse; return the result using the return value
instead of via a by-reference argument. No functionality change.

llvm-svn: 63118
2009-01-27 19:04:30 +00:00
Dan Gohman 7740523a89 Eliminate unnecessary operands-list traversals.
llvm-svn: 63088
2009-01-27 02:37:43 +00:00
Evan Cheng 6c7e85142b Enhance logic in X86DAGToDAGISel::PreprocessForRMW, which moves a load inside callseq_start to allow it to be folded into a call. It was not considering the cases where a token factor is between the load and the callseq_start.
llvm-svn: 63022
2009-01-26 18:43:34 +00:00
Dan Gohman b43c8996f2 Fix a recent regression. ClrOpcode is not set for i8; for i8, if
we want to clear %ah to zero before a division, just use a
zero-extending mov to %al. This fixes PR3366.

llvm-svn: 62691
2009-01-21 14:50:16 +00:00
Evan Cheng 44cc554311 DIVREM isel deficiency: If sign bit is known zero, zero out DX/EDX/RDX instead of sign extending the low part (in AX/EAX/RAX) into it.
llvm-svn: 62519
2009-01-19 19:06:11 +00:00
Evan Cheng bf38a5e540 Fix MatchAddress bug that's preventing negative displacement from being folded in 64-bit mode.
llvm-svn: 62413
2009-01-17 07:09:27 +00:00
Dan Gohman 619ef48a52 Move a few containers out of ScheduleDAGInstrs::BuildSchedGraph
and into the ScheduleDAGInstrs class, so that they don't get
destructed and re-constructed for each block. This fixes a
compile-time hot spot in the post-pass scheduler.

To help facilitate this, tidy and do some minor reorganization
in the scheduler constructor functions.

llvm-svn: 62275
2009-01-15 19:20:50 +00:00
Evan Cheng 5a272e79e5 80 col violation.
llvm-svn: 62024
2009-01-10 03:33:22 +00:00
Evan Cheng 01fa50ca4f Some code clean up.
llvm-svn: 60850
2008-12-10 21:49:05 +00:00
Evan Cheng 83bdb38965 On x86, favor folding a short immediate into some arithmetic operations (e.g. add, and, xor, etc.) because materializing an immediate in a register is expensive in terms of code size.
e.g.
movl 4(%esp), %eax
addl $4, %eax

is 2 bytes shorter than

movl $4, %eax
addl 4(%esp), %eax

llvm-svn: 60139
2008-11-27 00:49:46 +00:00
Dan Gohman 88ba5f0b96 Move the code that inserts X87 FP_REG_KILL instructions from a
special-purpose hook to a new pass. Also, add check to see if any
x87 virtual registers are used, to avoid doing any work in the
common case that no x87 code is needed.

llvm-svn: 59190
2008-11-12 22:55:05 +00:00
Dan Gohman 059c4fa8d8 The 32-bit displacement field in an x86 address is signed. Arrange for it
to be sign-extended when it is promoted to 64 bits for intermediate
offset calculations. The offset calculations are done as uint64_t so that
overflow conditions are well defined.

This fixes a problem which is currently hidden by the x86 AsmPrinter but
which was exposed by r58917 (which is temporarily reverted).  See PR3027
for details.

llvm-svn: 59044
2008-11-11 15:52:29 +00:00
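A minimal sketch of the promotion described above, using plain integer types (not the actual X86 isel code): the 32-bit displacement is sign-extended when widened to 64 bits, and the intermediate offset arithmetic is done in uint64_t so that overflow wraps instead of being undefined.

    #include <cstdint>

    // Sign-extend a 32-bit displacement to 64 bits, then accumulate further
    // offsets as uint64_t so any overflow is well defined (it simply wraps).
    uint64_t addDisplacement(uint64_t AccumulatedOffset, int32_t Disp32) {
        uint64_t Disp = static_cast<uint64_t>(static_cast<int64_t>(Disp32));
        return AccumulatedOffset + Disp;
    }
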
Dan Gohman f14b77ebf1 Eliminate the ISel priority queue, which used the topological order for a
priority function. Instead, just iterate over the AllNodes list, which is
already in topological order. This eliminates a fair amount of bookkeeping,
and speeds up the isel phase by about 15% on many testcases.

The impact on most targets is that AddToISelQueue calls can be simply removed.

In the x86 target, there are two additional notable changes.

The rule-bending AND+SHIFT optimization in MatchAddress that creates new
pre-isel nodes during isel is now a little more verbose, but more robust.
Instead of either creating an invalid DAG or creating an invalid topological
sort, as it has historically done, it can now just insert the new nodes into
the node list at a position where they will be consistent with the topological
ordering.

Also, the address-matching code has logic that checked to see if a node was
"already selected". However, when a node is selected, it has all its uses
taken away via ReplaceAllUsesWith or equivalent, so it won't receive any
further visits from MatchAddress. This code is now removed.

llvm-svn: 58748
2008-11-05 04:14:16 +00:00
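A simplified illustration of the approach in the commit above, with made-up types standing in for the SelectionDAG classes: instead of maintaining a priority queue keyed on topological order, walk the node list (which is already topologically ordered), and splice any newly created pre-isel nodes into the list at a position that keeps that ordering consistent.

    #include <list>

    struct NodeSketch { int Opcode = 0; bool Selected = false; };
    using NodeList = std::list<NodeSketch>;

    // Walk the already-topologically-ordered list; no queue bookkeeping needed.
    void selectAll(NodeList &AllNodes) {
        for (NodeSketch &N : AllNodes)
            if (!N.Selected)
                N.Selected = true;              // stand-in for the real Select(N)
    }

    // A node created during selection is inserted just before its user, so the
    // list stays consistent with the topological ordering.
    NodeList::iterator insertBeforeUser(NodeList &AllNodes,
                                        NodeList::iterator User,
                                        const NodeSketch &NewNode) {
        return AllNodes.insert(User, NewNode);
    }
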
Dan Gohman b9110e7fbb The ANDMask node folds to a constant, and isn't the node that needs to
have its node id set. The new AND and shift nodes are the nodes that need
the IDs. This fixes PR2982.

llvm-svn: 58655
2008-11-03 23:43:55 +00:00
David Greene ce2a938186 Have TableGen emit setSubgraphColor calls under control of a -gen-debug
flag.  Then in a debugger developers can set breakpoints at these calls
to see what is about to be selected and what the resulting subgraph
looks like.  This really helps when debugging instruction selection.

llvm-svn: 58278
2008-10-27 21:56:29 +00:00
Dan Gohman 2fe6bee5b6 Teach DAGCombine to fold constant offsets into GlobalAddress nodes,
and add a TargetLowering hook for it to use to determine when this
is legal (i.e. not in PIC mode, etc.)

This allows instruction selection to emit folded constant offsets
in more cases, such as the included testcase, eliminating the need
for explicit arithmetic instructions.

This eliminates the need for the C++ code in X86ISelDAGToDAG.cpp
that attempted to achieve the same effect, but wasn't as effective.

Also, fix handling of offsets in GlobalAddressSDNodes in several
places, including changing GlobalAddressSDNode's offset from
int to int64_t.

The Mips, Alpha, Sparc, and CellSPU targets appear to be
unaware of GlobalAddress offsets currently, so set the hook to
false on those targets.

llvm-svn: 57748
2008-10-18 02:06:02 +00:00
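A hedged sketch of the fold described above, with hypothetical types (not the real DAGCombine code): the constant is folded into the global's offset only when the target reports that offset folding is legal, e.g. when the symbol is addressed directly rather than through PIC machinery.

    #include <cstdint>
    #include <string>

    struct GlobalAddr {                 // hypothetical: symbol plus 64-bit offset
        std::string Symbol;
        int64_t Offset = 0;
    };

    // Stand-in for the target hook: here the fold is legal only outside PIC mode.
    bool isOffsetFoldingLegal(bool IsPIC) { return !IsPIC; }

    // (add (globaladdr G, o), c) -> (globaladdr G, o + c), when legal.
    bool foldConstantOffset(GlobalAddr &GA, int64_t C, bool IsPIC) {
        if (!isOffsetFoldingLegal(IsPIC))
            return false;
        GA.Offset += C;
        return true;
    }
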
Dan Gohman e33afda4fa Trim #includes.
llvm-svn: 57649
2008-10-16 20:18:31 +00:00
Evan Cheng c36231b95e Fix indentation.
llvm-svn: 57508
2008-10-14 17:15:39 +00:00
Dan Gohman 56b6885104 When doing the very-late shift-and address-mode optimization,
create a new DAG node to represent the new shift to keep the
DAG consistent, even though it'll almost always be folded into
the address.

If a user of the resulting address has multiple uses, the
nodes may get revisited by a later MatchAddress call, in which
case DAG inconsistencies do matter.

This fixes PR2849.

llvm-svn: 57465
2008-10-13 20:52:04 +00:00
Devang Patel c0f3b52e65 It is possible that not all functions in one module are being
optimized for size. Set OptForSize for each function separately.

llvm-svn: 57182
2008-10-06 18:03:39 +00:00
Dale Johannesen 8c36a1c09c Make atomic Swap work, 64-bit on x86-32.
Make it all work in non-pic mode.

llvm-svn: 57034
2008-10-03 22:25:52 +00:00
Dale Johannesen 5d60c1ebb1 Pass MemOperand through for 64-bit atomics on 32-bit,
incidentally making the case where the memop is a
pointer deref work.  Fix cmp-and-swap regression.

llvm-svn: 57027
2008-10-03 19:41:08 +00:00
Dan Gohman 2c836cf187 Avoid creating two TargetLowering objects for each target.
Instead, just create one, and make sure everything that needs
it can access it. Previously most of the SelectionDAGISel
subclasses all had their own TargetLowering object, which was
redundant with the TargetLowering object in the TargetMachine
subclasses, except on Sparc, where SparcTargetMachine
didn't have a TargetLowering object. Change Sparc to work
more like the other targets here.

llvm-svn: 57016
2008-10-03 16:55:19 +00:00
Dan Gohman eae96ce3ec Remove an unused field.
llvm-svn: 57014
2008-10-03 16:17:33 +00:00
Dan Gohman 0d1e9a8e04 Switch the MachineOperand accessors back to the short names like
isReg, etc., from isRegister, etc.

llvm-svn: 57006
2008-10-03 15:45:36 +00:00
Dale Johannesen 867d549fce Handle some 64-bit atomics on x86-32, some of the time.
llvm-svn: 56963
2008-10-02 18:53:47 +00:00
Devang Patel 1b76f2c40b Remove OptimizeForSize global. Use function attribute optsize.
llvm-svn: 56937
2008-10-01 23:18:38 +00:00
Dan Gohman 86aa16a69a Optimize SelectionDAG's AssignTopologicalOrder even further.
Completely eliminate the TopOrder std::vector. Instead, sort
the AllNodes list in place. This also eliminates the need to
call AllNodes.size(), a linear-time operation, before
performing the sort.

Also, eliminate the Sources temporary std::vector, since it
essentially duplicates the sorted result as it is being
built.

This also changes the direction of the topological sort
from bottom-up to top-down. The AllNodes list starts out in
roughly top-down order, so this reduces the amount of
reordering needed. Top-down is also more convenient for
Legalize, and ISel needed only minor adjustments.

llvm-svn: 56867
2008-09-30 18:30:35 +00:00
Dan Gohman 6ebe734ca6 Move the GlobalBaseReg field out of X86ISelDAGToDAG.cpp
and X86FastISel.cpp into X86MachineFunction.h, so that it
can be shared, instead of having each selector keep track
of its own.

llvm-svn: 56825
2008-09-30 00:58:23 +00:00
Daniel Dunbar 1d5e766016 Unbreak build.
llvm-svn: 56727
2008-09-27 00:22:09 +00:00
Evan Cheng 7d6fa97567 Implement "punpckldq %xmm0, $xmm0" as "pshufd $0x50, %xmm0, %xmm" unless optimizing for code size.
llvm-svn: 56711
2008-09-26 23:41:32 +00:00
Dan Gohman 6e0548336a Rename ConstantSDNode's getSignExtended to getSExtValue, for
consistency with ConstantInt, and re-implement it in terms
of ConstantInt's getSExtValue.

llvm-svn: 56700
2008-09-26 21:54:37 +00:00
Dan Gohman 007a6bb9b9 Factor out the code for determining when symbolic addresses
require RIP-relative addressing and use it to fix a bug
in X86FastISel in x86-64 PIC mode, where it was trying to
use base/index registers with RIP-relative addresses. This
fixes a bunch of x86-64 testsuite failures.

llvm-svn: 56676
2008-09-26 19:15:30 +00:00
Evan Cheng e0add20c1b Properly handle 'm' inline asm constraints. If a GV is being selected for the addressing mode, it requires the same logic for PIC relative addressing, etc.
llvm-svn: 56526
2008-09-24 00:05:32 +00:00
Dan Gohman e64c9944f6 Delete an unused function.
llvm-svn: 56495
2008-09-23 18:26:47 +00:00
Dan Gohman 2430073657 Move the code for initializing the global base reg out of
X86ISelDAGToDAG.cpp and into X86InstrInfo.cpp. This will allow
it to be reused by FastISel.

llvm-svn: 56494
2008-09-23 18:22:58 +00:00
Dan Gohman 173aa8602d Simplify and generalize X86DAGToDAGISel::CanBeFoldedBy, and draw
up some new ascii art to illustrate what it does. This change
currently has no effect on generated code.

llvm-svn: 56270
2008-09-17 01:39:10 +00:00
Bill Wendling 24c79f28b1 Reverting r56249. On further investigation, this functionality isn't needed.
Apologies for the thrashing.

llvm-svn: 56251
2008-09-16 21:48:12 +00:00
Bill Wendling 8bc392fb1d - Change "ExternalSymbolSDNode" to "SymbolSDNode".
- Add linkage to SymbolSDNode (default to external).
- Change ISD::ExternalSymbol to ISD::Symbol.
- Change ISD::TargetExternalSymbol to ISD::TargetSymbol

These changes pave the way to allowing SymbolSDNodes with non-external linkage.

llvm-svn: 56249
2008-09-16 21:12:30 +00:00
Dan Gohman effb894453 Rename ConstantSDNode::getValue to getZExtValue, for consistency
with ConstantInt. This led to fixing a bug in TargetLowering.cpp
using getValue instead of getAPIntValue.

llvm-svn: 56159
2008-09-12 16:56:44 +00:00
Gabor Greif 81d6a38434 fix a bunch of 80-col violations
llvm-svn: 55588
2008-08-31 15:37:04 +00:00
Gabor Greif f304a7aa4d erect abstraction boundaries for accessing SDValue members, rename Val -> Node to reflect semantics
llvm-svn: 55504
2008-08-28 21:40:38 +00:00
Gabor Greif abfdf928d8 disallow direct access to SDValue::ResNo, provide a getter instead
llvm-svn: 55394
2008-08-26 22:36:50 +00:00
Evan Cheng f00f1e50b5 Try an approach to moving the call address load inside of callseq_start. Now it's done during the preprocessing of x86 isel. callseq_start's chain is changed to the load's chain node, while the load's chain is the last of the callseq_start or the loads or copytoreg nodes inserted to move arguments to the right spot.
llvm-svn: 55338
2008-08-25 21:27:18 +00:00
Dan Gohman eb0cee91f6 Move the point at which FastISel taps into the SelectionDAGISel
process up to a higher level. This allows FastISel to leverage
more of SelectionDAGISel's infrastructure, such as updating Machine
PHI nodes.

Also, implement transitioning from SDISel back to FastISel in
the middle of a block, so it's now possible to go back and
forth. This allows FastISel to hand individual CallInsts and other
complicated things off to SDISel to handle, while handling the rest
of the block itself.

To help support this, reorganize the SelectionDAG class so that it
is allocated once and reused throughout a function, instead of
being completely reallocated for each block.

llvm-svn: 55219
2008-08-23 02:25:05 +00:00
Dan Gohman d3582c9bda Simplify SelectRoot's interface, and factor out some common code
from all targets.

llvm-svn: 55124
2008-08-21 16:36:34 +00:00
Dan Gohman 814f291664 Move the handling of ANY_EXTEND, SIGN_EXTEND_INREG, and TRUNCATE
out of X86ISelDAGToDAG.cpp C++ code and into tablegen code.
Among other things, using tablegen for these things makes them
friendlier to FastISel.

Tablegen can handle the case of i8 subregs on x86-32, but currently
the C++ code for that case uses MVT::Flag in a tricky way, and it
happens to schedule better in some cases. So for now, leave the
C++ code in place to handle the i8 case on x86-32.

llvm-svn: 55078
2008-08-20 21:27:32 +00:00
Evan Cheng ab35bfdf18 Fix a (u)comiss intrinsic lowering bug. It was using anyext which can return junk in higher bits. Patch by Nate Begeman.
llvm-svn: 54903
2008-08-17 19:22:34 +00:00
Dan Gohman e81ac0b66f Oops, check in these files too, for the FastISel -> Fast rename.
llvm-svn: 54750
2008-08-13 19:55:00 +00:00
Dale Johannesen dafdbf77b3 Some fixes for x86-64 JIT. Make it use small code
model, except for external calls; this makes
addressing modes PC-relative.  Incomplete.

The assertion at the top of Emitter::runOnMachineFunction
was obviously bogus (always true) so I removed it.
If someone knows what the correct test should be to cover
all the various targets, please fix.

llvm-svn: 54656
2008-08-11 23:46:25 +00:00
Dan Gohman 2ce6f2ad5e Rename SDOperand to SDValue.
llvm-svn: 54128
2008-07-27 21:46:04 +00:00
Dan Gohman 91e5dcb680 Tidy SDNode::use_iterator, and complete the transition to have it
parallel its analogue, Value::value_use_iterator. The operator* method
now returns the user, rather than the use.

llvm-svn: 54127
2008-07-27 20:43:25 +00:00
Dan Gohman 581cc87f57 Add titles to the various SelectionDAG viewGraph calls
that include useful information like the name of the
block being viewed and the current phase of compilation.

llvm-svn: 53872
2008-07-21 20:00:07 +00:00
Dan Gohman 1705968102 Add a new function, ReplaceAllUsesOfValuesWith, which handles bulk
replacement of multiple values. This is slightly more efficient
than doing multiple ReplaceAllUsesOfValueWith calls, and theoretically
could be optimized even further. However, an important property of this
new function is that it handles the case where the source value set and
destination value set overlap. This makes it feasible for isel to use
SelectNodeTo in many very common cases, which is advantageous because
SelectNodeTo avoids a temporary node and it doesn't require CSEMap
updates for users of values that don't change position.

Revamp MorphNodeTo, which is what does all the work of SelectNodeTo, to
handle operand lists more efficiently, and to correctly handle a number
of corner cases to which its new wider use exposes it.

This commit also includes a change to the encoding of post-isel opcodes
in SDNodes; now instead of being sandwiched between the target-independent
pre-isel opcodes and the target-dependent pre-isel opcodes, post-isel
opcodes are now represented as negative values. This makes it possible
to test if an opcode is pre-isel or post-isel without having to know
the size of the current target's post-isel instruction set.

These changes speed up llc overall by 3% and reduce memory usage by 10%
on the InstructionCombining.cpp testcase with -fast and -regalloc=local.

llvm-svn: 53728
2008-07-17 19:10:17 +00:00
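A small self-contained sketch of the opcode-encoding change mentioned above. The bitwise-NOT encoding is an assumption for illustration; the key property from the commit is simply that post-isel opcodes are negative, so a node can be classified without knowing the size of the current target's instruction set.

    // Post-isel ("machine") opcodes are stored as negative values.
    constexpr bool isPostISelOpcode(int NodeType) { return NodeType < 0; }

    // Assumed encoding for this sketch: machine opcode K is stored as ~K.
    constexpr int encodeMachineOpcode(unsigned K) { return ~static_cast<int>(K); }
    constexpr unsigned decodeMachineOpcode(int NodeType) {
        return static_cast<unsigned>(~NodeType);
    }

    static_assert(isPostISelOpcode(encodeMachineOpcode(42)), "stored as negative");
    static_assert(decodeMachineOpcode(encodeMachineOpcode(42)) == 42, "round-trips");
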
Dan Gohman f169f81036 Fix the result type of X86's truncate to i8.
llvm-svn: 53688
2008-07-16 16:20:48 +00:00
Evan Cheng 2c9773155a Do not use computationally expensive scheduling heuristics with -fast.
llvm-svn: 52971
2008-07-01 18:05:03 +00:00
Evan Cheng 0711d68fa7 Split scheduling from instruction selection.
llvm-svn: 52923
2008-06-30 20:45:06 +00:00
Evan Cheng f6a1466829 Unbreak DECLARE isel in pic mode.
llvm-svn: 52439
2008-06-18 02:48:27 +00:00
Evan Cheng e47ca0940f Rather than avoiding to wrap ISD::DECLARE GV operand in X86ISD::Wrapper, simply handle it at dagisel time with x86 specific isel code.
llvm-svn: 52377
2008-06-17 02:01:22 +00:00
Duncan Sands 13237ac3b9 Wrap MVT::ValueType in a struct to get type safety
and better control the abstraction.  Rename the type
to MVT.  To update out-of-tree patches, the main
thing to do is to rename MVT::ValueType to MVT, and
rewrite expressions like MVT::getSizeInBits(VT) in
the form VT.getSizeInBits().  Use VT.getSimpleVT()
to extract a MVT::SimpleValueType for use in switch
statements (you will get an assert failure if VT is
an extended value type - these shouldn't exist after
type legalization).
This results in a small speedup of codegen and no
new testsuite failures (x86-64 linux).

llvm-svn: 52044
2008-06-06 12:08:01 +00:00
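A toy model of the API change described above (not the real MVT class): wrapping the value-type enum in a small struct turns the old free-function style, MVT::getSizeInBits(VT), into the member call VT.getSizeInBits(), and getSimpleVT() exposes the raw enum for use in switch statements.

    #include <cassert>

    struct MVT {                                   // sketch only
        enum SimpleValueType { i8, i16, i32, i64, Other };
        SimpleValueType V = Other;

        unsigned getSizeInBits() const {
            switch (V) {
            case i8:  return 8;
            case i16: return 16;
            case i32: return 32;
            case i64: return 64;
            default:  assert(false && "extended types not modeled"); return 0;
            }
        }
        SimpleValueType getSimpleVT() const { return V; }  // for switch statements
    };
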
Dan Gohman 6e582c449f Fix a tblgen problem handling variable_ops in tblgen instruction
definitions. This adds a new construct, "discard", for indicating
that a named node in the input matching pattern is to be discarded,
instead of corresponding to a node in the output pattern. This
allows tblgen to know where the arguments for the variable_ops are
supposed to begin.

This fixes "rdar://5791600", whatever that is ;-).

llvm-svn: 51699
2008-05-29 19:57:41 +00:00
Evan Cheng 04d24edcbb Use movlps / movhps to modify the low / high half of a 16-byte memory location.
llvm-svn: 51501
2008-05-23 21:23:16 +00:00
Evan Cheng 961339bbdb Handle a few more cases of folding load i64 into xmm and zero top bits.
Note, some of the code will be moved into target independent part of DAG combiner in a subsequent patch.

llvm-svn: 50918
2008-05-09 21:53:03 +00:00
Evan Cheng 78af38c392 Handle vector move / load which zero the destination register top bits (i.e. movd, movq, movss (addr), movsd (addr)) with X86 specific dag combine.
llvm-svn: 50838
2008-05-08 00:57:18 +00:00
Evan Cheng 59834d1c7a Not checking for intrinsics which do not have a chain operand.
llvm-svn: 50260
2008-04-25 08:55:28 +00:00
Evan Cheng 051da5deaa - Switch from std::set to SmallPtrSet.
- Add comments.

llvm-svn: 50259
2008-04-25 08:22:20 +00:00
Chris Lattner 741c7a3b49 Loosen up an assertion to allow intrinsics. I really have no
idea what this code (findNonImmUse) does, so I'm only guessing 
that this is the right thing.  It would be really really nice
if this had comments and perhaps switched to SmallPtrSet
(hint hint) :)

This fixes rdar://5886601, a crash on gcc.target/i386/sse4_1-pblendw.c

llvm-svn: 50252
2008-04-25 05:13:01 +00:00
Roman Levenstein 51f532f92d Re-commit of r48822, where the infinite looping problem discovered
by Dan Gohman is fixed.

llvm-svn: 49330
2008-04-07 10:06:32 +00:00
Evan Cheng d9129d1de3 Cosmetic
llvm-svn: 49156
2008-04-03 07:45:18 +00:00
Evan Cheng 025cea1126 Backing out 48222 temporarily.
llvm-svn: 49124
2008-04-03 03:13:16 +00:00
Roman Levenstein 358e04a185 Use a linked data structure for the uses lists of an SDNode, just like
LLVM Value/Use does and MachineRegisterInfo/MachineOperand does.
This allows constant time for all uses list maintenance operations.

The idea was suggested by Chris. Reviewed by Evan and Dan.
Patch is tested and approved by Dan.

On normal use-cases compilation speed is not affected. On very big basic
blocks there are compilation speedups in the range of 15-20% or even better. 

llvm-svn: 48822
2008-03-26 12:39:26 +00:00
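A bare-bones sketch of the kind of intrusive, doubly linked use list the commit above describes, with invented names rather than the actual SDNode/SDUse layout: adding or removing a use becomes a constant-time pointer update, with no per-node array to grow or scan.

    struct NodeSketch;                   // hypothetical stand-ins

    struct UseSketch {
        NodeSketch *User = nullptr;
        UseSketch *Prev = nullptr, *Next = nullptr;
    };

    struct NodeSketch {
        UseSketch *UseListHead = nullptr;

        void addUse(UseSketch &U) {      // O(1): push onto the head of the list
            U.Next = UseListHead;
            if (UseListHead) UseListHead->Prev = &U;
            U.Prev = nullptr;
            UseListHead = &U;
        }

        void removeUse(UseSketch &U) {   // O(1): unlink in place
            if (U.Prev) U.Prev->Next = U.Next; else UseListHead = U.Next;
            if (U.Next) U.Next->Prev = U.Prev;
            U.Prev = U.Next = nullptr;
        }
    };
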
Chris Lattner 68b11e14bc remove Evan's "ugly hack" that sorta attempted to get
x86-64 return conventions correct, but was never enabled.
We can now do the "right thing" with multiple return values.

llvm-svn: 48635
2008-03-21 06:50:21 +00:00
Christopher Lamb d3d0ad3f58 Make insert_subreg a two-address instruction, vastly simplifying LowerSubregs pass. Add a new TII, subreg_to_reg, which is like insert_subreg except that it takes an immediate implicit value to insert into rather than a register.
llvm-svn: 48412
2008-03-16 03:12:01 +00:00
Christopher Lamb dd55d3f1b2 Get rid of a pseudo instruction and replace it with subreg based operation on real instructions, ridding the asm printers of the hack used to do this previously. In the process, update LowerSubregs to be careful about eliminating copies that have side effects.
Note: the coalescer will have to be careful about this too, when it starts coalescing insert_subreg nodes.
llvm-svn: 48329
2008-03-13 05:47:01 +00:00
Christopher Lamb aa7c2105de Recommitting parts of r48130. These do not appear to cause the observed failures.
llvm-svn: 48223
2008-03-11 10:09:17 +00:00
Chris Lattner 1bd44363f2 Change the model for FP Stack return to use fp operands on the
RET instruction instead of using FpSET_ST0_32.  This also generalizes
the code to handling returning of multiple FP results.

llvm-svn: 48209
2008-03-11 03:23:40 +00:00
Chris Lattner 7362d38391 Don't emit FP_REG_KILL into a block that just returns. Nothing
can be live out of the block anyway, so it isn't needed.

llvm-svn: 48192
2008-03-10 23:34:12 +00:00
Evan Cheng d4e1d9eeb2 Revert 48125, 48126, and 48130 for now to unbreak some x86-64 tests.
llvm-svn: 48167
2008-03-10 19:31:26 +00:00
Christopher Lamb 4ba3f0430b Allow insert_subreg into implicit, target-specific values.
Change insert/extract subreg instructions to be able to be used in TableGen patterns.
Use the above features to reimplement an x86-64 pseudo instruction as a pattern.

llvm-svn: 48130
2008-03-10 06:12:08 +00:00
Chris Lattner d587e580a6 rename FpGETRESULT32 -> FpGET_ST0_32 etc. Add support for
isel'ing value preserving FP roundings from one fp stack reg to another
into a noop, instead of stack traffic.

llvm-svn: 48093
2008-03-09 07:05:32 +00:00
Evan Cheng 33ff36321e Remove -always-fold-and-in-test.
llvm-svn: 47871
2008-03-04 00:40:35 +00:00
Evan Cheng 507713de08 Set to default: x86 no longer folds and into test if it has more than one use.
llvm-svn: 47711
2008-02-28 07:46:38 +00:00
Dan Gohman a790af3a88 Revert the assert for MUL_LOHI with an unused high result; Chris
pointed out that this isn't correct at -O0.

llvm-svn: 47575
2008-02-25 22:43:48 +00:00
Dan Gohman 0be2f3b941 Add an assert to verify that we don't see an
{S,U}MUL_LOHI with an unused high value.

llvm-svn: 47569
2008-02-25 22:15:55 +00:00
Dan Gohman 2ff975e749 Remove the hack that turned an {S,U}MUL_LOHI with an unused high
result into a MUL late in the X86 codegen process. ISD::MUL is
once again Legal on X86, so this is no longer needed. And, the
hack was suboptimal; see PR1874 for details.

llvm-svn: 47567
2008-02-25 21:57:04 +00:00
Dan Gohman 1f372edd97 Convert MaskedValueIsZero and all its users to use APInt. Also add
a SignBitIsZero function to simplify a common use case.

llvm-svn: 47561
2008-02-25 21:11:39 +00:00
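A simplified 64-bit illustration of the helpers mentioned above (the real versions operate on APInt and arbitrary bit widths): MaskedValueIsZero asks whether every bit in a mask is known to be zero, and SignBitIsZero is the common special case of asking that about the sign bit.

    #include <cstdint>

    // KnownZero holds the bits proven to be zero for a value.
    bool maskedValueIsZero(uint64_t KnownZero, uint64_t Mask) {
        return (Mask & ~KnownZero) == 0;        // every requested bit is known zero
    }

    // The common case the commit factors out: is the sign bit known zero?
    bool signBitIsZero(uint64_t KnownZero, unsigned BitWidth) {
        return maskedValueIsZero(KnownZero, 1ULL << (BitWidth - 1));
    }
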
Evan Cheng b6b69208ba Poorly named option.
llvm-svn: 47400
2008-02-20 20:57:32 +00:00
Evan Cheng 7626ab33d8 Disable for now. This is pessimizing code.
llvm-svn: 47354
2008-02-20 02:29:17 +00:00
Evan Cheng 5ce8dd93ef Add hidden option -x86-fold-and-in-test to test the effect of the test / and folding change.
llvm-svn: 47351
2008-02-19 23:36:51 +00:00
Evan Cheng 8a25d6ac53 Only using x86-64 rip relative addressing in non-static mode?
llvm-svn: 47019
2008-02-12 19:20:46 +00:00
Dan Gohman 3a4be0fdef Rename MRegisterInfo to TargetRegisterInfo.
llvm-svn: 46930
2008-02-10 18:45:23 +00:00
Evan Cheng a20a773654 Fix an x86-64 codegen deficiency. Allow gv + offset when using rip addressing mode.
Before:
_main:
        subq    $8, %rsp
        leaq    _X(%rip), %rax
        movsd   8(%rax), %xmm1
        movss   _X(%rip), %xmm0
        call    _t
        xorl    %ecx, %ecx
        movl    %ecx, %eax
        addq    $8, %rsp
        ret
Now:
_main:
        subq    $8, %rsp
        movsd   _X+8(%rip), %xmm1
        movss   _X(%rip), %xmm0
        call    _t
        xorl    %ecx, %ecx
        movl    %ecx, %eax
        addq    $8, %rsp
        ret

Notice there is another idiotic codegen issue that needs to be fixed asap:
xorl    %ecx, %ecx
movl    %ecx, %eax

llvm-svn: 46850
2008-02-07 08:53:49 +00:00
Evan Cheng 2cb9068c78 Dwarf requires variable entries to be in the source order. Right now, since we are recording variable information at isel time, this means parameters would appear in the reverse order. The short term fix is to issue recordVariable() at asm printing time instead.
llvm-svn: 46724
2008-02-04 23:06:48 +00:00
Evan Cheng efd142a920 SDIsel processes llvm.dbg.declare by recording the variable debug information descriptor and its corresponding stack frame index in MachineModuleInfo. This only works if the local variable is "homed" in the stack frame. It does not work for byval parameters, etc.
Added ISD::DECLARE node type to represent llvm.dbg.declare intrinsic. Now the intrinsic calls are lowered into a SDNode and lives on through out the codegen passes.
For now, since all the debugging information recording is done at isel time, when an ISD::DECLARE node is selected, it has the side effect of also recording the variable. This is a short term solution that should be fixed in time.

llvm-svn: 46659
2008-02-02 04:07:54 +00:00
Evan Cheng 084a1cdcdd Work in progress. This patch *fixes* x86-64 calls which are modelled as StructRet but really should be returned in registers, e.g. _Complex long double, some 128-bit aggregates. This is a short term solution that is necessary only because llvm, for now, cannot model i128 nor calls with multiple results.
Status: This only works for direct calls, and only the caller side is done. Disabled for now.

llvm-svn: 46527
2008-01-29 19:34:22 +00:00
Chris Lattner a91f77eaac Significantly simplify and improve handling of FP function results on x86-32.
This case returns the value in ST(0) and then has to convert it to an SSE
register.  This causes significant codegen ugliness in some cases.  For 
example in the trivial fp-stack-direct-ret.ll testcase we used to generate:

_bar:
	subl	$28, %esp
	call	L_foo$stub
	fstpl	16(%esp)
	movsd	16(%esp), %xmm0
	movsd	%xmm0, 8(%esp)
	fldl	8(%esp)
	addl	$28, %esp
	ret

because we move the result of foo() into an XMM register, then have to
move it back for the return of bar.

Instead of hacking ever-more special cases into the call result lowering code
we take a much simpler approach: on x86-32, fp return is modeled as always 
returning into an f80 register which is then truncated to f32 or f64 as needed.
Similarly for a result, we model it as an extension to f80 + return.

This exposes the truncate and extensions to the dag combiner, allowing target
independent code to hack on them, eliminating them in this case.  This gives 
us this code for the example above:

_bar:
	subl	$12, %esp
	call	L_foo$stub
	addl	$12, %esp
	ret

The nasty aspect of this is that these conversions are not legal, but we want
the second pass of dag combiner (post-legalize) to be able to hack on them.
To handle this, we lie to legalize and say they are legal, then custom expand
them on entry to the isel pass (PreprocessForFPConvert).  This is gross, but
less gross than the code it is replacing :)

This also allows us to generate better code in several other cases.  For 
example on fp-stack-ret-conv.ll, we now generate:

_test:
	subl	$12, %esp
	call	L_foo$stub
	fstps	8(%esp)
	movl	16(%esp), %eax
	cvtss2sd	8(%esp), %xmm0
	movsd	%xmm0, (%eax)
	addl	$12, %esp
	ret

where before we produced (incidentally, the old bad code is identical to what
gcc produces):

_test:
	subl	$12, %esp
	call	L_foo$stub
	fstpl	(%esp)
	cvtsd2ss	(%esp), %xmm0
	cvtss2sd	%xmm0, %xmm0
	movl	16(%esp), %eax
	movsd	%xmm0, (%eax)
	addl	$12, %esp
	ret

Note that we generate slightly worse code on pr1505b.ll due to a scheduling 
deficiency that is unrelated to this patch.

llvm-svn: 46307
2008-01-24 08:07:48 +00:00
Evan Cheng 4951da49aa Fix an x86-64 static codegen bug. This fixes a lot of x86-64 jit failures.
llvm-svn: 45733
2008-01-08 02:06:11 +00:00
Evan Cheng f55b7381af Combine MovePCtoStack + POP32r into one instruction MOVPC32r so it can be moved if needed.
llvm-svn: 45605
2008-01-05 00:41:47 +00:00
Chris Lattner a10fff51d9 Rename SSARegMap -> MachineRegisterInfo in keeping with the idea
that "machine" classes are used to represent the current state of
the code being compiled.  Given this expanded name, we can start 
moving other stuff into it.  For now, move the UsedPhysRegs and
LiveIn/LiveOuts vectors from MachineFunction into it.

Update all the clients to match.

This also reduces some needless #includes, such as MachineModuleInfo
from MachineFunction.

llvm-svn: 45467
2007-12-31 04:13:23 +00:00
Chris Lattner f3ebc3f3d2 Remove attribution from file headers, per discussion on llvmdev.
llvm-svn: 45418
2007-12-29 20:36:04 +00:00
Evan Cheng f4f52dbc8c Fix JIT code emission of X86::MovePCtoStack.
llvm-svn: 45307
2007-12-22 02:26:46 +00:00
Evan Cheng 827d30db19 Fold some and + shift in x86 addressing mode.
llvm-svn: 44970
2007-12-13 00:43:27 +00:00
Chris Lattner ff87f05e43 aesthetic changes, no functionality change. Evan, it's not clear
what 'Available' is, please add a comment near it and rename it
if appropriate.

llvm-svn: 44703
2007-12-08 07:22:58 +00:00
Chris Lattner 5728bdd4db Fix a long standing deficiency in the X86 backend: we would
sometimes emit "zero" and "all one" vectors multiple times,
for example:

_test2:
	pcmpeqd	%mm0, %mm0
	movq	%mm0, _M1
	pcmpeqd	%mm0, %mm0
	movq	%mm0, _M2
	ret

instead of:

_test2:
	pcmpeqd	%mm0, %mm0
	movq	%mm0, _M1
	movq	%mm0, _M2
	ret

This patch fixes this by always arranging for zero/one vectors
to be defined as v4i32 or v2i32 (SSE/MMX) instead of letting them be
any random type.  This ensures they get trivially CSE'd on the dag.
This fix is also important for LegalizeDAGTypes, as it gets unhappy
when the x86 backend wants BUILD_VECTOR(i64 0) to be legal even when
'i64' isn't legal.

This patch makes the following changes:

1) X86TargetLowering::LowerBUILD_VECTOR now lowers 0/1 vectors into
   their canonical types.
2) The now-dead patterns are removed from the SSE/MMX .td files.
3) All the patterns in the .td file that referred to immAllOnesV or
   immAllZerosV in the wrong form now use *_bc to match them with a
   bitcast wrapped around them.
4) X86DAGToDAGISel::SelectScalarSSELoad is generalized to handle 
   bitcast'd zero vectors, which simplifies the code actually.
5) getShuffleVectorZeroOrUndef is updated to generate a shuffle that
   is legal, instead of generating one that is illegal and expecting
   a later legalize pass to clean it up.
6) isZeroShuffle is generalized to handle bitcast of zeros.
7) several other minor tweaks.

This patch is definite goodness, but has the potential to cause random
code quality regressions.  Please be on the lookout for these and let 
me know if they happen.

llvm-svn: 44310
2007-11-25 00:24:49 +00:00
Bill Wendling b7cabbe295 Silence accursed warning
llvm-svn: 43609
2007-11-01 08:51:44 +00:00
Dan Gohman bf474959a3 Fix the folding of multiplication into addresses on x86, which was broken
by the recent {U,S}MUL_LOHI changes.

llvm-svn: 43230
2007-10-22 20:22:24 +00:00
Evan Cheng f8c23f074b Flag MOV32to32_ with EXTRACT_SUBREG. They should not be scheduled apart.
llvm-svn: 42894
2007-10-12 07:55:53 +00:00
Dan Gohman 51554bf30e Fix grammar in a comment.
llvm-svn: 42786
2007-10-09 15:44:37 +00:00
Dan Gohman a160361c85 Migrate X86 and ARM from using X86ISD::{,I}DIV and ARMISD::MULHILO{U,S} to
use ISD::{S,U}DIVREM and ISD::{S,U}MUL_LOHI. Move the lowering code
associated with these operators into target-independent in LegalizeDAG.cpp
and TargetLowering.cpp.

llvm-svn: 42762
2007-10-08 18:33:35 +00:00
Anton Korobeynikov 90910745bb Partly revert invalid r41774
llvm-svn: 42322
2007-09-25 21:52:30 +00:00
Dan Gohman 31599685c7 When both x/y and x%y are needed (x and y both scalar integer), compute
both results with a single div or idiv instruction. This uses new X86ISD
nodes for DIV and IDIV which are introduced during the legalize phase
so that the SelectionDAG's CSE can automatically eliminate redundant
computations.

llvm-svn: 42308
2007-09-25 18:23:27 +00:00
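A tiny source-level illustration of the effect the commit above is after: with the X86ISD divide/remainder nodes introduced during legalize, the DAG's CSE collapses the two operations below onto a single idiv.

    // Both results come from one idiv: the quotient in EAX, the remainder in EDX.
    void divmod(int X, int Y, int &Quot, int &Rem) {
        Quot = X / Y;
        Rem  = X % Y;
    }
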
Dale Johannesen 0241bb57b2 When mixing SSE and x87 codegen, it's possible to
have situations where an SSE instruction turns into
multiple blocks, with the live range of an x87
register crossing them.  To handle this correctly, make
sure we examine all blocks when inserting
FP_REG_KILL.  PR 1697.  (This was exposed by my
fix for PR 1681, but the same thing could happen
mixing x87 long double with SSE.)

llvm-svn: 42281
2007-09-24 22:52:39 +00:00
Evan Cheng cef2c0efcc TableGen no longer emits CopyFromReg nodes for implicit results in physical
registers. The scheduler is now responsible for emitting them.

llvm-svn: 41781
2007-09-07 23:59:02 +00:00
Dale Johannesen 9e70086c8f Apply feedback from previous patch.
llvm-svn: 41774
2007-09-07 21:07:57 +00:00
Dale Johannesen 3cf889f75e Enhance APFloat to retain bits of NaNs (fixes oggenc).
Use APFloat interfaces for more references, mostly
of ConstantFPSDNode.

llvm-svn: 41632
2007-08-31 04:03:46 +00:00
Dan Gohman ccb3611881 When x86 address matching exceeds its recursion limit, check to
see if the base register is already occupied before assuming it can be
used. This fixes bogus code generation in the accompanying testcase.

llvm-svn: 41049
2007-08-13 20:03:06 +00:00
Christopher Lamb 44e79f8aba Use subregs to improve any_extend code generation when feasible.
llvm-svn: 41013
2007-08-10 22:22:41 +00:00
Christopher Lamb b372abab14 Increase efficiency of sign_extend_inreg by using subregisters for truncation. As the README suggests sign_extend_subreg is selected to (sext(trunc)).
llvm-svn: 41010
2007-08-10 21:48:46 +00:00
Evan Cheng e32e923a6a divb / mulb output to ah. Under x86-64 it's not legal to read ah if the instruction requires a rex prefix (i.e. outputs to r8b, etc.). So issue a shift right by 8 on AX and then truncate it to 8 bits instead.
llvm-svn: 40972
2007-08-09 21:59:35 +00:00
Dale Johannesen a47f7d7cfd Long double patch 8 of N: make it partially work in
SSE mode (all but conversions <-> other FP types, I think):
>>Do not mark all-80-bit operations as "Requires[FPStack]"
(which really means "not SSE").
>>Refactor load-and-extend to facilitate this.
>>Update comments.
>>Handle long double in SSE when computing FP_REG_KILL.

llvm-svn: 40906
2007-08-07 20:29:26 +00:00
Dale Johannesen 75169a82d6 Get X86 long double calling convention to work
(on Darwin, anyway).  Fix some table omissions for
LD arithmetic.

llvm-svn: 40877
2007-08-06 21:31:06 +00:00
Evan Cheng 473c5111c3 Switch some multiplication instructions over to the new scheme for testing.
llvm-svn: 40723
2007-08-02 05:48:35 +00:00
Evan Cheng 763cdfd371 Mac OS X X86-64 low 4G address not available.
llvm-svn: 40701
2007-08-01 23:45:51 +00:00
Christopher Lamb 5fecb80efa Change the x86 backend to use extract_subreg for truncation operations. Passes DejaGnu, SingleSource and MultiSource.
llvm-svn: 40578
2007-07-29 01:24:57 +00:00
Evan Cheng ca6e041903 Minor bug.
llvm-svn: 40535
2007-07-26 17:02:45 +00:00
Evan Cheng ce5185b181 Same goes for constantpool, etc.
llvm-svn: 40517
2007-07-26 07:35:15 +00:00
Evan Cheng 630c1f75b8 Mac OS X x86-64 lower 4G address is not available.
llvm-svn: 40502
2007-07-25 23:41:36 +00:00
Dan Gohman f0bb12848f Add const to CanBeFoldedBy, CheckAndMask, and CheckOrMask.
llvm-svn: 40480
2007-07-24 23:00:27 +00:00
Dale Johannesen a2b3c175db Fix for PR 1505 (and 1489). Rewrite X87 register
model to include f32 variants.  Some factoring
improvements forthcoming.

llvm-svn: 37847
2007-07-03 00:53:03 +00:00
Dan Gohman 309d3d51b3 Move ComputeMaskedBits, MaskedValueIsZero, and ComputeNumSignBits from
TargetLowering to SelectionDAG so that they have more convenient
access to the current DAG, in preparation for the ValueType routines
being changed from standalone functions to members of SelectionDAG for
the pre-legalize vector type changes.

llvm-svn: 37704
2007-06-22 14:59:07 +00:00
Chris Lattner a5fcd24746 Fix CodeGen/X86/2007-03-24-InlineAsmPModifier.ll
llvm-svn: 35926
2007-04-11 22:29:46 +00:00
Anton Korobeynikov 0ad22563b8 Oops :)
llvm-svn: 35438
2007-03-28 18:38:33 +00:00
Anton Korobeynikov 7522c9d8e1 Don't allow MatchAddress to recurse too much. This trims exponential
behaviour in some cases.

llvm-svn: 35437
2007-03-28 18:36:33 +00:00
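A skeletal sketch of the depth cap described above; the limit value and the function shape are illustrative, not the actual MatchAddress code.

    constexpr unsigned MaxAddressRecursionDepth = 5;   // illustrative constant

    // Returns false once the recursion gets too deep, trading a slightly worse
    // addressing mode for bounded (non-exponential) matching time.
    bool matchAddressSketch(unsigned Depth) {
        if (Depth > MaxAddressRecursionDepth)
            return false;
        // ... try to fold base / index / scale, recursing with Depth + 1 ...
        return true;
    }
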
Chris Lattner 3e1d917e80 Two changes:
1) codegen a shift of a register as a shift, not an LEA.
2) teach the RA to convert a shift to an LEA instruction if it wants something
   in three-address form.

This gives us asm diffs like:

-       leal (,%eax,4), %eax
+       shll $2, %eax

which is faster on some processors and smaller on all of them.

and, more interestingly:

-       movl 24(%esi), %eax
-       leal (,%eax,4), %edi
+       movl 24(%esi), %edi
+       shll $2, %edi

Without #2, #1 was a significant pessimization in some cases.

This implements CodeGen/X86/shift-codegen.ll

llvm-svn: 35204
2007-03-20 06:08:29 +00:00
Chris Lattner fe8c530d79 Fix a miscompilation in the addr mode code trying to implement X | C as
X + C to promote LEA formation.  We would incorrectly apply it in some cases
(test) and miss it in others.

This fixes CodeGen/X86/2007-02-04-OrAddrMode.ll

llvm-svn: 33884
2007-02-04 20:18:17 +00:00
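The kind of check that makes the X | C as X + C trick above safe, sketched with plain 64-bit integers instead of the DAG's known-bits machinery: the rewrite is only valid when none of the bits set in C can also be set in X, so that X | C and X + C produce the same value.

    #include <cstdint>

    // KnownZeroBitsOfX: bits proven to be zero in X.
    // The OR behaves like an ADD only if C sets no bit that X might also set.
    bool orIsEquivalentToAdd(uint64_t KnownZeroBitsOfX, uint64_t C) {
        return (C & ~KnownZeroBitsOfX) == 0;
    }
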
Evan Cheng 1281dc32ef Linux GOT indirect reference is only necessary in PIC mode.
llvm-svn: 33441
2007-01-22 21:34:25 +00:00
Reid Spencer 015b432b54 Adjust #includes to compensate for the loss of DerivedTypes.h in
TargetLowering.h

llvm-svn: 33154
2007-01-12 23:22:14 +00:00
Anton Korobeynikov a0554d90e8 * PIC codegen for X86/Linux has been implemented
* PIC-aware internal structures in X86 Codegen have been refactored
* Visibility (default/weak) has been added
* Docs fixes (external weak linkage, visibility, formatting)

llvm-svn: 33136
2007-01-12 19:20:47 +00:00
Anton Korobeynikov 4efbbc963f Really big cleanup.
- New target type "mingw" was introduced
- Same things for both mingw & cygwin are marked as "cygming" (as in
gcc)
- .lcomm is supported here, so allow LLVM to use it
- Correctly use underscored versions of setjmp & _longjmp for both mingw
& cygwin

llvm-svn: 32833
2007-01-03 11:43:14 +00:00
Chris Lattner 1ef9cd400d eliminate static ctors for Statistic objects.
llvm-svn: 32703
2006-12-19 22:59:26 +00:00
Evan Cheng 582ac4bed7 Fix for PR1062 by Dan Gohman.
llvm-svn: 32688
2006-12-19 21:31:42 +00:00
Bill Wendling 9bfb1e1f29 What should be the last unnecessary <iostream>s in the library.
llvm-svn: 32333
2006-12-07 22:21:48 +00:00
Chris Lattner 700b873130 Detemplatize the Statistic class. The only type it is instantiated with
is 'unsigned'.

llvm-svn: 32279
2006-12-06 17:46:33 +00:00
Evan Cheng 47e181cc4d Revert an unintended change.
llvm-svn: 32239
2006-12-05 22:03:40 +00:00
Evan Cheng dd60ca029c - Switch X86-64 JIT to large code size model.
- Re-enable some codegen niceties for X86-64 static relocation model codegen.
- Clean ups, etc.

llvm-svn: 32238
2006-12-05 19:50:18 +00:00
Evan Cheng 62cdc3f011 - Fix X86-64 JIT by temporarily disabling code that treats GV address as 32-bit
immediate in small code model. The JIT cannot ensure GV's are placed in the
lower 4G.
- Some preliminary support for large code model.

llvm-svn: 32215
2006-12-05 04:01:03 +00:00
Evan Cheng ae1cd75af7 - Use a different wrapper node for RIP-relative GV, etc.
- Proper support for both small static and PIC modes under X86-64
- Some (non-optimal) support for medium modes.

llvm-svn: 32046
2006-11-30 21:55:46 +00:00
Evan Cheng 8c84c7cd0d Clean up.
llvm-svn: 32027
2006-11-29 23:46:27 +00:00
Evan Cheng 0b1692216d Fix for PR1018 - Better support for X86-64 Linux in small code model.
llvm-svn: 32026
2006-11-29 23:19:46 +00:00
Evan Cheng 20350c4025 Change MachineInstr ctor's to take a TargetInstrDescriptor reference instead
of opcode and number of operands.

llvm-svn: 31947
2006-11-27 23:37:22 +00:00
Evan Cheng 9e8093ae20 For unsigned 8-bit division, use movzbw to set the lower 8 bits of AX while
clearing the upper 8 bits instead of issuing two instructions. This also
eliminates the need to target the AH register which can be problematic on
x86-64.

llvm-svn: 31832
2006-11-17 22:10:14 +00:00
Bill Wendling c8e81b8d48 Removed even more std::cerr and #include <iostream> things.
llvm-svn: 31813
2006-11-17 07:52:03 +00:00
Evan Cheng dbd3d294e6 Matches MachineInstr changes.
llvm-svn: 31712
2006-11-13 23:36:35 +00:00
Evan Cheng db04c958a5 Add implicit use / def operands to created MI's.
llvm-svn: 31676
2006-11-11 10:21:44 +00:00
Evan Cheng a36cdcfaf8 Add all implicit defs to FP_REG_KILL mi.
llvm-svn: 31674
2006-11-11 07:19:36 +00:00
Evan Cheng fb44822a98 Fix a bug in SelectScalarSSELoad. Since the load is wrapped in a
SCALAR_TO_VECTOR, even if the hasOneUse() check passes we may end up folding
the load into two instructions. Make sure we check the SCALAR_TO_VECTOR
has only one use as well.

llvm-svn: 31641
2006-11-10 21:23:04 +00:00
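A trivial sketch of the extra condition from the bug fix above, using made-up value descriptors rather than SDNode/SDValue: the load may be folded only when both the load itself and the SCALAR_TO_VECTOR wrapping it have exactly one use; otherwise folding would duplicate the load.

    struct ValueSketch { unsigned NumUses = 0; };   // hypothetical stand-in

    bool safeToFoldScalarSSELoad(const ValueSketch &Load,
                                 const ValueSketch &ScalarToVector) {
        return Load.NumUses == 1 && ScalarToVector.NumUses == 1;
    }
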
Evan Cheng 6cd0909da7 Match tblgen changes.
llvm-svn: 31571
2006-11-08 20:34:28 +00:00
Jeff Cohen 7d6f3db3e2 Unbreak VC++ build.
llvm-svn: 31464
2006-11-05 19:31:28 +00:00
Chris Lattner de2f0906e4 silence warning
llvm-svn: 31393
2006-11-03 01:13:15 +00:00
Evan Cheng ff1a712794 SelectScalarSSELoad should call CanBeFoldedBy as well.
llvm-svn: 30973
2006-10-16 06:34:55 +00:00
Evan Cheng b86375cfd0 Corrected load folding check. We need to start from the root of the sub-dag
being matched and ensure there isn't a non-direct path to the load (i.e. a
path that goes out of the sub-dag).

llvm-svn: 30958
2006-10-14 08:33:25 +00:00
Evan Cheng ab51cf2e78 Merge ISD::TRUNCSTORE to ISD::STORE. Switch to using StoreSDNode.
llvm-svn: 30945
2006-10-13 21:14:26 +00:00
Evan Cheng a7956d2894 Doh. This wasn't causing problems by luck.
llvm-svn: 30914
2006-10-12 19:13:59 +00:00
Chris Lattner 40ec2bebf9 fix compilation failure of smg2000
llvm-svn: 30900
2006-10-12 03:55:48 +00:00
Chris Lattner d5fcfaa6da Fold "zero extending vector loads" now that evan added the chain manip stuff.
This compiles both tests in X86/vec_ss_load_fold.ll into:

_test1:
        movss 4(%esp), %xmm0
        subss LCPI1_0, %xmm0
        mulss LCPI1_1, %xmm0
        minss LCPI1_2, %xmm0
        xorps %xmm1, %xmm1
        maxss %xmm1, %xmm0
        cvttss2si %xmm0, %eax
        andl $65535, %eax
        ret

instead of:

_test1:
        movss LCPI1_0, %xmm0
        movss 4(%esp), %xmm1
        subss %xmm0, %xmm1
        movss LCPI1_1, %xmm0
        mulss %xmm0, %xmm1
        movss LCPI1_2, %xmm0
        minss %xmm0, %xmm1
        xorps %xmm0, %xmm0
        maxss %xmm0, %xmm1
        cvttss2si %xmm1, %eax
        andl $65535, %eax
        ret

llvm-svn: 30894
2006-10-11 22:09:58 +00:00
Evan Cheng 4090dc4703 ComplexPatterns sse_load_f32 and sse_load_f64 return in / out chain operands.
llvm-svn: 30892
2006-10-11 21:06:01 +00:00
Evan Cheng 61b8b43bbe More isel time load folding checking for nodes that produce flag values.
See comment in CanBeFoldedBy() for detailed explanation.

llvm-svn: 30851
2006-10-10 01:46:56 +00:00
Evan Cheng e71fe34d75 Reflects ISD::LOAD / ISD::LOADX / LoadSDNode changes.
llvm-svn: 30844
2006-10-09 20:57:25 +00:00
Chris Lattner 398195ebbe completely disable folding of loads into scalar sse instructions and provide
a framework for doing it right.  This fixes
CodeGen/X86/2006-10-07-ScalarSSEMiscompile.ll.

Once X86DAGToDAGISel::SelectScalarSSELoad is implemented right, this task
will be done.

llvm-svn: 30817
2006-10-07 21:55:32 +00:00
Evan Cheng 1212b4d249 Not needed.
llvm-svn: 30674
2006-09-29 22:05:10 +00:00
Anton Korobeynikov 6f7072c66a Added some eye-candy for Subtarget type checking
Added X86 StdCall & FastCall calling conventions. Codegen will follow.

llvm-svn: 30446
2006-09-17 20:25:45 +00:00
Evan Cheng f8464da015 Remove an unnecessary check.
llvm-svn: 30382
2006-09-14 23:55:02 +00:00
Chris Lattner 706dd3e0d4 Fix a regression in the 32-bit port from the 64-bit port landing.
We now compile CodeGen/X86/lea-2.ll into:

_test:
        movl 4(%esp), %eax
        movl 8(%esp), %ecx
        leal -5(%ecx,%eax,4), %eax
        ret

instead of:

_test:
        movl 4(%esp), %eax
        leal (,%eax,4), %eax
        addl 8(%esp), %eax
        addl $4294967291, %eax
        ret

llvm-svn: 30288
2006-09-13 04:45:25 +00:00
Evan Cheng 9a083a4121 Reflects MachineConstantPoolEntry changes.
llvm-svn: 30279
2006-09-12 21:04:05 +00:00
Evan Cheng 11b0a5dbd4 Committing X86-64 support.
llvm-svn: 30177
2006-09-08 06:48:29 +00:00
Evan Cheng 2c4e0f120f Oops. Bad typo. Without the check of N1.hasOneUse() bad things can happen.
Suppose the TokenFactor can reach the Op:

       [Load chain]
           ^
           |
         [Load]
         ^    ^
         |    |
        /      \-
       /         |
      /          [Op]
     /          ^ ^
     |        ..  |
     |       /    |
   [TokenFactor]  |
       ^          |
       |          |
        \        /
         \      /
         [Store]

If we move the Load below the TokenFactor, we would have created a cycle in
the DAG.

llvm-svn: 30040
2006-09-01 22:52:28 +00:00
Evan Cheng b28800f4d5 Remove dead code.
llvm-svn: 29962
2006-08-29 21:42:58 +00:00
Evan Cheng dfb85155dc Don't perform the load/op/store transformation if the op produces a floating point
or vector result. X86 does not have load/mod/store variants of those
instructions.

llvm-svn: 29957
2006-08-29 18:37:37 +00:00
Evan Cheng 358b9ed98a - Enable x86 isel preprocessing by default unless -fast is specified.
- Also disable isel load folding if -fast.

llvm-svn: 29956
2006-08-29 18:28:33 +00:00
Evan Cheng c07feb14b0 Avoid making unneeded load/mod/store transformation which can hurt performance.
llvm-svn: 29952
2006-08-29 06:44:17 +00:00
Evan Cheng 64a9e28846 Add an optional pass to preprocess the DAG before x86 isel to allow selecting more load/mod/store instructions.
llvm-svn: 29943
2006-08-28 20:10:17 +00:00
Chris Lattner 3d27be1333 s|llvm/Support/Visibility.h|llvm/Support/Compiler.h|
llvm-svn: 29911
2006-08-27 12:54:02 +00:00
Evan Cheng c3acfc0b10 Do not use getTargetNode() and SelectNodeTo() which take more than 3
SDOperand arguments. Use the variants which take an array and number instead.

llvm-svn: 29907
2006-08-27 08:14:06 +00:00
Evan Cheng 34b70eea5c SelectNodeTo now returns a SDNode*.
llvm-svn: 29901
2006-08-26 08:00:10 +00:00
Evan Cheng 61413a3d72 Select() no longer requires the Result operand by reference.
llvm-svn: 29898
2006-08-26 05:34:46 +00:00
Evan Cheng 2d48722e92 Match tblgen changes; clean up.
llvm-svn: 29894
2006-08-26 01:05:16 +00:00
Evan Cheng 29ab7c42a8 Doh. Incorrectly inverted condition. Also add an isOnlyUse check to match tablegen.
llvm-svn: 29741
2006-08-16 23:59:00 +00:00
Evan Cheng 63d178f473 SelectNodeTo() may return a SDOperand that is different from the input.
llvm-svn: 29726
2006-08-16 07:30:09 +00:00
Evan Cheng bd1c5a8fb8 Match tablegen changes.
llvm-svn: 29604
2006-08-11 09:08:15 +00:00
Evan Cheng 72bb66a4b8 Eliminate reachability matrix. It has to be calculated before any instruction
selection is done. That's rather expensive especially in situations where it
isn't really needed.
Move back to searching the predecessors, but make use of topological order
to trim the search space.

llvm-svn: 29559
2006-08-08 00:31:00 +00:00
Evan Cheng b9d34bd098 Match tablegen isel changes.
llvm-svn: 29549
2006-08-07 22:28:20 +00:00
Evan Cheng 8f585196e1 Reflect change to AssignTopologicalOrder().
llvm-svn: 29480
2006-08-02 22:01:32 +00:00
Evan Cheng 8101dd67d1 Use of vector<bool> causes some horrendous compile time regression (2x)!
Looks like libstdc++ implementation does not scale very well. Switch back
to using directly managed arrays.

llvm-svn: 29469
2006-08-02 09:18:33 +00:00
Evan Cheng 45af287957 Factor topological order code to SelectionDAG. Clean up.
llvm-svn: 29430
2006-08-01 08:17:22 +00:00
Evan Cheng e8071ecc3b Can't spell.
llvm-svn: 29383
2006-07-28 06:33:41 +00:00
Evan Cheng 2e94538b8e Some clean up.
llvm-svn: 29382
2006-07-28 06:05:06 +00:00
Evan Cheng e2a3f7014d Rename IsFoldableBy to CanBeFoldedBy
llvm-svn: 29376
2006-07-28 01:03:48 +00:00
Evan Cheng 11a4d8c2f4 Node selected into address mode cannot be folded.
llvm-svn: 29374
2006-07-28 00:49:31 +00:00
Evan Cheng 3b5e0cafd1 Another duh. Determine topological order before any target node is added.
llvm-svn: 29371
2006-07-28 00:10:59 +00:00
Evan Cheng f38707b8d4 Brain cramp..
llvm-svn: 29370
2006-07-27 23:35:40 +00:00
Evan Cheng 390dd7eb7d Allocating too large an array for ReachibilityMatrix.
llvm-svn: 29367
2006-07-27 22:35:40 +00:00
Evan Cheng 87585760ab Calculate the portion of the reachability matrix on demand.
llvm-svn: 29366
2006-07-27 22:10:00 +00:00
Evan Cheng d6c0c2dfd9 isNonImmUse is replaced by IsFoldableBy
llvm-svn: 29365
2006-07-27 21:19:10 +00:00
Evan Cheng 691a63d564 Use reachability information to determine whether a node can be folded into another during isel.
llvm-svn: 29346
2006-07-27 16:44:36 +00:00
Chris Lattner 0cc5907728 Hide x86 symbols
llvm-svn: 28976
2006-06-28 23:27:49 +00:00
Chris Lattner ba1ed585ee Add support for "m" inline asm constraints.
llvm-svn: 28728
2006-06-08 18:03:49 +00:00
Evan Cheng e8a42360c5 Cygwin support. Patch by Anton Korobeynikov!
llvm-svn: 28672
2006-06-02 22:38:37 +00:00
Evan Cheng a2efb9f3ec Use xor to clear a register.
llvm-svn: 28667
2006-06-02 21:20:34 +00:00
Evan Cheng b33e54ead7 Remove bogus comment.
llvm-svn: 28564
2006-05-30 20:24:48 +00:00
Evan Cheng 734e1e241b An addressing mode folding enhancement:
Fold c2 in (x << c1) | c2 where c2 < (1 << c1)
e.g.
int test(int x) {
  return (x << 3) + 7;
}

This can be codegen'd as:
leal 7(,%eax,8), %eax

llvm-svn: 28550
2006-05-30 06:59:36 +00:00
Evan Cheng 4af59dac0b Assert if InflightSet is not cleared after instruction selecting a BB.
llvm-svn: 28459
2006-05-25 00:24:28 +00:00
Evan Cheng 1a8e74d113 Clear HandleMap and ReplaceMap after instruction selection. Otherwise it may cause
non-deterministic behavior.

llvm-svn: 28454
2006-05-24 20:46:25 +00:00
Chris Lattner aa2372562e Patches to make the LLVM sources more -pedantic clean. Patch provided
by Anton Korobeynikov!  This is a step towards closing PR786.

llvm-svn: 28447
2006-05-24 17:04:05 +00:00
Evan Cheng 85b6232b53 Back out indirect branch load folding hack. It broke some tests.
llvm-svn: 28425
2006-05-21 06:28:50 +00:00
Evan Cheng 401049ce33 - A use of the load's chain result should be redirected to the load's chain operand.
If it reads the chain result of the call, then the use, callseq_start,
  and call would form a cycle!
- Don't forget handle node replacement!
- There could also be a TokenFactor between the load and the callseq_start.

llvm-svn: 28420
2006-05-20 09:21:39 +00:00
Evan Cheng a26c451fa2 Missing break statements.
llvm-svn: 28418
2006-05-20 07:44:28 +00:00
Evan Cheng b9ac06bb33 Remove unused patterns.
llvm-svn: 28417
2006-05-20 01:40:16 +00:00
Evan Cheng f838cfcfbe Handle an indirect call which folds a load manually. This is never matched by
the TableGen generated code since the load's chain result is read by
the callseq_start node.

llvm-svn: 28416
2006-05-20 01:36:52 +00:00
Evan Cheng 9fee442e63 X86 integer register classes naming changes. Make them consistent with FP, vector classes.
llvm-svn: 28324
2006-05-16 07:21:53 +00:00
Evan Cheng db30388d48 Remove dead code
llvm-svn: 28261
2006-05-12 19:03:56 +00:00
Evan Cheng 9733bde74c Fixing truncate. Previously we were emitting truncate from r16 to r8 as
movw. That is, we promote the destination operand to r16. So
        %CH = TRUNC_R16_R8 %BP
is emitted as
        movw %bp, %cx.

This is incorrect. If %cl is live, it would be clobbered.
Ideally we want to do the opposite, that is, emit it as
        movb ??, %ch
But this is not possible since %bp does not have a r8 sub-register.

We are now defining a new register class R16_ which is a subclass of R16
containing only those 16-bit registers that have r8 sub-registers (i.e.
AX - DX). We isel the truncate to two instructions, a MOV16to16_ to copy the
value to the R16_ class, followed by a TRUNC_R16_R8.

Due to bug 770, the register coalescer is not going to coalesce between R16 and
R16_. That will be fixed later so we can eliminate the MOV16to16_. Right now, it
can only be eliminated if we are lucky that source and destination registers are
the same.

llvm-svn: 28164
2006-05-08 08:01:26 +00:00
Evan Cheng ddb6cc1d8e Better implementation of truncate. ISel matches it to a pseudo instruction
that gets emitted as movl (for r32 to i16, i8) or a movw (for r16 to i8). And
if the destination gets allocated a subregister of the source operand, then
the instruction will not be emitted at all.

llvm-svn: 28119
2006-05-05 05:40:20 +00:00
Chris Lattner 5d70a7c4a5 #include Intrinsics.h into all dag isels
llvm-svn: 27109
2006-03-25 06:47:10 +00:00
Evan Cheng 2dd2c652b2 Added getTargetLowering() to TargetMachine. Refactored targets to support this.
llvm-svn: 26742
2006-03-13 23:20:37 +00:00
Evan Cheng 990c3602bd Don't match x << 1 to LEAL. It's better to emit x + x.
llvm-svn: 26429
2006-02-28 21:13:57 +00:00
Evan Cheng 77d86ff8fc * Cleaned up addressing mode matching code.
* Cleaned up and tweaked LEA cost analysis code. Removed some hacks.
* Handle ADD $X, c to MOV32ri $X+c. These patterns cannot be autogen'd and
  they need to be matched before LEA.

llvm-svn: 26376
2006-02-25 10:09:08 +00:00
Evan Cheng e0ed6ec13f - Clean up the lowering and selection code of ConstantPool, GlobalAddress,
and ExternalSymbol.
- Use C++ code (rather than tblgen'd selection code) to match the above
  mentioned leaf nodes. Do not mutate the nodes and do not record the
  selection in CodeGenMap. These nodes should be safe to duplicate. This is
  a performance win.

llvm-svn: 26335
2006-02-23 20:41:18 +00:00
Evan Cheng 1f342c2884 PIC related bug fixes.
1. Various asm printer bugs.
2. Lowering bug. Now TargetGlobalAddress is wrapped in X86ISD::TGAWrapper.

llvm-svn: 26324
2006-02-23 02:43:52 +00:00
Evan Cheng 7eabbfd618 X86 codegen tweak to use lea in another case:
Suppose the base is %eax and it has multiple uses; then instead of
  movl %eax, %ecx
  addl $8, %ecx
use
  leal 8(%eax), %ecx.

llvm-svn: 26323
2006-02-23 00:13:58 +00:00
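(Illustrative sketch, not part of the commit above: a base pointer with multiple uses whose offset address must end up in its own register; the function name and register choices in the comment are assumptions.)

int *copy_and_bump(int *p, int *out) {
  *out = *p;                      /* p is used here ...                  */
  return p + 2;                   /* ... and again here; prefer
                                     leal 8(%eax), %ecx  over
                                     movl %eax, %ecx ; addl $8, %ecx     */
}
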
Evan Cheng 5588de9415 x86 / Darwin PIC support.
llvm-svn: 26273
2006-02-18 00:15:05 +00:00
Evan Cheng a86ba85dc5 Prevent certain nodes that have already been selected from being folded into
the X86 addressing mode. Currently we do not allow any node whose target node
produces a chain, nor any node that is at the root of the addressing mode
expression tree.

llvm-svn: 26117
2006-02-11 02:05:36 +00:00
Evan Cheng 2b6f78b664 Nicer code. :-)
llvm-svn: 26111
2006-02-10 22:46:26 +00:00
Evan Cheng d49cc3634e Added X86 isel debugging stuff.
llvm-svn: 26110
2006-02-10 22:24:32 +00:00
Evan Cheng 101e4b916a Match tblgen change.
llvm-svn: 26096
2006-02-09 22:12:53 +00:00
Evan Cheng d1b82d8db0 Match getTargetNode() changes (now return SDNode* instead of SDOperand).
llvm-svn: 26085
2006-02-09 07:17:49 +00:00
Evan Cheng 6dc90ca172 Change Select() from
SDOperand Select(SDOperand N);
to
void Select(SDOperand &Result, SDOperand N);

llvm-svn: 26067
2006-02-09 00:37:58 +00:00
Evan Cheng d5f2ba0d6f - Update load folding checks to match those auto-generated by tblgen.
- Manually select the SDOperands returned by TryFoldLoad which make up the
  load address.

llvm-svn: 26012
2006-02-06 06:02:33 +00:00
Evan Cheng 54cb1833a4 Use SelectRoot() as the entry point of any tblgen-based isel.
llvm-svn: 25997
2006-02-05 06:46:41 +00:00
Evan Cheng d19d51f414 Re-commit the last bit of change that was backed out.
llvm-svn: 25983
2006-02-05 05:25:07 +00:00
Chris Lattner 22b4edfb42 Temporarily revert this patch, which probably breaks with the
tblgen patch reverted.

llvm-svn: 25971
2006-02-04 09:24:16 +00:00
Evan Cheng ce87cac555 A complex pattern's custom matcher should not call Select() on any operands.
Select them afterwards if it returns true.

llvm-svn: 25968
2006-02-04 08:50:49 +00:00
Evan Cheng 72d5c256c9 - Allow XMM load (for scalar use) to be folded into ANDP* and XORP*.
- Use XORP* to implement fneg.

llvm-svn: 25857
2006-01-31 22:28:30 +00:00
Evan Cheng cde9e30bc6 x86 CPU detection and proper subtarget support
llvm-svn: 25679
2006-01-27 08:10:46 +00:00
Chris Lattner de02d7727f Add explicit #includes of <iostream>
llvm-svn: 25515
2006-01-22 23:41:00 +00:00
Evan Cheng 6135a7a546 Didn't mean to check that in.
llvm-svn: 25436
2006-01-19 01:52:56 +00:00
Evan Cheng 267ba5965e An obvious typo
llvm-svn: 25435
2006-01-19 01:46:14 +00:00
Evan Cheng 911c68d7a8 Fix FP_TO_INT**_IN_MEM lowering.
llvm-svn: 25368
2006-01-16 21:21:29 +00:00
Chris Lattner 78c358d1ad Use the default lowering of ISD::DYNAMIC_STACKALLOC, delete now dead code.
llvm-svn: 25333
2006-01-15 09:00:21 +00:00
Chris Lattner 8869c6f782 silence a warning
llvm-svn: 25322
2006-01-14 20:11:13 +00:00
Evan Cheng 2ae799aff0 Select DYNAMIC_STACKALLOC
llvm-svn: 25225
2006-01-11 22:15:18 +00:00
Evan Cheng bc7a0f44bd * Add special entry code for main() (to set x87 to 64-bit precision).
* Allow a register node as the SelectAddr() base.
* ExternalSymbol -> TargetExternalSymbol as the direct function callee.
* Use the X86::ESP register rather than CopyFromReg(X86::ESP) as the stack
  pointer for call parameter passing.

llvm-svn: 25207
2006-01-11 06:09:51 +00:00
Chris Lattner 7c551268d0 implement FP_REG_KILL insertion for the dag-dag instruction selector
llvm-svn: 25192
2006-01-11 01:15:34 +00:00
Chris Lattner 29852a58b0 Fit into 80 cols
llvm-svn: 25191
2006-01-11 00:46:55 +00:00
Evan Cheng 73a1ad975e FP_TO_INT*_IN_MEM and x87 FP Select support.
llvm-svn: 25188
2006-01-10 20:26:56 +00:00
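(Illustrative sketch, not part of the commit above: the C-level conversion that FP_TO_INT*_IN_MEM covers on plain x87; the instruction sequence in the comment is approximate.)

/* C requires truncation toward zero, but x87 fist* honors the current
   rounding mode, so the lowering goes through memory: save the FP control
   word, force round-toward-zero, store with fistpl, restore the old word. */
int to_int(double d) {
  return (int)d;                  /* roughly:  fnstcw ... ; fldcw (trunc)
                                               fistpl (slot) ; fldcw (old) */
}
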
Evan Cheng 7c4486215f * Added undef patterns.
* Some reorg.

llvm-svn: 25163
2006-01-09 23:10:28 +00:00
Evan Cheng 92e2797ce2 * Added integer div / rem.
* Fixed a load folding bug.

llvm-svn: 25136
2006-01-06 23:19:29 +00:00
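(Illustrative sketch, not part of the commit above: 32-bit signed division and remainder; the commented instructions are the usual i386 lowering and are stated as an assumption.)

/* idivl divides the 64-bit value in edx:eax, so the dividend is first
   sign-extended with cltd; the quotient lands in %eax, the remainder in %edx. */
int quot(int a, int b) { return a / b; }   /* cltd ; idivl  -> %eax */
int rem (int a, int b) { return a % b; }   /* cltd ; idivl  -> %edx */
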
Evan Cheng 10d2790d50 ISEL code for MULHU, MULHS, and UNDEF.
llvm-svn: 25132
2006-01-06 20:36:21 +00:00
Evan Cheng b03f9b32d2 fold (shl x, 1) -> (add x, x)
llvm-svn: 25120
2006-01-06 01:06:31 +00:00
Evan Cheng a5ae6e8320 Added ConstantFP patterns.
llvm-svn: 25108
2006-01-05 02:08:37 +00:00
Evan Cheng 45e19098a6 DAG based isel call support.
llvm-svn: 25103
2006-01-05 00:27:02 +00:00
Evan Cheng 9cdc16c6d3 * Fix a GlobalAddress lowering bug.
* Teach DAG combiner about X86ISD::SETCC by adding a TargetLowering hook.

llvm-svn: 24921
2005-12-21 23:05:39 +00:00
Evan Cheng a2f308fc3e Remove ISD::RET select code. Now tblgen'd.
llvm-svn: 24889
2005-12-21 02:41:57 +00:00
Evan Cheng a74ce62746 * Added a lowering hook for external weak global addresses. It inserts a load
for Darwin.
* Added a lowering hook for ISD::RET. It inserts CopyToRegs for the return
  value (or a store / fld / copy to ST(0) for a floating point value). This
  eliminates the need to write C++ code to handle RET with a variable number
  of operands.

llvm-svn: 24888
2005-12-21 02:39:21 +00:00
Evan Cheng 1d9b671de0 It's essential we clear CodeGenMap after isel'ing every basic block!
llvm-svn: 24867
2005-12-19 22:36:02 +00:00
Evan Cheng 1d71248392 Darwin API issue: indirect load of external and weak symbols.
llvm-svn: 24775
2005-12-17 09:13:43 +00:00
Evan Cheng bc7708c0e8 Added truncate.
llvm-svn: 24760
2005-12-17 02:02:50 +00:00
Evan Cheng cb19390ead Added support for cmp, test, and conditional move instructions.
llvm-svn: 24756
2005-12-17 01:24:02 +00:00
Evan Cheng 74151ba279 * Promote all 1-bit entities to 8 bits.
* Handle extload (1 bit -> 8 bit) and remove the C++ code that handled 1-bit
  zextload.

llvm-svn: 24726
2005-12-15 19:49:23 +00:00
Evan Cheng 00fcb0017e Handle zero extension of a 1-bit value.
llvm-svn: 24722
2005-12-15 01:02:48 +00:00
Evan Cheng 67ed58e22b When SelectLEAAddr() fails, it shouldn't have the side effect of leaving the
base or index operands selected.

llvm-svn: 24674
2005-12-12 21:49:40 +00:00
Evan Cheng bfd259a2b7 For ISD::RET, if the # of operands >= 2, try selecting the real data
dependence operand first, before the chain.
e.g.
int X;

int foo(int x)
{
  x += X + 37;
  return x;
}

If chain operand is selected first, we would generate:
	movl X, %eax
	movl 4(%esp), %ecx
	leal 37(%ecx,%eax), %eax

rather than
	movl $37, %eax
	addl 4(%esp), %eax
	addl X, %eax

which does not require %ecx. (Due to ADD32rm not matching.)

llvm-svn: 24673
2005-12-12 20:32:18 +00:00
Evan Cheng 0d6cfee704 * Added X86 store patterns.
* Added X86 dec patterns.

llvm-svn: 24654
2005-12-10 00:48:20 +00:00
Evan Cheng c9fab31098 * Added intelligence to the X86 LEA addressing mode matching routine so it
  returns false if the match is not profitable, e.g. leal 1(%eax), %eax.
* Added patterns for X86 integer loads and LEA32.

llvm-svn: 24635
2005-12-08 02:01:35 +00:00
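(Illustrative sketch, not part of the commit above: an add of 1 with a single use, where matching an LEA gains nothing; the function name and assembly are assumptions.)

int next(int x) {
  return x + 1;                   /* prefer  incl %eax  (or addl $1, %eax)
                                     over    leal 1(%eax), %eax           */
}
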
Evan Cheng 4b02426130 Proper support for shifts with a register shift value.
llvm-svn: 24559
2005-12-01 00:43:55 +00:00
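(Illustrative sketch, not part of the commit above: a shift by a non-constant amount; i386 variable shifts take the count in %cl, so the amount is moved there first. The assembly in the comment is approximate.)

int shift_left(int x, int n) {
  return x << n;                  /* count moved into %ecx, then
                                     sall %cl, %eax               */
}
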
Chris Lattner af2e0373dd SelectNodeTo now returns its result; we must pay attention to it.
llvm-svn: 24550
2005-11-30 22:59:19 +00:00
Evan Cheng 4eb7af9bc9 Added support for STORE and shifts to the DAG-to-DAG isel.
llvm-svn: 24525
2005-11-30 02:51:20 +00:00
Chris Lattner 3f0f71b92b Add load and other support to the dag-dag isel. Patch contributed by Evan
Cheng!

llvm-svn: 24419
2005-11-19 02:11:08 +00:00
Chris Lattner 5930d3df3d Add patterns for several simple instructions that take i32 immediates.
Patch contributed by Evan Cheng!

llvm-svn: 24382
2005-11-16 22:59:19 +00:00
Chris Lattner 655e7dfd0d initial step at adding a dag-to-dag isel for the X86 backend. Patch contributed
by Evan Cheng!

llvm-svn: 24371
2005-11-16 01:54:32 +00:00