LLVM Value/Use does and MachineRegisterInfo/MachineOperand does.
This makes all use-list maintenance operations constant time.
The idea was suggested by Chris. Reviewed by Evan and Dan.
Patch is tested and approved by Dan.
In normal use cases, compilation speed is not affected. On very big basic
blocks there are compilation speedups in the range of 15-20% or even better.
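As a standalone illustration of the idea (names are mine, not the actual LLVM classes): each use carries prev/next links into its value's list, so insertion and removal never traverse the list.

// Illustrative sketch of an intrusive use list in the spirit of
// LLVM's Value/Use scheme.  'Prev' stores the address of whichever
// pointer links to this node, so removal needs no list walk.
struct Use {
  Use *Next = nullptr;   // next use of the same value
  Use **Prev = nullptr;  // address of the link pointing at us

  void addToList(Use **List) {  // O(1) insertion at the head
    Next = *List;
    if (Next) Next->Prev = &Next;
    Prev = List;
    *List = this;
  }
  void removeFromList() {       // O(1) removal
    *Prev = Next;
    if (Next) Next->Prev = Prev;
  }
};

struct Value {
  Use *UseList = nullptr;  // head of this value's use list
};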
llvm-svn: 48822
Change insert/extract subreg instructions so that they can be used in TableGen patterns.
Use the above features to reimplement an x86-64 pseudo instruction as a pattern.
llvm-svn: 48130
result into a MUL late in the X86 codegen process. ISD::MUL is
once again Legal on X86, so this is no longer needed. And, the
hack was suboptimal; see PR1874 for details.
llvm-svn: 47567
Added ISD::DECLARE node type to represent the llvm.dbg.declare intrinsic. The intrinsic calls are now lowered into an SDNode and live on throughout the codegen passes.
For now, since all the debugging information recording is done at isel time, when an ISD::DECLARE node is selected, it has the side effect of also recording the variable. This is a short-term solution that should be fixed in time.
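A rough sketch of the lowering step (helper names and operand order are approximate, not the exact LLVM code):

// Sketch of the intrinsic-visitor case: wrap the declare's address
// and variable operands in a chained ISD::DECLARE node so it
// survives into the DAG and reaches instruction selection.
case Intrinsic::dbg_declare: {
  DbgDeclareInst &DI = cast<DbgDeclareInst>(I);
  SDOperand Addr = getValue(DI.getAddress());   // the variable's alloca
  SDOperand Var  = getValue(DI.getVariable());  // its descriptor
  DAG.setRoot(DAG.getNode(ISD::DECLARE, MVT::Other,
                          getRoot(), Addr, Var));
  return 0;  // no value to rename
}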
llvm-svn: 46659
This case returns the value in ST(0) and then has to convert it to an SSE
register. This causes significant codegen ugliness in some cases. For
example in the trivial fp-stack-direct-ret.ll testcase we used to generate:
_bar:
subl $28, %esp
call L_foo$stub
fstpl 16(%esp)
movsd 16(%esp), %xmm0
movsd %xmm0, 8(%esp)
fldl 8(%esp)
addl $28, %esp
ret
because we move the result of foo() into an XMM register, then have to
move it back for the return of bar.
Instead of hacking ever-more special cases into the call result lowering code
we take a much simpler approach: on x86-32, fp return is modeled as always
returning into an f80 register which is then truncated to f32 or f64 as needed.
Similarly, when producing a result, we model it as an extension to f80 + return.
This exposes the truncates and extensions to the dag combiner, allowing
target-independent code to hack on them, eliminating them in this case. This gives
us this code for the example above:
_bar:
subl $12, %esp
call L_foo$stub
addl $12, %esp
ret
The nasty aspect of this is that these conversions are not legal, but we want
the second pass of dag combiner (post-legalize) to be able to hack on them.
To handle this, we lie to legalize and say they are legal, then custom expand
them on entry to the isel pass (PreprocessForFPConvert). This is gross, but
less gross than the code it is replacing :)
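The preprocessing amounts to roughly this shape (a simplified sketch; the SelectionDAG API names approximate that era, and the real PreprocessForFPConvert handles more cases):

// For each fp_round crossing the x87/SSE boundary, expand it into an
// explicit round trip through a stack slot, e.g. f80 -> f64 becomes
// an fstpl followed by a movsd.
for (SelectionDAG::allnodes_iterator I = DAG.allnodes_begin(),
       E = DAG.allnodes_end(); I != E; ++I) {
  SDNode *N = &*I;
  if (N->getOpcode() != ISD::FP_ROUND)   // fp_extend handled similarly
    continue;
  // ... skip the conversions that are genuinely legal ...
  MVT::ValueType DstVT = N->getValueType(0);
  SDOperand Slot  = DAG.CreateStackTemporary(DstVT);
  SDOperand Store = DAG.getTruncStore(DAG.getEntryNode(), N->getOperand(0),
                                      Slot, NULL, 0, DstVT);
  SDOperand Load  = DAG.getLoad(DstVT, Store, Slot, NULL, 0);
  DAG.ReplaceAllUsesOfValueWith(SDOperand(N, 0), Load);
}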
This also allows us to generate better code in several other cases. For
example on fp-stack-ret-conv.ll, we now generate:
_test:
subl $12, %esp
call L_foo$stub
fstps 8(%esp)
movl 16(%esp), %eax
cvtss2sd 8(%esp), %xmm0
movsd %xmm0, (%eax)
addl $12, %esp
ret
where before we produced (incidentally, the old bad code is identical to what
gcc produces):
_test:
subl $12, %esp
call L_foo$stub
fstpl (%esp)
cvtsd2ss (%esp), %xmm0
cvtss2sd %xmm0, %xmm0
movl 16(%esp), %eax
movsd %xmm0, (%eax)
addl $12, %esp
ret
Note that we generate slightly worse code on pr1505b.ll due to a scheduling
deficiency that is unrelated to this patch.
llvm-svn: 46307
that "machine" classes are used to represent the current state of
the code being compiled. Given this expanded name, we can start
moving other stuff into it. For now, move the UsedPhysRegs and
LiveIn/LiveOuts vectors from MachineFunction into it.
Update all the clients to match.
This also reduces some needless #includes, such as MachineModuleInfo
from MachineFunction.
llvm-svn: 45467
sometimes emit "zero" and "all one" vectors multiple times,
for example:
_test2:
pcmpeqd %mm0, %mm0
movq %mm0, _M1
pcmpeqd %mm0, %mm0
movq %mm0, _M2
ret
instead of:
_test2:
pcmpeqd %mm0, %mm0
movq %mm0, _M1
movq %mm0, _M2
ret
This patch fixes this by always arranging for zero/one vectors
to be defined as v4i32 or v2i32 (SSE/MMX) instead of letting them be
any random type. This ensures they get trivially CSE'd on the dag.
This fix is also important for LegalizeDAGTypes, as it gets unhappy
when the x86 backend wants BUILD_VECTOR(i64 0) to be legal even when
'i64' isn't legal.
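Concretely, the canonicalization has roughly this shape (simplified sketch; period-appropriate API, SSE case only):

// Build every zero vector as a canonical v4i32 node and bitcast it
// to the requested type; identical zeros then hit the same node in
// the DAG's CSE maps.  (64-bit MMX types use v2i32 instead.)
static SDOperand getZeroVector(MVT::ValueType VT, SelectionDAG &DAG) {
  SDOperand Cst = DAG.getConstant(0, MVT::i32);
  SDOperand Vec = DAG.getNode(ISD::BUILD_VECTOR, MVT::v4i32,
                              Cst, Cst, Cst, Cst);
  return DAG.getNode(ISD::BIT_CONVERT, VT, Vec);
}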
This patch makes the following changes:
1) X86TargetLowering::LowerBUILD_VECTOR now lowers 0/1 vectors into
their canonical types.
2) The now-dead patterns are removed from the SSE/MMX .td files.
3) All the patterns in the .td file that referred to immAllOnesV or
immAllZerosV in the wrong form now use *_bc to match them with a
bitcast wrapped around them.
4) X86DAGToDAGISel::SelectScalarSSELoad is generalized to handle
bitcast'd zero vectors, which actually simplifies the code.
5) getShuffleVectorZeroOrUndef is updated to generate a shuffle that
is legal, instead of generating one that is illegal and expecting
a later legalize pass to clean it up.
6) isZeroShuffle is generalized to handle bitcast of zeros.
7) several other minor tweaks.
This patch is definite goodness, but has the potential to cause random
code quality regressions. Please be on the lookout for these and let
me know if they happen.
llvm-svn: 44310
use ISD::{S,U}DIVREM and ISD::{S,U}MUL_LOHI. Move the lowering code
associated with these operators into target-independent code in LegalizeDAG.cpp
and TargetLowering.cpp.
llvm-svn: 42762
both results with a single div or idiv instruction. This uses new X86ISD
nodes for DIV and IDIV which are introduced during the legalize phase
so that the SelectionDAG's CSE can automatically eliminate redundant
computations.
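An illustrative sketch of why the CSE fires (the node name is taken from this patch; the exact operand and flag arrangement is simplified):

// One node produces both results.  SelectionDAG keys nodes on
// (opcode, value types, operands), so asking for the quotient and
// the remainder builds the same node and a single idiv is emitted.
SDVTList VTs = DAG.getVTList(VT, VT);  // {quotient, remainder}
SDOperand DivRem = DAG.getNode(X86ISD::IDIV, VTs, LHS, RHS);
SDOperand Quot = DivRem.getValue(0);
SDOperand Rem  = DivRem.getValue(1);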
llvm-svn: 42308
have situations where an SSE instruction turns into
multiple blocks, with the live range of an x87
register crossing them. To do this correctly, make
sure we examine all blocks when inserting
FP_REG_KILL. PR 1697. (This was exposed by my
fix for PR 1681, but the same thing could happen
mixing x87 long double with SSE.)
llvm-svn: 42281
see if the base register is already occupied before assuming it can be
used. This fixes bogus code generation in the accompanying testcase.
llvm-svn: 41049
SSE mode (all but conversions <-> other FP types, I think):
>>Do not mark all 80-bit operations as "Requires[FPStack]"
(which really means "not SSE").
>>Refactor load-and-extend to facilitate this.
>>Update comments.
>>Handle long double in SSE when computing FP_REG_KILL.
llvm-svn: 40906
TargetLowering to SelectionDAG so that they have more convenient
access to the current DAG, in preparation for the ValueType routines
being changed from standalone functions to members of SelectionDAG for
the pre-legalize vector type changes.
llvm-svn: 37704
1) codegen a shift of a register as a shift, not an LEA.
2) teach the RA to convert a shift to an LEA instruction if it wants something
in three-address form (a sketch of this hook follows the examples below).
This gives us asm diffs like:
- leal (,%eax,4), %eax
+ shll $2, %eax
which is faster on some processors and smaller on all of them.
and, more interestingly:
- movl 24(%esi), %eax
- leal (,%eax,4), %edi
+ movl 24(%esi), %edi
+ shll $2, %edi
Without #2, #1 was a significant pessimization in some cases.
This implements CodeGen/X86/shift-codegen.ll
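A simplified sketch of the hook behind #2 (modeled loosely on X86InstrInfo::convertToThreeAddress; operand order and names are approximate):

// When the allocator asks for three-address form, rebuild a
// shift-by-small-immediate as an LEA using the scale field.
unsigned Dest  = MI->getOperand(0).getReg();
unsigned Src   = MI->getOperand(1).getReg();
unsigned ShAmt = MI->getOperand(2).getImm();
if (ShAmt == 0 || ShAmt >= 4) return 0;  // only scales 2, 4, 8 fit
// leal (,%Src,1<<ShAmt), %Dest  -- base 0, scale, index, disp 0
MachineInstr *NewMI = BuildMI(get(X86::LEA32r), Dest)
  .addReg(0).addImm(1 << ShAmt).addReg(Src).addImm(0);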
llvm-svn: 35204
X + C to promote LEA formation. We would incorrectly apply it in some cases
(test) and miss it in others.
This fixes CodeGen/X86/2007-02-04-OrAddrMode.ll
llvm-svn: 33884
* PIC-aware internal structures in X86 Codegen have been refactored
* Visibility (default/weak) has been added
* Docs fixes (external weak linkage, visibility, formatting)
llvm-svn: 33136
- New target type "mingw" was introduced
- Things common to both mingw & cygwin are marked as "cygming" (as in
gcc)
- .lcomm is supported here, so allow LLVM to use it
- Correctly use underscored versions of setjmp & longjmp for both mingw
& cygwin
llvm-svn: 32833
clearing the upper 8 bits instead of issuing two instructions. This also
eliminates the need to target the AH register which can be problematic on
x86-64.
llvm-svn: 31832
SCALAR_TO_VECTOR, even if the hasOneUse() check passes we may end up folding
the load into two instructions. Make sure we check that the SCALAR_TO_VECTOR
has only one use as well.
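The added guard is essentially (shape only; the variable names are mine and the surrounding matching code is omitted):

// Fold only when both the load and the SCALAR_TO_VECTOR wrapping it
// are uniquely used; otherwise the load gets duplicated into two
// instructions.
if (!Load.hasOneUse() || !ScalarToVector.hasOneUse())
  return false;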
llvm-svn: 31641
a framework for doing it right. This fixes
CodeGen/X86/2006-10-07-ScalarSSEMiscompile.ll.
Once X86DAGToDAGISel::SelectScalarSSELoad is implemented right, this task
will be done.
llvm-svn: 30817
Suppose the TokenFactor can reach the Op:
[Load chain]
^
|
[Load]
^ ^
| |
/ \-
/ |
/ [Op]
/ ^ ^
| .. |
| / |
[TokenFactor] |
^ |
| |
\ /
\ /
[Store]
If we move the Load below the TokenFactor, we would create a cycle in
the DAG.
llvm-svn: 30040
selection is done. That's rather expensive, especially in situations where it
isn't really needed.
Move back to searching the predecessors, but make use of topological order
to trim the search space.
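Illustratively, the trimmed search looks like this (my sketch, not the actual routine): nodes are numbered in topological order so every operand carries a smaller number than its users, which lets the walk cut off whole subtrees:

// Is 'Target' a (transitive) operand of N?  Any node numbered below
// Target can be pruned without visiting its operands, since all of
// its predecessors are numbered lower still.
bool reachesTarget(SDNode *N, SDNode *Target,
                   const std::map<SDNode*, unsigned> &Order) {
  if (N == Target) return true;
  if (Order.find(N)->second < Order.find(Target)->second)
    return false;                       // prune: cannot reach Target
  for (unsigned i = 0, e = N->getNumOperands(); i != e; ++i)
    if (reachesTarget(N->getOperand(i).Val, Target, Order))
      return true;
  return false;
}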
llvm-svn: 29559
Fold c2 in (x << c1) | c2 where c2 < (1 << c1)
e.g.
int test(int x) {
return (x << 3) + 7;
}
This can be codegen'd as:
leal 7(,%eax,8), %eax
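The fold is safe because the shift clears the low c1 bits, so OR-ing in a constant below 1 << c1 is the same as adding it. In the address matcher this becomes roughly (illustrative sketch, modern-style names):

// When matching (or (shl X, C1), C2) into an address, fold C2 as
// the displacement only if it fits entirely in the low C1 bits
// that the shift is known to zero.
uint64_t C1 = ShlAmt->getZExtValue(), C2 = OrVal->getZExtValue();
if (C2 < (1ULL << C1)) {
  AM.Disp += C2;        // here OR behaves exactly like ADD
  // ... then match the shl as index scaled by (1 << C1) ...
}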
llvm-svn: 28550
If it reads the chain result of the call, then the use, callseq_start,
and call would form a cycle!
- Don't forget to handle node replacement!
- There could also be a TokenFactor between the load and the callseq_start.
llvm-svn: 28420
movw. That is, we promote the destination operand to r16. So
%CH = TRUNC_R16_R8 %BP
is emitted as
movw %bp, %cx.
This is incorrect. If %cl is live, it would be clobbered.
Ideally we want to do the opposite, that is, emit it as
movb ??, %ch
But this is not possible since %bp does not have a r8 sub-register.
We are now defining a new register class R16_ which is a subclass of R16
containing only those 16-bit registers that have r8 sub-registers (i.e.
AX - DX). We isel the truncate to two instructions, a MOV16to16_ to copy the
value to the R16_ class, followed by a TRUNC_R16_R8.
Due to bug 770, the register coalescer is not going to coalesce between R16 and
R16_. That will be fixed later so we can eliminate the MOV16to16_. Right now, it
can only be eliminated if we are lucky and the source and destination registers
are the same.
llvm-svn: 28164
that gets emitted as a movl (for r32 to i16 or i8) or a movw (for r16 to i8).
If the destination is allocated a sub-register of the source operand, then
the instruction will not be emitted at all.
llvm-svn: 28119
* Cleaned up and tweaked LEA cost analysis code. Removed some hacks.
* Fold ADD $X, c into MOV32ri $X+c. These patterns cannot be autogen'd and
they need to be matched before LEA.
llvm-svn: 26376
and ExternalSymbol.
- Use C++ code (rather than tblgen'd selection code) to match the above-
mentioned leaf nodes. Do not mutate the nodes and do not record the
selection in CodeGenMap. These nodes should be safe to duplicate. This is
a performance win.
llvm-svn: 26335
X86 addressing mode. Currently we do not allow any node whose target node
produces a chain, nor any node that is at the root of the addressing
mode expression tree.
llvm-svn: 26117
* Allow a register node as SelectAddr() base.
* ExternalSymbol -> TargetExternalSymbol as direct function callee.
* Use X86::ESP register rather than CopyFromReg(X86::ESP) as stack ptr for
call parameter passing.
llvm-svn: 25207
for Darwin.
* Added lowering hook for ISD::RET. It inserts CopyToRegs for the return
value (or a store / fld / copy to ST(0) for a floating point value). This
eliminates the need to write C++ code to handle RET with a variable number
of operands.
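The hook's shape, very roughly (illustrative only; the RET_FLAG-style node name is borrowed from slightly later code, and the fp / flag plumbing is omitted):

// Lower ISD::RET: copy the scalar result into EAX (fp results go to
// ST(0) via a store / fld sequence instead), then emit the return.
SDOperand LowerRET(SDOperand Op, SelectionDAG &DAG) {
  SDOperand Chain = Op.getOperand(0);   // incoming chain
  if (Op.getNumOperands() > 1) {        // a value is being returned
    SDOperand Value = Op.getOperand(1);
    Chain = DAG.getCopyToReg(Chain, X86::EAX, Value);
  }
  return DAG.getNode(X86ISD::RET_FLAG, MVT::Other, Chain);
}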
llvm-svn: 24888