Commit Graph

395 Commits

Author SHA1 Message Date
Jakob Stoklund Olesen 9340ea59e1 Rename X86 subregister indices to something shorter.
Use the tablegen-produced enums.

llvm-svn: 104493
2010-05-24 14:48:17 +00:00
Dan Gohman 0fd54fbbcf Don't leave Base.FrameIndex uninitialized, so that it doesn't
print randomly in debug output.

llvm-svn: 102668
2010-04-29 23:30:41 +00:00
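
A toy version of the kind of fix described above, for illustration only: give the field a definite initial value so debug dumps are deterministic. The struct name is invented and is not the real X86ISelAddressMode.

    #include <cstdio>

    struct AddressModeSketch {
        int FrameIndex = 0;   // previously left uninitialized, so dumps showed stack garbage
    };

    int main() {
        AddressModeSketch AM;
        std::printf("FrameIndex = %d\n", AM.FrameIndex);  // now always prints 0
    }
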
Evan Cheng 050df1b8de Enable i16 to i32 promotion by default.
llvm-svn: 102493
2010-04-28 08:30:49 +00:00
Chris Lattner 84776786a7 teach the x86 address matching stuff to handle
(shl (or x,c), 3) the same as (shl (add x, c), 3)
when x doesn't have any bits from c set.

This finishes off PR1135.  Before we compiled the block to:

LBB0_3:                                 ## %bb
	cmpb	$4, %dl
	sete	%dl
	addb	%dl, %cl
	movb	%cl, %dl
	shlb	$2, %dl
	addb	%r8b, %dl
	shlb	$2, %dl
	movzbl	%dl, %edx
	movl	%esi, (%rdi,%rdx,4)
	leaq	2(%rdx), %r9
	movl	%esi, (%rdi,%r9,4)
	leaq	1(%rdx), %r9
	movl	%esi, (%rdi,%r9,4)
	addq	$3, %rdx
	movl	%esi, (%rdi,%rdx,4)
	incb	%r8b
	decb	%al
	movb	%r8b, %dl
	jne	LBB0_1

Now we produce:

LBB0_3:                                 ## %bb
	cmpb	$4, %dl
	sete	%dl
	addb	%dl, %cl
	movb	%cl, %dl
	shlb	$2, %dl
	addb	%r8b, %dl
	shlb	$2, %dl
	movzbl	%dl, %edx
	movl	%esi, (%rdi,%rdx,4)
	movl	%esi, 8(%rdi,%rdx,4)
	movl	%esi, 4(%rdi,%rdx,4)
	movl	%esi, 12(%rdi,%rdx,4)
	incb	%r8b
	decb	%al
	movb	%r8b, %dl
	jne	LBB0_1

llvm-svn: 101958
2010-04-20 23:18:40 +00:00
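
The matching trick above leans on a simple bit identity: when x and c have no set bits in common, (x | c) equals (x + c), so an OR feeding the address computation can be matched as if it were an ADD. A minimal standalone check of that identity (plain C++, not LLVM code; the helper name is made up for illustration):

    #include <cassert>
    #include <cstdint>

    bool orActsLikeAdd(std::uint64_t x, std::uint64_t c) {
        return (x & c) == 0;            // no shared bits means the addition cannot carry
    }

    int main() {
        std::uint64_t x = 0xF0, c = 0x3;
        assert(orActsLikeAdd(x, c));
        assert((x | c) == (x + c));     // 0xF3 either way
    }
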
Dan Gohman 21cea8ac2e Use const qualifiers with TargetLowering. This eliminates several
const_casts, and it reinforces the design of the Target classes being
immutable.

SelectionDAGISel::IsLegalToFold is now a static member function, because
PIC16 uses it in an unconventional way. There is more room for API
cleanup here.

And PIC16's AsmPrinter no longer uses TargetLowering.

llvm-svn: 101635
2010-04-17 15:26:15 +00:00
Dan Gohman bcaf681cde Add const qualifiers to CodeGen's use of LLVM IR constructs.
llvm-svn: 101334
2010-04-15 01:51:59 +00:00
Dan Gohman c87b74d913 Delete unneeded arguments.
llvm-svn: 101276
2010-04-14 20:17:22 +00:00
Chris Lattner 6f306d7d30 use DebugLoc default ctor instead of DebugLoc::getUnknownLoc()
llvm-svn: 100214
2010-04-02 20:16:16 +00:00
Evan Cheng 68333f5c6e X86 address mode matching code MatchAddressRecursively does an aggressive hack which requires doing a RAUW. It may end up deleting some SDNode upstream. It should avoid referencing deleted nodes.
llvm-svn: 98780
2010-03-17 23:58:35 +00:00
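
A generic illustration of the hazard described above, in toy C++ rather than SelectionDAG code: an operation that replaces all uses of a node can drop the node's last reference and free it, so anything still needed from the old node has to be read out before the replacement, not after.

    #include <cassert>
    #include <memory>

    struct ToyNode { int id; };

    int main() {
        auto oldNode = std::make_shared<ToyNode>(ToyNode{7});
        std::shared_ptr<ToyNode> user = oldNode;          // the only other reference to it

        int savedId = oldNode->id;                        // read what we need *before* replacing
        user = std::make_shared<ToyNode>(ToyNode{8});     // "replace all uses" of the old node
        oldNode.reset();                                  // the old node may be deleted here

        assert(savedId == 7);                             // safe: we never touch oldNode again
    }
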
Evan Cheng d703df67ce Do not force indirect tailcall through fixed registers: eax, r11. Add support to allow loads to be folded to tail call instructions.
llvm-svn: 98465
2010-03-14 03:48:46 +00:00
Chris Lattner 82cc53388e add a comment.
llvm-svn: 97709
2010-03-04 01:43:43 +00:00
Chris Lattner 3fcbbd8673 factor the 'sign extended from 8 bit' patterns better so
that they are not destination type specific.  This allows
tblgen to factor them and the type check is redundant with
what the isel does anyway.

llvm-svn: 97629
2010-03-03 01:45:01 +00:00
Chris Lattner 8d63704021 merge two loops over all nodes in the graph into one.
llvm-svn: 97606
2010-03-02 23:12:51 +00:00
Chris Lattner 1eb6eb059c eliminate PreprocessForRMW now that isel handles it.
We still preprocess calls and fp return stuff.

llvm-svn: 97598
2010-03-02 22:33:56 +00:00
Chris Lattner dd030701bd Fix some issues in WalkChainUsers dealing with
CopyToReg/CopyFromReg/INLINEASM.  These are annoying because
they have the same opcode before and after isel.  Fix this by
setting their NodeID to -1 to indicate that they are selected,
just like what automatically happens when selecting things that
end up being machine nodes.

With that done, give IsLegalToFold a new flag that causes it to
ignore chains.  This lets the HandleMergeInputChains routine be
the one place that validates chains after a match is successful,
enabling the new hotness in chain processing.  This smarter
chain processing eliminates the need for "PreprocessRMW" in the
X86 and MSP430 backends and enables MSP430 to start matching its
multiple mem-operand instructions more aggressively.

I currently #if out the dead code in the X86 and MSP430 backends;
I'll remove it for real in a follow-on patch.

The testcase changes are:
  test/CodeGen/X86/sse3.ll: we generate better code
  test/CodeGen/X86/store_op_load_fold2.ll: PreprocessRMW was
      miscompiling this before; we now generate correct code.
      Converted it to FileCheck while I was at it.
  test/CodeGen/MSP430/Inst16mm.ll: Add a testcase for mem/mem
      folding to make anton happy. :)

llvm-svn: 97596
2010-03-02 22:20:06 +00:00
Chris Lattner f98f124a73 Sink InstructionSelect() out of each target into SDISel, and rename it
DoInstructionSelection.  Inline "SelectRoot" into it from DAGISelHeader.
Sink some other stuff out of DAGISelHeader into SDISel.

Eliminate the 'Indent' machinery from the various targets, which dates
back to when isel was recursive.

 17 files changed, 114 insertions(+), 430 deletions(-)

llvm-svn: 97555
2010-03-02 06:34:30 +00:00
Chris Lattner bd6e193f54 remove a little hack I did for the old isel, not needed
now that it is gone.

llvm-svn: 97516
2010-03-01 22:51:11 +00:00
Chris Lattner 55ef1ebe52 remove a terrible hack that disabled assertions from this file because of build time
problems.  rdar://7697850.

llvm-svn: 97500
2010-03-01 21:20:46 +00:00
Chris Lattner 8d7b4393d2 no need to override IsLegalToFold; the base implementation
disables load folding at -O0.

llvm-svn: 96973
2010-02-23 19:33:11 +00:00
Chris Lattner 3c29aff9ff fix and un-xfail X86/vec_ss_load_fold.ll
llvm-svn: 96720
2010-02-21 04:53:34 +00:00
Chris Lattner 18a32ce0f3 rename SelectScalarSSELoad -> SelectScalarSSELoadXXX and rewrite
it to follow the mode needed by the new isel.  Instead of returning
the input and output chains, it just returns the (currently only one,
which is a silly limitation) node that has input and output chains.

Since we want the old thing to still work, add a new 
SelectScalarSSELoad to emulate the old interface.  The XXX suffix
and the wrapper will eventually go away.

llvm-svn: 96715
2010-02-21 03:17:59 +00:00
Chris Lattner 3f48215480 rename and document some arguments so I don't have to keep
reverse engineering what they are.

llvm-svn: 96456
2010-02-17 06:07:47 +00:00
Chris Lattner afac7dad21 fix rdar://7653908, a crash on a case where we would fold a load
into a roundss intrinsic, producing a cyclic dag.  The root cause
of this is bad handling of ComplexPattern nodes in the old dagisel,
which I noticed through inspection.  Eliminate a copy of the
code that handled ComplexPatterns by making EmitChildMatchCode call
into EmitMatchCode.

llvm-svn: 96408
2010-02-16 22:35:06 +00:00
Evan Cheng 5e73ff2e3a Split SelectionDAGISel::IsLegalAndProfitableToFold into
IsLegalToFold and IsProfitableToFold. The generic version of the latter simply checks whether the folding candidate has a single use.

This allows the target isel routines more flexibility in deciding whether folding makes sense. The specific case we are interested in is folding constant pool loads with multiple uses.

llvm-svn: 96255
2010-02-15 19:41:07 +00:00
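
A standalone sketch of the split described above, assuming the generic profitability rule really does reduce to "the candidate has a single use" as the message says. The types and function names below are toys, not the SelectionDAGISel API:

    #include <cassert>

    struct NodeSketch {
        int NumUses;
        bool hasOneUse() const { return NumUses == 1; }
    };

    // Generic default: folding is profitable only if nothing else still needs N.
    bool isProfitableToFold(const NodeSketch &N) { return N.hasOneUse(); }

    // Legality stays a separate question a target can answer independently.
    bool isLegalToFold(const NodeSketch &) { return true; }

    int main() {
        NodeSketch ConstPoolLoad{2};                 // e.g. a constant-pool load with two users
        assert(isLegalToFold(ConstPoolLoad));        // legal to fold...
        assert(!isProfitableToFold(ConstPoolLoad));  // ...but the generic rule says don't
    }
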
David Greene cbd39c5def Remove an assumption of default arguments. This is in anticipation of a
change to SelectionDAG build APIs.

llvm-svn: 96239
2010-02-15 16:57:43 +00:00
Chris Lattner 2b0a7a2592 refactor the conditional jump instructions in the .td file to
use a multipattern that generates both the 1-byte and 4-byte 
versions from the same defm

llvm-svn: 95901
2010-02-11 19:25:55 +00:00
Chris Lattner b06015aa69 move target-independent opcodes out of TargetInstrInfo
into TargetOpcodes.h.  #include the new TargetOpcodes.h
into MachineInstr.  Add new inline accessors (like isPHI())
to MachineInstr, and start using them throughout the 
codebase.

llvm-svn: 95687
2010-02-09 19:54:29 +00:00
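
A toy illustration of the accessor pattern being introduced above: a named predicate replaces scattered opcode comparisons. The enum and struct below are invented stand-ins, not the real TargetOpcodes.h or MachineInstr:

    #include <cassert>

    namespace ToyOpcode { enum { PHI, COPY, ADD }; }

    struct MachineInstrSketch {
        int Opcode;
        int getOpcode() const { return Opcode; }
        // Replaces "getOpcode() == TargetOpcode::PHI"-style checks at call sites.
        bool isPHI() const { return getOpcode() == ToyOpcode::PHI; }
    };

    int main() {
        MachineInstrSketch MI{ToyOpcode::PHI};
        assert(MI.isPHI());
    }
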
Dan Gohman 51ad99d2c5 Re-implement the main strength-reduction portion of LoopStrengthReduction.
This new version is much more aggressive about doing "full" reduction in
cases where it reduces register pressure, and also more aggressive about
rewriting induction variables to count down (or up) to zero when doing so
reduces register pressure.

It currently uses fairly simplistic algorithms for finding reuse
opportunities, but it introduces a new framework that allows it to combine
multiple strategies at once to form hybrid solutions, instead of doing
all full-reduction or all base+index.

llvm-svn: 94061
2010-01-21 02:09:26 +00:00
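
A source-level illustration of the "count down to zero" rewrite mentioned above: both loops do the same work, but the second one's exit test is a compare against zero and needs no live copy of n. This is only an analogy; LSR rewrites induction variables in the IR, not C++ source:

    #include <cassert>

    long sum_up(const int *a, long n) {
        long s = 0;
        for (long i = 0; i < n; ++i)    // exit test compares i against n
            s += a[i];
        return s;
    }

    long sum_down(const int *a, long n) {
        long s = 0;
        const int *p = a + n;
        for (long i = n; i != 0; --i)   // exit test is a compare against zero
            s += *--p;
        return s;
    }

    int main() {
        int a[4] = {1, 2, 3, 4};
        assert(sum_up(a, 4) == sum_down(a, 4));
    }
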
David Greene 0985160c54 When XDEBUG is enabled, check for SelectionDAG cycles at some key
points.  This will help us find future problems like the one
described in PR6019.

llvm-svn: 94019
2010-01-20 20:13:31 +00:00
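
A minimal sketch of the kind of consistency check described above: a depth-first search that reports whether a small successor graph contains a cycle. The real XDEBUG check walks SDNode operands; the structures below are toys:

    #include <cassert>
    #include <vector>

    // state: 0 = unvisited, 1 = on the current DFS path, 2 = finished
    bool reachesCycle(const std::vector<std::vector<int>> &succ,
                      std::vector<int> &state, int u) {
        state[u] = 1;
        for (int v : succ[u]) {
            if (state[v] == 1) return true;                      // back edge: cycle found
            if (state[v] == 0 && reachesCycle(succ, state, v)) return true;
        }
        state[u] = 2;
        return false;
    }

    int main() {
        std::vector<std::vector<int>> succ = {{1}, {2}, {0}};    // 0 -> 1 -> 2 -> 0
        std::vector<int> state(succ.size(), 0);
        assert(reachesCycle(succ, state, 0));
    }
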
David Greene b0c0e6433f Fix PR6019. A load has more than one use if it feeds a bitconvert that
has more than one use.

llvm-svn: 93576
2010-01-15 23:23:41 +00:00
Dan Gohman c119580307 Reapply the MOV64r0 patch, with a fix: MOV64r0 clobbers EFLAGS.
llvm-svn: 93229
2010-01-12 04:42:54 +00:00
Evan Cheng 7bdf339602 Revert 93158. It's breaking quite a few x86_64 tests.
llvm-svn: 93185
2010-01-11 21:13:41 +00:00
Dan Gohman 3a55686345 Re-instate MOV64r0 and MOV16r0, with adjustments to work with the
new AsmPrinter. This is perhaps less elegant than describing them
in terms of MOV32r0 and subreg operations, but it allows the
current register allocator to rematerialize them.

llvm-svn: 93158
2010-01-11 17:37:57 +00:00
David Greene dbdb1b28b8 Change errs() to dbgs().
llvm-svn: 92647
2010-01-05 01:29:08 +00:00
Dan Gohman ea6f91ff64 Change SelectCode's argument from SDValue to SDNode *, to make it more
clear what information these functions are actually using.

This is also a micro-optimization, as passing a SDNode * around is
simpler than passing a { SDNode *, int } by value or reference.

llvm-svn: 92564
2010-01-05 01:24:18 +00:00
Dan Gohman 85d4fdfe37 Flags-producing add, and, or, etc. have the same profitability
rules as normal add, and, or, etc.

llvm-svn: 92507
2010-01-04 20:51:50 +00:00
Chris Lattner 518b037620 completely eliminate the MOV16r0 'instruction'. The only
interesting part of this is the divrem changes, which are
already tested by CodeGen/X86/divrem.ll.

llvm-svn: 91975
2009-12-23 01:45:04 +00:00
Evan Cheng 3dfd04e2b7 Re-apply 91623 now that I actually know what I was trying to do.
llvm-svn: 91655
2009-12-18 01:59:21 +00:00
Jeffrey Yasskin 0de0ce11d8 Revert r91623 to unbreak the buildbots.
llvm-svn: 91632
2009-12-17 22:44:34 +00:00
Evan Cheng e43b403c87 Remove an unused option.
llvm-svn: 91623
2009-12-17 21:23:58 +00:00
Dan Gohman 7a6611793f Target-independent support for TargetFlags on BlockAddress operands,
and support for blockaddresses in x86-32 PIC mode.

llvm-svn: 89506
2009-11-20 23:18:13 +00:00
Daniel Dunbar bc299f0092 llvm-gcc/clang don't (won't?) need this hack.
llvm-svn: 86769
2009-11-11 00:28:38 +00:00
Daniel Dunbar b9415c7d9a Add a monstrous hack to improve X86ISelDAGToDAG compile time.
 - Force NDEBUG on in any Release build. This drops the compile time to ~100s
   from ~600s in Release mode.

 - This may just be a temporary workaround; I don't know the true nature of the
   gcc-4.2 compile-time performance problem.

llvm-svn: 86695
2009-11-10 18:24:37 +00:00
Dan Gohman 006f9353e1 Use SUBREG_TO_REG instead of INSERT_SUBREG to model x86-64's
implicit zero-extend.

llvm-svn: 86196
2009-11-05 23:53:08 +00:00
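
As a reminder of the hardware rule being modeled above (only the rule, not the SUBREG_TO_REG mechanics): on x86-64, writing a 32-bit register clears bits 63:32 of the containing 64-bit register, so a 32-to-64-bit zero extension usually costs no extra instruction. A small C++ statement of those semantics:

    #include <cassert>
    #include <cstdint>

    std::uint64_t zext32to64(std::uint32_t lo) {
        // The cast is the whole operation: the 32-bit write that produced
        // `lo` already left the upper 32 bits of the full register at zero.
        return static_cast<std::uint64_t>(lo);
    }

    int main() {
        assert(zext32to64(0xFFFFFFFFu) == 0x00000000FFFFFFFFull);
    }
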
Dan Gohman b15f4a1cbd Remove uninteresting and confusing debug output.
llvm-svn: 86149
2009-11-05 18:47:09 +00:00
Chris Lattner 50ba5c3dc2 improve x86 codegen support for blockaddress. We now compile
the testcase into:

_test1:                                                     ## @test1
## BB#0:                                                    ## %entry
	leaq	L_test1_bb6(%rip), %rax
	jmpq	*%rax
L_test1_bb:                                                 ## Address Taken
LBB1_1:                                                     ## %bb
	movb	$1, %al
	ret
L_test1_bb6:                                                ## Address Taken
LBB1_2:                                                     ## %bb6
	movb	$2, %al
	ret

Note, it is very very strange that BlockAddressSDNode doesn't carry 
around TargetFlags.  Dan, please fix this.

llvm-svn: 85703
2009-11-01 03:25:03 +00:00
Nick Lewycky 974e12b2d3 Remove includes of Support/Compiler.h that are no longer needed after the
VISIBILITY_HIDDEN removal.

llvm-svn: 85043
2009-10-25 06:57:41 +00:00
Nick Lewycky 02d5f77d26 Remove VISIBILITY_HIDDEN from class/struct found inside anonymous namespaces.
Chris claims we should never have visibility_hidden inside any .cpp file but
that's still not true even after this commit.

llvm-svn: 85042
2009-10-25 06:33:48 +00:00
Dan Gohman 7d9dffb413 Fix the x86 test-shrink optimization so that it doesn't shrink comparisons
when one of the bits being tested would end up being the sign bit in the
narrower type, and a signed comparison is being performed, since this would
change the result of the signed comparison. This fixes PR5132.

llvm-svn: 83670
2009-10-09 20:35:19 +00:00
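
A sketch of the guard the fix implies, using a hypothetical helper name (this is not the LLVM routine): shrinking a 16-bit test to an 8-bit one is only safe under a signed comparison if bit 7 is not among the tested bits, since bit 7 becomes the sign bit of the narrower type.

    #include <cassert>
    #include <cstdint>

    bool canShrinkTestTo8Bits(std::uint16_t mask, bool usedBySignedCompare) {
        if (mask & 0xFF00u)                          // tested bits must fit in the low byte
            return false;
        if (usedBySignedCompare && (mask & 0x80u))   // bit 7 would become the i8 sign bit
            return false;
        return true;
    }

    int main() {
        assert(canShrinkTestTo8Bits(0x40, /*signed compare*/ true));
        assert(!canShrinkTestTo8Bits(0x80, /*signed compare*/ true));   // the PR5132-style hazard
        assert(canShrinkTestTo8Bits(0x80, /*signed compare*/ false));
    }
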
Dan Gohman 48b185d6f7 Improve MachineMemOperand handling.
 - Allocate MachineMemOperands and MachineMemOperand lists in MachineFunctions.
   This eliminates MachineInstr's std::list member and allows the data to be
   created by isel and live for the remainder of codegen, avoiding a lot of
   copying and unnecessary translation. This also shrinks MemSDNode.
 - Delete MemOperandSDNode. Introduce MachineSDNode which has dedicated
   fields for MachineMemOperands.
 - Change MemSDNode to have a MachineMemOperand member instead of its own
   fields with the same information. This introduces some redundancy, but
   it's more consistent with what MachineInstr will eventually want.
 - Ignore alignment when searching for redundant loads for CSE, but remember
   the greatest alignment.

Target-specific code which previously used MemOperandSDNodes with generic
SDNodes now use MemIntrinsicSDNodes, with opcodes in a designated range
so that the SelectionDAG framework knows that MachineMemOperand information
is available.

llvm-svn: 82794
2009-09-25 20:36:54 +00:00
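
A toy rendering of the last bullet of the commit above: two loads with the same address and size are treated as redundant even if their recorded alignments differ, and the surviving entry keeps the greater alignment. The map-based table is an invention for illustration, not LLVM's CSE machinery:

    #include <algorithm>
    #include <cassert>
    #include <cstdint>
    #include <map>
    #include <utility>

    using LoadKey = std::pair<std::uint64_t, unsigned>;   // (address, size in bytes)

    struct LoadTable {
        std::map<LoadKey, unsigned> BestAlign;            // alignment is not part of the key
        // Returns true if an equivalent load was already seen (a CSE hit).
        bool noteLoad(std::uint64_t addr, unsigned size, unsigned align) {
            auto result = BestAlign.emplace(LoadKey(addr, size), align);
            if (!result.second)                           // already present: keep the greatest alignment
                result.first->second = std::max(result.first->second, align);
            return !result.second;
        }
    };

    int main() {
        LoadTable T;
        assert(!T.noteLoad(0x1000, 4, 4));
        assert(T.noteLoad(0x1000, 4, 16));                // redundant despite a different alignment
        assert(T.BestAlign[LoadKey(0x1000, 4)] == 16);
    }
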