Commit Graph

4422 Commits

Author SHA1 Message Date
Owen Anderson 1f93edd08a Remove ugly and horrible code. It's not necessary for correctness, and can be added back later if it causes code quality issues.
llvm-svn: 44986
2007-12-13 05:43:37 +00:00
Evan Cheng 6e68381e02 Implicit def instructions, e.g. X86::IMPLICIT_DEF_GR32, are always re-materializable and they should not be spilled.
llvm-svn: 44960
2007-12-12 23:12:09 +00:00
Dan Gohman 7a7742c2fe Allow vector integer constants to be created with
SelectionDAG::getConstant, in the same way as vector floating-point
constants. This allows the legalize expansion code for @llvm.ctpop and
friends to be usable with vector types.

llvm-svn: 44954
2007-12-12 22:21:26 +00:00
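As an illustrative sketch (not part of the commit), this is the kind of IR the legalizer can now expand using vector constants from SelectionDAG::getConstant; the intrinsic name assumes the usual type-suffix scheme.

declare <4 x i32> @llvm.ctpop.v4i32(<4 x i32>)

define <4 x i32> @popcnt_v4(<4 x i32> %x) {
	; expanded by legalize using vector splat constants (masks such as 0x55555555)
	%r = call <4 x i32> @llvm.ctpop.v4i32( <4 x i32> %x )
	ret <4 x i32> %r
}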
Owen Anderson 499e5bffcf Forgot to remove a register from the PHI-union after I'd determined that it
interfered with other registers.  Seems like that might be a good thing to do. :-)

llvm-svn: 44902
2007-12-12 01:25:08 +00:00
Evan Cheng 6766d2fa4f If deleting a reload instruction due to reuse (value is available in register R and reload is targeting R), make sure to invalidate the kill information of the last kill.
llvm-svn: 44894
2007-12-11 23:36:57 +00:00
Bill Wendling 38236ef6cb Need to grow the indexed map. Added debug statements.
llvm-svn: 44892
2007-12-11 23:27:51 +00:00
Bill Wendling 642e15a7cb Simplify slightly.
llvm-svn: 44881
2007-12-11 22:22:22 +00:00
Owen Anderson f24dd1c1eb More progress on StrongPHIElimination. Now we actually USE the DomForest!
llvm-svn: 44877
2007-12-11 20:12:11 +00:00
Bill Wendling b678ae7c38 Blark! How in the world did this work without this?!
llvm-svn: 44874
2007-12-11 19:40:06 +00:00
Bill Wendling 7717a8a37d - Update the virtual reg to machine instruction map when hoisting.
- Fix subtle bug when initially creating this map.

llvm-svn: 44873
2007-12-11 19:17:04 +00:00
Bill Wendling 5143d898c8 Checking for "zero operands" in the "CanHoistInst()" method isn't necessary
because instructions with side effects will be caught by other checks in here.

Also, simplify the check for a BB in a sub loop.

llvm-svn: 44871
2007-12-11 18:45:11 +00:00
Evan Cheng 303417d242 Switch over to MachineLoopInfo.
llvm-svn: 44838
2007-12-11 02:09:15 +00:00
Evan Cheng f54030231e Pretty print shuffle mask operand.
llvm-svn: 44837
2007-12-11 02:08:35 +00:00
Gordon Henriksen 7843c16f31 CollectorMetadata and Collector are rejiggered to get along with the
per-function collector model. Collector is now the factory for
CollectorMetadata, so the latter may be subclassed.

llvm-svn: 44827
2007-12-11 00:30:17 +00:00
Owen Anderson ba61806ef1 A little more progress on StrongPHIElimination, now that I have a better sense of
how the CodeGen machinery works.

llvm-svn: 44786
2007-12-10 08:07:09 +00:00
Christopher Lamb d202e03fe5 Improve branch folding by recognizing that explicit successor relationships impact the value of fall-through choices.
llvm-svn: 44785
2007-12-10 07:24:06 +00:00
Chris Lattner 64443973c0 Duncan points out that the subtraction is unneeded since the code
knows the vector is not pow2

llvm-svn: 44740
2007-12-09 17:56:34 +00:00
Chris Lattner 69d3298777 Add support for splitting the operand of a return instruction.
llvm-svn: 44728
2007-12-09 00:06:19 +00:00
Bill Wendling 3f19dfe794 Reverting 44702. It wasn't correct to rename them.
llvm-svn: 44727
2007-12-08 23:58:46 +00:00
Chris Lattner e48fc80446 add many new cases to SplitResult. SplitResult now handles all the cases that LegalizeDAG does.
llvm-svn: 44726
2007-12-08 23:58:27 +00:00
Chris Lattner de9046af54 Implement splitting support for store, allowing us to compile:
%f8 = type <8 x float>

define void @test_f8(%f8* %P, %f8* %Q, %f8* %S) {
	%p = load %f8* %P		; <%f8> [#uses=1]
	%q = load %f8* %Q		; <%f8> [#uses=1]
	%R = add %f8 %p, %q		; <%f8> [#uses=1]
	store %f8 %R, %f8* %S
	ret void
}

into:

_test_f8:
	movaps	16(%rdi), %xmm0
	addps	16(%rsi), %xmm0
	movaps	(%rdi), %xmm1
	addps	(%rsi), %xmm1
	movaps	%xmm0, 16(%rdx)
	movaps	%xmm1, (%rdx)
	ret

llvm-svn: 44725
2007-12-08 23:24:26 +00:00
Chris Lattner de87224cd9 implement vector splitting of load, undef, and binops.
llvm-svn: 44724
2007-12-08 23:08:49 +00:00
Chris Lattner 1ef437d4e1 implement some methods.
llvm-svn: 44723
2007-12-08 22:40:18 +00:00
Chris Lattner a5e7db115e add scaffolding for splitting of vectors.
llvm-svn: 44722
2007-12-08 22:37:41 +00:00
Chris Lattner 8c8eaf6b92 reorganize header to separate into functional blocks.
llvm-svn: 44719
2007-12-08 21:59:32 +00:00
Chris Lattner 4063bd6eae split scalarization out to its own file.
llvm-svn: 44718
2007-12-08 20:30:28 +00:00
Chris Lattner 5c7c46baaf Split expansion out into its own file.
llvm-svn: 44717
2007-12-08 20:27:32 +00:00
Chris Lattner 029c816460 Split promotion support out to its own file.
llvm-svn: 44716
2007-12-08 20:24:38 +00:00
Chris Lattner 757d4beba9 Rename LegalizeDAGTypes.cpp -> LegalizeTypes.cpp
llvm-svn: 44715
2007-12-08 20:17:13 +00:00
Chris Lattner 92288147b6 Split the class definition of DAGTypeLegalizer out into a header.
Leave it visibility hidden, but not in an anon namespace.

llvm-svn: 44714
2007-12-08 20:16:06 +00:00
Bill Wendling 2b07d8c5a0 Renaming:
  isTriviallyReMaterializable -> hasNoSideEffects
  isReallyTriviallyReMaterializable -> isTriviallyReMaterializable

llvm-svn: 44702
2007-12-08 07:17:56 +00:00
Bill Wendling 4375173ba0 Incorporated comments from Evan and Chris:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20071203/056043.html
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20071203/056048.html

llvm-svn: 44696
2007-12-08 01:47:01 +00:00
Bill Wendling fb706bc52b Initial commit of the machine code LICM pass. It successfully hoists this:
_foo:
        li r2, 0
LBB1_1: ; bb
        li r5, 0
        stw r5, 0(r3)
        addi r2, r2, 1
        addi r3, r3, 4
        cmplw cr0, r2, r4
        bne cr0, LBB1_1 ; bb
LBB1_2: ; return
        blr 

to:

_foo:
        li r2, 0
        li r5, 0
LBB1_1: ; bb
        stw r5, 0(r3)
        addi r2, r2, 1
        addi r3, r3, 4
        cmplw cr0, r2, r4
        bne cr0, LBB1_1 ; bb
LBB1_2: ; return
        blr

ZOMG!! :-)

Moar to come...

llvm-svn: 44687
2007-12-07 21:42:31 +00:00
Evan Cheng 85cdba29b0 Add an option to control this heuristic tweak so I can test it.
llvm-svn: 44671
2007-12-07 00:28:32 +00:00
Dale Johannesen 5eff4de9c8 Redo previous patch so the optimization is only done for i1.
Simpler and safer.

llvm-svn: 44663
2007-12-06 17:53:31 +00:00
Evan Cheng 8393dc7378 Turning simple splitting on. Start testing new coalescer heuristics as new llcbeta.
llvm-svn: 44660
2007-12-06 08:54:31 +00:00
Chris Lattner eedaf92fcf third time around: instead of disabling this completely,
only disable it if we don't know it will be obviously profitable.
Still fixme, but less so. :)

llvm-svn: 44658
2007-12-06 07:47:55 +00:00
Chris Lattner b5fdfb9612 Actually, disable this code for now. More analysis and improvements to
the X86 backend are needed before this should be enabled by default.

llvm-svn: 44657
2007-12-06 07:44:31 +00:00
Chris Lattner 7c709a5d08 implement a readme entry, compiling the code into:
_foo:
	movl	$12, %eax
	andl	4(%esp), %eax
	movl	_array(%eax), %eax
	ret

instead of:

_foo:
	movl	4(%esp), %eax
	shrl	$2, %eax
	andl	$3, %eax
	movl	_array(,%eax,4), %eax
	ret

As it turns out, this triggers all the time, in a wide variety of
situations. For example, I see diffs like this in various programs:

-       movl    8(%eax), %eax
-       shll    $2, %eax
-       andl    $1020, %eax
-       movl    (%esi,%eax), %eax
+       movzbl  8(%eax), %eax
+       movl    (%esi,%eax,4), %eax


-       shll    $2, %edx
-       andl    $1020, %edx
-       movl    (%edi,%edx), %edx
+       andl    $255, %edx
+       movl    (%edi,%edx,4), %edx

Unfortunately, I also see stuff like this, which can be fixed in the
X86 backend:

-       andl    $85, %ebx
-       addl    _bit_count(,%ebx,4), %ebp
+       shll    $2, %ebx
+       andl    $340, %ebx
+       addl    _bit_count(%ebx), %ebp

llvm-svn: 44656
2007-12-06 07:33:36 +00:00
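As a sketch of the source pattern involved (assumed for illustration; the readme entry and its testcase are not reproduced here), the combine keeps the index pre-scaled, so (x >> 2) & 3 scaled by 4 becomes x & 12 used directly as the byte offset:

@array = external global [4 x i32]

define i32 @foo(i32 %x) {
	%i = lshr i32 %x, 2
	%m = and i32 %i, 3
	%p = getelementptr [4 x i32]* @array, i32 0, i32 %m
	%v = load i32* %p		; codegen folds the shift/mask into andl $12
	ret i32 %v
}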
Chris Lattner 42558bf664 implement the rest of the functionality from SelectionDAGLegalize::ScalarizeVectorOp
llvm-svn: 44654
2007-12-06 05:53:43 +00:00
Dale Johannesen 05bbbda78a Fix PR1842.
llvm-svn: 44649
2007-12-06 01:43:46 +00:00
Evan Cheng 7fc1d98353 Fix for PR1831: if all defs of an interval are re-materializable, then it's a preferred spill candidate.
llvm-svn: 44644
2007-12-06 00:01:56 +00:00
Evan Cheng 678b86d6ce MachineInstr can change. Store indexes instead.
llvm-svn: 44612
2007-12-05 10:24:35 +00:00
Evan Cheng 06353b48b5 If a split live interval is spilled again, remove the kill marker on its last use.
llvm-svn: 44611
2007-12-05 09:51:10 +00:00
Evan Cheng 64b3baaaea Clobber more bugs.
llvm-svn: 44610
2007-12-05 09:05:34 +00:00
Evan Cheng d7de56ac93 Fix kill info for split intervals.
llvm-svn: 44609
2007-12-05 08:16:32 +00:00
Chris Lattner c9693c60a5 more scalarization
llvm-svn: 44608
2007-12-05 07:45:02 +00:00
Chris Lattner 1a0d49a63c scalarize vector binops
llvm-svn: 44607
2007-12-05 07:36:58 +00:00
Evan Cheng 269dbd31d0 - Mark last use of a split interval as kill instead of letting spiller track it.
This allows an important optimization to be re-enabled.
- If all uses / defs of a split interval can be folded, give the interval a
  low spill weight so it would not be picked in case spilling is needed (avoid
  pushing other intervals in the same BB to be spilled).

llvm-svn: 44601
2007-12-05 03:22:34 +00:00
Evan Cheng bb26301864 Add an argument to storeRegToStackSlot and storeRegToAddr to specify whether
the stored register is killed.

llvm-svn: 44600
2007-12-05 03:14:33 +00:00
Evan Cheng e412a4427b Remove an unsafe optimization. This fixes 401.bzip2.
llvm-svn: 44587
2007-12-04 23:57:55 +00:00
Evan Cheng cd8a89b3cd Spiller unfold optimization bug: do not clobber a reusable stack slot value unless it can be modified.
llvm-svn: 44575
2007-12-04 19:19:45 +00:00
Chris Lattner b892225fb9 Implement framework for scalarizing node results. This is sufficient
to codegen this:

define float @test_extract_elt(<1 x float> * %P) {
	%p = load <1 x float>* %P
	%R = extractelement <1 x float> %p, i32 0
	ret float %R
}

llvm-svn: 44570
2007-12-04 07:48:46 +00:00
Chris Lattner 681c9d6697 start providing framework for scalarizing vectors.
llvm-svn: 44569
2007-12-04 07:29:51 +00:00
Evan Cheng d1badb960e Discard split intervals made empty due to folding.
llvm-svn: 44565
2007-12-04 00:32:23 +00:00
Evan Cheng 40965448ff Bug fixes.
llvm-svn: 44549
2007-12-03 21:31:55 +00:00
Duncan Sands 38ef3a8ec7 Rather than having special rules like "intrinsics cannot
throw exceptions", just mark intrinsics with the nounwind
attribute.  Likewise, mark intrinsics as readnone/readonly
and get rid of special aliasing logic (which didn't use
anything more than this anyway).

llvm-svn: 44544
2007-12-03 20:06:50 +00:00
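For example (an illustrative sketch, not taken from the patch; signatures are the 2007-era intrinsic forms as best recalled), intrinsic declarations now simply carry the attributes, so generic attribute queries replace the special-case rules:

declare i32 @llvm.ctpop.i32(i32) nounwind readnone
declare void @llvm.memcpy.i32(i8*, i8*, i32, i32) nounwind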
Evan Cheng 196faa9dc5 Typo
llvm-svn: 44532
2007-12-03 10:00:00 +00:00
Evan Cheng 85ef9834a6 Update kill info for uses of split intervals.
llvm-svn: 44531
2007-12-03 09:58:48 +00:00
Evan Cheng f45a1d623c Remove redundant foldMemoryOperand variants and other code clean up.
llvm-svn: 44517
2007-12-02 08:30:39 +00:00
Evan Cheng 388f6f51a0 Fix a bug where splitting causes some unnecessary spilling.
llvm-svn: 44482
2007-12-01 04:42:39 +00:00
Evan Cheng 69fda0a716 Allow some reloads to be folded in multi-use cases. Specifically testl r, r -> cmpl [mem], 0.
llvm-svn: 44479
2007-12-01 02:07:52 +00:00
Evan Cheng b10dc27b20 Do not fold a reload into an instruction with multiple uses. It issues one extra load.
llvm-svn: 44467
2007-11-30 21:23:43 +00:00
Devang Patel cc45c338d1 Provide a way to update DescGlobals cache directly.
llvm-svn: 44446
2007-11-30 00:51:33 +00:00
Evan Cheng d35b5acae4 Do not lose rematerialization info when spilling already split live intervals.
llvm-svn: 44443
2007-11-29 23:02:50 +00:00
Evan Cheng 8494ee175c Fix a major performance issue with splitting. If there is a def (not def/use)
in the middle of a split basic block, create a new live interval starting at
the def. This avoids artificially extending the live interval over a number of
cycles where it is dead. e.g.

bb1:
       = vr1204   (use / kill) <= new interval starts and ends here.
...
...
vr1204 =          (new def)   <= start a new interval here.
       = vr1204   (use)

llvm-svn: 44436
2007-11-29 10:12:14 +00:00
Evan Cheng f85c063ec0 Replace the odd kill# hack with something less fragile.
llvm-svn: 44434
2007-11-29 09:49:23 +00:00
Evan Cheng be255b0650 Fixed various live interval splitting bugs / compile time issues.
llvm-svn: 44428
2007-11-29 01:06:25 +00:00
Evan Cheng 147f7799c5 Kill info update bug.
llvm-svn: 44427
2007-11-29 01:05:47 +00:00
Duncan Sands 5208d1ab4a Add some convenience methods for querying attributes, and
use them.

llvm-svn: 44403
2007-11-28 17:07:01 +00:00
Duncan Sands 45a0c3265f Add missing newlines at EOF.
llvm-svn: 44399
2007-11-28 10:13:38 +00:00
Evan Cheng c1648b6a0d Recover compile time regression.
llvm-svn: 44386
2007-11-28 01:28:46 +00:00
Owen Anderson 30767b15e9 Add MachineLoopInfo. This is not yet tested.
llvm-svn: 44384
2007-11-27 22:47:08 +00:00
Nate Begeman 6f026a654c Support returning non-power-of-2 vectors to unblock some work
llvm-svn: 44371
2007-11-27 19:28:48 +00:00
Duncan Sands ad0ea2d430 Fix PR1146: parameter attributes are no longer part of
the function type; instead they belong to functions
and function calls.  This is an updated and slightly
corrected version of Reid Spencer's original patch.
The only known problem is that auto-upgrading of
bitcode files doesn't seem to work properly (see
test/Bitcode/AutoUpgradeIntrinsics.ll).  Hopefully
a bitcode guru (who might that be? :) ) will fix it.

llvm-svn: 44359
2007-11-27 13:23:08 +00:00
Chris Lattner 698b1cb28d err, no really.
llvm-svn: 44352
2007-11-27 06:14:32 +00:00
Chris Lattner 28caf2717a don't depend on ADL.
llvm-svn: 44351
2007-11-27 06:14:12 +00:00
Dan Gohman 9a69341725 Don't lower srem/urem X%C to X-X/C*C unless the division is actually
optimized. This avoids creating illegal divisions when the combiner is
running after legalize; this fixes PR1815. Also, it produces better
code in the included testcase by avoiding the subtract and multiply
when the division isn't optimized.

llvm-svn: 44341
2007-11-26 23:46:11 +00:00
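In IR terms, a sketch of the lowering being guarded (hypothetical functions, not the included testcase):

define i32 @mod7(i32 %x) {
	%r = srem i32 %x, 7
	ret i32 %r
}

; lowered form, only produced when the sdiv itself is optimized:
define i32 @mod7_lowered(i32 %x) {
	%q = sdiv i32 %x, 7
	%t = mul i32 %q, 7
	%r = sub i32 %x, %t
	ret i32 %r
}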
Chris Lattner cab915f9cf Implement expand support for MERGE_VALUEs that only produces one result.
llvm-svn: 44304
2007-11-24 19:12:15 +00:00
Chris Lattner 6e3641897b Implement support for custom legalization in DAGTypeLegalizer::ExpandOperand.
Improve a comment.
Unbreak Duncan's carefully written path compression where I didn't realize
what was happening!

llvm-svn: 44301
2007-11-24 18:11:42 +00:00
Chris Lattner f81d5886c6 Several changes:
1) Change the interface to TargetLowering::ExpandOperationResult to 
   take and return entire NODES that need a result expanded, not just
   the value.  This allows us to handle things like READCYCLECOUNTER,
   which returns two values.
2) Implement (extremely limited) support in LegalizeDAG::ExpandOp for MERGE_VALUES.
3) Reimplement custom lowering in LegalizeDAGTypes in terms of the new
   ExpandOperationResult.  This makes the result simpler and fully 
   general.
4) Implement (fully general) expand support for MERGE_VALUES in LegalizeDAGTypes.
5) Implement ExpandOperationResult support for ARM f64->i64 bitconvert and ARM
   i64 shifts, allowing them to work with LegalizeDAGTypes.
6) Implement ExpandOperationResult support for X86 READCYCLECOUNTER and FP_TO_SINT,
   allowing them to work with LegalizeDAGTypes.

LegalizeDAGTypes now passes several more X86 codegen tests when enabled and when
type legalization in LegalizeDAG is ifdef'd out.

llvm-svn: 44300
2007-11-24 07:07:01 +00:00
Duncan Sands b87dde7e8e Fix a bug in which node A is replaced by node B, but later
node A gets back into the DAG again because it was hiding in
one of the node maps: make sure that node replacement happens
in those maps too.

llvm-svn: 44263
2007-11-21 16:43:19 +00:00
Dale Johannesen 763e110a9f Fix .eh table linkage issues on Darwin. Some EH support
for Darwin PPC, but it's not fully working yet.

llvm-svn: 44258
2007-11-20 23:24:42 +00:00
Chris Lattner 09c0393d5e ExpandUnalignedLoad doesn't handle vectors right at all apparently.
Fix a couple of problems:
1. Don't assume the VT-1 is a VT that is half the size.
2. Treat vectors of FP in the vector path, not the FP path.

This has a couple of remaining problems before it will work with
the code in PR1811: the code below this change assumes that it can
use extload/shift/or to construct the result, which isn't right for
vectors.

This also doesn't handle vectors of 1 or vectors that aren't pow-2.

llvm-svn: 44243
2007-11-19 21:38:03 +00:00
Chris Lattner 6fa95ec19d Implement vector expand support for shuffle_vector. This fixes PR1811.
llvm-svn: 44242
2007-11-19 21:16:54 +00:00
Chris Lattner 67d77945e7 Implement splitting of UNDEF nodes. This is the first step towards fixing PR1811
llvm-svn: 44239
2007-11-19 20:21:32 +00:00
Dan Gohman 36347a26f9 Add support in SplitVectorOp for remainder operators.
llvm-svn: 44233
2007-11-19 15:15:03 +00:00
Nate Begeman d4d45c268c Add support for vectors to int <-> float casts.
llvm-svn: 44204
2007-11-17 03:58:34 +00:00
Evan Cheng 8e22379303 Live interval splitting:
When a live interval is being spilled, rather than creating short, non-spillable
intervals for every def / use, split the interval at BB boundaries. That is, for
every BB where the live interval is defined or used, create a new interval that
covers all the defs and uses in the BB.

This is designed to eliminate one common problem: multiple reloads of the same
value in a single basic block. Note, it does *not* decrease the number of spills
since no copies are inserted, so the split intervals are *connected* through
spills and reloads (or rematerialization). The newly created intervals can be
spilled again; in that case, since each one does not span multiple basic blocks,
it is spilled in the usual manner. However, it can reuse the same stack slot as the
previously split interval.

This is currently controlled by -split-intervals-at-bb.

llvm-svn: 44198
2007-11-17 00:40:40 +00:00
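Schematically (an added illustration, with a made-up vr1024 and stack slot SS0), the per-BB intervals are connected only through the stack slot:

bb1:
        vr1024 =          (def)   <= interval #1: local to bb1, stored to SS0
bb2:
               = vr1024   (use)   <= interval #2: reloaded from SS0, local to bb2
bb3:
               = vr1024   (use)   <= interval #3: reloaded from SS0, local to bb3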
Anton Korobeynikov 66b91e66ec Implement necessary bits for flt_rounds gcc builtin.
Codegen bits and llvm-gcc support will follow.

llvm-svn: 44182
2007-11-15 23:25:33 +00:00
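For reference, a minimal sketch of what the builtin maps to at the IR level (assumed here; the commit itself only adds the initial bits):

declare i32 @llvm.flt.rounds()

define i32 @rounding_mode() {
	%mode = call i32 @llvm.flt.rounds()
	ret i32 %mode
}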
Nate Begeman bd117f06ba Basic non-power-of-2 vector support
llvm-svn: 44181
2007-11-15 21:15:26 +00:00
Duncan Sands d4494352f8 This assertion was bogus.
llvm-svn: 44167
2007-11-15 09:54:37 +00:00
Evan Cheng 2c1a50455c Fix a thinko in post-allocation coalescer.
llvm-svn: 44166
2007-11-15 08:13:29 +00:00
Bill Wendling b3712f8146 Adding debug output during coalescing.
llvm-svn: 44154
2007-11-15 02:06:30 +00:00
Bill Wendling 8269925b1e Need to increment the iterator.
llvm-svn: 44153
2007-11-15 00:40:48 +00:00
Anton Korobeynikov 2c6387803e Fix PIC jump table codegen on x86-32/linux. In fact, the same thing should be applied
to all targets that use GOT-relative offsets for PIC (Alpha?)

llvm-svn: 44108
2007-11-14 09:18:41 +00:00
Evan Cheng 7f02cfa599 Clean up sub-register implementation by moving subReg information back to
MachineOperand auxInfo. The previous clunky implementation used an external map
to track sub-register uses. That worked because the register allocator uses
a new virtual register for each spilled use. With interval splitting (coming
soon), we may have multiple uses of the same register, some of which use
different sub-registers than others. It's too fragile to constantly
update the information.

llvm-svn: 44104
2007-11-14 07:59:08 +00:00
Owen Anderson d8167ab332 Run computeDomForest() on the set of registers that need to be tested for
interference.

llvm-svn: 44064
2007-11-13 20:13:24 +00:00
Owen Anderson 569ef71e44 Preserve LiveVariables when doing critical edge splitting.
llvm-svn: 44063
2007-11-13 20:04:45 +00:00
Dale Johannesen 7a7085f6d3 Add parameter to getDwarfRegNum to permit targets
to use different mappings for EH and debug info;
no functional change yet.
Fix warning in X86CodeEmitter.

llvm-svn: 44056
2007-11-13 19:13:01 +00:00
Bill Wendling f359fed9f9 Unify CALLSEQ_{START,END}. They take 4 parameters: the chain, two stack
adjustment fields, and an optional flag. If there is a "dynamic_stackalloc" in
the code, make sure that it's bracketed by CALLSEQ_START and CALLSEQ_END. If
not, then there is the potential for the stack to be changed while the stack's
being used by another instruction (like a call).

This can only result in tears...

llvm-svn: 44037
2007-11-13 00:44:25 +00:00
Owen Anderson c520c4b325 Break critical edges coming into blocks with PHI nodes.
llvm-svn: 44019
2007-11-12 17:27:27 +00:00
Evan Cheng be51f28e2b Refactor some code.
llvm-svn: 44010
2007-11-12 06:35:08 +00:00
Owen Anderson a1cd45213d As Chris and Evan pointed out, BreakCriticalMachineEdges doesn't really need
to be a pass of its own.  Instead, move it out into a helper method.

llvm-svn: 44002
2007-11-12 01:05:09 +00:00
Hartmut Kaiser 67297144ab Fixed a strange construct. Please review.
llvm-svn: 43960
2007-11-09 19:59:00 +00:00
Duncan Sands e795efea5b Move MinAlign to MathExtras.h.
llvm-svn: 43944
2007-11-09 13:41:39 +00:00
Duncan Sands e7a9ac929f Fix some load/store logic that would be wrong for
apints on big-endian machines if the bitwidth is
not a multiple of 8.  Introduce a new helper,
MVT::getStoreSizeInBits, and use it.

llvm-svn: 43934
2007-11-09 08:57:19 +00:00
Duncan Sands bab9dc9433 Add terminating newline.
llvm-svn: 43933
2007-11-09 08:30:21 +00:00
Evan Cheng 797d56ff17 Much improved pic jumptable codegen:
Then:
        call    "L1$pb"
"L1$pb":
        popl    %eax
		...
LBB1_1: # entry
        imull   $4, %ecx, %ecx
        leal    LJTI1_0-"L1$pb"(%eax), %edx
        addl    LJTI1_0-"L1$pb"(%ecx,%eax), %edx
        jmpl    *%edx

        .align  2
        .set L1_0_set_3,LBB1_3-LJTI1_0
        .set L1_0_set_2,LBB1_2-LJTI1_0
        .set L1_0_set_5,LBB1_5-LJTI1_0
        .set L1_0_set_4,LBB1_4-LJTI1_0
LJTI1_0:
        .long    L1_0_set_3
        .long    L1_0_set_2

Now:
        call    "L1$pb"
"L1$pb":
        popl    %eax
		...
LBB1_1: # entry
        addl    LJTI1_0-"L1$pb"(%eax,%ecx,4), %eax
        jmpl    *%eax

        .align  2
        .set L1_0_set_3,LBB1_3-"L1$pb"
        .set L1_0_set_2,LBB1_2-"L1$pb"
        .set L1_0_set_5,LBB1_5-"L1$pb"
        .set L1_0_set_4,LBB1_4-"L1$pb"
LJTI1_0:
        .long    L1_0_set_3
        .long    L1_0_set_2

llvm-svn: 43924
2007-11-09 01:32:10 +00:00
Evan Cheng f14006f4d6 Didn't mean to check these in.
llvm-svn: 43923
2007-11-09 01:28:33 +00:00
Evan Cheng 1bf166312b Bug fix. Passive nodes are not in SUnitMap.
llvm-svn: 43922
2007-11-09 01:27:11 +00:00
Owen Anderson 65d2fcdd2a This preserves critical edge breaking.
llvm-svn: 43911
2007-11-08 22:23:57 +00:00
Owen Anderson 3bc8124a66 Make BreakCriticalMachineEdges available as a pass that can be depended on.
llvm-svn: 43910
2007-11-08 22:20:23 +00:00
Evan Cheng ece4c68b82 If both parts of smul_lohi, etc. are used, don't simplify. If only one part is used, try to simplify it.
llvm-svn: 43888
2007-11-08 09:25:29 +00:00
Owen Anderson 0be8c1dafe Add the majority of the machine-level critical edge breaking pass. Most of this was written by Fernando; cleanup and updating to TOT by me.
This still needs a bit of work, particularly to handle jump tables properly.

llvm-svn: 43885
2007-11-08 07:55:43 +00:00
Owen Anderson bfbc12973d Take another stab at getting isLiveIn() and isLiveOut() right.
llvm-svn: 43869
2007-11-08 01:32:45 +00:00
Owen Anderson 9d86ef12c8 Bring UsedBlocks back. StrongPHIElimination needs this information.
llvm-svn: 43866
2007-11-08 01:20:48 +00:00
Evan Cheng e742ee1dbe Simplify my (il)logic.
llvm-svn: 43819
2007-11-07 08:08:25 +00:00
Owen Anderson c6a5387d09 Add some more of StrongPHIElim.
llvm-svn: 43805
2007-11-07 05:17:15 +00:00
Dan Gohman ccfc028283 Remainder operations must be either integer or floating-point.
llvm-svn: 43781
2007-11-06 22:11:54 +00:00
Evan Cheng dd71a5c37b When the allocator rewrites a spill register with a new virtual register, it replaces other operands of the same register. Watch out for situations where
only some of the operands are sub-register uses.

llvm-svn: 43776
2007-11-06 21:12:10 +00:00
Evan Cheng d5d59ad634 First step towards moving the coalescer to priority_queue based machinery.
llvm-svn: 43764
2007-11-06 08:52:21 +00:00
Evan Cheng 92d23e5204 Fix a bug where a def/use operand isn't being detected as a sub-register use.
llvm-svn: 43763
2007-11-06 08:50:44 +00:00
Evan Cheng 2dbffa4e76 Add pseudo dependency to force a two-address instruction to be scheduled after
other uses. There was an overly restrictive check that prevented some obvious
cases.

llvm-svn: 43762
2007-11-06 08:44:59 +00:00
Owen Anderson d378cea030 Add a few comments.
llvm-svn: 43755
2007-11-06 05:26:02 +00:00
Owen Anderson eb964eb2c8 DomForest is a forest of registers, not instructions.
llvm-svn: 43754
2007-11-06 05:22:43 +00:00
Owen Anderson a9057f0b97 StrongPHIElimination requires LiveVariables.
llvm-svn: 43751
2007-11-06 04:49:43 +00:00
Dan Gohman 08143e397d Add support for vector remainder operations.
llvm-svn: 43744
2007-11-05 23:35:22 +00:00
Rafael Espindola fa0df55bdd Move the LowerMEMCPY and LowerMEMCPYCall to a common place.
Thanks for the suggestions Bill :-)

llvm-svn: 43742
2007-11-05 23:12:20 +00:00
Dale Johannesen 4646aa3e33 Make labels work in asm blocks; allow labels as
parameters.  Rename ValueRefList to ParamList
in AsmParser, since its only use is for parameters.

llvm-svn: 43734
2007-11-05 21:20:28 +00:00
Duncan Sands f7ae8bd090 Don't output ABI size padding twice. By using the store
size for the field we get ABI padding automatically, so
no need to put it in again when we emit the field.

llvm-svn: 43720
2007-11-05 18:03:02 +00:00
Evan Cheng 8bb30184a8 Move SimpleRegisterCoalescing.h to lib/CodeGen since there is now a common
register coalescer interface: RegisterCoalescing.

llvm-svn: 43714
2007-11-05 17:41:38 +00:00
Evan Cheng 17b0e3e1ae Skip over deleted val#'s.
llvm-svn: 43700
2007-11-05 06:46:45 +00:00
Evan Cheng a406b47f14 Handle cases where a register and one of its super-registers are both marked as
defined on the same instruction. This fixes PR1767.

llvm-svn: 43699
2007-11-05 03:11:55 +00:00
Evan Cheng a8044084ac Fix PR1187.
llvm-svn: 43692
2007-11-05 00:59:10 +00:00
Duncan Sands 283207a71c Eliminate the remaining uses of getTypeSize. This
should only affect x86 when using long double.  Now
12/16 bytes are output for long double globals (the
exact amount depends on the alignment).  This brings
globals in line with the rest of LLVM: the space
reserved for an object is now always the ABI size.
One tricky point is that only 10 bytes should be
output for long double if it is a field in a packed
struct, which is the reason for the additional
argument to EmitGlobalConstant.

llvm-svn: 43688
2007-11-05 00:04:43 +00:00
Owen Anderson eea82746b3 Another step of stronger PHI elimination down.
llvm-svn: 43684
2007-11-04 22:33:26 +00:00
Evan Cheng 5c1b044899 If an interval is being undone, clear its preference as well, since the source interval may have been undone too.
llvm-svn: 43670
2007-11-04 08:32:21 +00:00
Evan Cheng 66298e226f There are times when the coalescer would not coalesce away a copy but the copy
can be eliminated by the allocator if the destination and source target the
same register. The most common case is when the source and destination registers
are in different classes. For example, on x86 mov32to32_ targets GR32_ which
contains a subset of the registers in GR32.

The allocator can do 2 things:
1. Set the preferred allocation for the destination of a copy to that of its source.
2. After allocation is done, change the allocation of a copy destination (if
   legal) so the copy can be eliminated.

This eliminates 443 extra moves from 403.gcc.

llvm-svn: 43662
2007-11-03 07:20:12 +00:00
Dan Gohman d7917b6248 Add std:: to sort calls.
llvm-svn: 43652
2007-11-02 22:24:01 +00:00
Dan Gohman c981d72d1a Change illegal uses of ++ to uses of STLExtra.h's next function.
llvm-svn: 43651
2007-11-02 22:22:02 +00:00
Evan Cheng f851163c53 One more extract_subreg coalescing bug.
llvm-svn: 43644
2007-11-02 17:35:08 +00:00
Duncan Sands 04059dd351 Fix a thinko.
llvm-svn: 43639
2007-11-02 15:18:06 +00:00
Duncan Sands 44b8721de8 Executive summary: getTypeSize -> getTypeStoreSize / getABITypeSize.
The meaning of getTypeSize was not clear - clarifying it is important
now that we have x86 long double and arbitrary precision integers.
The issue with long double is that it requires 80 bits, and this is
not a multiple of its alignment.  This gives a primitive type for
which getTypeSize differed from getABITypeSize.  For arbitrary precision
integers it is even worse: there is the minimum number of bits needed to
hold the type (eg: 36 for an i36), the maximum number of bits that will
be overwritten when storing the type (40 bits for i36) and the ABI size
(i.e. the storage size rounded up to a multiple of the alignment; 64 bits
for i36).

This patch removes getTypeSize (not really - it is still there but
deprecated to allow for a gradual transition).  Instead there is:

(1) getTypeSizeInBits - a number of bits that suffices to hold all
values of the type.  For a primitive type, this is the minimum number
of bits.  For an i36 this is 36 bits.  For x86 long double it is 80.
This corresponds to gcc's TYPE_PRECISION.

(2) getTypeStoreSizeInBits - the maximum number of bits that is
written when storing the type (or read when reading it).  For an
i36 this is 40 bits, for an x86 long double it is 80 bits.  This
is the size alias analysis is interested in (getTypeStoreSize
returns the number of bytes).  There doesn't seem to be anything
corresponding to this in gcc.

(3) getABITypeSizeInBits - this is getTypeStoreSizeInBits rounded
up to a multiple of the alignment.  For an i36 this is 64, for an
x86 long double this is 96 or 128 depending on the OS.  This is the
spacing between consecutive elements when you form an array out of
this type (getABITypeSize returns the number of bytes).  This is
TYPE_SIZE in gcc.

Since successive elements in a SequentialType (arrays, pointers
and vectors) need to be aligned, the spacing between them will be
given by getABITypeSize.  This means that the size of an array
is the length times the getABITypeSize.  It also means that GEP
computations need to use getABITypeSize when computing offsets.
Furthermore, if an alloca allocates several elements at once then
these too need to be aligned, so the size of the alloca has to be
the number of elements multiplied by getABITypeSize.  Logically
speaking this doesn't have to be the case when allocating just
one element, but it is simpler to also use getABITypeSize in this
case.  So alloca's and mallocs should use getABITypeSize.  Finally,
since gcc's only notion of size is that given by getABITypeSize, if
you want to output assembler etc the same as gcc then getABITypeSize
is the size you want.

Since a store will overwrite no more than getTypeStoreSize bytes,
and a read will read no more than that many bytes, this is the
notion of size appropriate for alias analysis calculations.

In this patch I have corrected all type size uses except some of
those in ScalarReplAggregates, lib/Codegen, lib/Target (the hard
cases).  I will get around to auditing these too at some point,
but I could do with some help.

Finally, I made one change which I think is wise but others might
consider pointless and suboptimal: in an unpacked struct the
amount of space allocated for a field is now given by the ABI
size rather than getTypeStoreSize.  I did this because every
other place that reserves memory for a type (eg: alloca) now
uses getABITypeSize, and I didn't want to make an exception
for unpacked structs, i.e. I did it to make things more uniform.
This only affects structs containing long doubles and arbitrary
precision integers.  If someone wants to pack these types more
tightly they can always use a packed struct.

llvm-svn: 43620
2007-11-01 20:53:16 +00:00
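Collecting the examples from the message above in one place (numbers as given there):

type              getTypeSizeInBits   getTypeStoreSizeInBits   getABITypeSizeInBits
i36               36                  40                       64
x86 long double   80                  80                       96 or 128 (OS-dependent)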
Evan Cheng fe1ac52836 - Coalesce extract_subreg when both intervals are relatively small.
- Some code clean up.

llvm-svn: 43606
2007-11-01 06:22:48 +00:00
Duncan Sands 3b4668a5d8 Promotion of sdiv/srem/udiv/urem.
llvm-svn: 43551
2007-10-31 08:57:43 +00:00
Duncan Sands 21ca939683 Add a newline at the end of the file.
llvm-svn: 43550
2007-10-31 08:49:24 +00:00
Owen Anderson 0b59fa0605 Add the skeleton of a better PHI elimination pass.
llvm-svn: 43542
2007-10-31 03:37:57 +00:00
Owen Anderson 9b8f34f2ac Some fixes to get MachineDomTree working better.
llvm-svn: 43541
2007-10-31 03:30:14 +00:00
Dale Johannesen b066c1f216 Make i64=expand_vector_elt(v2i64) work in 32-bit mode.
llvm-svn: 43535
2007-10-31 00:32:36 +00:00