Commit Graph

12737 Commits

Author SHA1 Message Date
Pete Cooper 7c7ba1baa1 Added custom lowering for load->dec->store sequence in x86 when the EFLAGS register is used
by later instructions.

Only done for DEC64m right now.

Fixes <rdar://problem/6172640>

llvm-svn: 144705
2011-11-15 21:57:53 +00:00
Devang Patel 43bde96a4c Insert modified DBG_VALUE into LiveDbgValueMap.
llvm-svn: 144696
2011-11-15 21:03:58 +00:00
Rafael Espindola f11e7f1305 We currently use a callback to handle an IL pass deleting a BB that still
has a reference to it. Unfortunately, that doesn't work for codegen passes
since we don't get notified of MBBs being deleted (the original BB stays).

Use that fact to our advantage and after printing a function, check if
any of the IL BBs corresponds to a symbol that was not printed. This fixes
pr11202.

llvm-svn: 144674
2011-11-15 19:08:46 +00:00
Benjamin Kramer 1f97a5a671 Remove all remaining uses of Value::getNameStr().
llvm-svn: 144648
2011-11-15 16:27:03 +00:00
Benjamin Kramer 4c93d15f09 Twinify GraphWriter a little bit.
llvm-svn: 144647
2011-11-15 16:26:38 +00:00
Jakob Stoklund Olesen e14ef7e6f8 Check all overlaps when looking for used registers.
A function using any RC alias is enough to enable the ExeDepsFix pass.

llvm-svn: 144636
2011-11-15 08:20:43 +00:00
Jay Foad ab9ebd3521 Make use of MachinePointerInfo::getFixedStack.
llvm-svn: 144635
2011-11-15 07:51:13 +00:00
Jay Foad 70679df664 Remove some unnecessary includes of PseudoSourceValue.h.
llvm-svn: 144634
2011-11-15 07:50:46 +00:00
Evan Cheng 7098c4e5f4 Set SeenStore to true to prevent loads from being moved; also eliminates a non-deterministic behavior.
llvm-svn: 144628
2011-11-15 06:26:51 +00:00
Chandler Carruth 9b548a7fcf Rather than trying to use the loop block sequence *or* the function
block sequence when recovering from unanalyzable control flow
constructs, *always* use the function sequence. I'm not sure why I ever
went down the path of trying to use the loop sequence; it is
fundamentally not the correct sequence to use. We're trying to preserve
the incoming layout in the cases of unreasonable control flow, and that
is only encoded at the function level. We already have a filter to
select *exactly* the sub-set of blocks within the function that we're
trying to form into a chain.

The resulting code layout is also significantly better because of this.
In several places we were ending up with completely unreasonable control
flow constructs due to the ordering chosen by the loop structure for its
internal storage. This change removes a completely wasteful vector of
basic blocks, saving memory allocation in the common case even though it
costs us CPU in the fairly rare case of unnatural loops. Finally, it
fixes the latest crasher reduced out of GCC's single source. Thanks
again to Benjamin Kramer for the reduction; my bugpoint skills failed at
it.

llvm-svn: 144627
2011-11-15 06:26:43 +00:00
Jakob Stoklund Olesen f8ad336bc4 Break false dependencies before partial register updates.
Two new TargetInstrInfo hooks lets the target tell ExecutionDepsFix
about instructions with partial register updates causing false unwanted
dependencies.

The ExecutionDepsFix pass will break the false dependencies if the
updated register was written in the previous N instructions.

The small loop added to sse-domains.ll runs twice as fast with
dependency-breaking instructions inserted.

llvm-svn: 144602
2011-11-15 01:15:30 +00:00
Jakob Stoklund Olesen 543bef6ead Track register ages more accurately.
Keep track of the last instruction to define each register individually
instead of per DomainValue.  This lets us track more accurately when a
register was last written.

Also track register ages across basic blocks.  When entering a new
basic block, use the least stale predecessor def as a worst case
estimate for register age.

The register age is used to arbitrate between conflicting domains. The
most recently defined register wins.

llvm-svn: 144601
2011-11-15 01:15:25 +00:00
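
To illustrate the age tracking described above, here is a minimal sketch in plain C++ (hypothetical names and containers, not the actual ExecutionDepsFix data structures):

  #include <algorithm>
  #include <map>
  #include <vector>

  struct RegAgeTracker {
    unsigned CurInstr = 0;                // position counter; larger = more recent
    std::map<unsigned, unsigned> LastDef; // register -> instruction that last defined it

    void visitDef(unsigned Reg) { LastDef[Reg] = CurInstr; }
    void advance() { ++CurInstr; }

    // On entering a block, seed each register with the least stale (most
    // recent) def among the predecessors, as a worst-case estimate.
    void enterBlock(const std::vector<std::map<unsigned, unsigned>> &PredLastDefs) {
      LastDef.clear();
      for (const auto &Pred : PredLastDefs)
        for (const auto &KV : Pred)
          LastDef[KV.first] = std::max(LastDef[KV.first], KV.second);
    }

    // When two domains conflict, the most recently defined register wins.
    bool prefers(unsigned RegA, unsigned RegB) const {
      auto A = LastDef.find(RegA), B = LastDef.find(RegB);
      if (A == LastDef.end()) return false;
      return B == LastDef.end() || A->second > B->second;
    }
  };
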
Evan Cheng f2fc508d4d Avoid dereferencing off the beginning of lists.
llvm-svn: 144569
2011-11-14 21:11:15 +00:00
Evan Cheng 28ffb7e444 At -O0, multiple uses of a virtual register in the same BB are being marked
"kill". This looks like a bug upstream. Since that's going to take some time
to understand, loosen the assertion and disable the optimization when
multiple kills are seen.

llvm-svn: 144568
2011-11-14 21:02:09 +00:00
Evan Cheng 30f44ad785 Teach two-address pass to re-schedule two-address instructions (or the kill
instructions of the two-address operands) in order to avoid inserting copies.
This fixes the few regressions introduced when the two-address hack was
disabled (without regressing the improvements).
rdar://10422688

llvm-svn: 144559
2011-11-14 19:48:55 +00:00
Jakob Stoklund Olesen 7e6004a3c1 Fix early-clobber handling in shrinkToUses.
I broke this in r144515, it affected most ARM testers.

<rdar://problem/10441389>

llvm-svn: 144547
2011-11-14 18:45:38 +00:00
Chandler Carruth fd9b4d9813 It helps to deallocate memory as well as allocate it. =] This actually
cleans up all the chains allocated during the processing of each
function so that for very large inputs we don't just grow memory usage
without bound.

llvm-svn: 144533
2011-11-14 10:57:23 +00:00
Chandler Carruth 0a31d149ea Remove an over-eager assert that was firing on one of the ARM regression
tests when I forcibly enabled block placement.

It is apparently possible for an unanalyzable block to fall through to
a non-loop block. I don't actually believe this is correct; I believe
that 'canFallThrough' is returning true needlessly for the code
construct, and I've left a bit of a FIXME on the verification code to
try to track down why this is coming up.

Anyways, removing the assert doesn't degrade the correctness of the algorithm.

llvm-svn: 144532
2011-11-14 10:55:53 +00:00
Chandler Carruth 0af6a0bb69 Begin chipping away at one of the biggest quadratic-ish behaviors in
this pass. We're leaving already merged blocks on the worklist, and
scanning them again and again only to determine each time through that
indeed they aren't viable. We can instead remove them once we're going
to have to scan the worklist. This is the easy way to implement removing
them. If this remains on the profile (as I somewhat suspect it will), we
can get a lot more clever here, as the worklist's order is essentially
irrelevant. We can use swapping and fold the two loops to reduce
overhead even when there are many blocks on the worklist but only a few
of them are removed.

llvm-svn: 144531
2011-11-14 09:46:33 +00:00
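
The "easy way" mentioned above amounts to something like this sketch (hypothetical BlockChain type, not the pass's real data structures):

  #include <algorithm>
  #include <vector>

  struct BlockChain { bool MergedAway = false; /* blocks in the chain ... */ };

  // Drop chains that were already merged into another chain before the
  // worklist is rescanned, so they are not re-examined on every pass.
  static void pruneWorklist(std::vector<BlockChain *> &Worklist) {
    Worklist.erase(std::remove_if(Worklist.begin(), Worklist.end(),
                                  [](BlockChain *C) { return C->MergedAway; }),
                   Worklist.end());
  }
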
Chandler Carruth 84cd44c750 Under the hood, MBPI is doing a linear scan of every successor every
time it is queried to compute the probability of a single successor.
This makes computing the probability of every successor of a block in
sequence... really really slow. ;] This switches to a linear walk of the
successors rather than a quadratic one. One of several quadratic
behaviors slowing this pass down.

I'm not really thrilled with moving the sum code into the public
interface of MBPI, but I don't (at the moment) have ideas for a better
interface. My direction I'm thinking in for a better interface is to
have MBPI actually retain much more state and make *all* of these
queries cheap. That's a lot of work, and would require invasive changes.
Until then, this seems like the least bad (ie, least quadratic)
solution. Suggestions welcome.

llvm-svn: 144530
2011-11-14 09:12:57 +00:00
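
A rough sketch of the shape of that change, with hypothetical types rather than the real MBPI interface: sum the successor weights once, then derive every probability from the cached sum instead of rescanning the successor list on each query.

  #include <cstdint>
  #include <vector>

  struct Edge { unsigned Succ; uint32_t Weight; };

  // One linear pass over the successors.
  static uint64_t sumSuccessorWeights(const std::vector<Edge> &Succs) {
    uint64_t Sum = 0;
    for (const Edge &E : Succs)
      Sum += E.Weight;
    return Sum;
  }

  // Each probability is computed against the shared sum, so the whole
  // successor list costs O(n) instead of O(n^2).
  static std::vector<double> successorProbabilities(const std::vector<Edge> &Succs) {
    uint64_t Sum = sumSuccessorWeights(Succs);
    std::vector<double> Probs;
    for (const Edge &E : Succs)
      Probs.push_back(Sum ? double(E.Weight) / double(Sum) : 0.0);
    return Probs;
  }
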
Chandler Carruth a9e71faa0f Reuse the logic in getEdgeProbability within getHotSucc in order to
correctly handle blocks whose successor weights sum to more than
UINT32_MAX. This is slightly less efficient, but the entire thing is
already linear on the number of successors. Calling it within any hot
routine is a mistake, and indeed no one is calling it. It also
simplifies the code.

llvm-svn: 144527
2011-11-14 08:55:59 +00:00
Chandler Carruth ed5aa547bc Fix an overflow bug in MachineBranchProbabilityInfo. This pass relied on
the sum of the edge weights not overflowing uint32, and crashed when
they did. This is generally safe as BranchProbabilityInfo tries to
provide this guarantee. However, the CFG can get modified during codegen
in a way that grows the *sum* of the edge weights. This doesn't seem
unreasonable (imagine just adding more blocks all with the default
weight of 16), but it is hard to come up with a case that actually
triggers 32-bit overflow. Fortunately, the single-source GCC build is
good at this. The solution isn't very pretty, but it's no worse than the
previous code. We're already summing all of the edge weights on each
query, so we can sum them, check for an overflow, compute a scale, and sum
them again.

I've included a *greatly* reduced test case out of the GCC source that
triggers it. It's a pretty lame test, as it clearly is just barely
triggering the overflow. I'd like to have something that is much more
definitive, but I don't understand the fundamental pattern that triggers
an explosion in the edge weight sums.

The buggy code is duplicated within this file. I'll collapse them into
a single implementation in a subsequent commit.

llvm-svn: 144526
2011-11-14 08:50:16 +00:00
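
To illustrate the overflow handling described above, a minimal generic sketch (not the actual MachineBranchProbabilityInfo code): sum the weights in 64 bits, and if the sum exceeds UINT32_MAX, rescale every weight before summing again.

  #include <cstdint>
  #include <vector>

  static uint32_t sumWithScale(const std::vector<uint32_t> &Weights) {
    uint64_t Sum = 0;
    for (uint32_t W : Weights)
      Sum += W;

    // Check for overflow and compute a scale that brings the sum back
    // into 32-bit range.
    uint32_t Scale = 1;
    if (Sum > UINT32_MAX)
      Scale = uint32_t(Sum / UINT32_MAX + 1);

    // Sum again with the scaled weights.
    uint64_t ScaledSum = 0;
    for (uint32_t W : Weights)
      ScaledSum += W / Scale;
    return uint32_t(ScaledSum);
  }
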
Jakob Stoklund Olesen d7bcf43dc2 Use getVNInfoBefore() when it makes sense.
llvm-svn: 144517
2011-11-14 01:39:36 +00:00
Chandler Carruth 1071cfa4ae Teach machine block placement to cope with unnatural loops. These don't
get loop info structures associated with them, and so we need some way
to make forward progress selecting and placing basic blocks. The
technique used here is pretty brutal -- it just scans the list of blocks
looking for the first unplaced candidate. It keeps placing blocks like
this until the CFG becomes tractable.

The cost is somewhat unfortunate, it requires allocating a vector of all
basic block pointers eagerly. I have some ideas about how to simplify
and optimize this, but I'm trying to get the logic correct first.

Thanks to Benjamin Kramer for the reduced test case out of GCC. Sadly
there are other bugs that GCC is tickling that I'm reducing and working
on now.

llvm-svn: 144516
2011-11-14 00:00:35 +00:00
Jakob Stoklund Olesen 697979028f Use kill slots instead of the previous slot in shrinkToUses.
It's more natural to use the actual end points.

llvm-svn: 144515
2011-11-13 23:53:25 +00:00
Chandler Carruth c4a2cb34bb Cleanup some 80-columns violations and poor formatting. These snuck by
when I was reading through the code for style.

llvm-svn: 144513
2011-11-13 22:50:09 +00:00
Jakob Stoklund Olesen d8f2405e73 Terminate all dead defs at the dead slot instead of the 'next' slot.
This makes no difference for normal defs, but early clobber dead defs
now look like:

  [Slot_EarlyClobber; Slot_Dead)

instead of:

  [Slot_EarlyClobber; Slot_Register).

Live ranges for normal dead defs look like:

  [Slot_Register; Slot_Dead)

as before.

llvm-svn: 144512
2011-11-13 22:42:13 +00:00
Jakob Stoklund Olesen ce7cc08f3a Simplify early clobber slots a bit.
llvm-svn: 144507
2011-11-13 22:05:42 +00:00
Chandler Carruth 8e1d906734 Enhance the assertion mechanisms in place to make it easier to catch
when we fail to place all the blocks of a loop. Currently this is
happening for unnatural loops, and this logic helps more immediately
point to the problem.

llvm-svn: 144504
2011-11-13 21:39:51 +00:00
Jakob Stoklund Olesen 90b5e565b6 Rename SlotIndexes to match how they are used.
The old naming scheme (load/use/def/store) can be traced back to an old
linear scan article, but the names don't match how slots are actually
used.

The load and store slots are not needed after the deferred spill code
insertion framework was deleted.

The use and def slots don't make any sense because we are using
half-open intervals as is customary in C code, but the names suggest
closed intervals.  In reality, these slots were used to distinguish
early-clobber defs from normal defs.

The new naming scheme also has 4 slots, but the names match how the
slots are really used.  This is a purely mechanical renaming, but some
of the code makes a lot more sense now.

llvm-svn: 144503
2011-11-13 20:45:27 +00:00
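
For reference, the four slots after this renaming, as they appear elsewhere in this log; the Slot_Block name and the comments below are my reading of the scheme, so treat them as assumptions:

  // Half-open intervals are built from these points, e.g. a normal dead def
  // is [Slot_Register; Slot_Dead) and an early-clobber dead def is
  // [Slot_EarlyClobber; Slot_Dead).
  enum Slot {
    Slot_Block,        // value is live-in or live-through at the block boundary
    Slot_EarlyClobber, // early-clobber defs become live here
    Slot_Register,     // normal defs become live here
    Slot_Dead          // dead defs and last uses end here
  };
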
Chandler Carruth 0bb42c0f86 Teach MBP to force-merge layout successors for blocks with unanalyzable
branches that also may involve fallthrough. In the case of blocks with
no fallthrough, we can still re-order the blocks profitably. For example
instruction decoding will in some cases continue past an indirect jump,
making laying out its most likely successor there profitable.

Note, no test case. I don't know how to write a test case that exercises
this logic, but it matches the described desired semantics in
discussions with Jakob and others. If anyone has a nice example of IR
that will trigger this, that would be lovely.

Also note, there are still assertion failures in real world code with
this. I'm digging into those next, now that I know this isn't the cause.

llvm-svn: 144499
2011-11-13 12:17:28 +00:00
Chandler Carruth f9213fe721 Hoist another gross nested loop into a helper method.
llvm-svn: 144498
2011-11-13 11:42:26 +00:00
Chandler Carruth eb4ec3aea5 Add a missing doxygen comment for a helper method.
llvm-svn: 144497
2011-11-13 11:34:55 +00:00
Chandler Carruth b336172f90 Hoist a nested loop into its own method.
llvm-svn: 144496
2011-11-13 11:34:53 +00:00
Chandler Carruth 8d15078927 Rewrite #3 of machine block placement. This is based somewhat on the
second algorithm, but only loosely. It is more heavily based on the last
discussion I had with Andy. It continues to walk from the inner-most
loop outward, but there is a key difference. With this algorithm we
ensure that as we visit each loop, the entire loop is merged into
a single chain. At the end, the entire function is treated as a "loop",
and merged into a single chain. This chain forms the desired sequence of
blocks within the function. Switching to a single algorithm removes my
biggest problem with the previous approaches -- they had different
behavior depending on which system triggered the layout. Now there is
exactly one algorithm and one basis for the decision making.

The other key difference is how the chain is formed. This is based
heavily on the idea Andy mentioned of keeping a worklist of blocks that
are viable layout successors based on the CFG. Having this set allows us
to consistently select the best layout successor for each block. It is
expensive though.

The code here remains very rough. There is a lot that needs to be done
to clean up the code, and to make the runtime cost of this pass much
lower. Very much WIP, but this was a giant chunk of code and I'd rather
folks see it sooner than later. Everything remains behind a flag of
course.

I've added a couple of tests to exercise the issues that this iteration
was motivated by: loop structure preservation. I've also fixed one test
that was exhibiting the broken behavior of the previous version.

llvm-svn: 144495
2011-11-13 11:20:44 +00:00
NAKAMURA Takumi 4784df7161 Prune more RALinScan. RALinScan was also here!
llvm-svn: 144487
2011-11-13 01:33:10 +00:00
Jakob Stoklund Olesen c601d8c762 More dead code elimination in VirtRegMap.
This thing is looking a lot like a virtual register map now.

llvm-svn: 144486
2011-11-13 01:23:34 +00:00
Jakob Stoklund Olesen 28df7ef8c9 Stop tracking spill slot uses in VirtRegMap.
Nobody cared; StackSlotColoring scans the instructions to find used stack
slots.

llvm-svn: 144485
2011-11-13 01:23:30 +00:00
Jakob Stoklund Olesen 92255f27f1 Remove dead code and data from VirtRegMap.
Most of this stuff was supporting the old deferred spill code insertion
mechanism.  Modern spillers just edit machine code in place.

llvm-svn: 144484
2011-11-13 01:02:04 +00:00
Jakob Stoklund Olesen 38b3f312ca Stop tracking unused registers in VirtRegMap.
The information was only used by the register allocator in
StackSlotColoring.

llvm-svn: 144482
2011-11-13 00:39:45 +00:00
Jakob Stoklund Olesen 6ddb767fb5 Remove the -color-ss-with-regs option.
It was off by default.

The new register allocators don't have the problems that made it
necessary to reallocate registers during stack slot coloring.

llvm-svn: 144481
2011-11-13 00:31:23 +00:00
Jakob Stoklund Olesen 5343da6497 Delete VirtRegRewriter.
And there was much rejoicing.

llvm-svn: 144480
2011-11-13 00:16:01 +00:00
Jakob Stoklund Olesen 03f73ab76f Switch PBQP to VRM's trivial rewriter.
The very complicated VirtRegRewriter is going away.

llvm-svn: 144479
2011-11-13 00:02:24 +00:00
Jakob Stoklund Olesen f61a6fe221 Delete the old spilling framework from LiveIntervalAnalysis.
This is dead code, all register allocators use InlineSpiller.

llvm-svn: 144478
2011-11-12 23:57:05 +00:00
Jakob Stoklund Olesen 7ef502f6d1 Delete the 'standard' spiller which used the old spilling framework.
The current register allocators all use the inline spiller.

llvm-svn: 144477
2011-11-12 23:29:02 +00:00
Jakob Stoklund Olesen 11bb63a756 Switch PBQP to the modern InlineSpiller framework.
It is worth noting that the old spiller would split live ranges around
basic blocks. The new spiller doesn't do that.

PBQP should do its own live range splitting with
SplitEditor::splitSingleBlock() if desired.  See
RAGreedy::tryBlockSplit().

llvm-svn: 144476
2011-11-12 23:17:52 +00:00
Jakob Stoklund Olesen e7e50e6f45 Delete the linear scan register allocator.
RegAllocGreedy has been the default for six months now.

Deleting RegAllocLinearScan makes it possible to also delete
VirtRegRewriter and clean up the spiller code.

llvm-svn: 144475
2011-11-12 22:39:45 +00:00
Rafael Espindola e7cc8bff82 The dwarf standard says that the only differences between an out-of-line
instance and a concrete inlined instance are the use of DW_TAG_subprogram
instead of DW_TAG_inlined_subroutine and who owns the tree.

We were also omitting DW_AT_inline from the abstract roots. To fix this,
make sure we mark abstract instance roots with DW_AT_inline even when
we have only out-of-line instances referring to them with DW_AT_abstract_origin.

FileCheck is not a very good tool for tests like this; maybe we should add
a -verify mode to llvm-dwarfdump.

llvm-svn: 144441
2011-11-12 01:57:54 +00:00
Eli Friedman 9d448e4a42 Don't try to form pre/post-indexed loads/stores until after LegalizeDAG runs. Fixes PR11029.
llvm-svn: 144438
2011-11-12 00:35:34 +00:00
Eli Friedman 1347715644 Some cleanup and bulletproofing for node replacement in LegalizeDAG. To maintain LegalizeDAG invariants, whenever a node is replaced, we must attempt to delete it, and if it still
has uses after it is replaced (which can happen in rare cases due to CSE), we must revisit it.

llvm-svn: 144432
2011-11-11 23:58:27 +00:00
Nicolas Geoffray 26c328d734 Add a custom safepoint method, in order for language implementers to decide which machine instruction gets to be a safepoint.
llvm-svn: 144399
2011-11-11 18:32:52 +00:00
Eric Christopher 0a917b7ad4 Initialize variable.
llvm-svn: 144360
2011-11-11 03:16:32 +00:00
Eric Christopher c12c211c44 If we have a DIE with an AT_specification use that instead of the normal
addr DIE when adding to the dwarf accelerator tables.

llvm-svn: 144354
2011-11-11 01:55:22 +00:00
Rafael Espindola 79278365d3 Check in getOrCreateSubprogramDIE if a declaration exists and if so output
it first.

This is a more general fix to pr11300.

llvm-svn: 144324
2011-11-10 22:34:29 +00:00
Eric Christopher 66b37db641 Make types and namespaces take multiple DIEs for the accelerator tables
as well.

llvm-svn: 144319
2011-11-10 21:47:55 +00:00
Eric Christopher e288793e44 Move type handling to make sure we get all created types that aren't
forward decls and have names into the dwarf accelerator types table.

llvm-svn: 144306
2011-11-10 19:52:58 +00:00
Eric Christopher d9843b34e6 Rework adding function names to the dwarf accelerator tables, allow
multiple dies per function and support C++ basenames.

llvm-svn: 144304
2011-11-10 19:25:34 +00:00
Evan Cheng d33b2d6b7a Use a bigger hammer to fix PR11314 by disabling the "forcing two-address
instruction lower optimization" in the pre-RA scheduler.

The optimization, or rather the hack, was done before the MI use-list was available.
Now we should be able to implement it in a better way, perhaps in the
two-address pass until a MI scheduler is available.

Now that the scheduler has to backtrack to handle call sequences, adding
artificial scheduling constraints is just not safe. Furthermore, the hack
does not take all the other scheduling decisions into consideration, so it's just
as likely to pessimize code. So I view disabling this optimization as goodness
regardless of PR11314.

llvm-svn: 144267
2011-11-10 07:43:16 +00:00
Jakob Stoklund Olesen eef48b6938 Strip old implicit operands after foldMemoryOperand.
The TII.foldMemoryOperand hook preserves implicit operands from the
original instruction.  This is not what we want when those implicit
operands refer to the register being spilled.

Implicit operands referring to other registers are preserved.

This fixes PR11347.

llvm-svn: 144247
2011-11-10 00:17:03 +00:00
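
A rough sketch of that stripping step (a hypothetical helper, not the actual InlineSpiller code; it only illustrates dropping implicit operands that name the spilled register):

  #include "llvm/CodeGen/MachineInstr.h"
  using namespace llvm;

  // Remove implicit register operands on the folded instruction that refer to
  // the register being spilled; implicit operands for other registers stay.
  static void stripSpilledImplicitOps(MachineInstr *FoldMI, unsigned SpillReg) {
    for (unsigned i = FoldMI->getNumOperands(); i != 0; --i) {
      const MachineOperand &MO = FoldMI->getOperand(i - 1);
      if (MO.isReg() && MO.isImplicit() && MO.getReg() == SpillReg)
        FoldMI->RemoveOperand(i - 1);
    }
  }
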
Eli Friedman 53218b6fcc Add check so we don't try to perform an impossible transformation. Fixes issue from PR11319.
llvm-svn: 144216
2011-11-09 22:25:12 +00:00
Benjamin Kramer 966ed1b698 Add comments.
llvm-svn: 144194
2011-11-09 18:16:11 +00:00
Duncan Sands 635e4efca0 Speculatively revert commit 144124 (djg) in the hope that the 32 bit
dragonegg self-host buildbot will recover (it is complaining about object
files differing between different build stages).  Original commit message:

Add a hack to the scheduler to disable pseudo-two-address dependencies in
basic blocks containing calls. This works around a problem in which
these artificial dependencies can get tied up in calling sequence
scheduling in a way that makes the graph unschedulable with the current
approach of using artificial physical register dependencies for calling
sequences. This fixes PR11314.

llvm-svn: 144188
2011-11-09 14:20:48 +00:00
Benjamin Kramer 148db36263 Take advantage of the zero byte in StringMap when emitting dwarf stringpool entries.
llvm-svn: 144184
2011-11-09 12:12:04 +00:00
Devang Patel fa4520968b Remove extra ';'
llvm-svn: 144172
2011-11-09 06:20:49 +00:00
Eric Christopher 5223a57533 Remove the pubnames section, no one consumes it.
llvm-svn: 144169
2011-11-09 05:24:07 +00:00
Jakob Stoklund Olesen 3dc89c9768 Collapse DomainValues across loop back-edges.
During the initial RPO traversal of the basic blocks, remember the ones
that are incomplete because of back-edges from predecessors that haven't
been visited yet.

After the initial RPO, revisit all those loop headers so the incoming
DomainValues on the back-edges can be properly collapsed.

This will properly fix execution domains on software pipelined code,
like the included test case.

llvm-svn: 144151
2011-11-09 01:06:56 +00:00
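
The traversal shape described above could be sketched roughly like this (stub types and helpers, not the real ExecutionDepsFix code):

  #include <set>
  #include <vector>

  struct Block { std::vector<Block *> Preds; };

  static void enterBlock(Block *) {}   // merge live-out DomainValues from visited preds
  static void processBlock(Block *) {} // visit the block's instructions

  static void traverse(const std::vector<Block *> &RPO) {
    std::set<Block *> Visited;
    std::vector<Block *> LoopHeaders;

    for (Block *MBB : RPO) {
      // A predecessor that hasn't been seen yet means a back-edge: the
      // incoming DomainValues are incomplete, so remember this block.
      for (Block *Pred : MBB->Preds)
        if (!Visited.count(Pred)) {
          LoopHeaders.push_back(MBB);
          break;
        }
      enterBlock(MBB);
      processBlock(MBB);
      Visited.insert(MBB);
    }

    // After the initial RPO, revisit the loop headers so the DomainValues
    // arriving on the back-edges can be collapsed properly.
    for (Block *Header : LoopHeaders) {
      enterBlock(Header);
      processBlock(Header);
    }
  }
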
Jakob Stoklund Olesen 53ec977cd2 Link to the live DomainValue after merging.
When merging two uncollapsed DomainValues, place a link to the active
DomainValue from the passive DomainValue.  This allows old stale
references to the passive DomainValue to be updated to point to the
active DomainValue.

The new resolve() function finds the active DomainValue and updates the
pointer.

This change makes old live-out lists more useful since they may contain
uncollapsed DomainValues that have since been merged into other
DomainValues.

llvm-svn: 144149
2011-11-09 00:06:18 +00:00
Jakob Stoklund Olesen b7e44a3f5f Track reference count independently from clear().
This allows clear() to be called on a DomainValue with references.

llvm-svn: 144147
2011-11-08 23:26:00 +00:00
Jakob Stoklund Olesen 5d08293999 Call release() directly when cleaning up the remaining DomainValues.
There is no need to involve the LiveRegs array and kill() any longer.

llvm-svn: 144133
2011-11-08 22:05:17 +00:00
Jakob Stoklund Olesen 9e338bb0f3 Rename all methods to follow style guide.
No functional change.

llvm-svn: 144132
2011-11-08 21:57:47 +00:00
Jakob Stoklund Olesen 1438e191bd Handle reference counts in one function: release().
This new function will decrement the reference count, and collapse a
domain value when the last reference is gone.

This simplifies DomainValue reference counting, and decouples it from
the LiveRegs array.

llvm-svn: 144131
2011-11-08 21:57:44 +00:00
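
As a sketch of the described reference counting (simplified stand-in types, not the real classes):

  // Stand-in for the real DomainValue.
  struct DomainValue {
    unsigned Refs = 0;             // reference count
    unsigned AvailableDomains = 0; // bit mask of execution domains still possible
  };

  static void collapse(DomainValue *DV) {
    (void)DV; // pick one remaining domain and convert the instructions (elided)
  }

  // Decrement the reference count, and collapse the DomainValue to a single
  // execution domain when the last reference goes away.
  static void release(DomainValue *DV) {
    if (DV && --DV->Refs == 0)
      collapse(DV);
  }
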
Eric Christopher 08a558eeef Also add the linkage name to the name accelerator tables if it exists
and is different than the normal name.

llvm-svn: 144130
2011-11-08 21:56:23 +00:00
Dan Gohman a4bc6171a5 Add a hack to the scheduler to disable pseudo-two-address dependencies in
basic blocks containing calls. This works around a problem in which
these artificial dependencies can get tied up in calling seqeunce
scheduling in a way that makes the graph unschedulable with the current
approach of using artificial physical register dependencies for calling
sequences. This fixes PR11314.

llvm-svn: 144124
2011-11-08 21:29:06 +00:00
Jakob Stoklund Olesen 1205881820 Clear old DomainValue after merging.
The old value may still be referenced by some live-out list, and we
don't want to collapse those instructions twice.

This fixes the "Can only swizzle VMOVD" assertion in some armv7 SPEC
builds.

<rdar://problem/10413292>

llvm-svn: 144117
2011-11-08 20:57:04 +00:00
Eric Christopher 970771c0e8 Add the base ObjC method name to the names lookup table as well.
llvm-svn: 144105
2011-11-08 19:16:01 +00:00
Lang Hames b85fcd07df Lower mem-ops to unaligned i32/i16 load/stores on ARM where supported.
Add support for trimming constants to GetDemandedBits. This fixes some funky
constant generation that occurs when stores are expanded for targets that don't
support unaligned stores natively.

llvm-svn: 144102
2011-11-08 18:56:23 +00:00
Pete Cooper 82cd9e81fc Added invariant field to the DAG.getLoad method and changed all calls.
When this field is true it means that the load is from a constant (run-time or compile-time) and so can be hoisted out of loops or moved around other memory accesses.

llvm-svn: 144100
2011-11-08 18:42:53 +00:00
Eric Christopher 54ce295d37 A few more places where we can avoid multiple size queries.
llvm-svn: 144099
2011-11-08 18:38:40 +00:00
Eric Christopher f1932270c0 Don't evaluate Data.size() on every iteration.
llvm-svn: 144095
2011-11-08 18:22:25 +00:00
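
A generic illustration of this kind of change (not the actual accelerator-table code):

  #include <cstddef>
  #include <vector>

  static long sumBefore(const std::vector<int> &Data) {
    long S = 0;
    for (std::size_t i = 0; i < Data.size(); ++i) // size() re-evaluated every pass
      S += Data[i];
    return S;
  }

  static long sumAfter(const std::vector<int> &Data) {
    long S = 0;
    for (std::size_t i = 0, e = Data.size(); i != e; ++i) // evaluated once
      S += Data[i];
    return S;
  }
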
Eli Friedman f2a9bd4b1e Add a bunch of calls to RemoveDeadNode in LegalizeDAG, so legalization doesn't get confused by CSE later on. Fixes PR11318.
Re-commit of r144034, with an extra fix so that RemoveDeadNode doesn't blow up.

llvm-svn: 144055
2011-11-08 01:25:24 +00:00
Eli Friedman a35a5295e0 Revert r144034 while I try to track down a crash.
llvm-svn: 144044
2011-11-07 23:53:20 +00:00
Bill Wendling 478f58cad4 This code is dead, what with the new EH model and the auto-upgraders in place.
Delete!

llvm-svn: 144043
2011-11-07 23:36:48 +00:00
Jakob Stoklund Olesen a70e9417fb Kill and collapse outstanding DomainValues.
DomainValues that are only used by "don't care" instructions are now
collapsed to the first possible execution domain after all basic blocks
have been processed.  This typically means the PS domain on x86.

For example, the vsel_i64 and vsel_double functions in sse2-blend.ll are
completely collapsed to the PS domain instead of containing a mix of
execution domains created by isel.

llvm-svn: 144037
2011-11-07 23:08:21 +00:00
Eli Friedman 55a86d32d3 Add a bunch of calls to RemoveDeadNode in LegalizeDAG, so legalization doesn't get confused by CSE later on. Fixes PR11318.
llvm-svn: 144034
2011-11-07 22:51:10 +00:00
Eric Christopher 5139dadd50 Add all completed and named types to the dwarf type accelerator tables.
llvm-svn: 144027
2011-11-07 22:11:16 +00:00
Jakob Stoklund Olesen 68e197e151 Use a reverse post order instead of a DFS order.
The enterBasicBlock() function is combining live-out values from
predecessor blocks.  The RPO traversal means that more predecessors
have been visited when that happens; only back-edges are missing.

llvm-svn: 144025
2011-11-07 21:59:29 +00:00
Eric Christopher ff2edf1499 Move the hash function to using and taking a StringRef.
llvm-svn: 144024
2011-11-07 21:49:35 +00:00
Eric Christopher b6205d8b49 Simple destructor to delete the hash data we created earlier.
llvm-svn: 144023
2011-11-07 21:49:28 +00:00
Jakob Stoklund Olesen 736cf46c3e Extract two methods. No functional change.
llvm-svn: 144020
2011-11-07 21:40:27 +00:00
Jakob Stoklund Olesen 44dcc589b3 MBB doesn't need to be a class member.
llvm-svn: 144015
2011-11-07 21:23:42 +00:00
Jakob Stoklund Olesen baffa7d35d Fix pass name after the source was moved.
llvm-svn: 144014
2011-11-07 21:23:39 +00:00
Eric Christopher 988ac00abd Use StringRef::startswith to do some string comparisons.
llvm-svn: 143982
2011-11-07 18:53:23 +00:00
Eric Christopher 2c5dab541d Avoid the use of a local temporary for comment twines.
llvm-svn: 143974
2011-11-07 18:34:47 +00:00
Eric Christopher cc979f9ae6 Allow for the case where the name of the subprogram is "".
Fixes a self-host error.

llvm-svn: 143970
2011-11-07 18:10:17 +00:00
Richard Osborne 561fac4d4e Don't introduce custom nodes after legalization in TargetLowering::BuildSDIV()
and TargetLowering::BuildUDIV(). Fixes PR11283

llvm-svn: 143964
2011-11-07 17:09:05 +00:00
Eric Christopher 8c7505f258 Remove unnecessary addition to API. Replace with something much simpler.
llvm-svn: 143925
2011-11-07 09:38:42 +00:00
Eric Christopher e1c874aa70 Add new files to cmake.
llvm-svn: 143924
2011-11-07 09:37:06 +00:00
Eric Christopher 4996c70034 Add the support code to enable the dwarf accelerator tables. Upcoming patches
to fix the types section (all types, not just global types), and testcases.

The code to do the final emission is disabled by default.

llvm-svn: 143923
2011-11-07 09:24:32 +00:00
Eric Christopher 6e47204b0c Add a new dwarf accelerator table prototype with the goal of replacing
the pubnames and pubtypes tables. LLDB can currently use this format,
a full spec is forthcoming, and submission for standardization is planned.

A basic summary:

The dwarf accelerator tables are indirect hash tables optimized
for null lookup rather than access to known data. They are output into
an on-disk format that looks like this:

.-------------.
|  HEADER     |
|-------------|
|  BUCKETS    |
|-------------|
|  HASHES     |
|-------------|
|  OFFSETS    |
|-------------|
|  DATA       |
`-------------'

where the header contains a magic number, version, type of hash function,
the number of buckets, total number of hashes, and room for a special
struct of data and the length of that struct.

The buckets contain an index (e.g. 6) into the hashes array. The hashes
section contains all of the 32-bit hash values in contiguous memory, and
the offsets contain the offset into the data area for the particular
hash.

For a lookup example, we could hash a function name and take it modulo the
number of buckets giving us our bucket. From there we take the bucket value
as an index into the hashes table and look at each successive hash as long
as the hash value is still the same modulo result (bucket value) as earlier.
If we have a match we look at that same entry in the offsets table and
grab the offset in the data for our final match.

llvm-svn: 143921
2011-11-07 09:18:42 +00:00
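
To make the lookup procedure concrete, here is a simplified sketch using plain std:: containers in place of the on-disk sections (the Bernstein hash is an assumption here; the real header records which hash function is used):

  #include <cstdint>
  #include <string>
  #include <vector>

  struct AccelTable {
    uint32_t NumBuckets = 0;
    std::vector<uint32_t> Buckets; // per bucket: index of its first entry in Hashes
    std::vector<uint32_t> Hashes;  // all 32-bit hash values, grouped by bucket
    std::vector<uint32_t> Offsets; // Offsets[i] is the data offset for Hashes[i]
  };

  static uint32_t hashName(const std::string &Name) {
    uint32_t H = 5381; // Bernstein hash
    for (char C : Name)
      H = H * 33 + static_cast<uint8_t>(C);
    return H;
  }

  // Returns the data offset for Name, or UINT32_MAX if it is not in the table.
  static uint32_t lookup(const AccelTable &T, const std::string &Name) {
    uint32_t Hash = hashName(Name);
    uint32_t Bucket = Hash % T.NumBuckets;

    // The bucket gives the index of its first hash; scan successive hashes
    // for as long as they still map to the same bucket.
    for (uint32_t i = T.Buckets[Bucket]; i < T.Hashes.size(); ++i) {
      if (T.Hashes[i] % T.NumBuckets != Bucket)
        break;
      if (T.Hashes[i] == Hash)
        return T.Offsets[i]; // same index in the offsets section
    }
    return UINT32_MAX;
  }
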
Eric Christopher a7b6189071 Expose a way to get the beginning of the dwarf string section.
llvm-svn: 143920
2011-11-07 09:18:38 +00:00