Commit Graph

15993 Commits

Author SHA1 Message Date
Chandler Carruth 73523021d0 [PM] Split DominatorTree into a concrete analysis result object which
can be used by both the new pass manager and the old.

This removes it from any of the virtual mess of the pass interfaces and
lets it derive cleanly from the DominatorTreeBase<> template. In turn,
tons of boilerplate interface can be nuked and it turns into a very
straightforward extension of the base DominatorTree interface.

The old analysis pass is now a simple wrapper. The names and style of
this split should match the split between CallGraph and
CallGraphWrapperPass. All of the users of DominatorTree have been
updated to match using many of the same tricks as with CallGraph. The
goal is that the common type remains the resulting DominatorTree rather
than the pass. This will make subsequent work toward the new pass
manager significantly easier.

Also in numerous places things became cleaner because I switched from
re-running the pass (!!! midway through some other pass's run !!!) to
directly recomputing the domtree.
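
A minimal, standalone sketch of the split described above (names and
members are illustrative, not the actual LLVM declarations):

    struct Function {};                    // stand-in for llvm::Function

    struct DomTreeResult {                 // plain analysis result, no pass machinery
      void recalculate(Function &F) { /* rebuild dominance info for F */ }
    };

    struct DomTreeWrapperPass {            // old-PM wrapper simply owns the result
      DomTreeResult DT;
      DomTreeResult &getDomTree() { return DT; }
      bool runOnFunction(Function &F) {
        DT.recalculate(F);                 // callers are handed the DomTreeResult,
        return false;                      // not the pass itself
      }
    };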

llvm-svn: 199104
2014-01-13 13:07:17 +00:00
Chandler Carruth e509db410a [PM] Pull the generic graph algorithms and data structures for dominator
trees into the Support library.

These are all expressed in terms of the generic GraphTraits and CFG,
with no reliance on any concrete IR types. Putting them in Support
clarifies that and makes the fact that the static analyzer in Clang uses
them much more sane. When moving the Dominators.h file into the IR
library I claimed that this was the right home for it but not something
I planned to work on. Oops.

So why am I doing this? It happens to be one step toward breaking the
requirement that IR verification can only be performed from inside of
a pass context, which completely blocks the implementation of
verification for the new pass manager infrastructure. Fixing it will
also allow removing the concept of the "preverify" step (WTF???) and
allow the verifier to cleanly flag functions which fail verification in
a way that precludes even computing dominance information. Currently,
that results in a fatal error even when you ask the verifier to not
fatally error. It's awesome like that.

The yak shaving will continue...

llvm-svn: 199095
2014-01-13 10:52:56 +00:00
Tim Northover 7fdd4857f7 Revert "ReMat: fix overly cavalier attitude to sub-register indices"
Very sorry, this was a premature patch that I still need to investigate and
finish off (for some reason beyond me at the moment it doesn't actually fix the
issue in all cases).

This reverts commit r199091.

llvm-svn: 199093
2014-01-13 10:49:11 +00:00
Tim Northover 59f8d4b4ee ReMat: fix overly cavalier attitude to sub-register indices
There are two attempted optimisations in reMaterializeTrivialDef, trying to
avoid promoting the size of a register too much when rematerializing.
Unfortunately, both appear to be flawed. First, we see if the original register
would have worked, but this is inadequate. Consider:

    v1 = SOMETHING (v1 is QQ)
    v2:Q0 = COPY v1:Q1 (v1, v2 are QQ)
    ...
    uses of v2

In this case even though v2 *could* be used directly as the output of
SOMETHING, this would set the wrong bits of the QQ register involved. The
correct rematerialization must be:

    v2:Q0_Q1 = SOMETHING (v2 promoted to QQQ)
    ...
    uses of v2:Q1_Q2

For the second optimisation, if the correct remat is "v2:idx = SOMETHING" then
we can't necessarily expect v2 itself to be valid for SOMETHING, but we do try
to hunt for a class between v1 and v2 that works. Unfortunately, this is also
wrong:

    v1 = SOMETHING (v1 is QQ)
    v2:Q0_Q1 = COPY v1 (v1 is QQ, v2 is QQQ)
    ...
    uses of v2 as a QQQ

The canonical rematerialization here is "v2:Q0_Q1 = SOMETHING". However current
logic would decide that v2 could be a QQ (no interest is taken in later uses).

This patch, therefore, always accepts the widened register class without trying
to be clever. Generally there is no penalty to this (e.g. in the common GR32 <
GR64 case, expanding the width doesn't matter because it's not like you were
going to do anything else with the high bits of a GR32 register). It can
increase register pressure in cases like the ARM VFP regs though (multiple
non-overlapping but equivalent subregisters). Hopefully this situation is rare
enough that it won't matter.

Unfortunately, no in-tree targets actually expose this as far as I can tell
(there are so few isAsCheapAsAMove instructions for it to trigger on) so I've
been unable to produce a test. It was exposed in our ARM64 SPEC tests though,
and I will be adding a test there that we should be able to contribute
soon(TM).

llvm-svn: 199091
2014-01-13 10:47:01 +00:00
Chandler Carruth 5ad5f15cff [cleanup] Move the Dominators.h and Verifier.h headers into the IR
directory. These passes are already defined in the IR library, and it
doesn't make any sense to have the headers in Analysis.

Long term, I think there is going to be a much better way to divide
these matters. The dominators code should be fully separated into the
abstract graph algorithm and have that put in Support where it becomes
obvious that even Clang's CFGBlocks can use it. Then the verifier can
manually construct dominance information from the Support-driven
interface, while the Analysis library can provide a pass that caches and
reconstructs it and supports a nice update API.

But those are very long term, and so I don't want to leave the really
confusing structure in place until that day arrives.

llvm-svn: 199082
2014-01-13 09:26:24 +00:00
Jakob Stoklund Olesen 1995b9fead Handle bundled terminators in isBlockOnlyReachableByFallthrough.
Targets like SPARC and MIPS have delay slots and normally bundle the
delay slot instruction with the corresponding terminator.

Teach isBlockOnlyReachableByFallthrough to find any MBB operands on
bundled terminators so SPARC doesn't need to specialize this function.

llvm-svn: 199061
2014-01-12 19:24:08 +00:00
Nico Rieck b5262d6d8f Fix non-deterministic SDNodeOrder-dependent codegen
Reset SelectionDAGBuilder's SDNodeOrder to ensure deterministic code
generation.

llvm-svn: 199050
2014-01-12 14:09:17 +00:00
Chandler Carruth 9d805139bd [PM] Simplify the interface exposed for IR printing passes.
Nothing was using the ability of the pass to delete the raw_ostream it
printed to, and nothing was trying to pass it a pointer to the
raw_ostream. Also, the function variant had a different order of
arguments from all of the others which was just really confusing. Now
the interface accepts a reference, doesn't offer to delete it, and uses
a consistent order. The implementations of the printing passes haven't
been updated with this simplification; this is just the API switch.

llvm-svn: 199044
2014-01-12 11:30:46 +00:00
Chandler Carruth b8ddc7043c [PM] Rename the IR printing pass header to a more generic and correct
name to match the source file which I got earlier. Update the include
sites. Also modernize the comments in the header to use the more
recommended doxygen style.

llvm-svn: 199041
2014-01-12 11:10:32 +00:00
Alp Toker 798060e006 Fix 'ned' typo in doc comment
Patch by Jasper Neumann!

llvm-svn: 199007
2014-01-11 14:01:43 +00:00
Eric Christopher 942f22c439 Revert r198979 - accidental commit.
llvm-svn: 198981
2014-01-11 00:28:12 +00:00
Eric Christopher ceec7b02fa Reformat.
llvm-svn: 198980
2014-01-11 00:23:18 +00:00
Eric Christopher 67cde9ac07 Update function name and add some helpful comments.
llvm-svn: 198979
2014-01-11 00:23:16 +00:00
David Blaikie 15ed5ebfc5 Revert "Revert r198851, "Prototype of skeleton type units for fission""
This reverts commit r198865 which reverts r198851.

ASan identified a use-of-uninitialized of the DwarfTypeUnit::Ty variable
in skeleton type units.

llvm-svn: 198908
2014-01-10 01:38:41 +00:00
NAKAMURA Takumi c5bf572993 Revert r198851, "Prototype of skeleton type units for fission"
It caused undefined behavior. DwarfTypeUnit::Ty might not be initialized properly, I guess.

llvm-svn: 198865
2014-01-09 13:08:00 +00:00
Richard Sandiford 15cfc1c33c Handle masked rotate amounts
At the moment we expect rotates to have the form:

   (or (shl X, Y), (shr X, Z))

where Y == bitsize(X) - Z or Z == bitsize(X) - Y.  This form means that
the (or ...) is undefined for Y == 0 or Z == 0.  This undefinedness can
be avoided by using Y == (C * bitsize(X) - Z) & (bitsize(X) - 1) or
Z == (C * bitsize(X) - Y) & (bitsize(X) - 1) for any integer C
(including 0, the most natural choice).
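
For reference, a standalone sketch of the masked form at the source level
(32-bit case, C == 1; this is an illustration, not the DAGCombiner code):

    #include <cstdint>

    // Rotate left by n, well defined for every n including 0:
    // Y = n & 31 and Z = (32 - n) & 31, i.e. the masked amounts above.
    uint32_t rotl32(uint32_t x, uint32_t n) {
      return (x << (n & 31)) | (x >> ((32 - n) & 31));
    }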

llvm-svn: 198861
2014-01-09 10:56:42 +00:00
Richard Sandiford 0f264db3c6 Match the InstCombine form of rotates by X+C
InstCombine converts (sub 32, (add X, C)) into (sub 32-C, X),
so a rotate left of a 32-bit Y by X+C could appear as either:

   (or (shl Y, (add X, C)), (shr Y, (sub 32, (add X, C))))

without InstCombine or:

   (or (shl Y, (add X, C)), (shr Y, (sub 32-C, X)))

with it.

We already matched the first form.  This patch handles the second too.

llvm-svn: 198860
2014-01-09 10:49:40 +00:00
David Blaikie a588365df6 Prototype of skeleton type units for fission
llvm-svn: 198851
2014-01-09 05:08:28 +00:00
David Blaikie 38fe6342f6 DwarfDebug: Refactor out common skeleton construction code to be reused for type unit skeletons.
llvm-svn: 198846
2014-01-09 04:28:46 +00:00
David Blaikie b334e94492 Reformatting for r198842
llvm-svn: 198843
2014-01-09 03:24:13 +00:00
David Blaikie f645f963ff DwarfUnit: Rename "Node" to "CUNode" and propagate it through DwarfTypeUnit as well.
Since we'll now also need the split dwarf file name along with the
language in DwarfTypeUnits, just use the whole DICompileUnit rather than
explicitly handling each field needed.

llvm-svn: 198842
2014-01-09 03:23:41 +00:00
David Blaikie 7480ae6e19 Revert "DwarfUnit: Move the DICompileUnit Node to the DwarfCompileUnit only"
This reverts commit r198830.

Decided to go a different way with this...

llvm-svn: 198841
2014-01-09 03:03:27 +00:00
Chandler Carruth d48cdbf0c3 Put the functionality for printing a value to a raw_ostream as an
operand into the Value interface just like the core print method is.
That gives a more consistent organization to the IR printing interfaces
-- they are all attached to the IR objects themselves. Also, update all
the users.

This removes the 'Writer.h' header which contained only a single function
declaration.

llvm-svn: 198836
2014-01-09 02:29:41 +00:00
David Blaikie 08badfd2ba DwarfUnit: Move the DICompileUnit Node to the DwarfCompileUnit only
It's unused in DwarfTypeUnit, as is expected.

llvm-svn: 198830
2014-01-09 01:20:14 +00:00
Andrew Trick 32e1be7bd0 llvm.experimental.stackmap: fix encoding of large constants.
In the stackmap format we advertise the constant field as signed.
However, we were determining whether to promote to a 64-bit constant
pool based on an unsigned comparison.

This fix allows -1 to be encoded as a small constant.
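
Standalone illustration of the difference (hypothetical helper names; the
real code decides between the inline field and the constant pool):

    #include <cstdint>
    #include <limits>

    // Unsigned test: int64_t(-1) looks enormous, so -1 was wrongly
    // promoted to the 64-bit constant pool.
    bool fitsSmallUnsigned(int64_t C) {
      return static_cast<uint64_t>(C) <= std::numeric_limits<uint32_t>::max();
    }

    // Signed test matching the advertised format: -1 stays small.
    bool fitsSmallSigned(int64_t C) {
      return C >= std::numeric_limits<int32_t>::min() &&
             C <= std::numeric_limits<int32_t>::max();
    }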

llvm-svn: 198816
2014-01-09 00:22:31 +00:00
Hal Finkel 2150e3a743 Conservatively handle multiple MMOs in MIsNeedChainEdge
MIsNeedChainEdge, which is used by -enable-aa-sched-mi (AA in misched), had an
llvm_unreachable when -enable-aa-sched-mi is enabled and we reach an
instruction with multiple MMOs. Instead, return a conservative answer. This
allows testing -enable-aa-sched-mi on x86.

Also, this moves the check above the isUnsafeMemoryObject checks.
isUnsafeMemoryObject is currently correct only for instructions with one MMO
(as noted in the comment in isUnsafeMemoryObject):

  // We purposefully do no check for hasOneMemOperand() here
  // in hope to trigger an assert downstream in order to
  // finish implementation.

The problem with this is that, had the candidate edge passed the
"!MIa->mayStore() && !MIb->mayStore()" check, the hoped-for assert would never
happen (which could, in theory, lead to incorrect behavior if one of these
secondary MMOs was volatile, for example).
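
A toy model of the conservative policy (hypothetical types; the real check
inspects MachineInstr memory operands):

    #include <vector>

    struct MemOperand { /* address, size, volatility, ... */ };
    struct Inst { std::vector<MemOperand> MMOs; };

    // Stand-in for the existing single-MMO alias reasoning.
    bool preciseAliasCheck(const MemOperand &, const MemOperand &) { return true; }

    bool mayNeedChainEdge(const Inst &A, const Inst &B) {
      if (A.MMOs.size() != 1 || B.MMOs.size() != 1)
        return true;   // conservative answer instead of llvm_unreachable
      return preciseAliasCheck(A.MMOs[0], B.MMOs[0]);
    }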

llvm-svn: 198795
2014-01-08 21:52:02 +00:00
Andrea Di Biagio 23df4e4a2d Teach the DAGCombiner how to fold 'vselect' dag nodes according
to the following two rules:
  1) fold (vselect (build_vector AllOnes), A, B) -> A
  2) fold (vselect (build_vector AllZeros), A, B) -> B
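
The element-wise semantics behind both folds, as a standalone sketch (the
actual change pattern-matches SelectionDAG nodes):

    #include <array>

    using Mask4 = std::array<bool, 4>;
    using Vec4  = std::array<int, 4>;

    Vec4 vselect(Mask4 M, Vec4 A, Vec4 B) {
      Vec4 R{};
      for (int i = 0; i < 4; ++i)
        R[i] = M[i] ? A[i] : B[i];   // all-true mask yields A, all-false yields B
      return R;
    }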

llvm-svn: 198777
2014-01-08 18:33:04 +00:00
Richard Sandiford 95c864d9bd [DAGCombiner] Factor duplicated rotate code into a separate function
No functional change intended.

llvm-svn: 198768
2014-01-08 15:40:47 +00:00
Rafael Espindola 894843cb4e Move the llvm mangler to lib/IR.
This makes it available to tools that don't link with Target (like llvm-ar).

llvm-svn: 198708
2014-01-07 21:19:40 +00:00
Benjamin Kramer 8a68ab3710 Emit arange padding with a single directive.
llvm-svn: 198700
2014-01-07 19:28:14 +00:00
Chandler Carruth 9aca918df9 Move the LLVM IR asm writer header files into the IR directory, as they
are part of the core IR library in order to support dumping and other
basic functionality.

Rename the 'Assembly' include directory to 'AsmParser' to match the
library name and the only functionality left there -- printing has been
in the core IR library for quite some time.

Update all of the #includes to match.

All of this started because I wanted to have the layering in good shape
before I started adding support for printing LLVM IR using the new pass
infrastructure, and commandline support for the new pass infrastructure.

llvm-svn: 198688
2014-01-07 12:34:26 +00:00
Chandler Carruth 8a8cd2bab9 Re-sort all of the includes with ./utils/sort_includes.py so that
subsequent changes are easier to review. About to fix some layering
issues, and wanted to separate out the necessary churn.

Also comment and sink the include of "Windows.h" in three .inc files to
match the usage in Memory.inc.

llvm-svn: 198685
2014-01-07 11:48:04 +00:00
Andrew Trick dfacda3635 Fix for PR18396: Assertion: MO->isDead "Cannot fold physreg def".
InlineSpiller::foldMemoryOperand needs to handle undef call operands.

llvm-svn: 198679
2014-01-07 07:31:10 +00:00
Kevin Qin 5cd73c9e0a [AArch64 NEON] Fix invalid constant used in vselect condition.
There was a wrong assumption that the vector element type and the
type of each ConstantSDNode in the build_vector were the same.
However, when promoting the integer operand of a legally typed
build_vector, the operand type and the vector element type do not
need to be the same
(See method 'DAGTypeLegalizer::PromoteIntOp_BUILD_VECTOR' in
LegalizeIntegerTypes.cpp).

  In the AArch64 backend, the following DAG sequence:

  C0: i1 = Constant<0>
  C1: i1 = Constant<-1>
  V: v8i1 = BUILD_VECTOR C1, C1, C0, C0, C0, C0, C0, C0

  is type-legalized into:

  NewC0: i32 = Constant<0>
  NewC1: i32 = Constant<1>
  V: v8i8 = BUILD_VECTOR NewC1, NewC1, NewC0, NewC0, NewC0, NewC0, NewC0, NewC0

The fix forces a getZeroExtend to VTBits to ensure that the new constant
is handled correctly.

llvm-svn: 198582
2014-01-06 02:26:10 +00:00
Bill Wendling 908bf814e7 Refactor function that checks that __builtin_returnaddress's argument is constant.
This moves the check up into the parent class so that all targets can use it
without having to copy (and keep in sync) the same error message.
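
For context, the constraint being enforced (C-level spelling of the
builtin shown; purely illustrative):

    // Accepted: the depth argument is a compile-time constant.
    void *caller_pc() { return __builtin_return_address(0); }

    // A non-constant depth, e.g. __builtin_return_address(n) for a runtime
    // value n, is what the now-shared check rejects with the common message.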

llvm-svn: 198579
2014-01-06 00:43:20 +00:00
Nico Weber 7408c7066a Add a LLVM_DUMP_METHOD macro.
The motivation is to mark dump methods as used in debug builds so that they can
be called from lldb, but to not do so in release builds so that they can be
dead-stripped.

There's lots of potential follow-up work suggested in the thread
"Should dump methods be LLVM_ATTRIBUTE_USED only in debug builds?" on cfe-dev,
but everyone seems to agree on this subset.

Macro name chosen by fair coin toss.
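
A rough sketch of what such a macro can expand to (illustrative; the
committed definition goes through LLVM's existing attribute macros and may
differ):

    #if !defined(NDEBUG) && defined(__GNUC__)
    #define LLVM_DUMP_METHOD __attribute__((noinline, used))
    #else
    #define LLVM_DUMP_METHOD
    #endif

    struct Foo {
      LLVM_DUMP_METHOD void dump() const;  // emitted even when unreferenced, so
                                           // a debugger can call it in debug builds
    };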

llvm-svn: 198456
2014-01-03 22:53:37 +00:00
Rafael Espindola 58873566b3 Make the llvm mangler depend only on DataLayout.
Before this patch any program that wanted to know the final symbol name of a
GlobalValue had to link with Target.

This patch implements a compromise solution where the mangler uses DataLayout.
This way, any tool that already links with Target (llc, clang) gets the exact
behavior as before and new IR files can be mangled without linking with Target.

With this patch the mangler is constructed with just a DataLayout and DataLayout
is extended to include the information the Mangler needs.

llvm-svn: 198438
2014-01-03 19:21:54 +00:00
David Blaikie cfb2115e66 Revert "Revert "Debug Info: Type Units: Simplify type hashing using IR-provided unique names.""
This reverts commit r198398, thus reapplying r198397.

I had accidentally introduced an endianness issue when applying the hash
to the type unit. Using support::ulittle64_t in the reinterpret_cast in
addDwarfTypeUnitType fixes this issue.
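
Self-contained illustration of the hazard: reinterpreting a host-order
64-bit value yields different bytes on big- and little-endian hosts,
whereas packing explicitly little-endian (what support::ulittle64_t
provides) does not:

    #include <cstdint>

    // Byte order fixed by construction, independent of the host.
    void packLE64(uint64_t V, unsigned char Out[8]) {
      for (int i = 0; i < 8; ++i)
        Out[i] = static_cast<unsigned char>(V >> (8 * i));
    }
    // By contrast, a reinterpret_cast of &V exposes host-order bytes.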

Original commit message:

Debug Info: Type Units: Simplify type hashing using IR-provided unique
names.

What's good for LTO metadata size problems ought to be good for non-LTO
debug info size too, so let's rely on the same uniqueness in both cases.
If it's insufficient for non-LTO for whatever reason (since we now won't
be uniquing CU-local types or any C types - but these are likely to not
be the most significant contributors to type bloat) we should consider a
frontend solution that'll help both LTO and non-LTO alike, rather than
using DWARF-level DIE-hashing that only helps non-LTO debug info size.

It's also much simpler this way and benefits C++ even more since we can
deduplicate lexically separate definitions of the same C++ type because
they have the same mangled name.

llvm-svn: 198436
2014-01-03 18:59:42 +00:00
David Blaikie ab0ba24983 Revert "Debug Info: Type Units: Simplify type hashing using IR-provided unique names."
Reverting due to bot failure I won't have time to investigate until
tomorrow.

This reverts commit r198397.

llvm-svn: 198398
2014-01-03 04:49:04 +00:00
David Blaikie ddb66281cd Debug Info: Type Units: Simplify type hashing using IR-provided unique names.
What's good for LTO metadata size problems ought to be good for non-LTO
debug info size too, so let's rely on the same uniqueness in both cases.
If it's insufficient for non-LTO for whatever reason (since we now won't
be uniquing CU-local types or any C types - but these are likely to not
be the most significant contributors to type bloat) we should consider a
frontend solution that'll help both LTO and non-LTO alike, rather than
using DWARF-level DIE-hashing that only helps non-LTO debug info size.

It's also much simpler this way and benefits C++ even more since we can
deduplicate lexically separate definitions of the same C++ type because
they have the same mangled name.

llvm-svn: 198397
2014-01-03 04:20:26 +00:00
Eric Christopher 4d214b9e9c 80-column.
llvm-svn: 198394
2014-01-03 02:17:35 +00:00
Eric Christopher 50effa0437 Remove TextSectionSym as it is unused.
llvm-svn: 198393
2014-01-03 02:16:44 +00:00
David Blaikie 22b29a5f1a Revert "Reverting r193835 due to weirdness with Go..."
The cgo problem was that it wants dwarf2 which doesn't support direct
constant encoding of the location. So let's add support for dwarf2
encoding (using a location expression) of data member locations.

This reverts commit r198385.

llvm-svn: 198389
2014-01-03 01:30:05 +00:00
David Blaikie 2ada116a34 Reverting r193835 due to weirdness with Go...
Apologies for the noise - we're seeing some Go failures with cgo
interacting with Clang's debug info due to this change.

llvm-svn: 198385
2014-01-03 00:48:38 +00:00
Quentin Colombet 1fb3362a6e [RegAlloc] Make tryInstructionSplit less aggressive.
The greedy register allocator tries to split a live-range around each
instruction where it is used or defined to relax the constraints on the entire
live-range (this is a last chance split before falling back to spill).
The goal is to have a big live-range that is unconstrained (i.e., that can use
the largest legal register class) and several small local live-ranges that carry
the constraints implied by each instruction.
E.g.,
Let csti be the constraints on operation i.

V1=
op1 V1(cst1)
op2 V1(cst2)

V1 live-range is constrained on the intersection of cst1 and cst2.

tryInstructionSplit relaxes those constraints by aggressively splitting each
def/use point:
V1=
V2 = V1
V3 = V2
op1 V3(cst1)
V4 = V2
op2 V4(cst2)

Because of how the coalescer infrastructure works, each new variable (V3, V4)
that is alive at the same time as V1 (or its copy, here V2) interferes with V1.
Thus, we end up with an uncoalescable copy for each split point.

To make tryInstructionSplit less aggressive, we check if the split point
actually relaxes the constraints on the whole live-range. If it does not, we do
not insert it.
Indeed, it will not help the global allocation problem:
- V1 will have the same constraints.
- V1 will have the same interference + possibly the newly added split variable
  VS.
- VS will produce an uncoalesceable copy if alive at the same time as V1.

<rdar://problem/15570057>

llvm-svn: 198369
2014-01-02 22:47:22 +00:00
Eric Christopher 94932438d4 Remove comments on CU skeleton construction; they're probably
obvious.

llvm-svn: 198361
2014-01-02 22:04:47 +00:00
Eric Christopher d8beca3b78 Elaborate on comment for skeleton CU construction.
llvm-svn: 198358
2014-01-02 21:38:18 +00:00
Eric Christopher 40734c4c0c Revert seemingly unnecessary section sym for the data section.
llvm-svn: 198357
2014-01-02 21:38:13 +00:00
Hal Finkel decb024c86 Disable compare sinking in CodeGenPrepare when multiple condition registers are available
As noted in the comment above CodeGenPrepare::OptimizeInst, which aggressively
sinks compares to reduce pressure on the condition register(s), for targets
such as PowerPC with multiple condition registers, this may not be the right
thing to do. This adds an HasMultipleConditionRegisters boolean to TLI, and
CodeGenPrepare::OptimizeInst is skipped when HasMultipleConditionRegisters is
true.

This functionality will be used by the PowerPC backend in an upcoming commit.
Especially when the PowerPC backend starts tracking individual condition
register bits as separate allocatable entities (which will happen in this
upcoming commit), this sinking from CodeGenPrepare::OptimizeInst is
significantly suboptimal.
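
A hedged sketch of the hook described above (exact member and accessor
naming in the tree may differ):

    class TargetLoweringBase {
    protected:
      bool HasMultipleConditionRegisters = false;  // default: keep sinking compares
    public:
      bool hasMultipleConditionRegisters() const {
        return HasMultipleConditionRegisters;
      }
    };

    // CodeGenPrepare then skips the compare-sinking transform whenever
    // hasMultipleConditionRegisters() returns true.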

llvm-svn: 198354
2014-01-02 21:13:43 +00:00
Eric Christopher d4368fde45 Fix up a couple of review comments:
Use an if statement instead of a pair of ternary operators checking
the same condition.
Use a cheap method call rather than returning the local symbol.

llvm-svn: 198351
2014-01-02 21:03:28 +00:00