OptimizationRemarkAnalysis directly takes the role of the report that is
generated by LAA.
We then need a way to turn an LAA remark into an LV remark. This is done
via a new OptimizationRemark ctor.
llvm-svn: 282758
This is a step toward statically allocating InstructionMapping. Like the
previous few commits, the goal is to move toward a TableGen'ed-like
structure with no dynamic allocation at all.
This should already improve compile time by getting rid of a bunch of
memmoves of SmallVectors.
llvm-svn: 282643
Summary:
Answering any meaningful questions about .sancov files requires
accessing symbol information from the corresponding binary.
This change introduces a separate intermediate data structure and
format: symbolized coverage. It contains all symbol information that
is required to answer common queries:
- merging
- covered/uncovered files and functions
- line status.
Also removing the html report functionality from sancov: generated
HTML files are too huge, and a different approach is required.
Maintaining this half-working approach in C++ is painful.
Differential Revision: https://reviews.llvm.org/D24947
llvm-svn: 282639
Summary:
When using llc with -compile-twice, the module is generated twice, but getAnalysis<ProfileSummaryInfoWrapperPass>().getPSI will still get the old PSI with the original (invalidated) Module. This patch checks whether the module has changed when calling getPSI; if so, it updates the module and invalidates the Summary.
The bug does not show up in the current llc because PSI is not used in CodeGen yet. But with https://reviews.llvm.org/D24989, the bug will be exposed by test/CodeGen/PowerPC/pr26378.ll
Reviewers: eraman, davidxl
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D24993
llvm-svn: 282616
Summary:
Instead of creating and destroying SCEVUnionPredicate instances (which
internally create and destroy a DenseMap), use temporary SmallPtrSet
instances to remember the set of predicates that will get reified into a
SCEVUnionPredicate.
Reviewers: silviu.baranga, sbaranga
Subscribers: sanjoy, mcrosier, llvm-commits, mzolotukhin
Differential Revision: https://reviews.llvm.org/D25000
llvm-svn: 282606
This commit enables more unrolling for SystemZ by implementing the
SystemZTargetTransformInfo::getUnrollingPreferences() method.
It has been found that it is better to unroll only moderately, so
DefaultUnrollRuntimeCount has been moved into UnrollingPreferences so that
it can be set to a lower value (4) for SystemZ.
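For illustration only (not part of the patch text), a minimal sketch of the
shape of such an override; the method and member names follow the description
above, and the in-tree SystemZ implementation and exact signature may differ:

  void SystemZTargetTransformInfo::getUnrollingPreferences(
      Loop *L, TargetTransformInfo::UnrollingPreferences &UP) {
    UP.Partial = UP.Runtime = true;    // allow partial and runtime unrolling
    UP.DefaultUnrollRuntimeCount = 4;  // unroll only moderately on SystemZ
  }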
Reviewers: Evgeny Stupachenko, Ulrich Weigand.
https://reviews.llvm.org/D24451
llvm-svn: 282570
Ever since LAA was split out into an analysis on its own, this function
stopped emitting the report directly. Instead it stores it to be
retrieved by the client which can then emit it as its own report
(e.g. -Rpass-analysis=loop-vectorize).
llvm-svn: 282561
Summary:
The current implementation of isConstantPhysReg() checks for defs of
physical registers to determine if they are constant. Some
architectures (e.g. AArch64 XZR/WZR) have registers that are constant
and may be used as destinations to indicate the generated value is
discarded, preventing isConstantPhysReg() from returning true. This
change adds a TargetRegisterInfo hook that overrides the no defs check
for cases such as this.
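For illustration only (not from the patch itself), a hedged sketch of what a
target override of such a hook could look like, using AArch64's always-zero
registers; the exact hook name and signature here are assumptions:

  bool AArch64RegisterInfo::isConstantPhysReg(unsigned PhysReg) const {
    // WZR and XZR always read as zero, even though they may appear as
    // (discarded) definitions.
    return PhysReg == AArch64::WZR || PhysReg == AArch64::XZR;
  }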
Reviewers: MatzeB, qcolombet, t.p.northover, jmolloy
Subscribers: junbuml, aemerson, mcrosier, rengolin
Differential Revision: https://reviews.llvm.org/D24570
llvm-svn: 282543
(Re-committed after moving the template specialization under the yaml
namespace. GCC was complaining about this.)
This allows various presentation of this data using an external tool.
This was first recommended here[1].
As an example, consider this module:
1 int foo();
2 int bar();
3
4 int baz() {
5 return foo() + bar();
6 }
The inliner generates these missed-optimization remarks today (the
hotness information is pulled from PGO):
remark: /tmp/s.c:5:10: foo will not be inlined into baz (hotness: 30)
remark: /tmp/s.c:5:18: bar will not be inlined into baz (hotness: 30)
Now with -pass-remarks-output=<yaml-file>, we generate this YAML file:
--- !Missed
Pass: inline
Name: NotInlined
DebugLoc: { File: /tmp/s.c, Line: 5, Column: 10 }
Function: baz
Hotness: 30
Args:
- Callee: foo
- String: will not be inlined into
- Caller: baz
...
--- !Missed
Pass: inline
Name: NotInlined
DebugLoc: { File: /tmp/s.c, Line: 5, Column: 18 }
Function: baz
Hotness: 30
Args:
- Callee: bar
- String: will not be inlined into
- Caller: baz
...
This is a summary of the high-level decisions:
* There is a new streaming interface to emit optimization remarks.
E.g. for the inliner remark above:
ORE.emit(DiagnosticInfoOptimizationRemarkMissed(
DEBUG_TYPE, "NotInlined", &I)
<< NV("Callee", Callee) << " will not be inlined into "
<< NV("Caller", CS.getCaller()) << setIsVerbose());
NV stands for named value and allows the YAML client to process a remark
using its name (NotInlined) and the named arguments (Callee and Caller)
without parsing the text of the message.
Subsequent patches will update ORE users to use the new streaming API.
* I am using YAML I/O for writing the YAML file. YAML I/O requires you
to specify reading and writing at once, but reading is highly non-trivial
for some of the more complex LLVM types. Since it's not clear that we
(ever) want to use LLVM to parse this YAML file, the code supports
writing only and asserts this.
On the other hand, I did experiment with mapping the class hierarchy
starting at DiagnosticInfoOptimizationBase back from the YAML generated
here (see D24479).
* The YAML stream is stored in the LLVM context.
* In the example, we can probably further specify the IR value used,
i.e. print "Function" rather than "Value".
* As before, hotness is computed in the analysis pass instead of
DiagnosticInfo. This avoids the layering problem since BFI is in
Analysis while DiagnosticInfo is in IR.
[1] https://reviews.llvm.org/D19678#419445
Differential Revision: https://reviews.llvm.org/D24587
llvm-svn: 282539
Turns out several external projects relied on llvm printing statistics
on exit. Let's go back to this behaviour by default and have an optional
parameter to disable it.
llvm-svn: 282532
I don't expect `PendingLoopPredicates` to have very many
elements (e.g. when -O3'ing the sqlite3 amalgamation,
`PendingLoopPredicates` has at most 3 elements). So now we use a
`SmallPtrSet` for it instead of the more heavyweight `DenseSet`.
llvm-svn: 282511
llvm::join_items is similar to llvm::join, which produces a string
by concatenating a sequence of values together separated by a
given separator. But it differs in that the arguments to
llvm::join() are same-type members of a container, whereas the
arguments to llvm::join_items are arbitrary types passed into
a variadic template. The only requirement on parameters to
llvm::join_items (including the separator itself) is
that they be implicitly convertible to std::string or have
an overload of std::string::operator+.
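For illustration only, a minimal usage sketch (not part of the patch text):

  #include "llvm/ADT/StringExtras.h"
  #include <iostream>
  #include <string>

  int main() {
    // Arguments of different string-like types, joined with one separator.
    std::string Path = llvm::join_items("/", "usr", std::string("local"), "bin");
    std::cout << Path << "\n"; // prints: usr/local/bin
  }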
Differential Revision: https://reviews.llvm.org/D24880
llvm-svn: 282502
This allows various presentation of this data using an external tool.
This was first recommended here[1].
As an example, consider this module:
1 int foo();
2 int bar();
3
4 int baz() {
5 return foo() + bar();
6 }
The inliner generates these missed-optimization remarks today (the
hotness information is pulled from PGO):
remark: /tmp/s.c:5:10: foo will not be inlined into baz (hotness: 30)
remark: /tmp/s.c:5:18: bar will not be inlined into baz (hotness: 30)
Now with -pass-remarks-output=<yaml-file>, we generate this YAML file:
--- !Missed
Pass: inline
Name: NotInlined
DebugLoc: { File: /tmp/s.c, Line: 5, Column: 10 }
Function: baz
Hotness: 30
Args:
- Callee: foo
- String: will not be inlined into
- Caller: baz
...
--- !Missed
Pass: inline
Name: NotInlined
DebugLoc: { File: /tmp/s.c, Line: 5, Column: 18 }
Function: baz
Hotness: 30
Args:
- Callee: bar
- String: will not be inlined into
- Caller: baz
...
This is a summary of the high-level decisions:
* There is a new streaming interface to emit optimization remarks.
E.g. for the inliner remark above:
ORE.emit(DiagnosticInfoOptimizationRemarkMissed(
DEBUG_TYPE, "NotInlined", &I)
<< NV("Callee", Callee) << " will not be inlined into "
<< NV("Caller", CS.getCaller()) << setIsVerbose());
NV stands for named value and allows the YAML client to process a remark
using its name (NotInlined) and the named arguments (Callee and Caller)
without parsing the text of the message.
Subsequent patches will update ORE users to use the new streaming API.
* I am using YAML I/O for writing the YAML file. YAML I/O requires you
to specify reading and writing at once, but reading is highly non-trivial
for some of the more complex LLVM types. Since it's not clear that we
(ever) want to use LLVM to parse this YAML file, the code supports
writing only and asserts this.
On the other hand, I did experiment with mapping the class hierarchy
starting at DiagnosticInfoOptimizationBase back from the YAML generated
here (see D24479).
* The YAML stream is stored in the LLVM context.
* In the example, we can probably further specify the IR value used,
i.e. print "Function" rather than "Value".
* As before, hotness is computed in the analysis pass instead of
DiagnosticInfo. This avoids the layering problem since BFI is in
Analysis while DiagnosticInfo is in IR.
[1] https://reviews.llvm.org/D19678#419445
Differential Revision: https://reviews.llvm.org/D24587
llvm-svn: 282499
This patch corresponds to review:
https://reviews.llvm.org/D24396
This patch adds support for the "vector count trailing zeroes",
"vector compare not equal" and "vector compare not equal or zero instructions"
as well as "scalar count trailing zeroes" instructions. It also changes the
vector negation to use XXLNOR (when VSX is enabled) so as not to increase
register pressure (previously this was done with a splat immediate of all
ones followed by an XXLXOR). This was done because the altivec.h
builtins (patch to follow) use vector negation and the use of an additional
register for the splat immediate is not optimal.
llvm-svn: 282478
Summary:
This patch improves the ThinLTO importer by importing functions up to
3x larger when they are called from a hot block.
I compared performance with trunk on SPEC, and there were gains of about
2% on povray and 3.33% on milc. These results seem to be consistent and
match the results Teresa got with her simple heuristic. Some benchmarks
got slower, but I think they are just noisy (mcf, xalancbmk, omnetpp);
I am running the benchmarks again with more iterations to confirm. The
geomean of all benchmarks, including the noisy ones, was about +0.02%.
I see a much better improvement on the Google branch with Easwaran's patch
for PGO callsite inlining (the inliner actually inlines those big
functions): overall I see a +0.5% improvement, and +8.65% on povray.
So I expect a much bigger change once Easwaran's patch lands (it depends
on the new pass manager), but it is still worth putting this in trunk
before then.
Implementation detail changes:
- Removed CallsiteCount.
- ProfileCount got replaced by Hotness.
- hot-import-multiplier is set to 3.0 for now; I didn't have time to tune
it, but we get most of the interesting functions with 3, so there is not
much performance difference with a higher value, and binary size doesn't
grow as much as with 10.0.
Reviewers: eraman, mehdi_amini, tejohnson
Subscribers: mehdi_amini, llvm-commits
Differential Revision: https://reviews.llvm.org/D24638
llvm-svn: 282437
Many high-performance processors have a dedicated branch predictor for
indirect branches, commonly used with jump tables. As sophisticated as such
branch predictors are, they tend to have well defined limits beyond which
their effectiveness is hampered or even nullified. One such limit is the
number of possible destinations for a given indirect branch that such
branch predictors can handle.
This patch introduces a limit that a target may set on the number of
destination addresses in a jump table.
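For illustration only (not from the patch), a hedged sketch of how a target
might use such a limit from its TargetLowering constructor; the class name is
hypothetical and the setter name and value are assumptions based on the
description above:

  ExampleTargetLowering::ExampleTargetLowering(const TargetMachine &TM)
      : TargetLowering(TM) {
    // Cap jump-table fan-out so the indirect-branch predictor is not
    // overwhelmed; the value is purely illustrative.
    setMaximumJumpTableSize(16);
  }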
Patch by: Evandro Menezes <e.menezes@samsung.com>, Aditya Kumar
<aditya.k7@samsung.com>, Sebastian Pop <s.pop@samsung.com>.
Differential revision: https://reviews.llvm.org/D21940
llvm-svn: 282412
Both `loopHasNoSideEffects` and `loopHasNoAbnormalExits` involve walking
the loop and maintaining similar sorts of caches. This commit changes
SCEV to compute both predicates via a single walk, and maintain a
single cache instead of two.
llvm-svn: 282375
This change simplifies a data structure optimization in the
`BackedgeTakenInfo` class for loops with exactly one computable exit.
I've sanity checked that this does not regress compile time performance,
using sqlite3's amalgamated build.
llvm-svn: 282365
This adds 4 new functions to StringRef, which can be used to
take or drop characters while a certain condition is met, or
until a certain condition is met. They are:
take_while - Return characters until a condition is not met.
take_until - Return characters until a condition is met.
drop_while - Remove characters until a condition is not met.
drop_until - Remove characters until a condition is met.
Internally, all of these functions delegate to two additional
helper functions which can be used to search for the position
of a character meeting or not meeting a condition, which are:
find_if - Find the first character matching a predicate.
find_if_not - Find the first character not matching a predicate.
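For illustration only, a small usage sketch of the new helpers:

  #include "llvm/ADT/StringRef.h"
  #include <cctype>

  int main() {
    llvm::StringRef S("123abc");
    auto IsDigit = [](char C) {
      return std::isdigit(static_cast<unsigned char>(C)) != 0;
    };
    llvm::StringRef Digits = S.take_while(IsDigit); // "123"
    llvm::StringRef Rest = S.drop_while(IsDigit);   // "abc"
    return (Digits == "123" && Rest == "abc") ? 0 : 1;
  }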
Differential Revision: https://reviews.llvm.org/D24842
llvm-svn: 282346
This adds a comment explaining why we will duplicate PartialMapping to
represent the breakdown for complex mappings (mappings with more than
one partial mapping), instead of using an array of pointers.
NFC
llvm-svn: 282326
This is a step toward statically allocating ValueMapping. Like the
previous few commits, the goal is to move toward a TableGen'ed-like
structure with no dynamic allocation at all.
llvm-svn: 282324
Previously we were using the address of the unique instance of a partial
mapping in the related map to access this instance. However, when the
map grows, the whole set of instances may be moved elsewhere and the
previous addresses are not valid anymore.
Instead, keep the address of the unique heap-allocated instance of a
partial mapping.
Note: I did not see any actual bugs caused by this problem, as the number
of dynamically allocated partial mappings is small (<= 4).
llvm-svn: 282323
The NativeObjectOutput class has a design problem: it mixes up the caching
policy with the interface for output streams, which makes the client-side
code hard to follow and would for example make it harder to replace the
cache implementation in an arbitrary client.
This change separates the two aspects by moving the caching policy
to a separate field in Config, replacing NativeObjectOutput with a
NativeObjectStream class that only deals with streams and does not need to
be overridden by most clients, and introducing an AddFile callback for
adding files (e.g. from the cache) to the link.
Differential Revision: https://reviews.llvm.org/D24622
llvm-svn: 282299
Summary:
For AMDGPU, we have been using the operating system component of the triple
for specifying the low-level runtime that is being used. The rationale for
this is that the host operating system (e.g. Linux) is irrelevant for GPU code,
since its execution environment will be mostly controlled by the low-level runtime
being used to execute the code.
In most cases, higher level languages have their own runtime which is
implemented on top of the low-level runtime. The kernel ABIs of each
language mostly depend on the low-level runtime, but there may be some
slight differences between languages. OpenCL for example, may append
additional arguments to the kernel in order to pass values like global
offsets or buffers for printf. OpenMP, HCC, or other languages may want
to add their own values which differ from OpenCL.
The reason for adding a new opencl environment type is to make it possible for the backend
to distinguish between the ABIs of the higher-level languages and handle them correctly.
It seems cleaner to use the environment component for this rather than creating a new
OS type for every combination of low-level runtime / high-level language.
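For illustration only (not from the patch), a hedged sketch of the consumer
side, assuming the new environment enum value is named OpenCL:

  #include "llvm/ADT/Triple.h"

  // The OS component names the low-level runtime; the environment component
  // names the language-level kernel ABI.
  bool isOpenCLKernelABI(const llvm::Triple &T) {
    return T.getEnvironment() == llvm::Triple::OpenCL;
  }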
Reviewers: Anastasia, chandlerc
Subscribers: whchung, pekka.jaaskelainen, wdng, yaxunl, llvm-commits
Differential Revision: https://reviews.llvm.org/D24735
llvm-svn: 282218
In the verify method of the ValueMapping class we used to check that the
mapping exactly matches the bits of the input value. This is problematic
for statically allocated mappings because we would need a different
mapping for each different size of value that maps onto one
instruction. For instance, with such a scheme, we would need a different
mapping for values of size 1, 5, and 23, whereas they all end up on a 32-bit
wide instruction.
Therefore, change the verifier to check that the meaningful bits are
covered by the mapping instead of matching them.
llvm-svn: 282214
This is another step toward TableGen'ed-like structures. The BreakDown of
the mapping of the value will be statically computed by TableGen; thus
we only have to point to the right entry in the table instead of
dynamically allocating the mapping for each instruction.
We still support dynamic allocation through a factory of
PartialMapping to ease the bring-up of targets while the TableGen
backend is not yet available.
llvm-svn: 282213
Currently all nodes get added to the NextSU list when they are released,
so any candidate must be in that list, making the heuristic ineffective.
Remove it for now, we can add it back later in a working fashion if
necessary.
llvm-svn: 282200
According to MSDN (see the PR), functions which don't touch any callee-saved
registers (including %rsp) don't need any unwind info.
This patch makes LLVM not emit unwind info for such functions, to save
binary size.
Differential Revision: https://reviews.llvm.org/D24748
llvm-svn: 282185
StringRef::getInteger() exists and treats the entire string as
an integer of the specified radix, failing if any invalid characters
are encountered or the number overflows.
Sometimes you might have something like "123456foo" and you want
to get the number 123456 and leave the string "foo" remaining.
This is similar to what would be possible by using the standard
runtime library functions strtoul et al and specifying an end
pointer.
This patch adds consumeInteger(), which does exactly that. It
consumes as much as possible until an invalid character is found,
and modifies the StringRef in place so that upon return only
the portion of the StringRef after the number remains.
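For illustration only, a small usage sketch:

  #include "llvm/ADT/StringRef.h"
  #include <cstdint>

  int main() {
    llvm::StringRef S("123456foo");
    uint64_t N = 0;
    // consumeInteger returns true on failure; on success it advances S
    // past the digits it consumed.
    if (!S.consumeInteger(/*Radix=*/10, N)) {
      // Here N == 123456 and S == "foo".
    }
    return 0;
  }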
Differential Revision: https://reviews.llvm.org/D24778
llvm-svn: 282164
The additional fix is:
When adding debug information to a lowered phi node in mem2reg
check that we have a valid insertion point after the phi for adding
the debug information.
This change addresses the issue in pr30468 where a lowered phi was
added before a catchswitch and no debug information should be added
after the phi in this case.
Differential Revision: https://reviews.llvm.org/D24797
llvm-svn: 282155
This commit is basically the first step toward what RegisterBankInfo
will look like when it gets TableGen'ed.
It introduces a XXXGenRegisterBankInfo.def file that is what TableGen
will issue at some point. Moreover, the RegBanks field in
RegisterBankInfo was changed to reflect the static (compile-time) aspect
of the information.
llvm-svn: 282131
With the new LTO API in r278338, we stopped emitting the individual
index files and imports files for some modules in the distributed backend
case (thinlto-index-only plugin option).
Specifically, this is when the linker decides not to include a module in the
link, because it was in an archive library and did not have a strong
reference to it. Not creating the expected output files makes the
distributed build system implementation more difficult, in terms of
checking for the expected outputs of the thin link, and scheduling the
backend jobs. To address this, the gold-plugin will write dummy empty
.thinlto.bc and .imports files for modules not included in the link
(which LTO never sees).
Augmented a gold v1.12+ test, since that version of gold has the handling
for notifying on modules not being included in the link.
llvm-svn: 282100
We still don't really have an equivalent of "AssertXExt" in DAG, so we don't
exploit the guarantees on the receiving side yet, but this should produce
conservatively correct code on iOS ABIs.
llvm-svn: 282069
The only implementation that exists immediately looks it up anyway, and the
information is needed to handle various parameter attributes (stored on the
function itself).
llvm-svn: 282068
TargetMachine::getNameWithPrefix and inline the result into the singular
caller." and "Remove more guts of TargetMachine::getNameWithPrefix and
migrate one check to the TLOF mach-o version." temporarily until I can
get the whole call migrated out of the TargetMachine as we could hit
places where TLOF isn't valid.
This reverts commits r281981 and r281983.
llvm-svn: 282028
This should match the existing behaviour for passing complicated struct and
array types, in particular HFAs come through like that from Clang.
For C & C++ we still need to somehow support all the weird ABI flags, or at
least those that are present in the IR (signext, byval, ...), and stack-based
parameter passing.
llvm-svn: 281977
The routines llvm::ConvertDebugDeclareToDebugValue() always returned
a true value that was never checked at the call sites; change the
return type to void.
This NFC cleanup was approved in the review https://reviews.llvm.org/D23715
llvm-svn: 281964
Use SmallVector instead of dynamically allocated arrays for the mapping of the
operands in the InstructionMapping. That way we avoid heap allocation for most
of the cases. Ultimately, we should not have to rely on such a trick; the
instances of InstructionMapping will be TableGen'ed.
This improves the compilation time of the RegBankSelect pass.
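For illustration only, the general pattern this change relies on:

  #include "llvm/ADT/SmallVector.h"

  // A SmallVector with a small inline capacity keeps the common case on the
  // stack and only falls back to the heap when it grows past that capacity.
  void collectOperandIndices(unsigned NumOperands,
                             llvm::SmallVectorImpl<unsigned> &Out) {
    for (unsigned I = 0; I != NumOperands; ++I)
      Out.push_back(I);
  }
  // Typical call site: llvm::SmallVector<unsigned, 8> Idx;
  //                    collectOperandIndices(N, Idx);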
llvm-svn: 281955
The comments on SplitBlockAndInsertIfThenElse say the SplitBefore instruction will stay in the old block.
But according to the implementation (which splits the block at SplitBefore using splitBasicBlock), SplitBefore is moved to the new block.
This patch fixes the comments.
Patch by Zhe Yu Wu.
llvm-svn: 281939
The OperandsMapper class is used heavily in RegBankSelect and each
instantiation triggered a heap allocation for the array of operands.
Instead, use a SmallVector with a big enough size such that most of the
cases do not have to use dynamically allocated memory.
This improves the compile time of the RegBankSelect pass.
llvm-svn: 281916
Summary: The call target count profile is directly derived from LBR branch->target data. This is more reliable than instruction frequency profiles that could be moved across basic block boundaries. This patch uses call target count profile to annotate call instructions.
Reviewers: davidxl, dnovillo
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D24410
llvm-svn: 281911
When porting large bodies of code from using const char*
to StringRef, it is helpful to be able to treat nullptr
as an empty string, since that is often what it is used
to indicate in C-style code.
Differential Revision: https://reviews.llvm.org/D24697
llvm-svn: 281906
When phi nodes are created in the -mem2reg phase, the @llvm.dbg.declare
entries are converted to @llvm.dbg.value entries at the place where the
store instructions existed. However no entry is created to describe
the resulting value of the phi node.
The effect of this is especially noticeable in for loops which have a
constant for the initial value; the loop control variable's location
would be described as the initial constant value in the loop body once
the -mem2reg optimization phase was run.
This change adds the creation of the @llvm.dbg.value entries to describe
variables whose location is the result of a phi node created in -mem2reg.
Also when the phi node is finally lowered to a machine instruction it
is important that the lowered "load" instruction is placed before the
associated DEBUG_VALUE entry describing the value loaded.
Differential Revision: https://reviews.llvm.org/D23715
llvm-svn: 281895
This is a port of XRay to ARM 32-bit, without Thumb support yet. The XRay instrumentation support is moving up to AsmPrinter.
This is one of 3 commits to different repositories for the XRay ARM port. The other 2 are:
https://reviews.llvm.org/D23932 (Clang test)
https://reviews.llvm.org/D23933 (compiler-rt)
Differential Revision: https://reviews.llvm.org/D23931
llvm-svn: 281878
The ValueSymbolTable is used to detect name conflicts and rename
instructions automatically. This is not needed when the value
names are automatically discarded by the LLVMContext.
No functional change intended, just saving a little bit of memory.
This is a recommit of r281806 after fixing the accessor to return
a pointer instead of a reference and updating all the call-sites.
llvm-svn: 281813
The ValueSymbolTable is used to detect name conflicts and rename
instructions automatically. This is not needed when the value
names are automatically discarded by the LLVMContext.
No functional change intended, just saving a little bit of memory.
llvm-svn: 281806
Recommitting after fixing AsmParser initialization and X86 inline asm
error cleanup.
Allow errors to be deferred and emitted as part of clean up to simplify
and shorten Assembly parser code. This will allow error messages to be
emitted in helper functions and be modified by the caller which has
better context.
As part of this, many minor cleanups to the Parser:
* Unify parser cleanup on error
* Add a workaround for incorrect return values in ParseDirective instances
* Tighten checks on error-signifying return values for parser functions
and fix in-tree TargetParsers to be more consistent with the changes.
* Fix AArch64 test cases checking for spurious error messages that are
now fixed.
These changes should be backwards compatible with current Target Parsers
so long as the error statuses are correctly returned in the appropriate
functions.
Reviewers: rnk, majnemer
Subscribers: aemerson, jyknight, llvm-commits
Differential Revision: https://reviews.llvm.org/D24047
llvm-svn: 281762
Enhance SCEV to compute the trip count for some loops with unknown stride.
Patch by Pankaj Chawla
Differential Revision: https://reviews.llvm.org/D22377
llvm-svn: 281732
When a phi node is finally lowered to a machine instruction it is
important that the lowered "load" instruction is placed before the
associated DEBUG_VALUE entry describing the value loaded.
Renamed the existing SkipPHIsAndLabels to SkipPHIsLabelsAndDebug to
more fully describe that it also skips debug entries. The "new" function
SkipPHIsAndLabels is then used where the debug information should not be
skipped, so that the lowered "load" instructions are placed before the
debug entries.
Differential Revision: https://reviews.llvm.org/D23760
llvm-svn: 281727
Summary:
In runThinLTO we start the task numbering for ThinLTO backend
tasks depending on whether there was also a regular LTO object
(CombinedModule). However, the CombinedModule is moved at
the end of runRegularLTO, so we need to save this information and
pass it into runThinLTO. Otherwise the AddOutput callback to the client
will use the same task number for both the regular LTO object
and the first ThinLTO object, which in gold-plugin caused only
one to end up in the output filename array and therefore passed
back to gold for the final native link.
Reviewers: pcc, mehdi_amini
Subscribers: mehdi_amini, kromanova
Differential Revision: https://reviews.llvm.org/D24643
llvm-svn: 281725
Summary:
`TargetLoweringObjectFile` can be re-used and thus `TargetLoweringObjectFile::Initialize()`
can be called multiple times, causing a `Mang` pointer memory leak.
Reviewers: echristo
Subscribers: llvm-commits, mehdi_amini
Differential Revision: https://reviews.llvm.org/D24659
llvm-svn: 281718
LazyCallGraph to support repeated, stable iterations, even in the face
of graph updates.
This is particularly important to allow the CGSCC pass manager to walk
the RefSCCs (and thus everything else) in a module more than once. Lots
of unittests and other tests were hard or impossible to write because
repeated CGSCC pass managers which didn't invalidate the LazyCallGraph
would conclude the module was empty after the first one. =[ Really,
really bad.
The interesting thing is that in many ways this simplifies the code. We
can now re-use the same code for handling reference edge insertion
updates of the RefSCC graph as we use for handling call edge insertion
updates of the SCC graph. Outside of adapting to the shared logic for
this (which isn't trivial, but is *much* simpler than the DFS it
replaces!), the new code involves putting newly created RefSCCs when
deleting a reference edge into the cached list in the correct way, and
re-formulating the iterator to be stable and effective even in the face
of these kinds of updates.
I've updated the unittests for the LazyCallGraph to re-iterate the
postorder sequence and verify that this all works. We even check for
using alternating iterators to trigger the lazy formation of RefSCCs
after mutation has occurred.
It's worth noting that there are a reasonable number of likely
simplifications we can make past this. It isn't clear that we need to
keep the "LeafRefSCCs" around any more. But I've not removed that mostly
because I want this to be a more isolated change.
Differential Revision: https://reviews.llvm.org/D24219
llvm-svn: 281716
The IPI stream is structurally identical to the TPI stream, but it
contains different record types. So we just re-use the TPI writing
code.
llvm-svn: 281638
It was only really there as a sentinel when instructions had to have precisely
one type. Now that registers are typed, each register really has to have a type
that is sized.
llvm-svn: 281599
Otherwise everything that needs to work out what size they are has to keep a
DataLayout handy, which is a bit silly and very annoying.
llvm-svn: 281597
The `CVType` had two redundant fields which were confusing and
error-prone to fill out. By treating member records as a distinct
type from leaf records, we are able to simplify this quite a bit.
Reviewed By: rnk
Differential Revision: https://reviews.llvm.org/D24432
llvm-svn: 281556
This completes being able to write all the interesting
values of a PDB TPI stream.
Reviewed By: rnk
Differential Revision: https://reviews.llvm.org/D24370
llvm-svn: 281555
Summary:
This fixes a dumpbin warning on objects produced by the MC assembler
when starting from text. All .debug$S sections are supposed to be marked
IMAGE_SCN_MEM_DISCARDABLE. The main, non-COMDAT .debug$S section had
this set, but any comdat ones were not being marked discardable because
there was no .section flag for it.
This change does two things:
- If the section name starts with .debug, implicitly mark the section as
discardable. This means we do the same thing as gas on .s files with
DWARF from GCC, which is important.
- Adds the 'D' flag to the .section directive on COFF to explicitly say
a section is discardable. We only emit this flag if the section name
does not start with .debug. This allows the user to explicitly tweak
their section flags without worrying about magic section names.
The only thing you can't do in this scheme is to create a
non-discardable section with a name starting with ".debug", but
hopefully users don't need that.
Reviewers: majnemer, rafael
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D24582
llvm-svn: 281554
If TBAA is on an intrinsic and it gets upgraded, it'll delete the call
instruction that we collected in a vector. Even if we were to use
WeakVH, it'll drop the TBAA and we'll hit the assert on the upgrade
path.
r263673 took a shot at making sure the TBAA upgrade happens before the
intrinsics upgrade, but failed to account for all cases.
Instead of collecting instructions in a vector, this patch just upgrades
the TBAA on the fly, because the metadata is always already loaded at
this point.
Differential Revision: https://reviews.llvm.org/D24533
llvm-svn: 281549
Previously the prevailing information was not honored, and common
symbols could override a strong definition. This patch fixes this and
proposes the following semantics for commons: the client should mark
as prevailing the commons that it expects the LTO implementation to
merge (i.e. take the maximum size and alignment).
It implies that commons are allowed to have multiple prevailing
definitions.
Differential Revision: https://reviews.llvm.org/D24545
llvm-svn: 281538
Summary:
It was previously not possible for tools to use solely the stackmap
information emitted to reconstruct the return addresses of callsites in
the map, which is necessary to use the information to walk a stack. This
patch adds per-function callsite counts when emitting the stackmap
section in order to resolve the problem. Note that this slightly alters
the stackmap format, so external tools parsing these maps will need to
be updated.
**Problem Details:**
Records only store their offset from the beginning of the function they
belong to. While these records and the functions are output in program
order, it is not possible to determine where the end of one function's
records are without the callsite count when processing the records to
compute return addresses.
Patch by Kavon Farvardin!
Reviewers: atrick, ributzka, sanjoy
Subscribers: nemanjai
Differential Revision: https://reviews.llvm.org/D23487
llvm-svn: 281532
is a uint64_t. However, the getter function getFlags returned an unsigned,
and in the function hasProperty, (1 << MCFlag) was used instead of (1ULL << MCFlag).
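For illustration only, the general bug pattern being fixed here:

  #include <cstdint>

  int main() {
    uint64_t Flags = 1ULL << 40;           // a flag bit above position 31
    // (1 << 40) shifts a 32-bit int, which is undefined behaviour and loses
    // the flag; widening the literal keeps all 64 bit positions usable.
    bool Has = (Flags & (1ULL << 40)) != 0;
    return Has ? 0 : 1;
  }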
llvm-svn: 281483