llvm::join_items is similar to llvm::join, which produces a string
by concatenating a sequence of values separated by a given separator.
It differs in that the arguments to llvm::join() are same-type
elements of a container, whereas the arguments to llvm::join_items
are arbitrary types passed to a variadic template. The only
requirement on the parameters to llvm::join_items (including the
separator itself) is that they be implicitly convertible to
std::string or support std::string's operator+.
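A minimal usage sketch (the variable names and argument types are illustrative, not from the patch):
  std::vector<std::string> Words = {"foo", "bar", "baz"};
  // Same-type elements from a container:
  std::string S1 = llvm::join(Words.begin(), Words.end(), ", ");  // "foo, bar, baz"
  // Mixed argument types through the variadic template:
  std::string S2 = llvm::join_items('/', "usr", std::string("local"), "bin");  // "usr/local/bin"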
Differential Revision: https://reviews.llvm.org/D24880
llvm-svn: 282502
This allows various presentations of this data using an external tool.
This was first recommended here[1].
As an example, consider this module:
1 int foo();
2 int bar();
3
4 int baz() {
5 return foo() + bar();
6 }
The inliner generates these missed-optimization remarks today (the
hotness information is pulled from PGO):
remark: /tmp/s.c:5:10: foo will not be inlined into baz (hotness: 30)
remark: /tmp/s.c:5:18: bar will not be inlined into baz (hotness: 30)
Now with -pass-remarks-output=<yaml-file>, we generate this YAML file:
--- !Missed
Pass: inline
Name: NotInlined
DebugLoc: { File: /tmp/s.c, Line: 5, Column: 10 }
Function: baz
Hotness: 30
Args:
- Callee: foo
- String: will not be inlined into
- Caller: baz
...
--- !Missed
Pass: inline
Name: NotInlined
DebugLoc: { File: /tmp/s.c, Line: 5, Column: 18 }
Function: baz
Hotness: 30
Args:
- Callee: bar
- String: will not be inlined into
- Caller: baz
...
This is a summary of the high-level decisions:
* There is a new streaming interface to emit optimization remarks.
E.g. for the inliner remark above:
  ORE.emit(DiagnosticInfoOptimizationRemarkMissed(
               DEBUG_TYPE, "NotInlined", &I)
           << NV("Callee", Callee) << " will not be inlined into "
           << NV("Caller", CS.getCaller()) << setIsVerbose());
NV stands for named value and allows the YAML client to process a remark
using its name (NotInlined) and the named arguments (Callee and Caller)
without parsing the text of the message.
Subsequent patches will update ORE users to use the new streaming API.
* I am using YAML I/O for writing the YAML file. YAML I/O requires you
to specify reading and writing at once but reading is highly non-trivial
for some of the more complex LLVM types. Since it's not clear that we
(ever) want to use LLVM to parse this YAML file, the code supports and
asserts that we're writing only.
On the other hand, I did experiment that the class hierarchy starting at
DiagnosticInfoOptimizationBase can be mapped back from YAML generated
here (see D24479).
* The YAML stream is stored in the LLVM context.
* In the example, we can probably further specify the IR value used,
i.e. print "Function" rather than "Value".
* As before, hotness is computed in the analysis pass instead of in
DiagnosticInfo. This avoids a layering problem, since BFI is in
Analysis while DiagnosticInfo is in IR.
[1] https://reviews.llvm.org/D19678#419445
Differential Revision: https://reviews.llvm.org/D24587
llvm-svn: 282499
This patch corresponds to review:
https://reviews.llvm.org/D24396
This patch adds support for the "vector count trailing zeroes",
"vector compare not equal" and "vector compare not equal or zero instructions"
as well as "scalar count trailing zeroes" instructions. It also changes the
vector negation to use XXLNOR (when VSX is enabled) so as not to increase
register pressure (previously this was done with a splat immediate of all
ones followed by an XXLXOR). This was done because the altivec.h
builtins (patch to follow) use vector negation and the use of an additional
register for the splat immediate is not optimal.
llvm-svn: 282478
Summary:
This patch improves the ThinLTO importer
by importing 3x larger functions that are called from a hot block.
I compared performance with trunk on SPEC, and saw gains of about
2% on povray and 3.33% on milc. These results seem to be consistent
and match the results Teresa got with her simple heuristic. Some
benchmarks got slower, but I think they are just noisy (mcf,
xalancbmk, omnetpp); I am running the benchmarks again with more
iterations to confirm. The geomean of all benchmarks, including the
noisy ones, was about +0.02%.
I see a much better improvement on the google branch with Easwaran's patch
for pgo callsite inlining (the inliner actually inlines those big functions).
Overall I see a +0.5% improvement, and I get +8.65% on povray.
So I guess we will see a much bigger change when Easwaran's patch lands
(it depends on the new pass manager), but it is still worth putting this in
trunk before then.
Implementation detail changes:
- Removed CallsiteCount.
- ProfileCount got replaced by Hotness.
- hot-import-multiplier is set to 3.0 for now; I didn't have time to
tune it, but I see that we get most of the interesting functions with 3,
so there is not much performance difference with higher values, and
binary size doesn't grow as much as with 10.0.
Reviewers: eraman, mehdi_amini, tejohnson
Subscribers: mehdi_amini, llvm-commits
Differential Revision: https://reviews.llvm.org/D24638
llvm-svn: 282437
Many high-performance processors have a dedicated branch predictor for
indirect branches, commonly used with jump tables. As sophisticated as such
branch predictors are, they tend to have well defined limits beyond which
their effectiveness is hampered or even nullified. One such limit is the
number of possible destinations for a given indirect branch that such
branch predictors can handle.
This patch introduces a limit, which a target may set, on the number of
destination addresses in a jump table.
Patch by: Evandro Menezes <e.menezes@samsung.com>, Aditya Kumar
<aditya.k7@samsung.com>, Sebastian Pop <s.pop@samsung.com>.
Differential revision: https://reviews.llvm.org/D21940
llvm-svn: 282412
Both `loopHasNoSideEffects` and `loopHasNoAbnormalExits` involve walking
the loop and maintaining similar sorts of caches. This commit changes
SCEV to compute both the predicates via a single walk, and maintain a
single cache instead of two.
llvm-svn: 282375
This change simplifies a data structure optimization in the
`BackedgeTakenInfo` class for loops with exactly one computable exit.
I've sanity checked that this does not regress compile time performance,
using sqlite3's amalgamated build.
llvm-svn: 282365
This adds 4 new functions to StringRef, which can be used to
take or drop characters while a certain condition is met, or
until a certain condition is met. They are:
take_while - Return characters until a condition is not met.
take_until - Return characters until a condition is met.
drop_while - Remove characters until a condition is not met.
drop_until - Remove characters until a condition is met.
Internally, all of these functions delegate to two additional
helper functions which can be used to search for the position
of a character meeting or not meeting a condition, which are:
find_if - Find the first character matching a predicate.
find_if_not - Find the first character not matching a predicate.
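A minimal sketch of the new calls (assuming the usual StringRef predicate signature, bool(char)):
  StringRef S = "123abc";
  auto IsDigit = [](char C) { return C >= '0' && C <= '9'; };
  StringRef Digits = S.take_while(IsDigit);   // "123"
  StringRef Rest   = S.drop_while(IsDigit);   // "abc"
  size_t Split     = S.find_if_not(IsDigit);  // 3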
Differential Revision: https://reviews.llvm.org/D24842
llvm-svn: 282346
This adds a comment explaining why we will duplicate PartialMapping to
represent the breakdown for complex mappings (mappings with more than
one partial mapping), instead of using an array of pointers.
NFC
llvm-svn: 282326
This is a step toward statically allocating ValueMapping. Like the
previous few commits, the goal is to move toward a TableGen'ed-like
structure with no dynamic allocation at all.
llvm-svn: 282324
Previously we were using the address of the unique instance of a partial
mapping in the related map to access this instance. However, when the
map grows, the whole set of instances may be moved elsewhere and the
previous addresses are not valid anymore.
Instead, keep the address of the unique heap-allocated instance of a
partial mapping.
Note: I did not see any actual bugs caused by this problem, as the
number of dynamically allocated partial mappings is small (<= 4).
llvm-svn: 282323
The NativeObjectOutput class has a design problem: it mixes up the caching
policy with the interface for output streams, which makes the client-side
code hard to follow and would for example make it harder to replace the
cache implementation in an arbitrary client.
This change separates the two aspects by moving the caching policy
to a separate field in Config, replacing NativeObjectOutput with a
NativeObjectStream class which only deals with streams and does not need to
be overridden by most clients, and introducing an AddFile callback for adding
files (e.g. from the cache) to the link.
Differential Revision: https://reviews.llvm.org/D24622
llvm-svn: 282299
Summary:
For AMDGPU, we have been using the operating system component of the triple
for specifying the low-level runtime that is being used. The rationale for
this is that the host operating system (e.g. Linux) is irrelevant for GPU code,
since its execution enviroment will be mostly controled by the low-level runtime
being used to execute the code.
In most cases, higher level languages have their own runtime which is
implemented on top of the low-level runtime. The kernel ABIs of each
language mostly depend on the low-level runtime, but there may be some
slight differences between languages. OpenCL for example, may append
additional arguments to the kernel in order to pass values like global
offsets or buffers for printf. OpenMP, HCC, or other languages may want
to add their own values which differ from OpenCL.
The reason for adding a new opencl environment type is to make it possible
for the backend to distinguish between the ABIs of the higher-level languages
and handle them correctly. It seems cleaner to use the environment component
for this rather than creating a new OS type for every combination of
low-level runtime / high-level language.
Reviewers: Anastasia, chandlerc
Subscribers: whchung, pekka.jaaskelainen, wdng, yaxunl, llvm-commits
Differential Revision: https://reviews.llvm.org/D24735
llvm-svn: 282218
In the verify method of the ValueMapping class we used to check that the
mapping exactly matches the bits of the input value. This is problematic
for statically allocated mappings because we would need a different
mapping for each different size of the value that maps on one
instruction. For instance, with such a scheme, we would need a different
mapping for values of size 1, 5, and 23, even though they all end up on a
32-bit wide instruction.
Therefore, change the verifier to check that the meaningful bits are
covered by the mapping instead of matching them.
llvm-svn: 282214
This is another step toward TableGen'ed-like structures. The BreakDown of
the mapping of the value will be statically computed by TableGen, thus
we only have to point to the right entry in the table instead of
dynamically allocate the mapping for each instruction.
We still support the dynamic allocation through a factory of
PartialMapping to ease the bring-up of the targets while the TableGen
backend is not available.
llvm-svn: 282213
Currently all nodes get added to the NextSU list when they are released,
so any candidate must be in that list, making the heuristic ineffective.
Remove it for now, we can add it back later in a working fashion if
necessary.
llvm-svn: 282200
According to MSDN (see the PR), functions which don't touch any callee-saved
registers (including %rsp) don't need any unwind info.
This patch makes LLVM not emit unwind info for such functions, to save
binary size.
Differential Revision: https://reviews.llvm.org/D24748
llvm-svn: 282185
StringRef::getInteger() exists and treats the entire string as
an integer of the specified radix, failing if any invalid characters
are encountered or the number overflows.
Sometimes you might have something like "123456foo" and you want
to get the number 123456 and leave the string "foo" remaining.
This is similar to what would be possible by using the standard
runtime library functions strtoul et al and specifying an end
pointer.
This patch adds consumeInteger(), which does exactly that. It
consumes as much as possible until an invalid character is found,
and modifies the StringRef in place so that upon return only
the portion of the StringRef after the number remains.
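A minimal sketch of the intended use (mirroring the "123456foo" example above):
  StringRef S = "123456foo";
  unsigned N;
  bool Failed = S.consumeInteger(10, N);  // like getAsInteger, true means failure
  // On success: N == 123456 and S == "foo".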
Differential Revision: https://reviews.llvm.org/D24778
llvm-svn: 282164
The additional fix is:
When adding debug information to a lowered phi node in mem2reg
check that we have a valid insertion point after the phi for adding
the debug information.
This change addresses the issue in pr30468 where a lowered phi was
added before a catchswitch and no debug information should be added
after the phi in this case.
Differential Revision: https://reviews.llvm.org/D24797
llvm-svn: 282155
This commit is basically the first step toward what RegisterBankInfo
will look like when it gets TableGen'ed.
It introduces a XXXGenRegisterBankInfo.def file that is what TableGen
will issue at some point. Moreover, the RegBanks field in
RegisterBankInfo was changed to reflect the static (compile-time) aspect
of the information.
llvm-svn: 282131
With the new LTO API in r278338, we stopped emitting the individual
index files and imports files for some modules in the distributed backend
case (thinlto-index-only plugin option).
Specifically, this happens when the linker decides not to include a module
in the link, because it was in an archive library and there was no strong
reference to it. Not creating the expected output files makes the
distributed build system implementation more difficult, in terms of
checking for the expected outputs of the thin link, and scheduling the
backend jobs. To address this, the gold-plugin will write dummy empty
.thinlto.bc and .imports files for modules not included in the link
(which LTO never sees).
Augmented a gold v1.12+ test, since that version of gold has the handling
for notifying on modules not being included in the link.
llvm-svn: 282100
We still don't really have an equivalent of "AssertXExt" in DAG, so we don't
exploit the guarantees on the receiving side yet, but this should produce
conservatively correct code on iOS ABIs.
llvm-svn: 282069
The only implementation that exists immediately looks it up anyway, and the
information is needed to handle various parameter attributes (stored on the
function itself).
llvm-svn: 282068
TargetMachine::getNameWithPrefix and inline the result into the singular
caller." and "Remove more guts of TargetMachine::getNameWithPrefix and
migrate one check to the TLOF mach-o version." temporarily until I can
get the whole call migrated out of the TargetMachine as we could hit
places where TLOF isn't valid.
This reverts commits r281981 and r281983.
llvm-svn: 282028
This should match the existing behaviour for passing complicated struct and
array types, in particular HFAs come through like that from Clang.
For C & C++ we still need to somehow support all the weird ABI flags, or at
least those that are present in the IR (signext, byval, ...), and stack-based
parameter passing.
llvm-svn: 281977
The llvm::ConvertDebugDeclareToDebugValue() routines always returned
a true value that was never checked at the call sites; change the
functions' return type to void.
This NFC cleanup was approved in the review https://reviews.llvm.org/D23715
llvm-svn: 281964
Use SmallVector instead of dynamically allocated arrays for the mapping of the
operands in the InstructionMapping. That way we avoid heap allocation for most
of the cases. Ultimately, we should not have to rely on such tricks: the
instances of InstructionMapping will be TableGen'ed.
This improves the compilation time of the RegBankSelect pass.
llvm-svn: 281955
The comments for SplitBlockAndInsertIfThenElse say the SplitBefore instruction will stay in the old block.
But according to the implementation (which splits the block at SplitBefore using splitBasicBlock), SplitBefore is moved to the new block.
This patch fixes the comments.
Patch by Zhe Yu Wu.
llvm-svn: 281939
The OperandsMapper class is used heavily in RegBankSelect, and each
instantiation triggered a heap allocation for the array of operands.
Instead, use a SmallVector with a big enough size such that most of the
cases do not have to use dynamically allocated memory.
This improves the compile time of the RegBankSelect pass.
llvm-svn: 281916
Summary: The call target count profile is directly derived from LBR branch->target data. This is more reliable than instruction frequency profiles, which could be moved across basic block boundaries. This patch uses the call target count profile to annotate call instructions.
Reviewers: davidxl, dnovillo
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D24410
llvm-svn: 281911
When porting large bodies of code from using const char*
to StringRef, it is helpful to be able to treat nullptr
as an empty string, since that is often what it is used
to indicate in C-style code.
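As an illustration of the intended behaviour (a sketch, not a test from the patch):
  const char *Legacy = nullptr;  // C-style "no string"
  StringRef S(Legacy);           // now treated as the empty string
  assert(S.empty());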
Differential Revision: https://reviews.llvm.org/D24697
llvm-svn: 281906
When phi nodes are created in the -mem2reg phase, the @llvm.dbg.declare
entries are converted to @llvm.dbg.value entries at the place where the
store instructions existed. However no entry is created to describe
the resulting value of the phi node.
The effect of this is especially noticeable in for loops that have a
constant for the initial value; the loop control variable's location
would be described as the initial constant value in the loop body once
the -mem2reg optimization phase was run.
This change adds the creation of the @llvm.dbg.value entries to describe
variables whose location is the result of a phi node created in -mem2reg.
Also when the phi node is finally lowered to a machine instruction it
is important that the lowered "load" instruction is placed before the
associated DEBUG_VALUE entry describing the value loaded.
Differential Revision: https://reviews.llvm.org/D23715
llvm-svn: 281895
This is a port of XRay to ARM 32-bit, without Thumb support yet. The XRay instrumentation support is moving up to AsmPrinter.
This is one of 3 commits to different repositories of XRay ARM port. The other 2 are:
https://reviews.llvm.org/D23932 (Clang test)
https://reviews.llvm.org/D23933 (compiler-rt)
Differential Revision: https://reviews.llvm.org/D23931
llvm-svn: 281878
The ValueSymbolTable is used to detect name conflict and rename
instructions automatically. This is not needed when the value
names are automatically discarded by the LLVMContext.
No functional change intended, just saving a little bit of memory.
This is a recommit of r281806 after fixing the accessor to return
a pointer instead of a reference and updating all the call-sites.
llvm-svn: 281813
The ValueSymbolTable is used to detect name conflict and rename
instructions automatically. This is not needed when the value
names are automatically discarded by the LLVMContext.
No functional change intended, just saving a little bit of memory.
llvm-svn: 281806
Recommitting after fixing AsmParser initialization and X86 inline asm
error cleanup.
Allow errors to be deferred and emitted as part of clean up to simplify
and shorten Assembly parser code. This will allow error messages to be
emitted in helper functions and be modified by the caller which has
better context.
As part of this many minor cleanups to the Parser:
* Unify parser cleanup on error
* Add Workaround for incorrect return values in ParseDirective instances
* Tighten checks on error-signifying return values for parser functions
and fix in-tree TargetParsers to be more consistent with the changes.
* Fix AArch64 test cases checking for spurious error messages that are
now fixed.
These changes should be backwards compatible with current Target Parsers
so long as the error status is correctly returned by the appropriate
functions.
Reviewers: rnk, majnemer
Subscribers: aemerson, jyknight, llvm-commits
Differential Revision: https://reviews.llvm.org/D24047
llvm-svn: 281762
Enhance SCEV to compute the trip count for some loops with unknown stride.
Patch by Pankaj Chawla
Differential Revision: https://reviews.llvm.org/D22377
llvm-svn: 281732
When a phi node is finally lowered to a machine instruction it is
important that the lowered "load" instruction is placed before the
associated DEBUG_VALUE entry describing the value loaded.
Renamed the existing SkipPHIsAndLabels to SkipPHIsLabelsAndDebug to
more fully describe that it also skips debug entries. The "new"
SkipPHIsAndLabels is then used where debug information should not be
skipped when placing the lowered "load" instructions, so that they are
placed before the debug entries.
Differential Revision: https://reviews.llvm.org/D23760
llvm-svn: 281727
Summary:
In runThinLTO we start the task numbering for ThinLTO backend
tasks depending on whether there was also a regular LTO object
(CombinedModule). However, the CombinedModule is moved at
the end of runRegularLTO, so we need to save this information and
pass it into runThinLTO. Otherwise the AddOutput callback to the client
will use the same task number for both the regular LTO object
and the first ThinLTO object, which in gold-plugin caused only
one of them to end up in the output filename array and therefore be
passed back to gold for the final native link.
Reviewers: pcc, mehdi_amini
Subscribers: mehdi_amini, kromanova
Differential Revision: https://reviews.llvm.org/D24643
llvm-svn: 281725
Summary:
`TargetLoweringObjectFile` can be re-used and thus `TargetLoweringObjectFile::Initialize()`
can be called multiple times causing `Mang` pointer memory leak.
Reviewers: echristo
Subscribers: llvm-commits, mehdi_amini
Differential Revision: https://reviews.llvm.org/D24659
llvm-svn: 281718
LazyCallGraph to support repeated, stable iterations, even in the face
of graph updates.
This is particularly important to allow the CGSCC pass manager to walk
the RefSCCs (and thus everything else) in a module more than once. Lots
of unittests and other tests were hard or impossible to write because
repeated CGSCC pass managers which didn't invalidate the LazyCallGraph
would conclude the module was empty after the first one. =[ Really,
really bad.
The interesting thing is that in many ways this simplifies the code. We
can now re-use the same code for handling reference edge insertion
updates of the RefSCC graph as we use for handling call edge insertion
updates of the SCC graph. Outside of adapting to the shared logic for
this (which isn't trivial, but is *much* simpler than the DFS it
replaces!), the new code involves putting newly created RefSCCs, when
deleting a reference edge, into the cached list in the correct way, and
re-formulating the iterator to be stable and effective even in the face
of these kinds of updates.
I've updated the unittests for the LazyCallGraph to re-iterate the
postorder sequence and verify that this all works. We even check for
using alternating iterators to trigger the lazy formation of RefSCCs
after mutation has occurred.
It's worth noting that there are a reasonable number of likely
simplifications we can make past this. It isn't clear that we need to
keep the "LeafRefSCCs" around any more. But I've not removed that mostly
because I want this to be a more isolated change.
Differential Revision: https://reviews.llvm.org/D24219
llvm-svn: 281716
The IPI stream is structurally identical to the TPI stream, but it
contains different record types. So we just re-use the TPI writing
code.
llvm-svn: 281638
It was only really there as a sentinel when instructions had to have precisely
one type. Now that registers are typed, each register really has to have a type
that is sized.
llvm-svn: 281599
Otherwise everything that needs to work out what size they are has to keep a
DataLayout handy, which is a bit silly and very annoying.
llvm-svn: 281597
The `CVType` had two redundant fields which were confusing and
error-prone to fill out. By treating member records as a distinct
type from leaf records, we are able to simplify this quite a bit.
Reviewed By: rnk
Differential Revision: https://reviews.llvm.org/D24432
llvm-svn: 281556
This completes being able to write all the interesting
values of a PDB TPI stream.
Reviewed By: rnk
Differential Revision: https://reviews.llvm.org/D24370
llvm-svn: 281555
Summary:
This fixes a dumpbin warning on objects produced by the MC assembler
when starting from text. All .debug$S sections are supposed to be marked
IMAGE_SCN_MEM_DISCARDABLE. The main, non-COMDAT .debug$S section had
this set, but any comdat ones were not being marked discardable because
there was no .section flag for it.
This change does two things:
- If the section name starts with .debug, implicitly mark the section as
discardable. This means we do the same thing as gas on .s files with
DWARF from GCC, which is important.
- Adds the 'D' flag to the .section directive on COFF to explicitly say
a section is discardable. We only emit this flag if the section name
does not start with .debug. This allows the user to explicitly tweak
their section flags without worrying about magic section names.
The only thing you can't do in this scheme is to create a
non-discardable section with a name starting with ".debug", but
hopefully users don't need that.
Reviewers: majnemer, rafael
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D24582
llvm-svn: 281554
If TBAA is on an intrinsic and it gets upgraded, it'll delete the call
instruction that we collected in a vector. Even if we were to use
WeakVH, it'll drop the TBAA and we'll hit the assert on the upgrade
path.
r263673 attempted to make sure the TBAA upgrade happens before the
intrinsics upgrade, but failed to account for all cases.
Instead of collecting instructions in a vector, this patch makes it
just upgrade the TBAA on the fly, because metadata are always
already loaded at this point.
Differential Revision: https://reviews.llvm.org/D24533
llvm-svn: 281549
Previously the prevailing information was not honored, and common
symbols could override a strong definition. This patch fixes that and
proposes the following semantics for commons: the client should mark
as prevailing the commons that it expects the LTO implementation to
merge (i.e. take the maximum size and alignment).
It implies that commons are allowed to have multiple prevailing
definitions.
Differential Revision: https://reviews.llvm.org/D24545
llvm-svn: 281538
Summary:
It was previously not possible for tools to use solely the stackmap
information emitted to reconstruct the return addresses of callsites in
the map, which is necessary to use the information to walk a stack. This
patch adds per-function callsite counts when emitting the stackmap
section in order to resolve the problem. Note that this slightly alters
the stackmap format, so external tools parsing these maps will need to
be updated.
**Problem Details:**
Records only store their offset from the beginning of the function they
belong to. While these records and the functions are output in program
order, it is not possible to determine where the end of one function's
records are without the callsite count when processing the records to
compute return addresses.
Patch by Kavon Farvardin!
Reviewers: atrick, ributzka, sanjoy
Subscribers: nemanjai
Differential Revision: https://reviews.llvm.org/D23487
llvm-svn: 281532
is a uint64_t. However, the getter function getFlags returned an unsigned,
and the hasProperty function used (1 << MCFlag) instead of (1ULL << MCFlag).
llvm-svn: 281483
Recommitting after fixing AsmParser Initialization.
Allow errors to be deferred and emitted as part of clean up to simplify
and shorten Assembly parser code. This will allow error messages to be
emitted in helper functions and be modified by the caller which has
better context.
As part of this many minor cleanups to the Parser:
* Unify parser cleanup on error
* Add Workaround for incorrect return values in ParseDirective instances
* Tighten checks on error-signifying return values for parser functions
and fix in-tree TargetParsers to be more consistent with the changes.
* Fix AArch64 test cases checking for spurious error messages that are
now fixed.
These changes should be backwards compatible with current Target Parsers
so long as the error status is correctly returned by the appropriate
functions.
Reviewers: rnk, majnemer
Subscribers: aemerson, jyknight, llvm-commits
Differential Revision: https://reviews.llvm.org/D24047
llvm-svn: 281336
descriptions now tag add instructions, and the Hexagon backend is using this to
identify loop induction statements.
Patch by Sam Parker and Sjoerd Meijer.
Differential Revision: https://reviews.llvm.org/D23601
llvm-svn: 281304
This should allow users of the library to get a range to iterate through
all the subcommands that are registered to the global parser. This
allows users to define subcommands in libraries that self-register to
have dispatch done at a different stage (like main). It allows for
writing code like the following:
for (auto *S : cl::getRegisteredSubcommands()) {
  if (*S) {
    // Dispatch on S->getName().
  }
}
This change also contains tests that show this usage pattern.
Reviewers: zturner, dblaikie, echristo
Subscribers: llvm-commits, mehdi_amini
Differential Revision: https://reviews.llvm.org/D24489
llvm-svn: 281290
This patch reverses the edge from DIGlobalVariable to GlobalVariable.
This will allow us to more easily preserve debug info metadata when
manipulating global variables.
Fixes PR30362. A program for upgrading test cases is attached to that
bug.
Differential Revision: http://reviews.llvm.org/D20147
llvm-svn: 281284
class.
SerializationTraits provides serialize and deserialize methods corresponding to
the earlier functions, but also provides a name for the type. In future, this
name will be used to render function signatures as strings, which will in turn
be used to negotiate and verify API support between RPC clients and servers.
llvm-svn: 281254
Allow errors to be deferred and emitted as part of clean up to simplify
and shorten Assembly parser code. This will allow error messages to be
emitted in helper functions and be modified by the caller which has
better context.
As part of this many minor cleanups to the Parser:
* Unify parser cleanup on error
* Add Workaround for incorrect return values in ParseDirective instances
* Tighten checks on error-signifying return values for parser functions
and fix in-tree TargetParsers to be more consistent with the changes.
* Fix AArch64 test cases checking for spurious error messages that are
now fixed.
These changes should be backwards compatible with current Target Parsers
so long as the error status is correctly returned by the appropriate
functions.
Reviewers: rnk, majnemer
Subscribers: aemerson, jyknight, llvm-commits
Differential Revision: https://reviews.llvm.org/D24047
llvm-svn: 281249
Unlike SDag, we use a separate G_GEP instruction (much simplified, only taking
a single byte offset) to preserve the pointer type information through
selection.
llvm-svn: 281205
Remove createNode() and any API that depended on it, and add
HasCreateNode to the list of checks for HasObsoleteCustomizations. Now
an ilist *never* allocates (this was already true for iplist).
This factors out all the differences between iplist and ilist. I'll aim
to rename both to "owning_ilist" eventually, to call out the interesting
(not exactly intrusive) ownership semantics. In the meantime, I've left
both names around to reduce code churn.
One of the deleted APIs is the ilist copy constructor. I've lifted up
and tested iplist::cloneFrom (ala simple_ilist::cloneFrom) as a
replacement.
Users of ilist<> and iplist<> that want the list to allocate nodes have
a few options:
- use std::list;
- use AllocatorList or BumpPtrList (or build a similarly trivial list);
- use cloneFrom (which is explicit at the call site); or
- allocate at the call site.
See r280573, r281177, r281181, and r281182 for examples of what to do if
you're updating out-of-tree code.
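For example, a minimal sketch of the second option (using the BumpPtrList typedef added in r281182):
  llvm::BumpPtrList<int> List;  // AllocatorList backed by a BumpPtrAllocator
  List.push_back(7);            // the list, not the caller, allocates the node
  List.push_back(42);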
llvm-svn: 281184
- Add AllocatorList, a non-intrusive list that owns an LLVM-style
allocator and provides a std::list-like interface (trivially built on
top of simple_ilist),
- add a typedef (and unit tests) for BumpPtrList, and
- use BumpPtrList for the list of llvm::yaml::Token (i.e., TokenQueueT).
TokenQueueT has no need for the complexity of an intrusive list. The
only reason to inherit from ilist was to customize the allocator.
TokenQueueT was the only example in-tree of using ilist<> in a truly
non-intrusive way.
Moreover, this removes the final use of the non-intrusive
ilist_traits<>::createNode (after r280573, r281177, and r281181). I
have a WIP patch that removes this customization point (and the API that
relies on it) that I plan to commit soon.
Note: AllocatorList owns the allocator, which limits the viable API
(e.g., splicing must be on the same list). For now I've left out
any problematic API. It wouldn't be hard to split AllocatorList into
two layers: an Impl class that calls DerivedT::getAlloc (via CRTP), and
derived classes that handle Allocator ownership/reference/etc semantics;
and then implement splice with appropriate assertions; but TBH we should
probably just customize the std::list allocators at that point.
llvm-svn: 281182
Force IVUsers to be moved instead of copied, properly update Parent
pointers in IVStrideUse when IVUsers is moved, and make sure we have
move constructors available in iplist and ilist.
I came across this in a WIP patch that deleted the copy constructors
from ilist. I was surprised to find that IVUsersAnalysis couldn't be
registered in the new pass manager.
It's not clear to me whether IVUsers was getting moved only when empty,
but if it was being moved when it was non-empty then this fixes a
pointer invalidation bug and should give some sort of speedup. Note
that the bugfix would be necessary even for a copy constructor.
llvm-svn: 281181
ilist_iterator::reset was unnecessary API, and wasn't any clearer (or
safer) at the call site than constructing a temporary and assigning it
to the iterator.
llvm-svn: 281175
Now that MachineBasicBlock::reverse_instr_iterator knows when it's at
the end (since r281168 and r281170), implement
MachineBasicBlock::reverse_iterator directly on top of an
ilist::reverse_iterator by adding an IsReverse template parameter to
MachineInstrBundleIterator. This replaces another hard-to-reason-about
use of std::reverse_iterator on list iterators, matching the changes for
ilist::reverse_iterator from r280032 (see the "out of scope" section at
the end of that commit message). MachineBasicBlock::reverse_iterator
now has a handle to the current node and has obvious invalidation
semantics.
r280032 has a more detailed explanation of how list-style reverse
iterators (invalidated when the pointed-at node is deleted) are
different from vector-style reverse iterators like std::reverse_iterator
(invalidated on every operation). A great motivating example is this
commit's changes to lib/CodeGen/DeadMachineInstructionElim.cpp.
Note: If your out-of-tree backend deletes instructions while iterating
on a MachineBasicBlock::reverse_iterator or converts between
MachineBasicBlock::iterator and MachineBasicBlock::reverse_iterator,
you'll need to update your code in similar ways to r280032. The
following table might help:
  [Old]                            ==>  [New]
  delete &*RI, RE = end()               delete &*RI++
  RI->erase(), RE = end()               RI++->erase()
  reverse_iterator(I)                   std::prev(I).getReverse()
  reverse_iterator(I)                   ++I.getReverse()
  --reverse_iterator(I)                 I.getReverse()
  reverse_iterator(std::next(I))        I.getReverse()
  RI.base()                             std::prev(RI).getReverse()
  RI.base()                             ++RI.getReverse()
  --RI.base()                           RI.getReverse()
  std::next(RI).base()                  RI.getReverse()
(For more details, have a look at r280032.)
llvm-svn: 281172
Add an assertion to the MachineInstrBundleIterator from instr_iterator
that the underlying iterator is valid. This is possible know that we
can check ilist_node::isSentinel (since r281168), and is consistent with
the constructors from MachineInstr* and MachineInstr&.
Avoiding the new assertion in operator== and operator!= requires four
(!!!!) new overloads each.
(As an aside, I'm strongly in favour of:
- making the conversion from instr_iterator explicit;
- making the conversion from pointer explicit;
- making the conversion from reference explicit; and
- removing all the extra overloads of operator== and operator!= except
const_instr_iterator.
I'm not signing up for that at this point, but being clear about when
something is an MachineInstr-iterator (possibly instr_end()) vs
MachineInstr-bundle-iterator (possibly end()) vs MachineInstr* (possibly
nullptr) vs MachineInstr& (known valid) would surely make code
cleaner... and it would remove a ton of boilerplate from
MachineInstrBundleIterator operators.)
llvm-svn: 281170
This is a prep commit before fixing MachineBasicBlock::reverse_iterator
invalidation semantics, ala r281167 for ilist::reverse_iterator. This
changes MachineBasicBlock::Instructions to track which node is the
sentinel regardless of LLVM_ENABLE_ABI_BREAKING_CHECKS.
There's almost no functionality change (aside from ABI). However, in
the rare configuration:
#if !defined(NDEBUG) && !defined(LLVM_ENABLE_ABI_BREAKING_CHECKS)
the isKnownSentinel() assertions in ilist_iterator<>::operator* suddenly
have teeth for MachineInstr. If these assertions start firing for your
out-of-tree backend, have a look at the suggestions in the commit
message for r279314, and at some of the commits leading up to it that
avoid dereferencing the end() iterator.
llvm-svn: 281168
This adds two declarative configuration options for intrusive lists
(available for simple_ilist, iplist, and ilist). Both of these options
affect ilist_node interoperability and need to be passed both to the
node and the list. Instead of adding a new traits class, they're
specified as optional template parameters (in any order).
The two options:
1. Pass ilist_sentinel_tracking<true> or ilist_sentinel_tracking<false>
to control whether there's a bit on ilist_node "prev" pointer
indicating whether it's the sentinel. The default behaviour is to
use a bit if and only if LLVM_ENABLE_ABI_BREAKING_CHECKS.
2. Pass ilist_tag<TagA> and ilist_tag<TagB> to allow insertion of a
single node into two different lists (simultaneously).
I have an immediate use-case for (1) ilist_sentinel_tracking: fixing the
validation semantics of MachineBasicBlock::reverse_iterator to match
ilist::reverse_iterator (ala r280032: see the comments at the end of the
commit message there). I'm adding (2) ilist_tag in the same commit to
validate that the options framework supports expansion. Justin Bogner
mentioned this might enable a possible cleanup in SelectionDAG, but I'll
leave this to others to explore. In the meantime, the unit tests and
the comments for simple_ilist and ilist_node have usage examples.
Note that there's a layer of indirection to support optional,
out-of-order template parameters. Internal classes are templated on an
instantiation of the non-variadic ilist_detail::node_options.
User-facing classes use ilist_detail::compute_node_options to compute
the correct instantiation of ilist_detail::node_options.
The comments for ilist_detail::is_valid_option describe how to add new
options (e.g., ilist_packed_int<int NumBits>).
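A condensed sketch of option (2), along the lines of those examples (TagA and TagB are arbitrary empty tag structs):
  struct TagA {};
  struct TagB {};
  struct Node : llvm::ilist_node<Node, llvm::ilist_tag<TagA>>,
                llvm::ilist_node<Node, llvm::ilist_tag<TagB>> {};
  llvm::simple_ilist<Node, llvm::ilist_tag<TagA>> ListA;
  llvm::simple_ilist<Node, llvm::ilist_tag<TagB>> ListB;
  Node N;
  ListA.push_back(N);  // N is linked into both lists simultaneously,
  ListB.push_back(N);  // through its two independent sets of links.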
llvm-svn: 281167
Summary:
An IR load can be invariant, dereferenceable, neither, or both. But
currently, MI's notion of invariance is IR-invariant &&
IR-dereferenceable.
This patch splits up the notions of invariance and dereferenceability at
the MI level. It's NFC, though it adds some probably-unnecessary
"is-dereferenceable" checks, which we can remove later if desired.
Reviewers: chandlerc, tstellarAMD
Subscribers: jholewinski, arsenm, nemanjai, llvm-commits
Differential Revision: https://reviews.llvm.org/D23371
llvm-svn: 281151
should have been (1ULL << MCID::XYZ). Currently this works because enum Flag
has 31 values, but extending it will result in compile warnings/errors.
This was part of the accepted patch in https://reviews.llvm.org/D23601, but
it was suggested to apply this first as a separate patch.
llvm-svn: 281149
... and make a few ilist-internal API changes, in preparation for
changing how ilist_node is templated. The only effect for ilist users
should be changing the friend target from llvm::ilist_node_access to
llvm::ilist_detail::NodeAccess (which is only necessary when they
inherit privately from ilist_node).
- Split out SpecificNodeAccess, which has overloads of getNodePtr and
getValuePtr that are untemplated.
- Use more typedefs to prevent more changes later.
- Force inheritance to use *NodeAccess (to emphasize that ilist *users*
shouldn't be doing this).
There should be no functionality change here.
llvm-svn: 281142
Summary:
I want to separate out the notions of invariance and dereferenceability
at the MI level, so that they correspond to the equivalent concepts at
the IR level. (Currently an MI load is MI-invariant iff it's
IR-invariant and IR-dereferenceable.)
First step is renaming this function.
Reviewers: chandlerc
Subscribers: MatzeB, jfb, llvm-commits
Differential Revision: https://reviews.llvm.org/D23370
llvm-svn: 281125
I tested this with "ninja check-llvm-codegen" on a Release build with
all architectures enabled, and again with a Debug build on x86.
Found with llvm-cov.
Differential Revision: https://reviews.llvm.org/D24433
llvm-svn: 281120
SmallVectors are convenient, but they don't cover every use case.
In particular, they are fairly large (3 pointers + one element) and
there is no way to take ownership of the buffer to put it somewhere
else. This patch therefore adds a lower-level interface that works with
any buffer.
llvm-svn: 281082
This simplifies a lot of code, and will actually be necessary for
an upcoming patch to serialize TPI record hash values.
The idea before was that visitors should be examining records, not
modifying them. But this is no longer true with a visitor that
constructs a CVRecord from Yaml. To handle this until now, we
were doing some fixups on CVRecord objects at a higher level, but
the code is really awkward, and it makes sense to just have the
visitor write the bytes into the CVRecord. In doing so I uncovered
a few bugs related to `Data` and `RawData` and fixed those.
Reviewed By: rnk
Differential Revision: https://reviews.llvm.org/D24362
llvm-svn: 281067
This writes the full sequence of type records described in
Yaml to the TPI stream of the PDB file.
Reviewed By: rnk
Differential Revision: https://reviews.llvm.org/D24316
llvm-svn: 281063
These were added in r281051, which, I am embarrassed to admit, has an
incomplete commit message that I forgot to update before pushing. You
can ignore element (2) in that list.
llvm-svn: 281054
1) On some platforms, sizeof(SDNodeBits) == 1, so we were only copying
one byte out of the bitfield when we wanted to copy two, and we were
leaving half of the return value of getRawSubclassData() undefined.
2) Something something bitfields, not sure exactly what the issue or fix
is, yet. (TODO)
Summary:
Previously we were assuming that SDNodeBits covered all of SDNode's
anonymous subclass data bitfield union. But that's not right; it might
have size 1, in which case it clearly doesn't.
This patch adds a field that does cover the whole union and adds
static_asserts to ensure it stays correct.
Reviewers: ahatanak, chandlerc
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D24223
llvm-svn: 281051
These instructions were only necessary when type information was stored in the
MachineInstr (because only generic MachineInstrs possessed a type). Now that
it's in MachineRegisterInfo, COPY and PHI work fine.
llvm-svn: 281037
We want each register to have a canonical type, which means the best place to
store this is in MachineRegisterInfo rather than on every MachineInstr that
happens to use or define that register.
Most changes following from this are pretty simple (you need an MRI anyway if
you're going to be doing any transformations, so just check the type there).
But legalization doesn't really want to check redundant operands (when, for
example, a G_ADD only ever has one type) so I've made use of MCInstrDesc's
operand type field to encode these constraints and limit legalization's work.
As an added bonus, more validation is possible, both in MachineVerifier and
MachineIRBuilder (coming soon).
llvm-svn: 281035
Summary:
While working on mapping attributes in the C API, it became clear that the recent changes to the API on the C++ side left Function and Call/Invoke with an attribute API that grew in an ad hoc manner. This makes it difficult to work with, because one doesn't know which overloads exist and which do not.
Make sure that getter/setter functions exist for both the enum and string versions. Remove inconsistent getters/setters, unless they have many call sites.
This should make it easier to work with attributes in the future.
This doesn't change how attribute works.
Reviewers: bkramer, whitequark, mehdi_amini, void
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D21514
llvm-svn: 281019
Mapping a yaml field to an object in code has always been
a stateless operation. You could still pass state by using the
`setContext` function of the YAMLIO object, but this represented
global state for the entire yaml input. In order to have
context-sensitive state, it is necessary to pass this state in
at the granularity of an individual mapping.
This patch adds support for this type of context-sensitive state.
You simply pass an additional argument of type T to the
`mapRequired` or `mapOptional` functions, and provided you have
specialized a `MappingContextTraits<U, T>` class with the
appropriate mapping function, you can pass this context into
the mapping function.
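A minimal sketch of such a specialization (Header and ParseContext are hypothetical types, not from the patch):
  struct Header { uint32_t Size; };
  struct ParseContext { unsigned Version; };  // hypothetical per-mapping state

  namespace llvm {
  namespace yaml {
  template <> struct MappingContextTraits<Header, ParseContext> {
    static void mapping(IO &IO, Header &H, ParseContext &Ctx) {
      // Ctx is the per-mapping state passed in by the caller,
      // rather than global setContext() state.
      IO.mapRequired("Size", H.Size);
    }
  };
  } // end namespace yaml
  } // end namespace llvm

  // At the call site, inside some enclosing mapping:
  //   IO.mapRequired("Header", Obj.Hdr, Ctx);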
Reviewed By: chandlerc
Differential Revision: https://reviews.llvm.org/D24162
llvm-svn: 280977
And associated commits, as they broke the Thumb bots.
This reverts commit r280935.
This reverts commit r280891.
This reverts commit r280888.
llvm-svn: 280967
Summary:
This allows specifying instructions that are available only in a specific assembler variant. If AsmVariantName is specified, the instruction will be present only in the MatchTable for that variant; otherwise the assembler variants will be determined based on the AsmString.
This also allows splitting assembler match tables in the same way as is done in the disassembler.
Reviewers: ab, tstellarAMD, craig.topper, vpykhtin
Subscribers: wdng
Differential Revision: https://reviews.llvm.org/D24249
llvm-svn: 280952
Refactor replaceDominatedUsesWith to have a flag to control whether to replace uses in BB itself.
Summary: This is in preparation for the LoopSink pass, which calls replaceDominatedUsesWith to update uses after sinking.
llvm-svn: 280949
CGP tail-duplicates rets into blocks that end with a call that feeds the ret.
This puts the call in tail position, potentially allowing the DAG builder to
lower it as a tail call. To avoid tail duplication in cases where we won't
form the tail call, CGP tried to predict whether this is going to be possible,
and avoids doing it when lowering as a tail call will definitely fail.
However, it was being too conservative by always throwing away calls to
functions with a signext/zeroext attribute on the return type.
Instead, we can use the same logic the builder uses to determine whether the
attributes work out.
Differential Revision: https://reviews.llvm.org/D24315
llvm-svn: 280894
This is a port of XRay to ARM 32-bit, without Thumb support yet. The XRay instrumentation support is moving up to AsmPrinter.
This is one of 3 commits to different repositories of XRay ARM port. The other 2 are:
1. https://reviews.llvm.org/D23932 (Clang test)
2. https://reviews.llvm.org/D23933 (compiler-rt)
Differential Revision: https://reviews.llvm.org/D23931
llvm-svn: 280888
When branching to a block that immediately tail calls, it is possible to fold
the call directly into the branch if the call is direct and there is no stack
adjustment, saving one byte.
Example:
define void @f(i32 %x, i32 %y) {
entry:
  %p = icmp eq i32 %x, %y
  br i1 %p, label %bb1, label %bb2
bb1:
  tail call void @foo()
  ret void
bb2:
  tail call void @bar()
  ret void
}
before:
f:
        movl 4(%esp), %eax
        cmpl 8(%esp), %eax
        jne .LBB0_2
        jmp foo
.LBB0_2:
        jmp bar
after:
f:
        movl 4(%esp), %eax
        cmpl 8(%esp), %eax
        jne bar
.LBB0_1:
        jmp foo
I don't expect any significant size savings from this (on a Clang bootstrap I
saw 288 bytes), but it does make the code a little tighter.
This patch only does 32-bit, but 64-bit would work similarly.
Differential Revision: https://reviews.llvm.org/D24108
llvm-svn: 280832
Summary:
Previously we were trying to represent this with the "contains" list of
the .cv_inline_linetable directive, which was not enough information.
Now we directly represent the chain of inlined call sites, so we know
what location to emit when we encounter a .cv_loc directive of an inner
inlined call site while emitting the line table of an outer function or
inlined call site. Fixes PR29146.
Also fixes PR29147, where we would crash when .cv_loc directives crossed
sections. Now we write down the section of the first .cv_loc directive,
and emit an error if any other .cv_loc directive for that function is in
a different section.
Also fixes issues with discontiguous inlined source locations, like in
this example:
volatile int unlikely_cond = 0;
extern void __declspec(noreturn) abort();
__forceinline void f() {
  if (!unlikely_cond) abort();
}
int main() {
  unlikely_cond = 0;
  f();
  unlikely_cond = 0;
}
Previously our tables gave bad location information for the 'abort'
call, and the debugger wouldn't show the inlined stack frame for 'f'.
It is important to emit good line tables for this code pattern, because
it comes up whenever an asan bug occurs in an inlined function. The
__asan_report* stubs are generally placed after the normal function
epilogue, leading to discontiguous regions of inlined code.
Reviewers: majnemer, amccarth
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D24014
llvm-svn: 280822
This adds a copy of the demangler in libcxxabi.
The code also has no dependencies on anything else in LLVM. To enforce
that I added it as another library. That way a BUILD_SHARED_LIBS build
will fail if anyone adds a use of StringRef, for example.
The no llvm dependency combined with the fact that this has to build
on linux, OS X and Windows required a few changes to the code. In
particular:
* No constexpr.
* No alignas.
On OS X at least this library has only one global symbol:
__ZN4llvm16itanium_demangleEPKcPcPmPi
My current plan is:
* Commit something like this
* Change lld to use it
* Change lldb to use it as the fallback
* Add a few #ifdefs so that exactly the same file can be used in
libcxxabi to export abi::__cxa_demangle.
Once the fast demangler in lldb can handle any names, this
implementation can be replaced with it and we will have the one true
demangler.
llvm-svn: 280732
Currently the pass updates branch weights in the IR if the function has
any PGO info (i.e. the entry frequency is set). However, we could still
have regions of the CFG that do not have collected branch weights (e.g. a
cold region). In this case we'd use static estimates. Since static
estimates for branches are determined independently, they are
inconsistent. Updating them can "randomly" inflate block frequencies.
I've run into this in a completely cold loop of h264ref from
SPEC. -Rpass-with-hotness showed the loop to be completely cold during
inlining (before JT) but completely hot during vectorization (after JT).
The new testcase demonstrates the problem. We check array elements
against 1, 2 and 3 in a loop. The check against 3 is the loop-exiting
check. The block names should be self-explanatory.
In this example, jump threading incorrectly updates the weight of the
loop-exiting branch to 0, drastically inflating the frequency of the
loop (in the range of billions).
There is no run-time profile info for edges inside the loop, so branch
probabilities are estimated. These are the resulting branch and block
frequencies for the loop body:
        check_1 (16)
    (8) /      |
      eq_1     | (8)
        \      |
        check_2 (16)
    (8) /      |
      eq_2     | (8)
        \      |
        check_3 (16)
    (1) /      |
 (loop exit)   | (15)
               |
          (back edge)
First we thread eq_1 -> check_2 to check_3. Frequencies are updated to
remove the frequency of eq_1 from check_2 and then from the false edge
leaving check_2. Changed frequencies are highlighted with * *:
        check_1 (16)
    (8) /      |
      eq_1~    | (8)
      /        |
     /      check_2 (*8*)
    /   (8) /  |
    \     eq_2 | (*0*)
     \      \  |
      ` --- check_3 (16)
        (1) /      |
    (loop exit)    | (15)
                   |
              (back edge)
Next we thread eq_1 -> check_3 and eq_2 -> check_3 to check_1 as new
back edges. Frequencies are updated to remove the frequencies of eq_1 and
eq_2 from check_3 and then from the false edge leaving check_3 (changed
frequencies are highlighted with * *):
        check_1 (16)
    (8) /      |
      eq_1~    | (8)
      /        |
     /      check_2 (*8*)
    /   (8) /  |
   /--    eq_2~ | (*0*)
 (back edge)   |
           check_3 (*0*)
     (*0*) /      |
    (loop exit)   | (*0*)
                  |
             (back edge)
As a result, the loop exit edge ends up with 0 frequency, which in turn
makes the loop header have maximum frequency.
There are a few potential problems here:
1. The profile data seems odd. There is a single profile sample of the
loop being entered. On the other hand, there are no weights inside the
loop.
2. Based on static estimation we shouldn't set edges to "extreme"
values, i.e. extremely likely or unlikely.
3. We shouldn't create profile metadata that is calculated from static
estimation. I am not sure what the policy is, but it seems to make sense
to treat profile metadata as something that is known to originate from
profiling. Estimated probabilities should only be reflected in BPI/BFI.
Any one of these would probably fix the immediate problem. I went for 3
because I think it's a good policy to have and added a FIXME about 2.
Differential Revision: https://reviews.llvm.org/D24118
llvm-svn: 280713
Use ADT/BitmaskEnum for DINode::DIFlags for the following purposes:
* Get rid of unsigned int for flags to avoid problems on platforms with sizeof(int) < 4
* Flags are now strongly typed
Patch by: Victor Leschuk <vleschuk@gmail.com>
Differential Revision: https://reviews.llvm.org/D23766
llvm-svn: 280700
Use ADT/BitmaskEnum for DINode::DIFlags for the following purposes:
* Get rid of unsigned int for flags to avoid problems on platforms with sizeof(int) < 4
* Flags are now strongly typed
Patch by: Victor Leschuk <vleschuk@gmail.com>
Differential Revision: https://reviews.llvm.org/D23766
llvm-svn: 280686
The latest MSVC update apparently resolves the call from the
const ref variant to itself, leading to an infinite
recursion. It is not clear to me why the r-value overload is
not selected. `ValueT` is a pointer type, and the functional-style
cast in the call `insert_as(ValueT(V), LookupKey);` should result
in a r-value ref. A bug in MSVC?
Differential Revision: https://reviews.llvm.org/D23956
llvm-svn: 280685
CGP currently drops a select's MD_prof profile data when
generating the conditional branch, which can lead to bad
code layout. This patch fixes the issue.
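The essence of the fix is to carry the metadata over when the branch is created; a minimal sketch (variable names hypothetical):
  // SI is the SelectInst being expanded, BI the new conditional branch.
  if (MDNode *Prof = SI->getMetadata(LLVMContext::MD_prof))
    BI->setMetadata(LLVMContext::MD_prof, Prof);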
Differential Revision: http://reviews.llvm.org/D24169
llvm-svn: 280600
Because of the recent change to ODR type uniquing in the context,
we can reach types defined in another module during IR linking.
This triggered some assertions in case we IR link without starting
from an empty module. To alleviate that, we can self-map metadata
defined in the destination module so that they won't be visited.
Differential Revision: https://reviews.llvm.org/D23841
llvm-svn: 280599
The only intrusive thing about SparseBitVector's usage of ilist<> was
that new was usually called externally. There were no custom traits.
It seems like the reason to switch to ilist in r41855 was to avoid
pointer invalidation, but std::list<> has that feature too. Maybe
std::list<>::emplace makes this a little more obvious than it was then.
Switch over to std::list<> and simplify the code.
llvm-svn: 280573
Inheriting from std::iterator uses more boilerplate than manual
typedefs. Avoid that in both ilist_iterator and
MachineInstrBundleIterator.
This has the side effect of removing ilist_iterator from certain ADL
lookups in namespace std; calls to std::next need to be qualified by
"std::" that didn't have to before. The one case of this in-tree was
operating on a temporary, so I used the more compact operator++.
llvm-svn: 280570
Split out iplist_impl from iplist, and change SymbolTableList to inherit
directly from iplist_impl. This makes it more straightforward to add
new template parameters to iplist [*]:
- iplist_impl takes a "base" list that provides the intrusive
functionality (usually simple_ilist<T>) and a traits class.
- iplist no longer takes a "Traits" template parameter. It only takes
the value_type, T, and instantiates iplist_impl with simple_ilist<T>
and ilist_traits<T>.
- SymbolTableList now inherits from iplist_impl, instead of iplist.
Note for out-of-tree code: if you have an iplist whose second template
parameter was *not* the default (i.e., not ilist_traits<YourT>), you
have three options:
- Stop using a custom traits class, and instead specialize
ilist_traits<YourT>. This is the usual thing to do.
- Specialize iplist<YourT> to pass your custom traits class into
iplist_impl.
- Create your own trivial list type that passes your custom traits class
into iplist_impl (see SymbolTableList<> for an example).
[*]: The eventual goal is to start tracking a sentinel bit on the
MachineInstr list even when LLVM_ENABLE_ABI_BREAKING_CHECKS is off,
which will enable MachineBasicBlock::reverse_iterator to have normal
list invalidation semantics matching the new
iplist<>::reverse_iterator from r280032.
llvm-svn: 280569
Delete the dead code for Write(ilist_iterator) in the IR Verifier,
inline report(ilist_iterator) at its call sites in the MachineVerifier,
and use simple_ilist<>::iterator in SymbolTableListTraits.
The only remaining reference to ilist_iterator outside of the ilist
implementation is from MachineInstrBundleIterator. I'll get rid of that
in a follow-up.
llvm-svn: 280565
This test was using the wrong type, and so not actually testing much.
ilist_iterator constructors weren't going through ilist_node_access, so
they didn't actually work with private inheritance.
llvm-svn: 280564
1. 0xNN and NNh are accepted as valid hexadecimal numbers, but 0xNNh is not.
0xNN and NNh may come with an optional U or L suffix.
2. NNb is accepted as a valid binary (base-2) number, but 0bNN is not.
NNb may come with an optional U or L suffix.
Differential Revision: https://reviews.llvm.org/D22112
llvm-svn: 280555
Previously we were splitting our records at 0xFFFF bytes, which the
Microsoft tools don't like.
Should fix failure on the new Windows self-host buildbot.
This length appears in microsoft-pdb/PDB/dbi/dbiimpl.h
llvm-svn: 280522
For the store of a wide value merged from a pair of values, especially an int-fp pair,
it is sometimes more efficient to split it into separate narrow stores, which can
remove the bitwise instructions or sink them to colder places.
For now the feature is only enabled on the x86 target, and only the store of an
int-fp pair is split. The application scope may be extended in the future, given
supporting performance evidence.
Differential Revision: https://reviews.llvm.org/D22840
llvm-svn: 280505
A crash was possible if the match() method
was called on an object that had been moved from, or on an object
created with the empty constructor.
Testcases updated.
Differential Revision: https://reviews.llvm.org/D24123
llvm-svn: 280473
r280425 | dehao | 2016-09-01 16:15:50 -0700 (Thu, 01 Sep 2016) | 9 lines
Refactor LICM pass in preparation for LoopSink pass.
Summary: The LoopSink pass uses some common functions from LICM. This patch refactors the LICM code to make it usable by the LoopSink pass (https://reviews.llvm.org/D22778).
r280429 | dehao | 2016-09-01 16:31:25 -0700 (Thu, 01 Sep 2016) | 9 lines
Refactor LICM to expose canSinkOrHoistInst to LoopSink pass.
Summary: The LoopSink pass shares the canSinkOrHoistInst functionality with the LICM pass. This patch exposes this function in preparation for https://reviews.llvm.org/D22778.
llvm-svn: 280453