Summary:
I noticed that the .imports files emitted for distributed ThinLTO
backends do not have consistent ordering. This is because StringMap
iteration order is not guaranteed to be deterministic. Since we already
have a std::map with this information, used when emitting the individual
index files (ModuleToSummariesForIndex), use it for the imports files as
well.
This issue is likely causing some unnecessary rebuilds of the ThinLTO
backends in our distributed build system as the imports files are inputs
to those backends.
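For illustration only (not part of the change itself), here is a tiny standard-C++ sketch of why the std::map is the right iteration source; the container name matches the commit, its contents are made up:
```
#include <iostream>
#include <map>
#include <string>

int main() {
  // Hypothetical contents; the real map holds per-module summary info.
  std::map<std::string, unsigned> ModuleToSummariesForIndex;
  ModuleToSummariesForIndex["c.o"] = 3;
  ModuleToSummariesForIndex["a.o"] = 1;
  ModuleToSummariesForIndex["b.o"] = 2;

  // Always prints a.o, b.o, c.o -- a stable order for the .imports output,
  // unlike iteration over a hash-based container such as StringMap.
  for (const auto &Entry : ModuleToSummariesForIndex)
    std::cout << Entry.first << "\n";
}
```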
Reviewers: pcc, steven_wu, mehdi_amini
Subscribers: mehdi_amini, inglorion, eraman, steven_wu, dexonsmith, llvm-commits
Differential Revision: https://reviews.llvm.org/D48783
llvm-svn: 336721
When manually finishing the object writer in dsymutil, it's possible
that there are pending labels that haven't been resolved. This results
in an assertion when the assembler tries to fixup a label that doesn't
have an address yet.
Differential revision: https://reviews.llvm.org/D49131
llvm-svn: 336688
This accessor is useful and could be slightly more efficient than
Str.val().data() because you can avoid StringRef instantiation.
Differential Revision: https://reviews.llvm.org/D49133
llvm-svn: 336683
debug compilation dir when compiling assembly files with -g.
Part of PR38050.
Patch by Siddhartha Bagaria!
Differential Revision: https://reviews.llvm.org/D48988
llvm-svn: 336680
Parsing invalid UTF-8 input is now a parse error.
Creating JSON values from invalid UTF-8 now triggers an assertion, and
(in no-assert builds) substitutes the unicode replacement character.
Strings retrieved from json::Value are always valid UTF-8.
llvm-svn: 336657
Summary:
Support for this option is needed for building Linux kernel.
This is a very frequently requested feature by kernel developers.
More details: https://lkml.org/lkml/2018/4/4/601
GCC option description for -fdelete-null-pointer-checks:
Assume that programs cannot safely dereference null pointers,
and that no code or data element resides at address zero.
-fno-delete-null-pointer-checks is the inverse of this, implying that
null pointer dereferencing is not undefined.
This feature is implemented in this CL as the LLVM IR function attribute
"null-pointer-is-valid"="true" (under review at D47894).
The CL updates several passes that assumed null pointer dereferencing is
undefined to not optimize when the "null-pointer-is-valid"="true"
attribute is present.
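As a rough sketch (not code from the patch), a pass could gate its null-based reasoning on the attribute roughly like this; the helper name is invented:
```
#include "llvm/IR/Function.h"

// Illustrative helper (not from the patch): returns true when an optimization
// must NOT assume that dereferencing a null pointer in F is undefined.
static bool nullPointerIsValid(const llvm::Function &F) {
  return F.hasFnAttribute("null-pointer-is-valid") &&
         F.getFnAttribute("null-pointer-is-valid").getValueAsString() == "true";
}
```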
Reviewers: t.p.northover, efriedma, jyknight, chandlerc, rnk, srhines, void, george.burgess.iv
Reviewed By: efriedma, george.burgess.iv
Subscribers: eraman, haicheng, george.burgess.iv, drinkcat, theraven, reames, sanjoy, xbolva00, llvm-commits
Differential Revision: https://reviews.llvm.org/D47895
llvm-svn: 336613
I don't think there's a need to use `const char *`. In most (probably all?)
cases, we need the length of the name later, so discarding the length will
lead to wasted effort.
Differential Revision: https://reviews.llvm.org/D49046
llvm-svn: 336612
delegate method (and unit test).
The name 'replace' better captures what the old delegate method did: it
returned materialization responsibility for a set of symbols to the VSO.
The new delegate method delegates responsibility for a set of symbols to a new
MaterializationResponsibility instance. This can be used to split responsibility
between multiple threads, or multiple materialization methods.
llvm-svn: 336603
Summary:
This patch adds support for the atomicrmw instructions and the strong
cmpxchg instruction to the IRTranslator.
I've left out weak cmpxchg because LangRef.rst isn't entirely clear on what
difference it makes to the backend. As far as I can tell from the code, it
only matters to AtomicExpandPass which is run at the LLVM-IR level.
Reviewers: ab, t.p.northover, qcolombet, rovka, aditya_nandakumar, volkan, javed.absar
Reviewed By: qcolombet
Subscribers: kristof.beyls, javed.absar, igorb, llvm-commits
Differential Revision: https://reviews.llvm.org/D40092
llvm-svn: 336589
Summary:
This adds a reverse transform for the instcombine canonicalizations
that were added in D47980, D47981.
As discussed later, that was worse at least for the code size,
and potentially for the performance, too.
https://rise4fun.com/Alive/Zmpl
Reviewers: craig.topper, RKSimon, spatel
Reviewed By: spatel
Subscribers: reames, llvm-commits
Differential Revision: https://reviews.llvm.org/D48768
llvm-svn: 336585
This patch ports hasDedicatedExits, getUniqueExitBlocks and
getUniqueExitBlock in Loop to LoopBase so that they can be used
from other LoopBase sub-classes.
Reviewers: chandlerc, sanjoy, hfinkel, fhahn
Reviewed By: chandlerc
Differential Revision: https://reviews.llvm.org/D48817
llvm-svn: 336572
Summary:
This patch adds a new "integer" ValueType, and renames Number -> Double.
This allows us to preserve the full precision of int64_t when parsing integers
from the wire, or constructing from an integer.
The API is unchanged, other than giving asInteger() a clearer contract.
In addition, always output doubles with enough precision that parsing will
reconstruct the same double.
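The round-trip idea, shown with plain standard C++ rather than the JSON library itself (names are illustrative): printing a double with max_digits10 significant digits guarantees the parsed value compares equal to the original.
```
#include <cassert>
#include <limits>
#include <sstream>
#include <string>

int main() {
  double Original = 0.1 + 0.2; // not exactly representable in binary
  std::ostringstream OS;
  OS.precision(std::numeric_limits<double>::max_digits10); // 17 for IEEE double
  OS << Original;

  double Parsed = std::stod(OS.str());
  assert(Parsed == Original); // round-trips to the identical value
  (void)Parsed;
}
```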
Reviewers: simon_tatham
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D46209
llvm-svn: 336541
Summary:
This consists of four main parts:
- a type json::Expr representing JSON values of dynamic kind, which can be
composed, inspected, and modified
- a JSON parser from string -> json::Expr
- a JSON printer from json::Expr -> string, with optional pretty-printing
- a convention for mapping json::Expr <=> native types (fromJSON/toJSON)
Mapping functions are provided for primitives (e.g. int, vector) and the
ObjectMapper helper helps implement fromJSON for struct/object types.
Based on clangd's usage, a couple of places I'd appreciate review attention:
- fromJSON returns only bool. A richer error-signaling mechanism may be useful
to provide useful messages, or let recursive fromJSONs (containers/structs)
do careful error recovery.
- should json::obj always be explicitly written (like json::ary)?
- there's no streaming parse API. I suspect there are some simple wins like
a callback API where the document is a long array, and each element is small.
But this can probably be bolted on easily when we see the need.
Reviewers: bkramer, labath
Subscribers: mgorny, ilya-biryukov, ioeric, MaskRay, llvm-commits
Differential Revision: https://reviews.llvm.org/D45753
llvm-svn: 336534
When emitting the DWARF accelerator tables from dsymutil, we don't have
a DwarfDebug instance and we use a custom class to represent Dwarf
compile units. This patch adds an interface AccelTableWriterInfo to
abstract these from the Dwarf5AccelTableWriter, so we can have a custom
implementation for this in dsymutil.
Differential revision: https://reviews.llvm.org/D49031
llvm-svn: 336529
This allows us to handle masking in a very similar way to the default rounding version that uses llvm.fma.
I had to add new rounding mode CodeGenOnly instructions to support isel when we can't find a movss to grab the upper bits from to use the b_Int instruction.
Fast-isel tests have been updated to match new clang codegen.
We are currently having trouble folding fneg into the new intrinsic. I'm going to correct that in a follow up patch to keep the size of this one down.
A future patch will also remove the old intrinsics.
llvm-svn: 336506
Splits off isKnownNeverZeroFloat to handle +/- 0 float cases.
This will make it easier to be more aggressive with the integer isKnownNeverZero tests (similar to ValueTracking), use computeKnownBits etc.
Differential Revision: https://reviews.llvm.org/D48969
llvm-svn: 336492
The checking is done deeper inside MachineBasicBlock, but this will
hopefully help to find issues when porting the machine outliner to a
target where Liveness tracking is broken (like ARM).
Differential Revision: https://reviews.llvm.org/D49023
llvm-svn: 336481
For certain APIs, the return value of the function does not distinguish
between failure (which populates errno) and other non-error conditions
(which do not set errno).
For example, `fgets` returns `NULL` both when an error has occurred, or
upon EOF. If `errno` is already `EINTR` for whatever reason, then
```
RetryAfterSignal(nullptr, fgets, ...);
```
on a stream that has reached EOF would infinite loop.
Fix this by setting `errno` to `0` before each attempt in
`RetryAfterSignal`.
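A simplified, self-contained sketch of the fixed pattern, not the actual RetryAfterSignal implementation; the helper name and shape are assumptions:
```
#include <cerrno>
#include <cstdio>

// Simplified retry helper: keep calling F while it reports the failure value
// *and* errno says the call was interrupted by a signal.
template <typename FailT, typename Fn, typename... Args>
auto retryAfterSignal(const FailT &Fail, Fn &&F, Args &&...As)
    -> decltype(F(As...)) {
  decltype(F(As...)) Res;
  do {
    errno = 0; // the fix: a stale EINTR must not turn EOF into an endless loop
    Res = F(As...);
  } while (Res == Fail && errno == EINTR);
  return Res;
}

int main() {
  char Buf[128];
  // fgets returns nullptr both on error and on EOF; clearing errno first lets
  // EOF (which leaves errno untouched) terminate the loop.
  char *Line =
      retryAfterSignal(static_cast<char *>(nullptr), fgets, Buf,
                       static_cast<int>(sizeof(Buf)), stdin);
  (void)Line;
}
```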
Patch by Ricky Zhou!
Differential Revision: https://reviews.llvm.org/D48755
llvm-svn: 336479
appendToVector used the wrong overload of SmallVector::append, resulting
in it appending the same element to a vector `getSize()` times. This did
not cause a problem when initially committed because appendToVector was
only used to append 1-element operands.
This changes appendToVector to use the correct overload of append().
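For reference, a small sketch of the two overloads that were being confused (illustrative values, not the DIExpression code itself):
```
#include "llvm/ADT/SmallVector.h"
#include <cassert>

int main() {
  llvm::SmallVector<int, 8> Ops = {1, 2, 3};
  llvm::SmallVector<int, 8> Out;

  // Wrong overload for this purpose: append(NumCopies, Elt) repeats one element.
  Out.append(Ops.size(), Ops[0]); // Out == {1, 1, 1}
  assert(Out.size() == 3 && Out[2] == 1);

  // Intended overload: append(begin, end) copies the whole range.
  Out.clear();
  Out.append(Ops.begin(), Ops.end()); // Out == {1, 2, 3}
  assert(Out[2] == 3);
}
```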
Testing: ./unittests/IR/IRTests --gtest_filter='*DIExpressionTest*'
llvm-svn: 336466
The replaceAllDbgUsesWith utility helps passes preserve debug info when
replacing one value with another.
This improves upon the existing insertReplacementDbgValues API by:
- Updating debug intrinsics in-place, while preventing use-before-def of
the replacement value.
- Falling back to salvageDebugInfo when a replacement can't be made.
- Moving the responsibility for rewriting llvm.dbg.* DIExpressions into
common utility code.
Along with the API change, this teaches replaceAllDbgUsesWith how to
create DIExpressions for three basic integer and pointer conversions:
- The no-op conversion. Applies when the values have the same width, or
have bit-for-bit compatible pointer representations.
- Truncation. Applies when the new value is wider than the old one.
- Zero/sign extension. Applies when the new value is narrower than the
old one.
Testing:
- check-llvm, check-clang, a stage2 `-g -O3` build of clang,
regression/unit testing.
- This resolves a number of mis-sized dbg.value diagnostics from
Debugify.
Differential Revision: https://reviews.llvm.org/D48676
llvm-svn: 336451
The enhanced version will be used in D48893 and related patches
and an almost identical (fadd is different) version is proposed
in D28907, so adding this as a preliminary step.
llvm-svn: 336444
Previously we only iterated over functions reachable from the set of
external functions in the module. But since some of the passes under
this (notably the always-inliner and coroutine lowerer) are required for
correctness, they need to run over everything.
This just adds an extra layer of iteration over the CallGraph to keep
track of which functions we've already visited and get the next batch of
SCCs.
Should fix PR38029.
llvm-svn: 336419
The intrinsics can be implemented with a f32/f64 llvm.fma intrinsic and an insert into a zero vector.
There are a couple regressions here due to SelectionDAG not being able to pull an fneg through an extract_vector_elt. I'm not super worried about this though as InstCombine should be able to do it before we get to SelectionDAG.
llvm-svn: 336416
Summary:
Error's new operator<< is the first way to print an error without consuming it.
formatv() can now print objects with an operator<< that works with raw_ostream.
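A hedged usage sketch of the formatv() side of this; the Widget type and message are invented for illustration:
```
#include "llvm/Support/FormatVariadic.h"
#include "llvm/Support/raw_ostream.h"

// An arbitrary user type that only knows how to print to a raw_ostream.
struct Widget {
  int Id;
};
llvm::raw_ostream &operator<<(llvm::raw_ostream &OS, const Widget &W) {
  return OS << "Widget#" << W.Id;
}

int main() {
  Widget W{42};
  // After this change, formatv can fall back to the raw_ostream operator<<.
  llvm::errs() << llvm::formatv("made a {0}\n", W);
}
```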
Reviewers: bkramer
Subscribers: mgorny, llvm-commits
Differential Revision: https://reviews.llvm.org/D48966
llvm-svn: 336412
This upgrades all of the intrinsics to use fneg instructions to convert fma into fmsub/fnmsub/fnmadd/fmsubadd. And uses a select instruction for masking.
This matches how clang uses the intrinsics these days.
llvm-svn: 336409
Summary: The lib paths are not correctly picked up for OpenEmbedded sysroots
(like arm-oe-linux-gnueabi). I fix this in a follow-up clang patch. But in
order to add the correct libs I need to detect if the vendor is oe. For this
reason, it is first necessary to teach llvm to detect oe vendor, which is what
this patch does.
Reviewers: chandlerc, compnerd, rengolin, javed.absar
Reviewed By: compnerd
Subscribers: kristof.beyls, dexonsmith, llvm-commits
Differential Revision: https://reviews.llvm.org/D48861
llvm-svn: 336401
writing them to a buffer and re-loading them.
Also introduces a multithreaded variant of SimpleCompiler
(MultiThreadedSimpleCompiler) for compiling IR concurrently on multiple
threads.
These changes are required to JIT IR on multiple threads correctly.
No test case yet. I will be looking at how to modify LLI / LLJIT to test
multithreaded JIT support soon.
llvm-svn: 336385
that has required alignment. This avoids issues that keep coming up with
function pointers being less aligned.
I'm pretty annoyed that we can't take advantage of function alignment
even on platforms where they *are* aligned, but build modes and other
things make taking advantage of it somewhere between hard and
impossible. The best case scenario would still embed various build modes
into the ABI causing really hard to debug issues if you compiled one
object file differently from another. =/
This should at least bring the bots back that were having trouble with
this.
llvm-svn: 336337
There's a regression in here due to inability to combine fneg inputs of X86ISD::FMSUB/FNMSUB/FNMADD nodes.
More removals to come, but I wanted to stop and fix the regression that showed up in this first.
llvm-svn: 336303
It appears that the function pointer we use there isn't reliably 4-byte
aligned. I have no idea why or how we could correct this, so for now we
just regress the Windows performance some.
Someone with access to Windows could try working on a fix. At the very
least we could use a double indirection rather than a table, but maybe
there is some way to fully restore this optimization. I don't want to
play too much with this when I don't have access to the platform and
this at least should restore the last bots.
llvm-svn: 336178
Putting `sizeof(T) <= 16` into the parameter of a `std::conditional`
causes every version of MSVC I've tried to crash:
https://godbolt.org/g/eqVULL
Really frustrating, but an extra layer of indirection through an
instantiated type gives a working way to access this computed constant.
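A distilled sketch of the workaround pattern under assumed names (the real code lives in the unique_function implementation):
```
#include <type_traits>

// Direct form that reportedly crashed MSVC when used inside unique_function:
//   std::conditional<sizeof(T) <= 16, InlineStorage, OutOfLineStorage>
//
// Workaround: compute the constant inside a separately instantiated type and
// refer to that, adding one level of indirection.
template <typename T> struct StorageTraits {
  static const bool UseInlineStorage = sizeof(T) <= 16; // hypothetical threshold
};

struct InlineStorage {};
struct OutOfLineStorage {};

template <typename T>
using StorageFor =
    typename std::conditional<StorageTraits<T>::UseInlineStorage, InlineStorage,
                              OutOfLineStorage>::type;

static_assert(std::is_same<StorageFor<char>, InlineStorage>::value, "");
static_assert(std::is_same<StorageFor<char[64]>, OutOfLineStorage>::value, "");

int main() {}
```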
llvm-svn: 336170
Summary:
This patch is the first in a series of patches related to the [[ http://lists.llvm.org/pipermail/llvm-dev/2018-June/123883.html | RFC - A new dominator tree updater for LLVM ]].
This patch introduces the DomTreeUpdater class, which provides a cleaner API to perform updates on available dominator trees (none, only DomTree, only PostDomTree, both) using different update strategies (eagerly or lazily) to simplify the updating process.
—Prior to the patch—
- Directly calling update functions of DominatorTree updates the data structure eagerly while DeferredDominance does updates lazily.
- DeferredDominance class cannot be used when a PostDominatorTree also needs to be updated.
- Functions receiving DT/DDT need to branch a lot which is currently necessary.
- Functions using both DomTree and PostDomTree need to call the update function separately on both trees.
- People need to construct an additional DeferredDominance class to use functions only receiving DDT.
—After the patch—
Patch by Chijun Sima <simachijun@gmail.com>.
Reviewers: kuhar, brzycki, dmgreen, grosser, davide
Reviewed By: kuhar, brzycki
Author: NutshellySima
Subscribers: vsk, mgorny, llvm-commits
Differential Revision: https://reviews.llvm.org/D48383
llvm-svn: 336163
introducing llvm::trivially_{copy,move}_constructible type traits.
This uses a completely portable implementation of these traits provided
by Richard Smith. You can see it on compiler explorer in all its glory:
https://godbolt.org/g/QEDZjW
I have transcribed it, clang-formatted it, added some comments, and made
the tests fit into a unittest file.
I have also switched llvm::unique_function over to use these new, much
more portable traits. =D
Hopefully this will fix the build bot breakage from my prior commit.
llvm-svn: 336161
supporting move-only closures.
Most of the core optimizations for std::function are here plus
a potentially novel one that detects trivially movable and destroyable
functors and implements those with fewer indirections.
This is especially useful as we start trying to add concurrency
primitives as those often end up with move-only types (futures,
promises, etc) and wanting them to work through lambdas.
As further work, we could add better support for things like const-qualified
operator()s to support more algorithms, and r-value ref qualified operator()s
to model call-once. None of that is here though.
We can also provide our own llvm::function that has some of the optimizations
used in this class, but with copy semantics instead of move semantics.
This is motivated by increasing usage of things like executors and the task
queue where it is useful to embed move-only types like a std::promise within
a type erased function. That isn't possible without this version of a type
erased function.
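A brief usage sketch of the move-only case this enables (illustrative, assuming the header location llvm/ADT/FunctionExtras.h):
```
#include "llvm/ADT/FunctionExtras.h"
#include <future>
#include <utility>

int main() {
  std::promise<int> P; // move-only type
  std::future<int> Result = P.get_future();

  // std::function<void()> cannot hold this closure because copying it would
  // require copying the promise; unique_function only needs a move.
  llvm::unique_function<void()> Task = [P = std::move(P)]() mutable {
    P.set_value(42);
  };

  Task();
  return Result.get() == 42 ? 0 : 1;
}
```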
Differential Revision: https://reviews.llvm.org/D48349
llvm-svn: 336156
On darwin, all virtual sections have zerofill type, and having a
.zerofill directive in a non-virtual section is not allowed. Instead of
asserting, show a nicer error.
In order to use the equivalent of .zerofill in a non-virtual section,
the usage of .zero or .space is required.
This patch replaces the assert with an error.
Differential Revision: https://reviews.llvm.org/D48517
llvm-svn: 336127
Summary:
This patch is the first in a series of patches related to the [[ http://lists.llvm.org/pipermail/llvm-dev/2018-June/123883.html | RFC - A new dominator tree updater for LLVM ]].
This patch introduces the DomTreeUpdater class, which provides a cleaner API to perform updates on available dominator trees (none, only DomTree, only PostDomTree, both) using different update strategies (eagerly or lazily) to simplify the updating process.
—Prior to the patch—
- Directly calling update functions of DominatorTree updates the data structure eagerly while DeferredDominance does updates lazily.
- DeferredDominance class cannot be used when a PostDominatorTree also needs to be updated.
- Functions receiving DT/DDT need to branch a lot which is currently necessary.
- Functions using both DomTree and PostDomTree need to call the update function separately on both trees.
- People need to construct an additional DeferredDominance class to use functions only receiving DDT.
—After the patch—
Patch by Chijun Sima <simachijun@gmail.com>.
Reviewers: kuhar, brzycki, dmgreen, grosser, davide
Reviewed By: kuhar, brzycki
Subscribers: vsk, mgorny, llvm-commits
Author: NutshellySima
Differential Revision: https://reviews.llvm.org/D48383
llvm-svn: 336114
Summary:
This patch introduces a new intrinsic,
strip.invariant.group, which was described in the
RFC: Devirtualization v2
Reviewers: rsmith, hfinkel, nlopes, sanjoy, amharc, kuhar
Subscribers: arsenm, nhaehnle, JDevlieghere, hiraditya, xbolva00, llvm-commits
Differential Revision: https://reviews.llvm.org/D47103
Co-authored-by: Krzysztof Pszeniczny <krzysztof.pszeniczny@gmail.com>
llvm-svn: 336073
This is a simple implementation of the unroll-and-jam classical loop
optimisation.
The basic idea is that we take an outer loop of the form:
  for i..
    ForeBlocks(i)
    for j..
      SubLoopBlocks(i, j)
    AftBlocks(i)
Instead of doing normal inner or outer unrolling, we unroll as follows:
  for i... i+=2
    ForeBlocks(i)
    ForeBlocks(i+1)
    for j..
      SubLoopBlocks(i, j)
      SubLoopBlocks(i+1, j)
    AftBlocks(i)
    AftBlocks(i+1)
  Remainder Loop
So we have unrolled the outer loop, then jammed the two inner loops into
one. This can lead to a simpler inner loop if memory accesses can be shared
between the now jammed loops.
To do this we have to prove that this is all safe, both for the memory
accesses (using dependence analysis) and that ForeBlocks(i+1) can move before
AftBlocks(i) and SubLoopBlocks(i, j).
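As a purely illustrative C-style picture of the transformation on a trivial nest (not generated by the pass):
```
// Before: a simple i/j nest (assume N is even for brevity).
void before(int *A, int **B, int N, int M) {
  for (int i = 0; i < N; ++i) {      // outer loop
    A[i] = 0;                        // ForeBlocks(i)
    for (int j = 0; j < M; ++j)      // inner loop
      A[i] += B[i][j];               // SubLoopBlocks(i, j)
  }
}

// After unroll-and-jam by 2: the outer loop is unrolled and the two inner
// loops are fused, so B[i][j] and B[i+1][j] are accessed in the same iteration.
void after(int *A, int **B, int N, int M) {
  for (int i = 0; i < N; i += 2) {
    A[i] = 0;                        // ForeBlocks(i)
    A[i + 1] = 0;                    // ForeBlocks(i+1)
    for (int j = 0; j < M; ++j) {
      A[i] += B[i][j];               // SubLoopBlocks(i, j)
      A[i + 1] += B[i + 1][j];       // SubLoopBlocks(i+1, j)
    }
    // AftBlocks(i) / AftBlocks(i+1) would go here if the loop had any.
  }
}
```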
Differential Revision: https://reviews.llvm.org/D41953
llvm-svn: 336062
This adds functionality to the outliner that allows targets to
specify certain functions that should be outlined from by default.
If a target supports default outlining, then it specifies that in
its TargetOptions. In the case that it does, and the user hasn't
specified that they *never* want to outline, the outliner will
be added to the pass pipeline and will run on those default functions.
This is a preliminary patch for turning the outliner on by default
under -Oz for AArch64.
https://reviews.llvm.org/D48776
llvm-svn: 336040
and directory.
Also cleans up all the associated naming to be consistent and removes
the public access to the pass ID which was unused in LLVM.
Also runs clang-format over parts that changed, which generally cleans
up a bunch of formatting.
This is in preparation for doing some internal cleanups to the pass.
Differential Revision: https://reviews.llvm.org/D47352
llvm-svn: 336028
Summary:
MemoryPhis now have APIs analogous to BB Phis to remove an incoming value/block.
The MemorySSAUpdater uses the above APIs when updating MemorySSA given a set of dead blocks about to be deleted.
Reviewers: george.burgess.iv
Subscribers: sanjoy, jlebar, Prazek, llvm-commits
Differential Revision: https://reviews.llvm.org/D48396
llvm-svn: 336015
Extends the CFGPrinter and CallPrinter with heat colors based on heuristics or
profiling information. The colors are enabled by default and can be toggled
on/off for CFGPrinter by using the option -cfg-heat-colors for both
-dot-cfg[-only] and -view-cfg[-only]. Similarly, the colors can be toggled
on/off for CallPrinter by using the option -callgraph-heat-colors for both
-dot-callgraph and -view-callgraph.
Patch by Rodrigo Caetano Rocha!
Differential Revision: https://reviews.llvm.org/D40425
llvm-svn: 335996
Initial patch adding assembly support for Armv8.4-A.
Besides adding v8.4 as a supported architecture to the usual places, this also
adds target features for the different crypto algorithms. Armv8.4-A introduced
new crypto algorithms, made them optional, and allows different combinations:
- none of the v8.4 crypto functions are supported, which is independent of the
implementation of the Armv8.0 SHA1 and SHA2 instructions.
- the v8.4 SHA512 and SHA3 support is implemented, in this case the Armv8.0
SHA1 and SHA2 instructions must also be implemented.
- the v8.4 SM3 and SM4 support is implemented, which is independent of the
implementation of the Armv8.0 SHA1 and SHA2 instructions.
- all of the v8.4 crypto functions are supported, in this case the Armv8.0 SHA1
and SHA2 instructions must also be implemented.
The v8.4 crypto instructions are added to AArch64 only, and not AArch32,
and are made optional extensions to Armv8.2-A.
The user-facing Clang options will map on these new target features, their
naming will be compatible with GCC and added in follow-up patches.
The Armv8.4-A instruction sets can be downloaded here:
https://developer.arm.com/products/architecture/a-profile/exploration-tools
Differential Revision: https://reviews.llvm.org/D48625
llvm-svn: 335953
This change adds experimental support for SHT_RELR sections, proposed
here: https://groups.google.com/forum/#!topic/generic-abi/bX460iggiKg
Definitions for the new ELF section type and dynamic array tags, as well
as the encoding used in the new section are all under discussion and are
subject to change. Use with caution!
Author: rahulchaudhry
Differential Revision: https://reviews.llvm.org/D47919
llvm-svn: 335922
The format string for formatv allows to specify a custom padding
character instead of the default space. This custom character was
parsed correctly, but not passed on to the formatter.
Patch by Marcel Köppe
Differential Revision: https://reviews.llvm.org/D48140
llvm-svn: 335915
Summary:
The InlinerFunctionImportStats will collect and dump stats regarding how
many functions inlined into the module were imported by ThinLTO.
Reviewers: wmi, dexonsmith
Subscribers: mehdi_amini, inglorion, llvm-commits, eraman
Differential Revision: https://reviews.llvm.org/D48729
llvm-svn: 335914
FileOutputBuffer creates a temp file and on commit atomically
renames the temp file to the destination file. Sometimes we
want to modify an existing file in place, but still have the
atomicity guarantee. To do this we can initialize the contents
of the temp file from the destination file (if it exists), that
way the resulting FileOutputBuffer can have only selective
bytes modified. Committing will then atomically replace the
destination file as desired.
llvm-svn: 335902
Targets should be able to define whether or not they support the outliner
without the outliner being added to the pass pipeline. Before this, the
outliner pass would be added, and ask the target whether or not it supports the
outliner.
After this, it's possible to query the target in TargetPassConfig, before the
outliner pass is created. This ensures that passing -enable-machine-outliner
will not modify the pass pipeline of any target that does not support it.
https://reviews.llvm.org/D48683
llvm-svn: 335887
Add NoTrapAfterNoreturn target option which skips emission of traps
behind noreturn calls even if TrapUnreachable is enabled.
Enable the feature on Mach-O to save code size; Comments suggest it is
not possible to enable it for the other users of TrapUnreachable.
rdar://41530228
Differential Revision: https://reviews.llvm.org/D48674
llvm-svn: 335877
This pass is being added in order to make the information available to BasicAA,
which can't do caching of this information itself, but possibly this information
may be useful for other passes.
Incorporates code based on Daniel Berlin's implementation of Tarjan's algorithm.
Differential Revision: https://reviews.llvm.org/D47893
llvm-svn: 335857
Summary:
This allows the implicit ArrayRef conversions to kick in when e.g.
comparing ArrayRef to a SmallVector.
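A small usage sketch of what this is meant to enable (values invented; treat the exact overload set as an assumption):
```
#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/SmallVector.h"

int main() {
  llvm::SmallVector<int, 4> Vec = {1, 2, 3};
  int Raw[] = {1, 2, 3};
  llvm::ArrayRef<int> Ref(Raw);

  // With the relaxed comparison overloads, the SmallVector converts to
  // ArrayRef implicitly at the comparison site instead of requiring an
  // explicit llvm::makeArrayRef(Vec).
  return (Ref == Vec) ? 0 : 1;
}
```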
Reviewers: zturner, dblaikie
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D48632
llvm-svn: 335839
=== Generating the CG Profile ===
The CGProfile module pass simply gets the block profile count for each BB and scans for call instructions. For each call instruction it adds an edge from the current function to the called function with the current BB block profile count as the weight.
After scanning all the functions, it generates an appending module flag containing the data. The format looks like:
```
!llvm.module.flags = !{!0}
!0 = !{i32 5, !"CG Profile", !1}
!1 = !{!2, !3, !4} ; List of edges
!2 = !{void ()* @a, void ()* @b, i64 32} ; Edge from a to b with a weight of 32
!3 = !{void (i1)* @freq, void ()* @a, i64 11}
!4 = !{void (i1)* @freq, void ()* @b, i64 20}
```
Differential Revision: https://reviews.llvm.org/D48105
llvm-svn: 335794
The code to emit the pieces of the MSF file were actually in
PDBFileBuilder. Move this to MSFBuilder so that we can
theoretically emit an MSF without having a PDB file.
llvm-svn: 335789
Summary:
The instantiation of the drop_begin function template usually fails because the functions begin() and end() do not exist. Only when used on a container from the std namespace (or an `llvm::iterator_range` of something derived from `std::iterator`) are they matched to std::begin() and std::end(), thanks to Koenig lookup.
Explicitly use llvm::adl_begin and llvm::adl_end to make drop_begin applicable to anything iterable (including C-style arrays).
A solution for general `llvm::iterator_range`s was already tried in r244620, but got reverted in r244621 due to MSVC not liking it.
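A quick usage sketch of what now works with a C-style array (values invented):
```
#include "llvm/ADT/STLExtras.h"
#include "llvm/Support/raw_ostream.h"

int main() {
  // A C-style array has no begin()/end() members; adl_begin/adl_end resolve
  // to std::begin/std::end, so drop_begin now works here too.
  int Values[] = {10, 20, 30, 40};
  for (int V : llvm::drop_begin(Values, 1)) // skips the leading 10
    llvm::outs() << V << "\n";              // prints 20, 30, 40
}
```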
Reviewers: dblaikie, grosbach, aaron.ballman, ruiu
Reviewed By: dblaikie, aaron.ballman
Subscribers: aaron.ballman, llvm-commits
Differential Revision: https://reviews.llvm.org/D48598
llvm-svn: 335772
Now that we have the ability to legalize based on MMOs, add support for
legalizing based on AtomicOrdering and use it to correct the legalization
of the atomic instructions.
Also extend all() to be a variadic template as this ruleset now requires
3 and 4 argument versions.
llvm-svn: 335767
Summary:
Rather than just print the GUID, when it is available in the index,
print the global name as well in the function import thin link debug
messages. Names will be available when the combined index is being
built by the same process, e.g. a linker or "llvm-lto2 run".
Reviewers: davidxl
Subscribers: mehdi_amini, inglorion, eraman, steven_wu, llvm-commits
Differential Revision: https://reviews.llvm.org/D48612
llvm-svn: 335760
If you've already loaded an IRObjectFile and need access to the
Modules themselves you shouldn't have to reparse a byte stream to do
it. Adds an accessor for the modules in IRObjectFile.
llvm-svn: 335759
It isn't safe to outline sequences of instructions where x16/x17/nzcv live
across the sequence.
This teaches the outliner to check whether or not a specific candidate has
x16/x17/nzcv live across it, and to discard the candidate when that is the case.
https://bugs.llvm.org/show_bug.cgi?id=37573
https://reviews.llvm.org/D47655
llvm-svn: 335758
I think the intrinsics named 'avx512.mask.' should refer to the previous behavior of taking a mask argument in the intrinsic instead of using a 'select' or 'and' instruction in IR to accomplish the masking. This is more consistent with the goal that eventually we will have no intrinsics that have masking builtin. When we reach that goal, we should have no intrinsics named "avx512.mask".
llvm-svn: 335744
This patch adds support for the q versions of the dup
(load-to-all-lanes) NEON intrinsics, such as vld2q_dup_f16() for
example.
Currently, non-q versions of the dup intrinsics are implemented
in clang by generating IR that first loads the elements of the
structure into the first lane with the lane (to-single-lane)
intrinsics, and then propagating it to the other lanes. There are at
least two problems with this approach. First, there are no
double-spaced to-single-lane byte-element instructions. For
example, there is no such instruction as 'vld2.8 { d0[0], d2[0]
}, [r0]'. That means we cannot rely on the to-single-lane
intrinsics and instructions to implement the q versions of the
dup intrinsics. Note that to-all-lanes instructions do support
all sizes of data items, including bytes.
The second problem with the current approach is that we need a
separate vdup instruction to propagate the structure to each
lane. So for vld4q_dup_f16() we would need four vdup instructions
in addition to the initial vld instruction.
This patch introduces dup LLVM intrinsics and reworks handling of
the currently supported (non-q) NEON dup intrinsics to expand
them into those LLVM intrinsics, thus eliminating the need for
using to-single-lane intrinsics and instructions.
Additionally, this patch adds support for u64 and s64 dup NEON
intrinsics. These are marked as AArch64-only in the ARM NEON
Reference, but it seems there is no reason not to support them
in AArch32 mode. Please correct me if that is wrong.
That's what we generate with this patch applied:
vld2q_dup_f16:
        vld2.16 {d0[], d2[]}, [r0]
        vld2.16 {d1[], d3[]}, [r0]
vld3q_dup_f16:
        vld3.16 {d0[], d2[], d4[]}, [r0]
        vld3.16 {d1[], d3[], d5[]}, [r0]
vld4q_dup_f16:
        vld4.16 {d0[], d2[], d4[], d6[]}, [r0]
        vld4.16 {d1[], d3[], d5[], d7[]}, [r0]
Differential Revision: https://reviews.llvm.org/D48439
llvm-svn: 335733
https://reviews.llvm.org/D47566
Change wording from "Must be backwards compatible" to
"Must match hardware definition" for enums that are
defined by hardware.
llvm-svn: 335705
LLJIT is a prefabricated ORC based JIT class that is meant to be the go-to
replacement for MCJIT. Unlike OrcMCJITReplacement (which will continue to be
supported) it is not API or bug-for-bug compatible, but targets the same
use cases: Simple, non-lazy compilation and execution of LLVM IR.
LLLazyJIT extends LLJIT with support for function-at-a-time lazy compilation,
similar to what was provided by LLVM's original (now long deprecated) JIT APIs.
This commit also contains some simple utility classes (CtorDtorRunner2,
LocalCXXRuntimeOverrides2, JITTargetMachineBuilder) to support LLJIT and
LLLazyJIT.
Both of these classes are works in progress. Feedback from JIT clients is very
welcome!
llvm-svn: 335670
This addresses post-commit feedback about the name 'skipDebugInfo' being
misleading. This name could be interpreted as meaning 'a function that
skips instructions with debug locations'.
The new name, 'skipDebugIntrinsics', makes it clear that this function
only skips debug info intrinsics.
Thanks to Adrian Prantl for pointing this out!
llvm-svn: 335667
It is legal for a PHI node not to have a live value in a predecessor
as long as the end of the predecessor is jointly dominated by an undef
value.
llvm-svn: 335607
Summary:
Adds assembly parsing support for the module summary index (follow on
to r333335 which added the assembly writing support).
I added support to llvm-as to invoke the index parsing, so that it can
create either a bitcode file with a Module and a per-module index, or
a combined index without a Module.
I will send follow on patches soon to do the following:
- add support to tools such as llvm-lto2 to parse the per-module indexes
from assembly instead of bitcode when testing the thin link.
- verification support.
Depends on D47844 and D47842.
Reviewers: pcc, dexonsmith, mehdi_amini
Subscribers: inglorion, eraman, steven_wu, llvm-commits
Differential Revision: https://reviews.llvm.org/D47905
llvm-svn: 335602
Summary:
This is a follow-up to r334830 and r335031.
In the valueCoversEntireFragment check we now also handle
the situation when there is a variable length array (VLA)
involved, and the length of the array has been reduced to
a constant.
The ConvertDebugDeclareToDebugValue functions that are related
to PHI nodes and load instructions now avoid inserting dbg.value
intrinsics when the value does not, for certain, cover the
variable/fragment that should be described.
In r334830 we assumed that the value always covered the entire
var/fragment and we had assertions in the code to show that
assumption. However, those asserts failed when compiling code
with VLAs, so we removed the asserts in r335031. Now when we
know that the valueCoversEntireFragment check can fail also for
PHI/Load instructions we avoid to insert the faulty dbg.value
intrinsic in such situations. Compared to the Store instruction
scenario we simply drop the dbg.value here (as the variable does
not change its value due to PHI/Load, so an earlier dbg.value
describing the variable should still be valid).
Reviewers: aprantl, vsk, efriedma
Reviewed By: aprantl
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D48547
llvm-svn: 335580
Summary:
Adds a string saver to the ModuleSummaryIndex so it can store value
names in the case of adding a ValueInfo for a GUID when we don't
have the name stored in a Module string table. This is motivated
by the upcoming summary parser patch, where we will read value names
from the summary entry and want to store them, even when a Module
is not available.
Currently this allows us to store the name in the legacy bitcode case,
and I have added a test to show that.
Reviewers: pcc, dexonsmith
Subscribers: mehdi_amini, inglorion, eraman, steven_wu, llvm-commits
Differential Revision: https://reviews.llvm.org/D47842
llvm-svn: 335570
This recommits r335562 and 335563 as a single commit.
The frontend will surround the intrinsic with the appropriate marshalling to/from a scalar type to match the signature of the builtin that software expects.
By exposing the vXi1 type directly in the llvm intrinsic we make it available to optimizers much earlier. This can enable the scalar marshalling code to be optimized away.
llvm-svn: 335568
Summary:
Without this change we only add module paths to the combined index when
there is a module hash or at least one global value. Make this more
consistent by adding the module to the index whenever there is a summary
section, and it is a per-module summary (had a MODULE_CODE_SOURCE_FILENAME
record).
Since we will no longer add module paths lazily, add a new interface to get
the module info from the index that asserts it is already added.
Fixes PR37899.
Reviewers: Vlad, pcc
Subscribers: mehdi_amini, inglorion, steven_wu, llvm-commits
Differential Revision: https://reviews.llvm.org/D48511
llvm-svn: 335567
Summary:
I discovered when writing the summary parsing support that the
per-module index builder and writer are computing the GUID from the
value name alone (ignoring the linkage type). This was ok since those
GUID were not emitted in the bitcode, and there are never multiple
conflicting names in a single module.
However, I don't see a reason for making the GUID computation different
for the per-module case. It also makes things simpler on the parsing
side to have the GUID computation consistent. So this patch changes the
summary analysis phase and the per-module summary writer to compute the
GUID using the facility on the GlobalValue.
Reviewers: pcc, dexonsmith
Subscribers: llvm-commits, inglorion
Differential Revision: https://reviews.llvm.org/D47844
llvm-svn: 335560
unswitching of switches.
This works much like trivial unswitching of switches in that it reliably
moves the switch out of the loop. Here we potentially clone the entire
loop into each successor of the switch and re-point the cases at these
clones.
Due to the complexity of actually doing nontrivial unswitching, this
patch doesn't create a dedicated routine for handling switches -- it
would duplicate far too much code. Instead, it generalizes the existing
routine to handle both branches and switches as it largely reduces to
looping in a few places instead of doing something once. This actually
improves the results in some cases with branches due to being much more
careful about how dead regions of code are managed. With branches,
because exactly one clone is created and there are exactly two edges
considered, somewhat sloppy handling of the dead regions of code was
sufficient in most cases. But with switches, there are much more
complicated patterns of dead code and so I've had to move to a more
robust model generally. We still do as much pruning of the dead code
early as possible because that allows us to avoid even cloning the code.
This also surfaced another problem with nontrivial unswitching before
which is that we weren't as precise in reconstructing loops as we could
have been. This seems to have been mostly harmless, but resulted in
pointless LCSSA PHI nodes and other unnecessary cruft. With switches, we
have to get this *right*, and everything benefits from it.
While the testing may seem a bit light here because we only have two
real cases with actual switches, they do a surprisingly good job of
exercising numerous edge cases. Also, because we share the logic with
branches, most of the changes in this patch are reasonably well covered
by existing tests.
The new unswitch now has all of the same fundamental power as the old
one with the exception of the single unsound case of *partial* switch
unswitching -- that really is just loop specialization and not
unswitching at all. It doesn't fit into the canonicalization model in
any way. We can add a loop specialization pass that runs late based on
profile data if important test cases ever come up here.
Differential Revision: https://reviews.llvm.org/D47683
llvm-svn: 335553
This removes debug locations from ConstantSDNode and ConstantSDFPNode.
When this kind of node is materialized we no longer create a line table
entry which jumps back to the constant's first point of use. This makes
single-stepping behavior smoother, and it matches the model used by IR,
where Constants have no locations. See this thread for more context:
http://lists.llvm.org/pipermail/llvm-dev/2018-June/124164.html
I'd like to handle constant BuildVectorSDNodes and to try to eliminate
passing SDLocs to SelectionDAG::getConstant*() in follow-up commits.
Differential Revision: https://reviews.llvm.org/D48468
llvm-svn: 335497
There are quite a few if statements that enumerate all these cases. It gets
even worse in our fork of LLVM where we also have a Triple::cheri (which
is mips64 + CHERI instructions) and we had to update all if statements that
check for Triple::mips64 to also handle Triple::cheri. This patch helps to
reduce our diff to upstream and should also make some checks more readable.
Reviewed By: atanasyan
Differential Revision: https://reviews.llvm.org/D48548
llvm-svn: 335493
I thought I fixed this in r308673, but that fix was
very broken. The assumption that any frame index can be used
in place of another was more widespread than I realized.
Even when stack slot sharing was disabled, this was still
replacing frame index uses with a different ID with a different
stack slot.
Really fix this by doing the coloring per-stack ID, so all of
the coloring is logically done in a separate namespace. This is a lot
simpler than trying to figure out how to change the color if
the stack ID is different.
llvm-svn: 335488
We can prove that some delinearized subscripts do not wrap around to become
negative by the fact that they come from inbounds GEPs of load/store locations.
This helps improve the delinearisation in cases where we can't prove that they
are non-negative from SCEV alone.
Differential Revision: https://reviews.llvm.org/D48481
llvm-svn: 335481
There's probably a better solution, but adding spaces
in the IR vector examples sidesteps the problem without
uglifying the plain text.
llvm-svn: 335468
It's easy for domination numbers to get out-of-date, and this is no more
costly than any of the other verifiers we already have, so it seems nice
to have.
A stage3 build with this Works On My Machine, so this hasn't caught any
bugs... yet. :)
llvm-svn: 335444
We have ThreadPool, which can execute work asynchronously on N
background threads, but sometimes you need to make sure the work
is executed asynchronously but also serially. That is, if task
B is enqueued after task A, then task B should not begin until
task A has completed. This patch adds such a class.
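Not the new class itself, but a bare-bones standard-C++ sketch of the semantics it provides: a single worker drains a FIFO, so task B never starts before task A has finished.
```
#include <condition_variable>
#include <functional>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

class SerialQueue {
  std::queue<std::function<void()>> Tasks;
  std::mutex Mu;
  std::condition_variable CV;
  bool Done = false;
  std::thread Worker{[this] { run(); }};

  void run() {
    for (;;) {
      std::function<void()> Task;
      {
        std::unique_lock<std::mutex> Lock(Mu);
        CV.wait(Lock, [this] { return Done || !Tasks.empty(); });
        if (Tasks.empty())
          return;
        Task = std::move(Tasks.front());
        Tasks.pop();
      }
      Task(); // runs strictly after every previously enqueued task finished
    }
  }

public:
  void async(std::function<void()> F) {
    {
      std::lock_guard<std::mutex> Lock(Mu);
      Tasks.push(std::move(F));
    }
    CV.notify_one();
  }
  ~SerialQueue() {
    {
      std::lock_guard<std::mutex> Lock(Mu);
      Done = true;
    }
    CV.notify_one();
    Worker.join(); // drains any remaining tasks, then stops the worker
  }
};

int main() {
  SerialQueue Q;
  Q.async([] { std::cout << "task A\n"; });
  Q.async([] { std::cout << "task B\n"; }); // always runs after A completes
}
```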
Differential Revision: https://reviews.llvm.org/D48240
llvm-svn: 335440
Summary:
A WebAssemblyException object contains BBs that belong to a 'catch' part
of the try-catch-end structure. Because CFGSort requires all the BBs
within a catch part to be sorted together as it does for loops, this
pass calculates the nesting structure of catch part of exceptions in a
function. Now this assumes the use of Windows EH instructions.
Reviewers: dschuff, majnemer
Subscribers: jfb, mgorny, sbc100, jgravelle-google, sunfish, llvm-commits
Differential Revision: https://reviews.llvm.org/D44134
llvm-svn: 335439
With the removal of intrin.h in an earlier patch, the intrinsics
that we were actually using were manually re-declared, however
several of them were missed leading to compilation failures with
MSVC. Fix those.
llvm-svn: 335427
Implements PR34259
Intrinsics.h is a very popular header. Most LLVM TUs care about things
like dbg_value, but they don't care how they are implemented. After I
split these out, IntrinsicImpl.inc is 1.7 MB, so this saves each LLVM TU
from scanning 1.7 MB of source that gets pre-processed away.
It also means we can modify intrinsic properties without triggering a
full rebuild, but that's probably less of a win.
I think the next best thing to do would be to split out the target
intrinsics into their own header. Very, very few TUs care about
target-specific intrinsics. It's very hard to split up the target
independent intrinsics like llvm.expect, assume, and dbg.value, though.
llvm-svn: 335407
This is repeatably worth 0.3s compile time on MathExtras.cpp. This is a
very popular header, and it basically pulls all Intel intrinsics into
every LLVM TU. Let's not do that.
llvm-svn: 335406
Summary:
This affects what sections are displayed as "DATA" in llvm-objdump.
The other user llvm-size is unaffected.
Before, a "TEXT" section is also "DATA", which seems weird.
The sh_flags condition matches that of bfd's SEC_DATA but the sh_type
condition uses (== SHT_PROGBITS) instead of bfd's (!= SHT_NOBITS).
bfd's SEC_DATA is not appealing as so many sections will be shown as DATA.
Reviewers: jyknight, Bigcheese
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D48472
llvm-svn: 335405
DWARF v5 explicitly represents file #0 in the line table. Prior
versions did not, so ".loc 0" is still an error in those cases.
Differential Revision: https://reviews.llvm.org/D48452
llvm-svn: 335350
With compilation fix.
Original commit message:
D39788 added a '.stack-size' section containing metadata on function stack sizes
to output ELF files behind the new -stack-size-section flag.
This change does the following two things on top:
1) Imagine the case when the -ffunction-sections flag is given and there are text sections in COMDATs.
The patch adds a '.stack-size' section into the corresponding COMDAT group, so that the linker will be able to
eliminate them quickly while resolving the COMDATs.
2) The patch sets the SHF_LINK_ORDER flag and links '.stack-size' with the corresponding .text.
With that, the linker will be able to do -gc-sections on dead stack-size sections.
Differential revision: https://reviews.llvm.org/D46874
llvm-svn: 335336
D39788 added a '.stack-size' section containing metadata on function stack sizes
to output ELF files behind the new -stack-size-section flag.
This change does the following two things on top:
1) Imagine the case when the -ffunction-sections flag is given and there are text sections in COMDATs.
The patch adds a '.stack-size' section into the corresponding COMDAT group, so that the linker will be able to
eliminate them quickly while resolving the COMDATs.
2) The patch sets the SHF_LINK_ORDER flag and links '.stack-size' with the corresponding .text.
With that, the linker will be able to do -gc-sections on dead stack-size sections.
Differential revision: https://reviews.llvm.org/D46874
llvm-svn: 335332
This is the first pass in the main pipeline to use the legacy PM's
ability to run function analyses "on demand". Unfortunately, it turns
out there are bugs in that somewhat-hacky approach. At the very least,
it leaks memory and doesn't support -debug-pass=Structure. Unclear if
there are larger issues or not, but this should get the sanitizer bots
back to green by fixing the memory leaks.
llvm-svn: 335320
This patch adds support for generating a call graph profile from Branch Frequency Info.
The CGProfile module pass simply gets the block profile count for each BB and scans for call instructions. For each call instruction it adds an edge from the current function to the called function with the current BB block profile count as the weight.
After scanning all the functions, it generates an appending module flag containing the data. The format looks like:
!llvm.module.flags = !{!0}
!0 = !{i32 5, !"CG Profile", !1}
!1 = !{!2, !3, !4} ; List of edges
!2 = !{void ()* @a, void ()* @b, i64 32} ; Edge from a to b with a weight of 32
!3 = !{void (i1)* @freq, void ()* @a, i64 11}
!4 = !{void (i1)* @freq, void ()* @b, i64 20}
Differential Revision: https://reviews.llvm.org/D48105
llvm-svn: 335306
Summary:
This initiates a discussion on changing Polly accordingly while re-applying r335197 (D48338).
I have never worked on Polly. The proposed change to param_div_div_div_2.ll is not educated, but just patterns that match the output.
All LLVM files are already reviewed in D48338.
Reviewers: jdoerfert, bollu, efriedma
Subscribers: jlebar, sanjoy, hiraditya, llvm-commits, bixia
Differential Revision: https://reviews.llvm.org/D48453
llvm-svn: 335292
Summary:
GCC and the binutils COFF linker do comdats differently from MSVC.
If we want to be ABI compatible, we have to do what they do, which is to
emit unique section names like ".text$_Z3foov" instead of short section
names like ".text". Otherwise, the binutils linker gets confused and
reports multiple definition errors when two object files from GCC and
Clang containing the same inline function are linked together.
The best description of the issue is probably at
https://github.com/Alexpux/MINGW-packages/issues/1677, we don't seem to
have a good one in our tracker.
I fixed up the .pdata and .xdata sections needed everywhere other than
32-bit x86. GCC doesn't use associative comdats for those, it appears to
rely on the section name.
Reviewers: smeenai, compnerd, mstorsjo, martell, mati865
Subscribers: llvm-commits, hiraditya
Differential Revision: https://reviews.llvm.org/D48402
llvm-svn: 335286
This reverts commit r335206.
As discussed here: https://reviews.llvm.org/rL333740, a fix will come
tomorrow. In the meanwhile, revert this to fix some bots.
llvm-svn: 335272
and everything that comes with it from implementation
and v3 header files.
Leave definition in v2 header files for backwards
compatibility.
Differential Revision: https://reviews.llvm.org/D48191
llvm-svn: 335267
Summary:
This also removes the need for atomic pseudo instructions, since
we select the correct encoding directly in SITargetLowering::lowerImage
for dimension-aware image intrinsics.
Mesa uses dimension-aware image intrinsics since
commit a9a7993441.
Change-Id: I7473d20009476a4ed6d919cae4e6dca9ff42e77a
Reviewers: arsenm, rampitec, mareko, tpr, b-sumner
Subscribers: kzhuravl, wdng, yaxunl, dstuttard, t-tye, llvm-commits
Differential Revision: https://reviews.llvm.org/D48167
llvm-svn: 335231
Summary:
Use the expanded features of the TableGen generic tables to avoid manually
adding the combinatorially exploded set of intrinsics. The
getAMDGPUImageDimIntrinsic lookup function is early-out,
i.e. non-AMDGPU intrinsics will never look at the underlying table.
Use a generic approach for getting the new intrinsic overload to keep the
code simple, and make the image dmask handling more generic:
- handle non-sampler image loads
- handle the case where the set of demanded elements is not a prefix
There is some overlap between this code and an optimization that happens
in the backend during code generation. They currently complement each other:
- only the codegen optimization can generate vec3 loads
- only the InstCombine optimization can handle D16
The InstCombine optimization also likely covers more cases since the
codegen optimization is fairly ad-hoc. Ideally, we'll remove the optimization
in codegen once the infrastructure for vec3 is in place (which will probably
take a long time).
Modify the test cases to use dimension-aware intrinsics. This makes it
easier to see that the test coverage for the new intrinsics is equivalent,
and the old style intrinsics will be removed in a follow-up commit anyway.
Change-Id: I4b91ea661413d13004956fe4ef7d13d41b8ce3ad
Reviewers: arsenm, rampitec, majnemer
Subscribers: kzhuravl, wdng, mgorny, yaxunl, dstuttard, tpr, t-tye, llvm-commits
Differential Revision: https://reviews.llvm.org/D48165
llvm-svn: 335230
Summary:
Having TableGen patterns for image intrinsics is hitting limitations:
for D16 we already have to manually pre-lower the packing of data
values, and we will have to do the same for A16 eventually.
Since there is already some custom C++ code anyway, it is arguably easier
to just do everything in C++, now that we can use the beefed-up generic
tables backend of TableGen to provide all the required metadata and map
intrinsics to corresponding opcodes. With this approach, all image
intrinsic lowering happens in SITargetLowering::lowerImage. That code is
dense due to all the cases that it handles, but it should still be easier
to follow than what we had before, by virtue of it all being done in a
single location, and by virtue of not relying on the TableGen pattern
magic that very few people really understand.
This means that we will have MachineSDNodes with MIMG instructions
during DAG combining, but that seems alright: previously we had
intrinsic nodes instead, but those are similarly opaque to the generic
CodeGen infrastructure, and the final pattern matching just did a 1:1
translation to machine instructions anyway. If anything, the fact that
we now merge the address words into a vector before DAG combine should
be an advantage.
Change-Id: I417f26bd88f54ce9781c1668acc01f3f99774de6
Reviewers: arsenm, rampitec, rtaylor, tstellar
Subscribers: kzhuravl, wdng, yaxunl, dstuttard, tpr, t-tye, llvm-commits
Differential Revision: https://reviews.llvm.org/D48017
llvm-svn: 335228
Summary:
This is essentially a rewrite of the backend which introduces TableGen
base classes GenericEnum, GenericTable, and SearchIndex. They allow
generating custom enums and tables with lookup functions using
separately defined records as the underlying database.
Also added as part of this change:
- Lookup functions may use indices composed of multiple fields.
- Instruction fields are supported similar to Intrinsic fields.
- When the lookup key has contiguous numeric values, the lookup
function will directly index into the table instead of using a binary
search.
The existing SearchableTable functionality is internally mapped to the
new primitives.
Change-Id: I444f3490fa1dbfb262d7286a1660a2c4308e9932
Reviewers: arsenm, tra, t.p.northover
Subscribers: wdng, llvm-commits
Differential Revision: https://reviews.llvm.org/D48013
llvm-svn: 335225
Summary:
This also allows inner foreach loops to have a list that depends on
the iteration variable of an outer foreach loop. The test cases show
some very simple examples of how this can be used.
This was perhaps the last remaining major non-orthogonality in the
TableGen frontend.
Change-Id: I79b92d41a5c0e7c03cc8af4000c5e1bda5ef464d
Reviewers: tra, simon_tatham, craig.topper, MartinO, arsenm
Subscribers: wdng, llvm-commits
Differential Revision: https://reviews.llvm.org/D47431
llvm-svn: 335221
This enables da-delinearize in Dependence Analysis for delinearizing array
accesses into multiple dimensions. This can help to increase the power of
Dependence analysis on multi-dimensional arrays and prevent having to fall
back to the slower and less accurate MIV tests. It adds static checks on the
bounds of the arrays to ensure that one dimension doesn't overflow into
another, and brings our code in line with our tests.
Differential Revision: https://reviews.llvm.org/D45872
llvm-svn: 335217
Summary:
In some cases, these operands lacked the IsDebug property, which is meant to signal that
they should not affect codegen. This patch adds a check for this property in the
MachineVerifier and adds it where it was missing.
This includes refactorings to use MachineInstrBuilder construction functions instead of
manually setting up the intrinsic everywhere.
Patch by: JesperAntonsson
Reviewers: aprantl, rnk, echristo, javed.absar
Reviewed By: aprantl
Subscribers: qcolombet, sdardis, nemanjai, JDevlieghere, atanasyan, llvm-commits
Differential Revision: https://reviews.llvm.org/D48319
llvm-svn: 335214
r335150 should resolve the issues with the clang-with-thin-lto-ubuntu
and clang-with-lto-ubuntu builders.
Original message:
This patch updates IPSCCP to use PredicateInfo to propagate
facts to true branches predicated by EQ and to false branches
predicated by NE.
As a follow up, we should be able to extend it to also propagate additional
facts about nonnull.
Reviewers: davide, mssimpso, dberlin, efriedma
Reviewed By: davide, dberlin
llvm-svn: 335206
conditions feeding a chain of `and`s or `or`s for a branch.
Much like with full non-trivial unswitching, we rely on the pass manager
to handle iterating until all of the profitable unswitches have been
done. This is to allow other more profitable unswitches to fire on any
of the cloned, simpler versions of the loop if viable.
Threading the partial unswitching through the non-trivial unswitching
logic motivated some minor refactorings. If those are too disruptive to
make it reasonable to review this patch, I can separate them out, but
it'll be somewhat time-consuming, so I wanted to send it for initial
review as-is. Feel free to tell me whether it warrants pulling apart.
I've tried to re-use (and factor out) logic from the partial trivial
unswitching, but not as much could be shared as I had hoped. Still, this
wasn't as bad as I naively expected.
Some basic testing is added, but I probably need more. Suggestions for
things you'd like to see tested more than welcome. One thing I'd like to
do is add some testing that when we schedule this with loop-instsimplify
it effectively cleans up the cruft created.
Last but not least, this uncovered a bug that has been in loop cloning
the entire time for non-trivial unswitching. Specifically, we didn't
correctly add the outer-most cloned loop to the list of cloned loops.
This meant that LCSSA wouldn't be updated for it hypothetically, and
more significantly that we would never visit it in the loop pass
manager. I noticed this while checking loop-instsimplify by hand. I'll
try to separate this bugfix out into its own patch with a more focused
test. But it is just one line, so shouldn't significantly confuse the
review here.
After this patch, the only missing "feature" in this unswitch I'm aware
of is non-trivial unswitching of switches. I'll try implementing *full*
non-trivial unswitching of switches (which is at least a sound thing to
implement), but *partial* non-trivial unswitching of switches is
something I don't see any sound and principled way to implement. I also
have no interesting test cases for the latter, so I'm not really
worried. The rest of the things that need to be ported are bug-fixes and
more narrow / targeted support for specific issues.
Differential Revision: https://reviews.llvm.org/D47522
llvm-svn: 335203
Summary:
Since the value stored in the cache might be deleted or replaced with
something else, we need to use tracking ValueHandlers instead of plain
Value pointers. It was discovered in one of internal builds, and
unfortunately there is no small reproducer for the issue.
The cache was introduced in rL327328.
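As a rough illustration of the technique (the names below are made up; this is
not the cache from rL327328), a tracking handle lets a cache entry follow RAUW
and go null when the cached value is deleted:
  #include "llvm/ADT/DenseMap.h"
  #include "llvm/IR/ValueHandle.h"
  // WeakTrackingVH follows RAUW and becomes null if the cached value is
  // deleted, so a stale entry can never hand back a dangling Value*.
  struct ExampleCache {
    llvm::DenseMap<unsigned, llvm::WeakTrackingVH> Entries;
    llvm::Value *lookup(unsigned Key) const {
      return Entries.lookup(Key);   // null if absent or if the value died
    }
  };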
Reviewers: ahatanak, pete
Subscribers: hiraditya, llvm-commits
Differential Revision: https://reviews.llvm.org/D48407
llvm-svn: 335201
Summary:
Try to match udiv and urem patterns, and sink zext down to the leaves.
I'm not entirely sure why some unrelated tests change, but the added <nsw>s seem right.
Reviewers: sanjoy
Subscribers: jlebar, hiraditya, bixia, llvm-commits
Differential Revision: https://reviews.llvm.org/D48338
llvm-svn: 335197
Errors found processing the DW_AT_ranges attribute are propagated by lower level
routines and reported by their callers.
Reviewer: JDevlieghere
Differential Revision: https://reviews.llvm.org/D48344
llvm-svn: 335188
Summary:
Two utils methods have essentially the same functionality. This is an attempt to merge them into one.
1. lib/Transforms/Utils/Local.cpp : MergeBasicBlockIntoOnlyPred
2. lib/Transforms/Utils/BasicBlockUtils.cpp : MergeBlockIntoPredecessor
Prior to the patch:
1. MergeBasicBlockIntoOnlyPred
Updates either DomTree or DeferredDominance
Moves all instructions from Pred to BB, deletes Pred
Asserts BB has single predecessor
If address was taken, replace the block address with constant 1 (?)
2. MergeBlockIntoPredecessor
Updates DomTree, LoopInfo and MemoryDependenceResults
Moves all instructions from BB to Pred, deletes BB
Returns early if BB doesn't have a single predecessor
Returns early if BB's address was taken
After the patch:
Method 2. MergeBlockIntoPredecessor is attempting to become the new default:
Updates DomTree or DeferredDominance, and LoopInfo and MemoryDependenceResults
Moves all instructions from BB to Pred, deletes BB
Returns early if BB doesn't have a single predecessor
Returns early if BB's address was taken
Uses of MergeBasicBlockIntoOnlyPred that need to be replaced:
1. lib/Transforms/Scalar/LoopSimplifyCFG.cpp
Updated in this patch. No challenges.
2. lib/CodeGen/CodeGenPrepare.cpp
Updated in this patch.
i. eliminateFallThrough is straightforward, but I added a temporary array to avoid the iterator invalidation (see the sketch after this list).
ii. The eliminateMostlyEmptyBlock(s) methods also now use a temporary array for blocks
Some interesting aspects:
- Since Pred is not deleted (BB is), the entry block does not need updating.
- The entry block was being updated with the deleted block in eliminateMostlyEmptyBlock. Added assert to make obvious that BB=SinglePred.
- isMergingEmptyBlockProfitable assumes BB is the one to be deleted.
- eliminateMostlyEmptyBlock(BB) does not delete BB on one path, it deletes its unique predecessor instead.
- adding some test owners as subscribers for the interesting tests modified:
test/CodeGen/X86/avx-cmp.ll
test/CodeGen/AMDGPU/nested-loop-conditions.ll
test/CodeGen/AMDGPU/si-annotate-cf.ll
test/CodeGen/X86/hoist-spill.ll
test/CodeGen/X86/2006-11-17-IllegalMove.ll
3. lib/Transforms/Scalar/JumpThreading.cpp
Not covered in this patch. It is the only use case using the DeferredDominance.
I would defer to Brian Rzycki to make this replacement.
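For point 2.i, a minimal sketch of the temporary-array idea (illustrative only,
not the CodeGenPrepare code; the helper name below is hypothetical):
  #include "llvm/ADT/SmallVector.h"
  #include "llvm/IR/Function.h"
  #include "llvm/Transforms/Utils/BasicBlockUtils.h"
  // Collect blocks up front because MergeBlockIntoPredecessor deletes BB,
  // which would otherwise invalidate the iteration over the function.
  void mergeTrivialBlocks(llvm::Function &F) {
    llvm::SmallVector<llvm::BasicBlock *, 16> Blocks;
    for (llvm::BasicBlock &BB : F)
      Blocks.push_back(&BB);
    for (llvm::BasicBlock *BB : Blocks)
      llvm::MergeBlockIntoPredecessor(BB);   // no-op unless BB is mergeable
  }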
Reviewers: chandlerc, spatel, davide, brzycki, bkramer, javed.absar
Subscribers: qcolombet, sanjoy, nemanjai, nhaehnle, jlebar, tpr, kbarton, RKSimon, wmi, arsenm, llvm-commits
Differential Revision: https://reviews.llvm.org/D48202
llvm-svn: 335183
Summary:
Ideally passes should not need to pass MSSA around and do all updates through the updater.
Add convenience APIs to help with that.
Reviewers: george.burgess.iv
Subscribers: sanjoy, jlebar, llvm-commits, Prazek
Differential Revision: https://reviews.llvm.org/D48334
llvm-svn: 335179
This fixes an extremely subtle non-determinism that can only be
triggered by an unfortunate alignment of passes. In my case:
- JumpThreading does large dominator tree updates
- CorrelatedValuePropagation preserves domtree now
- LICM codegen depends on the order of children on domtree nodes
The last part is non-deterministic if the updates were stored in a set.
But it turns out that the set is completely unnecessary: updates are
deduplicated at an earlier stage, so we can just use a vector, which is
both more efficient and preserves the input ordering.
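A hedged sketch of the container change, assuming the current
incremental-update API (the function and variable names are illustrative):
  #include "llvm/ADT/ArrayRef.h"
  #include "llvm/ADT/SmallVector.h"
  #include "llvm/IR/Dominators.h"
  #include <utility>
  // The updates are already deduplicated upstream, so a vector is enough and
  // keeps the children of rebuilt nodes in insertion order.
  void applyEdgeInsertions(
      llvm::DominatorTree &DT,
      llvm::ArrayRef<std::pair<llvm::BasicBlock *, llvm::BasicBlock *>> Edges) {
    llvm::SmallVector<llvm::DominatorTree::UpdateType, 8> Updates;
    for (const auto &E : Edges)
      Updates.push_back({llvm::DominatorTree::Insert, E.first, E.second});
    DT.applyUpdates(Updates);
  }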
I didn't manage to reduce the 240 MB IR file enough; triggering this bug
requires a lot of jump threading, so I'm landing this without a test
case.
Differential Revision: https://reviews.llvm.org/D48392
llvm-svn: 335176
This utility should operate on Values, not Instructions. While I'm here,
I've also made it possible to skip emitting replacement dbg.values for
certain debug users (by having RewriteExpr return nullptr).
llvm-svn: 335152
Using OrderedInstructions::dominates as comparator for instructions in
BBs without dominance relation can cause a non-deterministic order
between such instructions. That in turn can cause us to materialize
copies in a non-deterministic order. While this does not affect
correctness, it causes some minor non-determinism in the final generated
code, because values have slightly different labels.
Without this patch, running -print-predicateinfo on a reasonably large
module produces slightly different output on each run.
This patch uses the dominator tree's DFS-in numbers to order instructions
from different BBs, which should enforce a deterministic ordering and
guarantee that dominated instructions come after the instructions that
dominate them.
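Roughly, the ordering rule boils down to a comparator like the following
sketch (illustrative names; it assumes the tree's DFS numbers are up to date):
  #include "llvm/IR/Dominators.h"
  #include "llvm/IR/Instruction.h"
  // Order instructions from different, dominance-unrelated blocks by the
  // dominator tree DFS-in number of their parent blocks.
  bool dfsInBefore(const llvm::DominatorTree &DT, const llvm::Instruction *A,
                   const llvm::Instruction *B) {
    unsigned ANum = DT.getNode(A->getParent())->getDFSNumIn();
    unsigned BNum = DT.getNode(B->getParent())->getDFSNumIn();
    return ANum < BNum;
  }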
Reviewers: dberlin, efriedma, davide
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D48230
llvm-svn: 335150
The purpose of this utility is to make it easier for optimizations to
insert replacement dbg.values for instructions they are deleting. This
is useful in situations where salvageDebugInfo is inapplicable, say,
because the new dbg.value cannot refer to an operand of the dying value.
The utility is called insertReplacementDbgValues.
It assumes that the instruction 'From' is going to be deleted, and
inserts replacement dbg.values for each debug user of 'From'. The
newly-inserted dbg.values refer to 'To' instead of 'From'. Each
replacement dbg.value has the same location and variable as the debug
user it replaces, has a DIExpression determined by the result of
'RewriteExpr' applied to an old debug user of 'From', and is placed
before 'InsertBefore'.
This should simplify future patches, like D48331.
llvm-svn: 335144
Summary:
This is useful when hash collisions are unlikely and acceptable, e.g. in clangd
completion ranking.
Reviewers: ioeric
Subscribers: ilya-biryukov, llvm-commits
Differential Revision: https://reviews.llvm.org/D48361
llvm-svn: 335125
This patch teaches llvm-mca how to identify register writes that implicitly zero
the upper portion of a super-register.
On X86-64, a general purpose register is implemented in hardware as a 64-bit
register. Quoting the Intel 64 Software Developer's Manual: "an update to the
lower 32 bits of a 64 bit integer register is architecturally defined to zero
extend the upper 32 bits". Also, a write to an XMM register performed by an AVX
instruction implicitly zeroes the upper 128 bits of the aliasing YMM register.
This patch adds a new method named clearsSuperRegisters to the MCInstrAnalysis
interface to help identify instructions that implicitly clear the upper portion
of a super-register. The rest of the patch teaches llvm-mca how to use that new
method to obtain the information, and update the register dependencies
accordingly.
I compared the kernels from tests clear-super-register-1.s and
clear-super-register-2.s against the output from perf on btver2. Previously
there was a large discrepancy between the estimated IPC and the measured IPC.
Now the differences are mostly in the noise.
Differential Revision: https://reviews.llvm.org/D48225
llvm-svn: 335113
This fixes compilation with MinGW after SVN r333798, which added a few
functions returning std::error_code within _WIN32 ifdefs. Include
everything that is needed instead of hoping that this header is included
transitively (which it apparently is in MSVC builds).
llvm-svn: 335107
This patch introduces two helpers to make it easier to ignore debug
intrinsics:
- Instruction::getNextNonDebugInstruction()
This is just like Instruction::getNextNode(), except that it skips debug
info.
- skipDebugInfo(BasicBlock::iterator)
A free function which advances a BasicBlock iterator past any debug
info. This is a no-op when the iterator already points to a non-debug
instruction.
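Conceptually, the first helper behaves like this small sketch (not the
committed implementation):
  #include "llvm/IR/Instruction.h"
  #include "llvm/IR/IntrinsicInst.h"
  // Walk forward over the instruction list until something that is not a
  // debug intrinsic is found; return null at the end of the block.
  const llvm::Instruction *nextNonDebug(const llvm::Instruction *I) {
    for (I = I->getNextNode(); I; I = I->getNextNode())
      if (!llvm::isa<llvm::DbgInfoIntrinsic>(I))
        return I;
    return nullptr;
  }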
Part of: llvm.org/PR37728
Related to: https://reviews.llvm.org/D47874
Differential Revision: https://reviews.llvm.org/D48305
llvm-svn: 335083
insertOutlinerPrologue was not used by any target, and prologue-esque code was
beginning to appear in insertOutlinerEpilogue. Refactor that into one function,
buildOutlinedFrame.
This just removes insertOutlinerPrologue and renames insertOutlinerEpilogue.
llvm-svn: 335076
The optimizer is getting smarter (eg, D47986) about differentiating shuffles
based on its mask values, so we should make queries on the mask constant
operand generally available to avoid code duplication.
We'll probably use this soon in the vectorizers and instcombine (D48023 and
https://bugs.llvm.org/show_bug.cgi?id=37806).
We might clean up TTI a bit more once all of its current 'SK_*' options are
covered.
Differential Revision: https://reviews.llvm.org/D48236
llvm-svn: 335067
This simplifies the case where we already have access to the raw data that we need to store in a ConstantDataArray.
The new factory method can also be reused for implementing the factory method that gets the data as ArrayRef.
Differential Revision: https://reviews.llvm.org/D47706
llvm-svn: 335028
Summary:
Add WasmEHFuncInfo and routines to calculate and fill in this struct to
keep track of unwind destination information. This will be used in
other EH related passes.
Reviewers: dschuff
Subscribers: sbc100, jgravelle-google, sunfish, chrib, llvm-commits
Differential Revision: https://reviews.llvm.org/D48263
llvm-svn: 335005
CompileOnDemandLayer2 is a replacement for CompileOnDemandLayer built on the ORC
Core APIs. Functions in added modules are extracted and compiled lazily.
CompileOnDemandLayer2 supports multithreaded JIT'd code, and compilation on
multiple threads.
llvm-svn: 334967
Allow a tied operand of a different operand class in InstAliases,
so that the operand can be printed (and added to the MC instruction)
as the appropriate register. For example, 'GPR64as32', which would
be printed/parsed as a 32bit register and should match a tied 64bit
register operand, where the former is a sub-register of the latter.
This patch also generalizes the constraint checking to an overrideable
method in MCTargetAsmParser, so that target asmparsers can specify
whether a given operand satisfies the tied register constraint.
Reviewers: olista01, rengolin, fhahn, SjoerdMeijer, samparker, dsanders, craig.topper
Reviewed By: fhahn
Differential Revision: https://reviews.llvm.org/D47714
llvm-svn: 334942
DominatorTreeBase::getNode does not modify its parameter and this change
allows callers that only have access to const pointers to use it without
casting.
Reviewers: kuhar, dblaikie, chandlerc
Reviewed By: dblaikie
Differential Revision: https://reviews.llvm.org/D48231
llvm-svn: 334892
This patch adds a simple const_iterator implementation for SmallSet by
delegating to either a SmallVector::const_iterator or
std::set::const_iterator, depending on which storage is used by the
SmallSet.
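A quick usage sketch (hypothetical function; the iterator delegates to the
vector or std::set iterator depending on the current storage):
  #include "llvm/ADT/SmallSet.h"
  // Works the same whether the set is currently in small (vector) or large
  // (std::set) mode.
  int sumSmallSet() {
    llvm::SmallSet<int, 4> S;
    S.insert(3);
    S.insert(5);
    int Sum = 0;
    for (int V : S)
      Sum += V;
    return Sum;
  }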
Reviewers: dblaikie, craig.topper
Reviewed By: dblaikie
Differential Revision: https://reviews.llvm.org/D47942
llvm-svn: 334887
Summary:
Obviates the need for mask/clear/setFlags helpers.
There are some expressions here which can be simplified, but to keep
this easy to review, I have not simplified them in this patch.
No functional change.
Reviewers: sanjoy
Subscribers: hiraditya, llvm-commits
Differential Revision: https://reviews.llvm.org/D48237
llvm-svn: 334874
So far, we've only handled special cases of PatFrag like ImmLeaf. This patch
adds support for the remaining cases using similar mechanisms.
Like most C++ code from SelectionDAG, GISel and DAGISel expect to operate on
different types and representations and as such the code is not compatible
between the two. It's therefore necessary to add an alternative implementation
in the GISelPredicateCode field.
The target test for this feature could easily be done with IntImmLeaf and this
would save on a little boilerplate. The reason I've chosen to implement this
using PatFrag.GISelPredicateCode and not IntImmLeaf is because I was unable to
find a rule that was blocked solely by lack of support for PatFrag predicates. I
found that the ones I investigated as being likely candidates for the test
were further blocked by other things.
llvm-svn: 334871
Modify ExpandStrictFPOp(...) to handle nodes that have scalar
operands.
Also, add a Strict FMA test and do some other light cleanup in the
Strict FP code.
Differential Revision: https://reviews.llvm.org/D48149
llvm-svn: 334863
Enables using the high and high-adjusted symbol modifiers on thread local
storage modifiers in powerpc assembly. Needed to be able to support 64 bit
thread-pointer and dynamic-thread-pointer access sequences.
Differential Revision: https://reviews.llvm.org/D47754
llvm-svn: 334856
Add support for the "@high" and "@higha" symbol modifiers in powerpc64 assembly.
The modifiers represent accessing the segment consisting of bits 16-31 of a
64-bit address/offset.
Differential Revision: https://reviews.llvm.org/D47729
llvm-svn: 334855
This is r334704 (which was reverted in r334732) with a fix for
types like x86_fp80. We need to use getTypeAllocSizeInBits and
not getTypeStoreSizeInBits to avoid dropping debug info for
such types.
Original commit msg:
> Summary:
> Do not convert a DbgDeclare to DbgValue if the store
> instruction only refer to a fragment of the variable
> described by the DbgDeclare.
>
> Problem was seen when for example having an alloca for an
> array or struct, and there were stores to individual elements.
> In the past we inserted a DbgValue intrinsics for each store,
> just as if the store wrote the whole variable.
>
> When handling store instructions we insert a DbgValue that
> indicates that the variable is "undefined", as we do not know
> which part of the variable that is updated by the store.
>
> When ConvertDebugDeclareToDebugValue is used with a load/phi
> instruction we assert that the referenced value is large enough
> to cover the whole variable. Afaict this should be true for all
> scenarios where those methods are used on trunk. If the assert
> blows in the future I guess we could simply skip to insert a
> dbg.value instruction.
>
> In the future I think we should examine which part of the variable
> that is accessed, and add a DbgValue instrinsic with an appropriate
> DW_OP_LLVM_fragment expression.
>
> Reviewers: dblaikie, aprantl, rnk
>
> Reviewed By: aprantl
>
> Subscribers: JDevlieghere, llvm-commits
>
> Tags: #debug-info
>
> Differential Revision: https://reviews.llvm.org/D48024
llvm-svn: 334830
Instruction bundling is only supported on descendants of the
MCEncodedFragment type. By moving the bundling functionality and
MCSubtargetInfo to this class it makes it easier to set and extract the
MCSubtargetInfo when it is necessary.
This is a refactoring change that will make it easier to pass the
MCSubtargetInfo through to writeNops when nop padding is required.
Differential Revision: https://reviews.llvm.org/D45959
llvm-svn: 334814
In particular, when asked to print a MemoryAccess, we'll now print where
defs are optimized to, and we'll print optimized access types.
This patch also introduces an operator<< to make printing AliasResults
easier.
Patch by Juneyoung Lee!
Differential Revision: https://reviews.llvm.org/D47860
llvm-svn: 334760
This reverts rL331412. We didn't end up using fragment atoms
in the wasm object writer after all.
Differential Revision: https://reviews.llvm.org/D48173
llvm-svn: 334734
If WaitUntilReady is set to true then blockingLookup will return once all
requested symbols are ready. If WaitUntilReady is set to false then
blockingLookup will return as soon as all requested symbols have been
resolved. In the latter case, if any error occurs in finalizing the symbols it
will be reported to the ExecutionSession, rather than returned by
blockingLookup.
llvm-svn: 334722
In some cases, for example when compiling a preprocessed file, the
front-end is not able to provide an MD5 checksum for all files. When
that happens, omit the MD5 checksums from the final DWARF, because
DWARF doesn't have a way to indicate that some but not all files have
a checksum.
When assembling a .s file, and some but not all .file directives
provide an MD5 checksum, issue a warning and don't emit MD5 into the
DWARF.
Fixes PR37623.
Differential Revision: https://reviews.llvm.org/D48135
llvm-svn: 334710
Summary:
Do not convert a DbgDeclare to DbgValue if the store
instruction only refers to a fragment of the variable
described by the DbgDeclare.
Problem was seen when for example having an alloca for an
array or struct, and there were stores to individual elements.
In the past we inserted a DbgValue intrinsic for each store,
just as if the store wrote the whole variable.
When handling store instructions we insert a DbgValue that
indicates that the variable is "undefined", as we do not know
which part of the variable is updated by the store.
When ConvertDebugDeclareToDebugValue is used with a load/phi
instruction we assert that the referenced value is large enough
to cover the whole variable. Afaict this should be true for all
scenarios where those methods are used on trunk. If the assert
blows in the future I guess we could simply skip inserting a
dbg.value instruction.
In the future I think we should examine which part of the variable
is accessed, and add a DbgValue intrinsic with an appropriate
DW_OP_LLVM_fragment expression.
Reviewers: dblaikie, aprantl, rnk
Reviewed By: aprantl
Subscribers: JDevlieghere, llvm-commits
Tags: #debug-info
Differential Revision: https://reviews.llvm.org/D48024
llvm-svn: 334704
This is failing to compile when LLVM_ENABLE_THREADS is false,
and the fix is not immediately obvious, so reverting while I look
into it.
llvm-svn: 334658
Previously ThreadPool could only queue async "jobs", i.e. work
that was done for its side effects and not for its result. It's
useful occasionally to queue async work that returns a value.
From an API perspective, this is very intuitive. The previous
API just returned a shared_future<void>, so all we need to do is
make it return a shared_future<T>, where T is the type of value
that the operation returns.
Making this work required a little magic, but ultimately it's not
too bad. Instead of keeping a shared queue<packaged_task<void()>>
we just keep a shared queue<unique_ptr<TaskBase>>, where TaskBase
is a class with a pure virtual execute() method, then have a
templated derived class that stores a packaged_task<T()>. Everything
else works out pretty cleanly.
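A standalone sketch of the type-erasure trick described above (simplified; no
locking or worker threads, and the names are not the ThreadPool's own):
  #include <future>
  #include <memory>
  #include <queue>
  #include <utility>
  // A non-templated base lets one queue hold tasks whose result types differ.
  struct TaskBase {
    virtual ~TaskBase() = default;
    virtual void execute() = 0;
  };
  // Templated wrapper that owns the packaged_task producing a T.
  template <typename T> struct TypedTask : TaskBase {
    explicit TypedTask(std::packaged_task<T()> Task) : Task(std::move(Task)) {}
    void execute() override { Task(); }
    std::packaged_task<T()> Task;
  };
  std::queue<std::unique_ptr<TaskBase>> Tasks;
  template <typename Func>
  auto enqueue(Func F) -> std::shared_future<decltype(F())> {
    std::packaged_task<decltype(F())()> Task(std::move(F));
    auto Future = Task.get_future().share();
    Tasks.push(std::make_unique<TypedTask<decltype(F())>>(std::move(Task)));
    return Future;   // a worker would later pop the queue and call execute()
  }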
Differential Revision: https://reviews.llvm.org/D48115
llvm-svn: 334643
Returning optional is much safer.
The previous API had potential to cause use of undefined variables, if
the value passed by pointer was accidentally read afterwards.
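For illustration, the shape of the safer API looks roughly like this
(parseWidth is a made-up stand-in, not a function from this patch):
  #include "llvm/ADT/Optional.h"
  // The caller cannot read a stale out-parameter on the failure path.
  llvm::Optional<int> parseWidth(bool Valid) {
    if (!Valid)
      return llvm::None;
    return 80;
  }
  int widthOrDefault(bool Valid) {
    if (llvm::Optional<int> W = parseWidth(Valid))
      return *W;
    return 0;   // explicit default instead of an accidentally-read variable
  }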
Differential Revision: https://reviews.llvm.org/D48137
llvm-svn: 334634
On Windows we've observed that if you open a file, write to it, map it into
memory and close the file handle, the contents of the memory mapping can
sometimes be incorrect. That was what we did when adding an entry to the
ThinLTO cache using the TempFile and MemoryBuffer classes, and it was causing
intermittent build failures on Chromium's ThinLTO bots on Windows. More
details are in the associated Chromium bug (crbug.com/786127).
We can prevent this from happening by keeping a handle to the file open while
the mapping is active. So this patch changes the mapped_file_region class to
duplicate the file handle when mapping the file and close it upon unmapping it.
One gotcha is that the file handle that we keep open must not have been
created with FILE_FLAG_DELETE_ON_CLOSE, as otherwise the operating system
will prevent other processes from opening the file. We can achieve this
by avoiding the use of FILE_FLAG_DELETE_ON_CLOSE altogether. Instead,
we use SetFileInformationByHandle with FileDispositionInfo to manage the
delete-on-close bit. This lets us remove the hack that we used to use to
clear the delete-on-close bit on a file opened with FILE_FLAG_DELETE_ON_CLOSE.
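A hedged sketch of the delete-on-close management using the documented Win32
calls (error handling omitted; this is not the actual Path.inc code):
  #include <windows.h>
  // Toggle the delete-on-close bit on an already-open handle instead of
  // creating the file with FILE_FLAG_DELETE_ON_CLOSE.
  void setDeleteOnClose(HANDLE FileHandle, bool Delete) {
    FILE_DISPOSITION_INFO Disposition;
    Disposition.DeleteFile = Delete ? TRUE : FALSE;
    SetFileInformationByHandle(FileHandle, FileDispositionInfo, &Disposition,
                               sizeof(Disposition));
  }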
A downside of using SetFileInformationByHandle/FileDispositionInfo as
opposed to FILE_FLAG_DELETE_ON_CLOSE is that it prevents us from using
CreateFile to open the file while the flag is set, even within the same
process. This doesn't seem to matter for most clients of TempFile,
except for LockFileManager, which calls sys::fs::create_link to create a
hard link from the lock file, and in the process of doing so tries to open
the file. To prevent this change from breaking LockFileManager I changed it
to stop using TempFile by effectively reverting r318550.
Differential Revision: https://reviews.llvm.org/D48051
llvm-svn: 334630
Summary:
For targets I'm not familiar with, I've automatically made the "default to 1 for each resource" behaviour explicit in the td files.
For more obvious cases, I've ventured a fix.
Some notes:
- Exynos is especially fishy.
- AArch64SchedThunderX2T99.td had some truncated entries. If I understand correctly, the person who wrote that interpreted the ResourceCycle as a range. I made the decision to use the upper/lower bound for consistency with the 'Latency' value. I'm sure there is a better choice.
- The change to X86ScheduleBtVer2.td is an NFC, it just makes values more explicit.
Also see PR37310.
Reviewers: RKSimon, craig.topper, javed.absar
Subscribers: kristof.beyls, llvm-commits
Differential Revision: https://reviews.llvm.org/D46356
llvm-svn: 334586
Summary:
This method was not correct for entries in DWO files as it assumed it
could just add up the CU and DIE offsets to get the absolute DIE offset.
This is not correct for the DWO files, as here the CU offset will
reference the skeleton unit, whereas the DIE offset will be the offset
in the full unit in the DWO file.
Unfortunately, this means that we are not able to determine the absolute
DIE offset using the information in the .debug_names section alone,
which means we have to offload some of this work to the users of this
class.
To demonstrate how this can be done, I've added/fixed the ability to
lookup entries using accelerator tables in DWO files in llvm-dwarfdump.
To make this happen, I've needed to make two extra changes in other
classes:
- made the DWARFContext method to lookup a CU based on the section
offset public. I've needed this functionality to lookup a CU, and this
seems like a useful thing in general.
- made DWARFUnit::getDWOId call extractDIEsIfNeeded. Before this, the
DWOId was filled in only if the root DIE happened to be parsed
before we called the accessor. Since the lazy parsing is supposed to
happen under the hood, calling extractDIEsIfNeeded seems appropriate.
Reviewers: JDevlieghere, aprantl, dblaikie
Subscribers: mgrang, llvm-commits
Differential Revision: https://reviews.llvm.org/D48009
llvm-svn: 334578
If a VSO has a fallback definition generator attached it will be called during
lookup (and lookupFlags) for any unresolved symbols. The definition generator
can add new definitions to the VSO for any unresolved symbol. This allows VSOs
to generate new definitions on demand.
The immediate use case for this code is supporting VSOs that can import
definitions found via dlsym on demand.
llvm-svn: 334538
All COFF targets should use @IMGREL32 relocations for symbol differences
against __ImageBase. Do the same for getSectionForConstant, so that
immediates lowered to globals get merged across TUs.
Patch by Chris January
Differential Revision: https://reviews.llvm.org/D47783
llvm-svn: 334523
Apparently, MachineInstr class definition as well as pretty much all of
the machine passes assume that the only kind of MachineInstr's operands
that is variadic for variadic opcodes is explicit non-definitions.
In particular, this assumption is made by MachineInstr::defs(), uses(),
and explicit_uses() methods, as well as by MachineCSE pass.
The assumption is incorrect judging from at least TableGen backend
implementation, that recognizes variable_ops in OutOperandList, and the
very existence of G_UNMERGE_VALUES generic opcode, or ARM load multiple
instructions, all of which have variadic defs.
In particular, MachineCSE pass breaks MIR with CSE'able G_UNMERGE_VALUES
instructions in it.
This commit implements MachineInstr::getNumExplicitDefs() similar to
pre-existing MachineInstr::getNumExplicitOperands(), fixes
MachineInstr::defs(), uses(), and explicit_uses(), and fixes MachineCSE
pass.
As the issue addressed seems to affect only machine passes that could be
run mid-GlobalISel pipeline at the moment, the other passes, like
MachineLICM, aren't fixed by this commit: that could be done on a
per-pass basis when (if ever) they get adopted for GlobalISel.
Reviewed By: arsenm
Differential Revision: https://reviews.llvm.org/D45640
llvm-svn: 334520
- Do not emit the following assembler directives:
- .hsa_code_object_version
- .hsa_code_object_isa
- .amd_amdgpu_isa
- .amd_amdgpu_hsa_metadata
- .amd_amdgpu_pal_metadata
- Do not emit .note entries
- Clean up the kernel descriptor header file and bring it in sync
- Emit kernel descriptor into .rodata with appropriate relocations and
alignments
llvm-svn: 334519
This simplifies some code which had StringRefs to begin with, and
makes other code more complicated which had const char* to begin
with.
In the end, I think this makes for a more idiomatic and platform
agnostic API. Not all platforms launch process with null terminated
c-string arrays for the environment pointer and argv, but the api
was designed that way because it allowed easy pass-through for
posix-based platforms. There's a little additional overhead now
since on posix-based platforms we'll be taking StringRefs which
were constructed from null-terminated strings and then copying
them to null-terminate them again, but from a readability and
usability standpoint of the API user, I think this API signature
is strictly better.
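The extra copy mentioned above amounts to something like this sketch
(illustrative helper; Args stands in for the new StringRef-based parameter):
  #include "llvm/ADT/ArrayRef.h"
  #include "llvm/ADT/StringRef.h"
  #include <string>
  #include <vector>
  // Storage owns null-terminated copies; Argv holds the pointers that
  // execve-style APIs expect.
  std::vector<const char *> makeArgv(llvm::ArrayRef<llvm::StringRef> Args,
                                     std::vector<std::string> &Storage) {
    std::vector<const char *> Argv;
    for (llvm::StringRef Arg : Args)
      Storage.push_back(Arg.str());   // std::string re-adds the trailing '\0'
    for (const std::string &S : Storage)
      Argv.push_back(S.c_str());
    Argv.push_back(nullptr);          // argv-style arrays end with a null entry
    return Argv;
  }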
llvm-svn: 334518
As discussed on PR33744, this patch relaxes ShuffleKind::SK_Alternate which requires shuffle masks to only match an alternating pattern from its 2 sources:
e.g. v4f32: <0,5,2,7> or <4,1,6,3>
This seems far too restrictive, as most SIMD hardware will implement it using a general blend/bit-select instruction, so this patch replaces it with SK_Select, permitting elements from either source as long as they are inline:
e.g. v4f32: <0,5,2,7>, <4,1,6,3>, <0,1,6,7>, <4,1,2,3> etc.
This initial patch just updates the name and cost model shuffle mask analysis, later patch reviews will update SLP to better utilise this - it still limits itself to SK_Alternate style patterns.
Differential Revision: https://reviews.llvm.org/D47985
llvm-svn: 334513
Don't provide the assembler source as the "root file" unless the user
asked to have debug info for the assembler source (with -g).
If the source doesn't provide an explicit ".file 0" then (a) use the
compilation directory as directory #0, and (b) use the file #1 info
for file #0 also.
Differential Revision: https://reviews.llvm.org/D48055
llvm-svn: 334512
The name table occupies a big chunk of the size of the current binary-format
sample profile. In order to reduce its size, the patch changes the sample
writer/reader to save/restore the MD5 hash of each name in the name table.
The sample annotation phase will also use the MD5 hash of the name to query
samples accordingly.
Experiments show the compact binary format can reduce the size of the sample
profile by about 2/3 compared with the binary format generally.
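A small sketch of the keying scheme, assuming the existing llvm::MD5Hash
helper from llvm/Support/MD5.h (profileKey is an illustrative name):
  #include "llvm/ADT/StringRef.h"
  #include "llvm/Support/MD5.h"
  #include <cstdint>
  // Key sample lookups by a 64-bit hash of the name instead of the full string.
  uint64_t profileKey(llvm::StringRef FuncName) {
    return llvm::MD5Hash(FuncName);   // lower 64 bits of the MD5 digest
  }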
Differential Revision: https://reviews.llvm.org/D47955
llvm-svn: 334447
Summary:
This kind of functionality is useful to other project apart from clang.
LLDB works with version numbers a lot, but it does not have a convenient
abstraction for this. Moving this class to a lower level library allows
it to be freely used within LLDB.
Since this class is used in a lot of places in clang, and it used to be
in the clang namespace, it seemed appropriate to add it to the list of
adopted classes in LLVM.h to avoid prefixing all uses with "llvm::".
Also, I didn't find any tests specific for this class, so I wrote a
couple of quick ones for the more interesting bits of functionality.
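A quick usage sketch, assuming the post-move header path
llvm/Support/VersionTuple.h (function names are illustrative):
  #include "llvm/Support/VersionTuple.h"
  #include <string>
  bool isAtLeast(const llvm::VersionTuple &Seen) {
    llvm::VersionTuple Required(7, 0, 1);
    return !(Seen < Required);           // component-wise comparison
  }
  std::string describe(const llvm::VersionTuple &V) {
    return V.getAsString();              // e.g. "7.0.1"
  }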
Reviewers: zturner, erik.pilkington
Subscribers: mgorny, cfe-commits, llvm-commits
Differential Revision: https://reviews.llvm.org/D47887
llvm-svn: 334399
I took some liberties and quoted fewer characters than before,
based on an article from MSDN which says that only certain characters
cause an arg to require quoting. This seems to be incorrect, though,
and worse it seems to be a difference in Windows version. The bot
that fails is Windows 7, and I can't reproduce the failure on Win
10. But it's definitely related to quoting and special characters,
because both tests that fail have a * in the argument, which is one
of the special characters that would cause an argument to be quoted
before but not any longer after the new patch.
Since I don't have Win 7, all I can do is just guess that I need to
restore the old quoting rules. So this patch does that in hopes that
it fixes the problem on Windows 7.
llvm-svn: 334375
We currently support them only in AArch64. The NEON Reference,
however, says they are 'ARMv7, ARMv8' intrinsics.
Differential Revision: https://reviews.llvm.org/D47447
llvm-svn: 334361
This reverts commit 65243b6d19143cb7a03f68df0169dcb63e8b4632.
Seems like it's not a flake. It might have something to do with
the '*' character being in a command line.
llvm-svn: 334356
There were a few linux compilation failures, but other than that
I think this was just a flake that caused the tests to fail. I'm
going to resubmit and see if the failures go away, if not I'll
revert again.
llvm-svn: 334355
This reverts commit 10d2e88e87150a35dc367ba30716189d2af26774.
This is causing some test failures for some reason, reverting
while I investigate.
llvm-svn: 334354
This function was internal to Program.inc, but I've needed this
on several occasions when I've had to use CreateProcess without
llvm's sys::Execute functions. In doing so, I noticed that the
function was written using unsafe C-string access and was pretty
hard to understand / make sense of, so I've also re-written the
functions to use more modern LLVM constructs.
llvm-svn: 334353
This is a recommit of r333506, which was reverted in r333518.
The original commit message is below.
In r325551 many calls to malloc/calloc/realloc were replaced with calls to
their safe counterparts defined in the namespace llvm. These functions
crash if memory cannot be allocated; such behavior facilitates handling
of out-of-memory errors on Windows.
If the result of an *alloc function was checked for success, the call was
not replaced with the safe variant. In these cases the calling function did
the error handling itself, like:
T *NewElts = static_cast<T*>(malloc(NewCapacity*sizeof(T)));
if (NewElts == nullptr)
report_bad_alloc_error("Allocation of SmallVector element failed.");
Actually knowledge about the function where OOM occurred is useless. Moreover
having a single entry point for OOM handling is convenient for investigation
of memory problems. This change removes custom OOM errors handling and
replaces them with calls to functions `llvm::safe_*alloc`.
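With that, the checked pattern above collapses to a single call; a sketch
(assuming the separate include file mentioned below is
llvm/Support/MemAlloc.h):
  #include "llvm/Support/MemAlloc.h"
  #include <cstddef>
  // safe_malloc reports the allocation failure itself, so the explicit null
  // check and report_bad_alloc_error call are no longer needed at call sites.
  template <typename T> T *allocateElements(std::size_t NewCapacity) {
    return static_cast<T *>(llvm::safe_malloc(NewCapacity * sizeof(T)));
  }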
Declarations of `safe_*alloc` are moved to a separate include file, to avoid
a cyclic dependency in SmallVector.h
Differential Revision: https://reviews.llvm.org/D47440
llvm-svn: 334344
An expression like
(zext i2 {(trunc i32 (1 + %B) to i2),+,1}<%while.body> to i32)
will become zero exactly when the nested value becomes zero in its type.
Strip injective operations from the input value in howFarToZero to make
the value simpler.
Differential Revision: https://reviews.llvm.org/D47951
llvm-svn: 334318
NFC here, this just raises some platform specific ifdef hackery
out of a class and creates proper platform-independent typedefs
for the relevant things. This allows these typedefs to be
reused in other places without having to reinvent this preprocessor
logic.
llvm-svn: 334294