This adds two new utility functions findLoopControlBlock and findLoopPreheader
to MachineLoop and MachineLoopInfo. These functions are refactored and taken
from the Hexagon target as they are target independent; thus this is intended to
be a non-functional change.
Differential Revision: https://reviews.llvm.org/D22959
llvm-svn: 278661
The new version has several advantages:
1) IMHO it's more readable and neater
2) It handles loads and stores properly
3) It can handle any number of incoming blocks rather than just two. I'll be taking advantage of this in a followup patch.
With this change we can now finally sink load-modify-store idioms such as:
if (a)
return *b += 3;
else
return *b += 4;
=>
%z = load i32, i32* %y
%.sink = select i1 %a, i32 3, i32 4
%b = add i32 %z, %.sink
store i32 %b, i32* %y
ret i32 %b
When this works for switches it'll be even more powerful.
llvm-svn: 278660
Summary:
The assembler currently does not check the branch target for CBZ/CBNZ
instructions, which only permit branching forwards with a positive offset. This
adds validation for the branch target to ensure negative PC-relative offsets are
not encoded into the instruction, whether specified as a literal or as an
assembler symbol.
Reviewers: rengolin, t.p.northover
Subscribers: llvm-commits, rengolin
Differential Revision: https://reviews.llvm.org/D23312
llvm-svn: 278659
If a loop is not rotated (for example when optimizing for size), the latch is not the backedge. If we promote an expression to post-inc form, we increase register pressure and add a COPY not only for that IV expression but for all IVs!
Motivating testcase:
void f(float *a, float *b, float *c, int n) {
while (n-- > 0)
*c++ = *a++ + *b++;
}
It's imperative that the pointer increments be located in the latch block and not the header block; if not, we cannot use post-increment loads and stores and we have to keep both the post-inc and pre-inc values around until the end of the latch which bloats register usage.
llvm-svn: 278658
We are trying to prove that one group of operands is a subset of
another. We did this by populating two Sets and determining that every
element within one was inside the other.
However, this is unnecessary. We can simply construct a single set and
test if each operand is within it.
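For illustration, a minimal sketch of the single-set approach (the names and element type are hypothetical, not the code touched here):
#include <unordered_set>
#include <vector>
// Build one set from the candidate superset, then test each operand against it.
static bool isSubsetOf(const std::vector<const void *> &Operands,
                       const std::vector<const void *> &Superset) {
  std::unordered_set<const void *> Set(Superset.begin(), Superset.end());
  for (const void *Op : Operands)
    if (!Set.count(Op)) // one membership test per operand
      return false;
  return true;
}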
llvm-svn: 278641
1. Use shuffle to insert element i1 into a vector. The previous implementation was incorrect (dest_bit OR src_bit; it doesn't clear the bit if src_bit=0).
2. Improve shuffling of i1 vectors: use CVT2MASK if supported instead of TRUNCATE.
Differential Revision: http://reviews.llvm.org/D23347
llvm-svn: 278623
IRCE has the ability to further version pre-loops and post-loops that it
created, but this isn't useful at all. This change teaches IRCE to
leave behind some metadata in the loops it creates (by cloning the main
loop) so that these new loops are not re-processed by IRCE.
Today this bug is hidden by another bug -- IRCE does not update LoopInfo
properly so the loop pass manager does not re-invoke IRCE on the loops
it split out. However, once the latter is fixed the bug addressed in
this change causes IRCE to infinite-loop in some cases (e.g. it splits
out a pre-loop, a pre-pre-loop from that, a pre-pre-pre-loop from that
and so on).
llvm-svn: 278617
The auto-upgrade path could be called before the VST (global
names) was fully parsed, and thus intrinsic names were not
available and the autoupgrade logic could not operate.
Fix link failures with ThinLTO.
This is a recommit of r278610 with a different fix.
llvm-svn: 278615
LowerTargetConstantPool is not properly setting the TargetFlag to indicate the
desired relocation. This was a coding error: the offset parameter was omitted, so the
TargetFlag was used as the offset, and the TargetFlag defaulted to zero.
This only affects -fpic compilation, and only those items created in a
Constant Pool, for example a vector of constants. Halide ran into this issue.
llvm-svn: 278614
The (negative) test case is supposed to check that IRCE does not muck
with range checks it cannot handle, not that it does the right thing in
the absence of profiling information.
llvm-svn: 278612
Loops containing `indirectbr` may not be in simplified form, even after
running LoopSimplify. Reject them gracefully, instead of tripping an
assert.
llvm-svn: 278611
The auto-upgrade path could be called before the VST (global
names) was fully parsed, and thus intrinsic names were not
available and the autoupgrade logic could not operate.
Fix link failures with ThinLTO.
llvm-svn: 278610
Summary:
Refactor the existing support into a LoopDataPrefetch implementation
class and a LoopDataPrefetchLegacyPass class that invokes it.
Add a new LoopDataPrefetchPass for the new pass manager that utilizes
the LoopDataPrefetch implementation class.
Reviewers: mehdi_amini
Subscribers: sanjoy, mzolotukhin, nemanjai, llvm-commits
Differential Revision: https://reviews.llvm.org/D23483
llvm-svn: 278591
`IVVisitor::visitCast` used to have the invariant that if the
instruction it was passed was a sext or zext instruction, the result of
the instruction would be wider than the induction variable. This is no
longer true after rL275037, so this change teaches `IndVarSimplify`'s
implementation of `IVVisitor::visitCast` to work with the relaxed
invariant.
A corresponding change to SimplifyIndVar to preserve the said invariant
after rL275037 would also work, but given how `IVVisitor::visitCast` is
spelled (no indication of said invariant), I figured the current fix is
cleaner.
Fixes PR28935.
llvm-svn: 278584
Summary:
In getVectorizablePrefix, this is less efficient (because we have to
iterate over the BB twice), but boy is it simpler. Given how much
trouble we've had here, I think the simplicity gain is worthwhile.
In reorder(), this is actually more efficient, as
DominatorTree::dominates iterates over the BB from the beginning when
the two instructions are in the same BB.
Reviewers: asbirlea
Subscribers: arsenm, llvm-commits, mzolotukhin
Differential Revision: https://reviews.llvm.org/D23472
llvm-svn: 278580
Summary:
This test was resulting in asan/valgrind failures due to undefined
DWARF register mappings for WebAssembly, and was disabled in r278495.
These have been resolved.
Reviewers: sunfish, dschuff
Subscribers: bkramer, llvm-commits, jfb
Differential Revision: https://reviews.llvm.org/D23459
llvm-svn: 278576
Summary:
If the backend does not define LLVM/DWARF register mappings, the associated
variables are undefined since the map initializer is called by auto-generated
TableGen routines. This patch initializes the pointers and sizes to nullptr
and zero, respectively, and checks that they are valid before searching
for a mapping.
Reviewers: grosbach, dschuff
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D23458
llvm-svn: 278574
InnerLoopVectorizer shouldn't handle a loop with cycles inside the loop
body, even if that cycle isn't a natural loop.
Fixes PR28541.
Differential Revision: https://reviews.llvm.org/D22952
llvm-svn: 278573
On a Windows build of Chromium, r278532 (up to r278539) broke
X86FrameLowering::emitEpilogue because it wasn't wary enough of the
return of MachineBasicBlock::getFirstTerminator. Guard all the uses
here.
Note that r278532 *looks* like an NFC commit (just an API change), but
it removes a couple of layers of abstraction and is probably causing
optimization differences in MSVC.
llvm-svn: 278572
They aren't static, and moving them to the entry block across something
else will only result in tears.
Root cause of http://crbug.com/636558.
llvm-svn: 278571
PatternMatch has some paths which can operate on constant instructions,
but not all of them. This adds a version of m_Value() that returns const Value*,
and changes the ICmp matching to use auto so that it can match both constant and
mutable instructions.
Tests also included for both mutable and constant ICmpInst matching.
This will be used in a future commit to constify ValueTracking.cpp.
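A hedged usage sketch (the surrounding helper is hypothetical; it only illustrates binding const Value* with the new overload):
#include "llvm/IR/Instructions.h"
#include "llvm/IR/PatternMatch.h"
using namespace llvm;
using namespace llvm::PatternMatch;
// Read-only analysis over a const Value*, relying on the const m_Value() overload.
static bool isSignedCompare(const Value *V) {
  ICmpInst::Predicate Pred;
  const Value *LHS, *RHS;
  if (match(V, m_ICmp(Pred, m_Value(LHS), m_Value(RHS))))
    return ICmpInst::isSigned(Pred);
  return false;
}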
llvm-svn: 278570
This bring LLVM-generated PTX closer to what nvcc generates and avoids
triggering issues in ptxas.
For instance, ptxas does not accept .s16 (or .u16) registers as operands
for .fp16 instructions.
Differential Revision: https://reviews.llvm.org/D23460
llvm-svn: 278568
Summary:
The BitcodeWriterPass was ported a couple of years ago and predates the
PassInfoMixin. Derive BitcodeWriterPass from that base class.
Should BitcodeWriterPass be added to the PassRegistry.def file? It seems
like that is only for passes that can be added arbitrarily, e.g. via the
-passes flag to the opt tool. Whereas the bitcode writer is added
specially based on the output type (and requires an output stream and
other parameters). For now I have left it out of the PassRegistry, but
let me know if it should go there.
Finally, I was considering an NFC change of the legacy WriteBitcodePass
to BitcodeWriterLegacyPass to make its usage clearer and more consistent
with other legacy passes. WDYT?
Reviewers: mehdi_amini
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D23465
llvm-svn: 278566
Trunk would try to create something like "stp x9, x8, [x0], #512", which isn't actually a valid instruction.
Differential revision: https://reviews.llvm.org/D23368
llvm-svn: 278559
This contains the two missing checks for LC_SEGMENT load command fields,
and checks for the Mach-O section fields that would make them invalid.
With the new checks, some of the existing malformed-file tests now trip one
of these instead of the issue they were hitting before, so those tests were adjusted.
llvm-svn: 278557
Summary: It triggers exponential behavior when the DAG has many branches.
Reviewers: hfinkel, kbarton
Subscribers: iteratee, nemanjai, echristo
Differential Revision: https://reviews.llvm.org/D23428
llvm-svn: 278548
The original `ExecuteCommand()` called `system()` from the C library.
The C library implementation of this on macOS contains a mutex which
serializes calls to `system()`. This prevented the `-jobs=` flag
from running copies of the fuzzing binary in parallel which is
the opposite of what is intended.
To fix this on macOS an alternative implementation of `ExecuteCommand()`
is provided that can be used concurrently. This is provided in
`FuzzerUtilDarwin.cpp` which is guarded to only compile code on Apple
platforms. The existing implementation has been moved to a new file
`FuzzerUtilLinux.cpp` which is guarded to only compile code on Linux.
This commit includes a simple test to check that LibFuzzer is being
executed in parallel when requested.
Differential Revision: https://reviews.llvm.org/D22742
llvm-svn: 278544
No one is using the capability to implement next and prev another way
(since lld stopped doing it in r278468). Remove the customization point
by moving the API from ilist_nextprev_traits<T> to ilist_node_access.
The old traits class is still useful/necessary API as a target for
friends of node types that inherit privately from ilist_node.
Eventually I plan to either remove it entirely or move the template
parameters to the methods.
(Note: if there's desire to bring back customization of next/prev
pointers in the future (e.g., to pack some bits in there), I think a
traits class like this is an awkward way to accomplish it. Instead, we
should change ilist<T> to be ilist<ilist_node<T>>, and give an extra
template parameter to ilist_node.)
llvm-svn: 278532
This is part of an effort to constify ValueTracking.cpp. This change is
to methods which need const Value* instead of Value* to go with the upcoming
changes to ValueTracking.
llvm-svn: 278528
Summary: The refined propagation algorithm is more accurate and robust.
Reviewers: davidxl, dnovillo
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D23224
llvm-svn: 278522
Currently X86ISelLowering has a similar transformation for sexts:
sext(add_nsw(x, C)) --> add(sext(x), C_sext)
In this change I extend this code to handle zexts as well.
Reviewed By: spatel
Differential Revision: https://reviews.llvm.org/D23359
llvm-svn: 278520
Recursive calls to aliasCheck from alias[GEP|Select|PHI] may result in a second call to GetUnderlyingObject for a Value whose underlying object has already been computed. This patch ensures that in these situations the underlying object is not computed again and the result of the previous call is reused.
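For illustration, a memoization sketch under assumed plumbing (the cache and helper are hypothetical, not the actual BasicAA change):
#include "llvm/ADT/DenseMap.h"
#include "llvm/Analysis/ValueTracking.h"
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/Value.h"
using namespace llvm;
// Reuse a previously computed underlying object instead of walking the
// value chain again on recursive alias queries.
static Value *getUnderlyingObjectCached(Value *V, const DataLayout &DL,
                                        DenseMap<Value *, Value *> &Cache) {
  auto It = Cache.find(V);
  if (It != Cache.end())
    return It->second;                        // previously computed
  Value *Result = GetUnderlyingObject(V, DL); // potentially expensive walk
  Cache[V] = Result;
  return Result;
}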
https://reviews.llvm.org/D22305
llvm-svn: 278519
This re-factoring could cause the following slight changes in generated
code, though none were observed during testing:
- MachineScheduler could decide not to cluster some loads/stores if
there are other load/stores with non-pairable opcodes that have the
same base register and offset as a pairable set of load/stores. One
case of different MachineScheduler pairing did show up in my testing,
but it wasn't due to this issue, but rather to
BaseMemOpClusterMutation::clusterNeighboringMemOps() being unstable
w.r.t. the order it considers memory operations. See PR28942.
- The ImplicitNullChecks optimization could be done for more load/store
opcodes. This optimization isn't done for C/C++ code, so it didn't
show up in my testing.
Reviewers: mcrosier, t.p.northover
Subscribers: aemerson, rengolin, mcrosier, llvm-commits
Differential Revision: https://reviews.llvm.org/D23365
llvm-svn: 278515
Rewrite the Visited[Cond] = getValueFromConditionImpl(..., Visited) statement, which can lead to memory corruption, since getValueFromConditionImpl changes the Visited map and invalidates its iterators.
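A generic illustration of the hazard and the fix (the names are hypothetical, not the LVI code itself):
#include "llvm/ADT/DenseMap.h"
using namespace llvm;
// DenseMap insertion can grow the table and invalidate references, so the slot
// obtained for Visited[Cond] on the left-hand side may be stale by the time the
// call on the right-hand side (which also inserts into Visited) returns.
static int getValueFromConditionImpl(int Cond, DenseMap<int, int> &Visited) {
  if (Cond <= 0)
    return 0;
  int Sub = getValueFromConditionImpl(Cond - 1, Visited); // may insert and grow
  Visited[Cond - 1] = Sub;
  return Sub;
}
static void record(int Cond, DenseMap<int, int> &Visited) {
  // Unsafe: Visited[Cond] = getValueFromConditionImpl(Cond, Visited);
  // Safe: evaluate the call first, then store the result.
  int Result = getValueFromConditionImpl(Cond, Visited);
  Visited[Cond] = Result;
}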
llvm-svn: 278514
Share code for the (mostly problematic) embedded sentinel traits.
- Move the LLVM_NO_SANITIZE("object-size") attribute to
ilist_half_embedded_sentinel_traits and ilist_embedded_sentinel_traits
(previously it was spread across the duplicated code).
- Add an ilist_full_embedded_sentinel_traits which has no UB (but has
the downside of storing the complete node).
- Replace all the custom sentinel traits in LLVM with a declaration of
ilist_sentinel_traits that inherits from one of the embedded sentinel
traits classes.
There are still custom sentinel traits in other LLVM subprojects. I'll
remove those in a follow-up.
Nothing at all should be changing here, this is just rearranging code.
Note that the final goal here is to remove the sentinel traits
altogether, settling on the memory layout of
ilist_half_embedded_sentinel_traits without the UB. This intermediate
step moves the logic into ilist.h.
llvm-svn: 278513
lto::InputFile::Symbol::getCommonSize should return uint64_t instead of
size_t since it is returning the result of DataLayout::getTypeAllocSize
which returns uint64_t, and the result of getCommonSize is assigned to a
uint64_t variable. On 32-bit builds size_t is unsigned int and there are
type errors. This was introduced in r278338.
llvm-svn: 278512
...and the two followup commits:
Revert "[Sparc][Leon] Missed resetting option flags from check-in 278489."
Revert "[Sparc][Leon] Errata fixes for various errata in different
versions of the Leon variants of the Sparc 32 bit processor."
This reverts commit r274856, r278489, and r278492.
llvm-svn: 278511
Summary:
Port the NameAnonFunction pass and add a test.
Depends on D23439.
Reviewers: mehdi_amini
Subscribers: llvm-commits, mehdi_amini
Differential Revision: https://reviews.llvm.org/D23440
llvm-svn: 278509
Summary:
Port the ModuleSummaryAnalysisWrapperPass to the new pass manager.
Use it in the ported BitcodeWriterPass (similar to how we use the
legacy ModuleSummaryAnalysisWrapperPass in the legacy WriteBitcodePass).
Also, pass the -module-summary opt flag through to the new pass
manager pipeline and through to the bitcode writer pass, and add
a test that uses it.
Reviewers: mehdi_amini
Subscribers: llvm-commits, mehdi_amini
Differential Revision: https://reviews.llvm.org/D23439
llvm-svn: 278508
Take range metadata into account for conditions like this:
%length = load i32, i32* %length_ptr, !range !{i32 0, i32 2147483647}
%cmp = icmp ult i32 %a, %length
This is a common pattern for range checks where the length of the array is dynamically loaded.
Reviewed By: sanjoy
Differential Revision: https://reviews.llvm.org/D23267
llvm-svn: 278496
The PALIGNR target shuffle decode was not taking into account that DecodePALIGNRMask (rather oddly) expects the operands to be in reverse order, nor was it detecting unary patterns, causing combines to combine with the incorrect input.
The cgbuiltin, auto upgrade and instruction comments code correctly swap the operands, so they are not affected.
llvm-svn: 278494
Currently LVI can only gather value constraints from comparisons like:
* icmp <pred> Val, ...
* icmp ult (add Val, Offset), ...
In fact we can handle any predicate in the latter comparisons.
Reviewed By: sanjoy
Differential Revision: https://reviews.llvm.org/D23357
llvm-svn: 278493
The nature of the errata is listed in the comments preceding the errata fix passes. Relevant unit tests are implemented for each of these.
These changes update older versions of these errata fixes with improvements to code and unit tests.
Differential Revision: https://reviews.llvm.org/D21960
llvm-svn: 278489
Summary:
1. Make the coroutine representation more robust against optimizations that may duplicate instructions by introducing a coro.id intrinsic that returns a token that will get fed into coro.alloc and coro.begin. Because coro.id returns a token, it won't get duplicated and can be used as a reliable indicator of coroutine identity when a particular coroutine call gets inlined.
2. Move the last three arguments of coro.begin into coro.id, as they can be shared if coro.begin gets duplicated.
3. doc + test + code updated to support the new intrinsic.
Reviewers: mehdi_amini, majnemer
Subscribers: mehdi_amini, llvm-commits
Differential Revision: https://reviews.llvm.org/D23412
llvm-svn: 278481
Remove all ilist_iterator to pointer casts. There were two reasons for
casts:
- Checking for an uninitialized (i.e., null) iterator. I added
MachineInstrBundleIterator::isValid() to check for that case.
- Comparing an iterator against the underlying pointer value while
avoiding converting the pointer value to an iterator. This is
occasionally necessary in MachineInstrBundleIterator, since there is
an assertion in the constructors that the underlying MachineInstr is
not bundled (but we don't care about that if we're just checking for
pointer equality).
To support the latter case, I rewrote the == and != operators for
ilist_iterator and MachineInstrBundleIterator.
- The implicit constructors now use enable_if to exclude
const-iterator => non-const-iterator conversions from overload
resolution (previously it was a compiler error on instantiation, now
it's SFINAE).
- The == and != operators are now global (friends), and are not
templated.
- MachineInstrBundleIterator has overloads to compare against both
const_pointer and const_reference. This avoids the implicit
conversions to MachineInstrBundleIterator that assert, instead just
checking the address (and I added unit tests to confirm this).
Notably, the only remaining uses of ilist_iterator::getNodePtrUnchecked
are in ilist.h, and no code outside of ilist*.h directly relies on this
UB end-iterator-to-pointer conversion anymore. It's still needed for
ilist_*sentinel_traits, but I'll clean that up soon.
llvm-svn: 278478
Allow an ilist_iterator to be constructed from an ilist_node, and give
access to the underlying ilist_node as well.
This will be used immediately in lld to support a type-erasure use case.
Longer term, they'll stick around once the iterator is using
ilist_node<NodeTy>* instead of NodeTy*.
llvm-svn: 278467
"insert_subreg, subreg_to_reg, and reg_sequence" instructions' after
adjusting some unittest checks.
This is to solve PR28852. The restriction was added at 2010 to make better register
coalescing. We assumed that it was not necessary any more. Testing results on x86
supported the assumption.
We will look closely to any performance impact it will bring and will be prepared
to help analyzing performance problem found on other architectures.
Differential Revision: https://reviews.llvm.org/D23210
llvm-svn: 278466
To fix PR28014, this patch restricts tail merging to blocks that belong to the
same loop after MBP.
Differential Revision: https://reviews.llvm.org/D23191
llvm-svn: 278463
This method had some duplicate code between the cases where we did and did not have a dom tree. Refactor
it to remove the duplication and also clean up the control flow.
llvm-svn: 278450
Summary: This is a follow-up to r278389, where I introduced the bug.
Reviewers: mehdi_amini
Differential Revision: https://reviews.llvm.org/D23436
llvm-svn: 278442
Summary:
Notice that the data layout is changed: instead of using
std::pair<PointerIntPair<NodeType*, 1>, ChildItTy>, now use
std::pair<NodeRef, Optional<ChildItTy>>.
An NFC but noteworthy change is operator==(): since we only compare
an iterator against end(), it's better to put an assert there so that
people notice when it fails.
Reviewers: dblaikie, chandlerc
Subscribers: mzolotukhin, llvm-commits
Differential Revision: https://reviews.llvm.org/D23146
llvm-svn: 278437
There were 2 versions of this method. A public one which takes a
const Instruction* and a private implementation which takes a mutable
Value* and casts to an Instruction*.
There was no need for the 2 versions as all callers pass a const Instruction*
and there was no need for a mutable pointer as we only do analysis here.
llvm-svn: 278434
Summary:
This patch adds an IsVariadicFunction bit to the summary in order
to not import variadic functions. The inliner doesn't inline
variadic functions because they are hard to reason about.
This one small fix improves the importer by about 16%
(going from 86% to 100% of imported functions that are
inlined anywhere)
on some spec benchmarks like 'int' and others.
Reviewers: eraman, mehdi_amini, tejohnson
Subscribers: mehdi_amini, llvm-commits
Differential Revision: https://reviews.llvm.org/D23339
llvm-svn: 278432
This helped to improve the memory-folding and register-coalescing optimizations.
Also, this patch fixed tracker issue #17229.
Reviewer: Craig Topper.
Differential Revision: https://reviews.llvm.org/D23108
llvm-svn: 278431
It's sharing the integer G_CONSTANT for now since I don't *think* it creates
any ambiguity (even on weird archs). If that turns out wrong we can create a
G_PTRCONSTANT or something.
llvm-svn: 278423
When legal, extending trip count in the loop control logic generates better code compared to truncating IV. This is because
(1) extending trip count is a loop invariant operation (see genLoopLimit where we prove trip count is loop invariant).
(2) Scalar Evolution seems to have problems understanding trunc when computing the loop trip count. So removing them allows better analysis to be performed in Scalar Evolution. (In particular this fixes PR 28363, which is the motivation for this change).
I am not going to perform any performance test. Any degradation caused by this should be an indication of a bug elsewhere.
To prove legality, we rely on SCEV to prove zext(trunc(IV)) == IV (or similarly for sext). If this holds, we can prove the equivalence of trunc(IV)==ExitCnt (1) and IV == zext(ExitCnt). Simply take the zext of both sides of (1) and apply the proven equivalence.
This commit contains changes in a newly added testcase which was not included in the previous commit (which was reverted later on).
https://reviews.llvm.org/D23075
llvm-svn: 278421
Summary:
This is an extension of the fix in r271424. That fix dealt with builder
insert points being moved by SCEV expansion, but only for the lifetime
of the expand call. This change modifies the interface so that LSR can
safely call expand multiple times at the same insert point and do the
right thing if one of the expansions decides to move the original insert
point.
This is a fix for PR28719.
Reviewers: sanjoy
Subscribers: llvm-commits, mcrosier, mzolotukhin
Differential Revision: https://reviews.llvm.org/D23342
llvm-svn: 278413
Summary:
This fixes PR 28933 by making sure GVNHoist does not try to recreate memory
accesses when it has not actually moved them.
Reviewers: sebpop
Subscribers: llvm-commits, george.burgess.iv
Differential Revision: https://reviews.llvm.org/D23411
llvm-svn: 278401
Summary: Make Optional's behavior the same as the coming std::optional.
Reviewers: dblaikie
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D23178
llvm-svn: 278397
Check MachineInstr::isDebugValue for the same instruction as we're
calling isSchedBoundary, avoiding the possibility of dereferencing
end().
This is a functionality change even when I!=end(). Matthias had a look
and agrees this is the right resolution (as opposed to checking for
end()).
This is triggered by a huge number of tests, but they happen to
magically pass right now. I found this because WIP patches for PR26753
convert them into crashes.
llvm-svn: 278394
Summary:
Keep track of all methods for which we have devirtualized at least
one call and then print them sorted alphabetically. That allows us to
avoid duplicates and also makes the order deterministic.
Add optimization names to the remarks, so that it's easier to
understand how each method has been devirtualized.
Fix a bug when wrong methods could have been reported for
tryVirtualConstProp.
Reviewers: kcc, mehdi_amini
Differential Revision: https://reviews.llvm.org/D23297
llvm-svn: 278389
subreg_to_reg, and reg_sequence" instructions.
This is to solve PR28852. The restriction was added at 2010 to make better register
coalescing. We assumed that it was not necessary any more. Testing results on x86
supported the assumption.
We will look closely to any performance impact it will bring and will be prepared
to help analyzing performance problem found on other architectures.
Differential Revision: https://reviews.llvm.org/D23210
llvm-svn: 278384
This adds a createFunctionInliningPass pass that takes an InlineParams object and uses it to create the pre-inliner pass. This prevents the regular inliner's threshold flag from influencing the preinliner.
Differential revision: https://reviews.llvm.org/D23377
llvm-svn: 278377
From the point of view of register assignment, byval parameters are
ignored: a byval parameter is not going to be assigned to a register,
and it will not affect the assignments of subsequent parameters.
When matching registers with parameters in the bit tracker, make sure
to skip byval parameters before advancing the registers.
llvm-svn: 278375
Summary: Some backends, like WebAssembly, use virtual registers instead of physical registers. This crashes the DbgValueHistoryCalculator pass, which assumes that all registers are physical. Instead, skip virtual registers when iterating aliases, and assume that they are clobbered.
Reviewers: dexonsmith, dschuff, aprantl
Subscribers: yurydelendik, llvm-commits, jfb, sunfish
Differential Revision: https://reviews.llvm.org/D22590
llvm-svn: 278371
Check for end() before skipping through debug values. This avoids
dereferencing end() when the instruction is the final one in the basic
block. (It still assumes that a debug value will not be the final
instruction in the basic block. No tests seemed to violate that.)
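A minimal sketch of the guarded skip (the helper is hypothetical, not the patched function itself):
#include "llvm/CodeGen/MachineBasicBlock.h"
#include "llvm/CodeGen/MachineInstr.h"
using namespace llvm;
// Advance past DBG_VALUE instructions without ever dereferencing end().
static MachineBasicBlock::iterator
skipDebugValues(MachineBasicBlock &MBB, MachineBasicBlock::iterator I) {
  while (I != MBB.end() && I->isDebugValue()) // check end() before isDebugValue()
    ++I;
  return I; // may be end(); callers must handle that
}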
Many Hexagon tests trigger this, but they happen to magically pass right
now. I found this because WIP patches for PR26753 convert them into
crashes.
llvm-svn: 278355
After machine block placement, MBBs may not have terminators, and it is
appropriate to check for the end iterator here. We can fold the check
into the next if, as well. This loop is really just looking for BBs that
end in CATCHRET.
llvm-svn: 278350
ExecutionEngine::runFunction is supposed to allow execution of arbitrary
function types, but MCJIT can only reasonably support a limited subset of
main-like function types. This patch documents this limitation, and fixes
MCJIT::runFunction to abort with a meaningful error at runtime if called with
an unsupported function type.
llvm-svn: 278348
End iterators are usually sentinels, not actually Instruction* at all.
Stop casting to it just to get an iterator back.
There is likely no observable functionality change here right now
(although this is relying on UB, I doubt it was triggering anything),
but I'll be removing the cast soon.
llvm-svn: 278346
Check for an end iterator from MachineBasicBlock::getFirstTerminator in
llvm::getFuncletMembership. If this is turned into an assertion, it
fires in 48 X86 testcases (for example,
CodeGen/X86/regalloc-spill-at-ehpad.ll).
Since this is likely a latent bug (shouldn't all basic blocks end with a
terminator?) I've filed PR28938.
llvm-svn: 278344
This restores commit r278330, with fixes for a few bot failures:
- Fix a late change I had made to the save temps output file that I
missed due to existing files sitting on my disk
- Fix a bunch of Windows bot failures with "ambiguous call to overloaded
function" due to confusion between llvm::make_unique vs
std::make_unique (preface the new make_unique calls with "llvm::")
- Attempt to fix a modules bot failure by adding a missing include
to LTO/Config.h.
Original change:
Resolution-based LTO API.
Summary:
This introduces a resolution-based LTO API. The main advantage of this API over
existing APIs is that it allows the linker to supply a resolution for each
symbol in each object, rather than the combined object as a whole. This will
become increasingly important for use cases such as ThinLTO which require us
to process symbol resolutions in a more complicated way than just adjusting
linkage.
Patch by Peter Collingbourne.
Reviewers: rafael, tejohnson, mehdi_amini
Subscribers: lhames, tejohnson, mehdi_amini, llvm-commits
Differential Revision: https://reviews.llvm.org/D20268
llvm-svn: 278338
When legal, extending trip count in the loop control logic generates better code compared to truncating IV. This is because
(1) extending trip count is a loop invariant operation (see genLoopLimit where we prove trip count is loop invariant).
(2) Scalar Evolution seems to have problems understanding trunc when computing the loop trip count. So removing them allows better analysis to be performed in Scalar Evolution. (In particular this fixes PR 28363, which is the motivation for this change).
I am not going to perform any performance test. Any degradation caused by this should be an indication of a bug elsewhere.
To prove legality, we rely on SCEV to prove zext(trunc(IV)) == IV (or similarly for sext). If this holds, we can prove the equivalence of trunc(IV)==ExitCnt (1) and IV == zext(ExitCnt). Simply take the zext of both sides of (1) and apply the proven equivalence.
https://reviews.llvm.org/D23075
llvm-svn: 278334
This reverts commit r278330.
I made a change to the save temps output that is causing issues with the
bots. Didn't realize this because I had older output files sitting on
disk in my test output directory.
llvm-svn: 278331
Summary:
This introduces a resolution-based LTO API. The main advantage of this API over
existing APIs is that it allows the linker to supply a resolution for each
symbol in each object, rather than the combined object as a whole. This will
become increasingly important for use cases such as ThinLTO which require us
to process symbol resolutions in a more complicated way than just adjusting
linkage.
Patch by Peter Collingbourne.
Reviewers: rafael, tejohnson, mehdi_amini
Subscribers: lhames, tejohnson, mehdi_amini, llvm-commits
Differential Revision: https://reviews.llvm.org/D20268
Address review comments
llvm-svn: 278330
The previous implementation (not custom) doesn't enforce zeroing of the upper bits. The assumption is that the i1 PRODUCER (truncate and extractelement) must zero all upper bits, so i1 CONSUMER instructions (test, zext, save, etc.) can be done without additional zeroing.
Make extractelement i1 lowering custom for all vector i1.
Differential Revision: http://reviews.llvm.org/D23246
llvm-svn: 278328
This patch helps avoid false dependencies on undef registers by updating the machine instructions' undef operand to use a register that the instruction is truly dependent on, or use a register with clearance higher than Pref.
Pseudo example:
loop:
xmm0 = ...
xmm1 = vcvtsi2sdl eax, xmm0<undef>
... = inst xmm0
jmp loop
In this example, selecting xmm0 as the undef register creates false dependency between loop iterations.
This false dependency cannot be solved by inserting an xor before vcvtsi2sdl because xmm0 is alive at the point of the vcvtsi2sdl instruction.
Selecting a different register instead of xmm0, especially a register that is not used in the loop, will eliminate this problem.
Differential Revision: https://reviews.llvm.org/D22466
llvm-svn: 278321
Change --no-pgo-warn-missing to -pgo-warn-missing-function
and negate the default. /NFC
Add more tests to make sure the warning is off by default
llvm-svn: 278314
The YAML representation was always outputting the root node of an export trie even if the trie was empty. While this doesn't really have any functional impact, it does add visual clutter to the yaml file.
llvm-svn: 278307
It's more than just inttoptr, but the others can't be tested until we have
support for non-trivial constants (they currently get unavoidably folded to a
ConstantInt).
llvm-svn: 278303
Summary:
This patch define and implement amdgcn image intrinsics with sampler.
1. define vdata type to be llvm_anyfloat_ty, address type to be llvm_anyfloat_ty,
and rsrc type to be llvm_anyint_ty. As a result, we expect the intrinsic name
to have three suffixes to overload each of these three types;
2. D128 as well as two other flags are implied in the three types, for example,
if you use v8i32 as resource type, then r128 is 0!
3. don't expose TFE flag, and other flags are exposed in the instruction order:
unrm, glc, slc, lwe and da.
Differential Revision: http://reviews.llvm.org/D22838
Reviewed by:
arsenm and tstellarAMD
llvm-svn: 278291
Summary:
I think it is much better this way.
When I first saw the line:
Cost += InlineConstants::LastCallToStaticBonus;
I thought that this was a bug, because everywhere else where the cost is being reduced
it uses -=.
Reviewers: eraman, tejohnson, mehdi_amini
Subscribers: llvm-commits, mehdi_amini
Differential Revision: https://reviews.llvm.org/D23222
llvm-svn: 278290
If AnalyzeBranch can't analyze a block and it is possible to
fallthrough, then duplicating the block doesn't make sense, as only one
block can be the layout predecessor for the un-analyzable fallthrough.
Submitted with a test case, but NOTE: the test case doesn't currently
fail. However, the test case fails with D20505 and would have saved me
some time debugging.
llvm-svn: 278288
The export table is not considered part of the object file symbol table,
so we have to look through it separately.
Reviewers: kcc
Differential Revision: https://reviews.llvm.org/D23321
llvm-svn: 278284
Insert before the skip branch if one is created.
This is a somewhat more natural placement relative
to the skip branches, and makes it possible to implement
analyzeBranch for skip blocks.
The test changes are mostly due to a quirk where
the block label is not emitted if there is a terminator
that is not also a branch.
llvm-svn: 278273
Add the $arch-registered-target features that clang uses to disable
tests that require a registered backend, so that we can run the sancov
tests on Windows. LLVM's lit suite did not appear to have a per-test way
to do this, and I would rather not split up the sancov tests into
architecture directories.
Split out of https://reviews.llvm.org/D23321
llvm-svn: 278271
Summary:
See the new test case for one that was (non-deterministically) crashing
on trunk and deterministically hit the assertion that I added in D23302.
Basically, the machine function contains a sequence
DS_WRITE_B32 %vreg4, %vreg14:sub0, ...
DS_WRITE_B32 %vreg4, %vreg14:sub0, ...
%vreg14:sub1<def> = COPY %vreg14:sub0
and SILoadStoreOptimizer::mergeWrite2Pair merges the two DS_WRITE_B32
instructions into one before calling repairIntervalsInRange.
Now repairIntervalsInRange wants to repair %vreg14, in particular, and
ends up trying to repair %vreg14:sub1 as well, but that only becomes
active _after_ the range that is to be repaired, hence the crash due
to LR.find(...) == LR.begin() at the start of repairOldRegInRange.
I believe that just skipping those subranges is fine, but again, I'm not too
familiar with that code.
Reviewers: MatzeB, kparzysz, tstellarAMD
Subscribers: llvm-commits, MatzeB
Differential Revision: https://reviews.llvm.org/D23303
llvm-svn: 278268
This change makes it possible for tail-duplication and tail-merging to
be disjoint. By being less aggressive when merging during layout, there are no
overlapping cases between tail-duplication and tail-merging, provided the
thresholds are disjoint.
There is a remaining TODO to benchmark the succ_size() test for non-layout tail
merging.
llvm-svn: 278265
Summary: make_scope_exit() is described in C++ proposal p0052r2; it uses RAII to do cleanup work at scope exit.
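For reference, a small usage sketch (the file name and cleanup action are just an illustration):
#include "llvm/ADT/ScopeExit.h"
#include <cstdio>
void writeGreeting() {
  std::FILE *F = std::fopen("out.txt", "w");
  if (!F)
    return;
  // The lambda runs when Closer is destroyed, on every exit path from here on.
  auto Closer = llvm::make_scope_exit([&] { std::fclose(F); });
  std::fputs("hello\n", F);
} // fclose(F) happens here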
Reviewers: chandlerc
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D22796
llvm-svn: 278251
We are seeing r276077 drastically increase compile time for our larger
benchmarks in PGO profile generation builds (both clang-based and IR-based
mode) -- it can be 20x slower than without the patch (e.g. from 30 secs to
780 secs).
The increased time is all in the LCSSA pass. The problematic code is the
PostProcessPHIs handling after use-rewrite. Note that the InsertedPhis list from
the ssa_updater keeps accumulating (it is never cleared). Since the inserted PHIs
are added to the candidates for each rewrite, the earlier ones are added
repeatedly. Later, when adding the new PHIs to the work-list, we don't check for
duplicates either. This can result in an extremely long work-list containing tons
of duplicated PHIs.
This patch fixes the issue by hoisting the code out of the loop.
Differential Revision: http://reviews.llvm.org/D23344
llvm-svn: 278250