Even when we don't create a remainder loop (that is, when we unroll by 2), we
may duplicate nested loops into the remainder. This is complicated by the
fact that the remainder may itself be inserted either into an outer loop
or at the top level. In the latter case, we may need to create new
top-level loops.
Differential Revision: https://reviews.llvm.org/D29156
llvm-svn: 293124
Summary:
Previously we assumed that the result of sqrt(x) always had 0 as its
sign bit. But sqrt(-0) == -0.
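A quick standalone check of this corner case (plain C++; IEEE 754 requires sqrt(-0) to return -0):
```
#include <cassert>
#include <cmath>

int main() {
  double R = std::sqrt(-0.0);
  // -0.0 compares equal to 0.0, so signbit() is what distinguishes them:
  // the result's sign bit is set, contradicting the old assumption.
  assert(R == 0.0 && std::signbit(R));
}
```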
Reviewers: hfinkel, efriedma, sanjoy
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D28928
llvm-svn: 293115
This allows MIR passes to emit optimization remarks with the same level
of functionality that is available to IR passes.
It also hooks up the greedy register allocator to report spills. This
allows for interesting use cases like increasing interleaving on a loop
until spilling of registers is observed.
I still need to experiment with whether reporting every spill scales, but
this demonstrates for now that the functionality works from llc
using -pass-remarks*=<pass>.
Differential Revision: https://reviews.llvm.org/D29004
llvm-svn: 293110
The code region is the only part of this class that is IR-specific. It
is moved down in the inheritance tree to a new derived class called
DiagnosticInfoIROptimization.
All the existing remarks are derived from this new class now.
This allows the new MIR pass-remark classes to be derived from
DiagnosticInfoOptimizationBase.
Also, because we keep the name DiagnosticInfoOptimizationBase, the clang
parts don't need any adjustment.
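Roughly, the resulting hierarchy looks like the following skeleton (illustrative only; the real classes in llvm/IR/DiagnosticInfo.h carry many more members and a different constructor surface):
```
struct Value; // IR entity (forward declaration for the sketch)

class DiagnosticInfoOptimizationBase { // IR-agnostic: MIR remarks derive here
public:
  virtual ~DiagnosticInfoOptimizationBase() = default;
};

class DiagnosticInfoIROptimization : public DiagnosticInfoOptimizationBase {
  const Value *CodeRegion; // the IR-specific part, moved down to this class
public:
  explicit DiagnosticInfoIROptimization(const Value *CR) : CodeRegion(CR) {}
  const Value *getCodeRegion() const { return CodeRegion; }
};

// Existing remarks now derive from the new class:
class OptimizationRemark : public DiagnosticInfoIROptimization {
  using DiagnosticInfoIROptimization::DiagnosticInfoIROptimization;
};
```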
Differential Revision: https://reviews.llvm.org/D29003
llvm-svn: 293109
Document the current practice regarding dropping metadata on modules,
functions and global variables.
Differential Revision: https://reviews.llvm.org/D29110
llvm-svn: 293101
Unlike ELF, the Mach-O object file format doesn't carry size information
about the symbols in the object file.
Also add the part of the fix to llvm-nm that was missed with r290001 so
-arch armv7m works.
rdar://25681018
llvm-svn: 293099
This reverts commit r292680. It is causing significantly worse
performance and test timeouts in our internal builds. I have already
routed reproduction instructions your way.
llvm-svn: 293092
This is not a list of pairs; it is a hash table data structure. We now
correctly parse this out and dump it from llvm-pdbdump.
We still need to understand the conditions that lead to a type
getting an entry in the hash adjuster table. That will be done
in a followup investigation / patch.
Differential Revision: https://reviews.llvm.org/D29090
llvm-svn: 293090
Later code expects the vector loads produced to be directly
concatenable, which means we shouldn't pad anything except the last load
produced with UNDEF.
llvm-svn: 293088
Summary:
This is the first in a series of patches to add a simple, generalized updater to MemorySSA.
For MemorySSA, every def is a may-def, instead of the normal must-def.
(The best way to think of MemorySSA is that everything is really one
variable, with different versions of that variable at different points in
the program.)
This means that when updating, we end up having to do a bunch of work to
touch defs below and above us.
In order to support this quickly, I have ilist'd all the defs for each
block. ilist supports tags, so this is quite easy. The only slightly messy
part is that you can't have two iplists for the same type that differ only
in whether they have the ownership part enabled or not, because the traits
are for the value type.
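For illustration, the tag mechanism looks like this (the tag and list names here are invented for the example, not MemorySSA's actual lists):
```
#include "llvm/ADT/ilist_node.h"
#include "llvm/ADT/simple_ilist.h"
using namespace llvm;

struct AllAccessTag {};  // illustrative tag types
struct DefsOnlyTag {};

// One node type can live on two independent lists, one per tag.
struct Access : ilist_node<Access, ilist_tag<AllAccessTag>>,
                ilist_node<Access, ilist_tag<DefsOnlyTag>> {};

simple_ilist<Access, ilist_tag<AllAccessTag>> AllAccesses;
simple_ilist<Access, ilist_tag<DefsOnlyTag>> DefsOnly;
```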
The verifiers have been updated to test that the def order is correct.
Reviewers: george.burgess.iv
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D29046
llvm-svn: 293085
Summary:
This patch prepares more for tail call support in XRay. Until the logging part supports tail calls, this is just staging; the LLVM part seems mostly ready with this patch.
Related: https://reviews.llvm.org/D28948 (compiler-rt)
Reviewers: dberris, rengolin
Reviewed By: dberris
Subscribers: llvm-commits, iid_iunknown, aemerson
Differential Revision: https://reviews.llvm.org/D28947
llvm-svn: 293080
Change getReservedRegs() to not mark a register as reserved and then
revert that decision in some cases. Motivated by the discussion in
https://reviews.llvm.org/D29056
llvm-svn: 293073
This patch introduces guard based loop predication optimization. The new LoopPredication pass tries to convert loop variant range checks to loop invariant by widening checks across loop iterations. For example, it will convert
for (i = 0; i < n; i++) {
guard(i < len);
...
}
to
for (i = 0; i < n; i++) {
guard(n - 1 < len);
...
}
After this transformation the condition of the guard is loop invariant, so loop-unswitch can later unswitch the loop on this condition, which in effect predicates the loop by the widened condition (every i in the loop satisfies i <= n - 1, so the widened check implies the original one):
if (n - 1 < len)
for (i = 0; i < n; i++) {
...
}
else
deoptimize
This patch relies on an NFC change to make ScalarEvolution::isMonotonicPredicate public (revision 293062).
Reviewed By: sanjoy
Differential Revision: https://reviews.llvm.org/D29034
llvm-svn: 293064
This is a partial fix for Bug 31520 - [guards] canonicalize guards in instcombine
Reviewed By: apilipenko
Differential Revision: https://reviews.llvm.org/D29075
Patch by Maxim Kazantsev.
llvm-svn: 293061
Summary:
Lifetime extension wasn't triggered on the result of BuildMI because the
reference was non-const. However, instead of adding a const, I've
removed the reference entirely as RVO should kick in anyway.
Reviewers: rovka, bkramer
Reviewed By: bkramer
Subscribers: aemerson, rengolin, dberris, llvm-commits, kristof.beyls
Differential Revision: https://reviews.llvm.org/D29124
llvm-svn: 293059
This is a partial fix for Bug 31520 - [guards] canonicalize guards in instcombine
Reviewed By: apilipenko
Differential Revision: https://reviews.llvm.org/D29074
Patch by Maxim Kazantsev.
llvm-svn: 293058
This is a partial fix for Bug 31520 - [guards] canonicalize guards in instcombine
Reviewed By: majnemer, apilipenko
Differential Revision: https://reviews.llvm.org/D29071
Patch by Maxim Kazantsev.
llvm-svn: 293056
instructions.
If the number of instructions in a horizontal reduction list is not a
power of 2, then only the last PowerOf2Floor(NumberOfInstructions)
elements are actually vectorized and the other instructions remain scalar.
This patch tries to vectorize the remaining elements as well.
Differential Revision: https://reviews.llvm.org/D28959
llvm-svn: 293042
Floating point intrinsics in LLVM are generally not speculatively
executed, since most of them are defined to behave the same as libm
functions, which set errno.
However, the @llvm.powi.* intrinsics do not correspond to any libm
function, and lack any defined error handling semantics in LangRef.
They most certainly do not alter errno.
llvm-svn: 293041
The previous patch (https://reviews.llvm.org/rL289538) got reverted because of a bug. Chandler also requested some changes to the algorithm.
http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20161212/413479.html
This is an updated patch. The key difference is that collectBitProviders (renamed to calculateByteProvider) now collects the origin of one byte, not the whole value. It simplifies the implementation and allows us to stop the traversal earlier if we know that the result won't be used.
From the original commit:
Match a pattern where a wide type scalar value is loaded by several narrow loads and combined by shifts and ors. Fold it into a single load, or a load and a bswap if the target supports it.
Assuming little endian target:
i8 *a = ...
i32 val = a[0] | (a[1] << 8) | (a[2] << 16) | (a[3] << 24)
=>
i32 val = *((i32)a)
i8 *a = ...
i32 val = (a[0] << 24) | (a[1] << 16) | (a[2] << 8) | a[3]
=>
i32 val = BSWAP(*((i32)a))
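For illustration, the same equivalence at the C++ source level (a standalone demo that assumes a little-endian host; it is not the combine's actual code):
```
#include <cassert>
#include <cstdint>
#include <cstring>

// The shifted-OR pattern the combine matches...
uint32_t combineBytes(const uint8_t *A) {
  return (uint32_t)A[0] | ((uint32_t)A[1] << 8) |
         ((uint32_t)A[2] << 16) | ((uint32_t)A[3] << 24);
}

// ...is equivalent to a single wide load on a little-endian target.
uint32_t wideLoad(const uint8_t *A) {
  uint32_t V;
  std::memcpy(&V, A, sizeof(V));
  return V;
}

int main() {
  uint8_t Buf[4] = {0x12, 0x34, 0x56, 0x78};
  assert(combineBytes(Buf) == wideLoad(Buf));
}
```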
This optimization was discussed on llvm-dev some time ago in the "Load combine pass" thread. We came to the conclusion that we want to do this transformation late in the pipeline because, in the presence of atomic loads, load widening is an irreversible transformation and it might hinder other optimizations.
Eventually we'd like to support folding patterns like this where the offset has a variable and a constant part:
i32 val = a[i] | (a[i + 1] << 8) | (a[i + 2] << 16) | (a[i + 3] << 24)
Matching the pattern above is easier at SelectionDAG level since address reassociation has already happened and the fact that the loads are adjacent is clear. Understanding that these loads are adjacent at IR level would have involved looking through geps/zexts/adds while looking at the addresses.
The general scheme is to match OR expressions by recursively calculating the origin of the individual bytes which constitute the resulting OR value. If all the OR bytes come from memory, verify that they are adjacent and match the little- or big-endian encoding of a wider value. If so, and if the load of the wider type (and the bswap, if needed) is allowed by the target, generate the load and the bswap.
Reviewed By: RKSimon, filcab, chandlerc
Differential Revision: https://reviews.llvm.org/D27861
llvm-svn: 293036
Add support for:
* i1 add
* i1 function arguments, if passed through registers
* i1 returns, with ABI signext/zeroext
Differential Revision: https://reviews.llvm.org/D27706
llvm-svn: 293035
At the moment, this means supporting the signext/zeroext attribute on the return
type of the function. For function arguments, signext/zeroext should be handled
by the caller, so there's nothing for us to do until we start lowering calls.
Note that this does not include support for other extensions (i8 to i16), those
will be added later.
Differential Revision: https://reviews.llvm.org/D27705
llvm-svn: 293034
If the dominator tree has no roots, the pass that calculates it is
likely to be skipped. This occurs, for instance, in the case of
entities with available_externally linkage. Do not run tree
verification in such cases.
Differential Revision: https://reviews.llvm.org/D28767
llvm-svn: 293033
Summary:
A patch to enable the llvm-xray graph subcommand to color edges and
vertices based on statistics and to annotate vertices with statistics.
Depends on D27243
Reviewers: dblaikie, dberris
Reviewed By: dberris
Subscribers: mgorny, llvm-commits
Differential Revision: https://reviews.llvm.org/D28225
llvm-svn: 293031
Enable the next form (Intel style):
"mov <reg64>, <largeImm>"
which should be available,
where <largeImm> stands for immediates that exceed the range of a signed 32-bit integer.
Differential Revision: https://reviews.llvm.org/D28988
llvm-svn: 293030
Conservatively disable sinking and merging inline-asm instructions as doing so
can potentially create arguments that cannot satisfy the inline-asm constraints.
For example, SimplifyCFG used to do the following transformation:
(before)
if.then:
%0 = call i32 asm "rorl $2, $0", "=&r,0,n"(i32 %r6, i32 8)
br label %if.end
if.else:
%1 = call i32 asm "rorl $2, $0", "=&r,0,n"(i32 %r6, i32 6)
br label %if.end
(after)
%.sink = select i1 %tobool, i32 6, i32 8
%0 = call i32 asm "rorl $2, $0", "=&r,0,n"(i32 %r6, i32 %.sink)
This would result in a crash in the backend since only immediate integer operands
are permitted for constraint "n".
rdar://problem/30110806
Differential Revision: https://reviews.llvm.org/D29111
llvm-svn: 293025
clang already emits this with -cl-no-signed-zeros, but codegen
doesn't do anything with it. Treat it like the other fast math
attributes, and change one place to use it.
llvm-svn: 293024
I found that the root class should be instantiated for the variadic
template in order to explicitly instantiate the static member.
This will fix failures in the mingw DLL build.
llvm-svn: 293017
Leave early ifcvt disabled for now since there are some
shader-db regressions.
This causes some immediate improvements, but could be better.
The cost checking that the pass does is based on critical path
length for out-of-order CPUs, which we do not want, so it skips
many cases we do want.
llvm-svn: 293016
Looks like our cmake goop for handling .inc->td dependencies doesn't
track the .td files.
This manifests as cmake complaining about missing files since r293009.
Force a rerun to avoid that.
llvm-svn: 293012
loops.
We do this by reconstructing the newly added loops after the unroll
completes to avoid threading pass manager details through all the mess
of the unrolling infrastructure.
I've enabled some extra assertions in the LPM to try and catch issues
here and enabled a bunch of unroller tests to try and make sure this is
sane.
Currently, I'm manually running loop-simplify when needed. That should
go away once it is folded into the LPM infrastructure.
Differential Revision: https://reviews.llvm.org/D28848
llvm-svn: 293011
This surprisingly isn't NFC because there are patterns to select GPR
sub to SUBSWrr (rather than SUBWrr/rs); SUBS is later optimized to
SUB if NZCV is dead. From ISel's perspective, both are fine.
llvm-svn: 293010
Summary:
When we decide that the result of the invoke instruction needs to be spilled, we need to insert the spill into a block that is on the normal edge coming out of the invoke instruction. (Prior to this change the code would insert the spill immediately after the invoke instruction, which breaks the IR, since invoke is a terminator instruction.)
In the following example, we will split the edge going into %cont and insert the spill there.
```
%r = invoke double @print(double 0.0) to label %cont unwind label %pad
cont:
%0 = call i8 @llvm.coro.suspend(token none, i1 false)
switch i8 %0, label %suspend [i8 0, label %resume
i8 1, label %cleanup]
resume:
call double @print(double %r)
```
Reviewers: majnemer
Reviewed By: majnemer
Subscribers: mehdi_amini, llvm-commits, EricWF
Differential Revision: https://reviews.llvm.org/D29102
llvm-svn: 293006
Summary:
This lets you select which sort of spilling you want, either s[0:1] or 64-bit loads from s[0:1].
Patch By: Dave Airlie
Reviewers: nhaehnle, arsenm, tstellarAMD
Reviewed By: arsenm
Subscribers: mareko, llvm-commits, kzhuravl, wdng, yaxunl, tony-tye
Differential Revision: https://reviews.llvm.org/D25428
llvm-svn: 293000
There was a bug here where we were using p0 instead of s32 for the
selector type in the landingpad. Instead of hardcoding these types we
should get the types from the landingpad instruction directly.
Note that we replicate an assert from SDAG here to only support
two-valued landingpads.
llvm-svn: 292995
for CPU_SUBTYPE_ARM_V7S and CPU_SUBTYPE_ARM_V7K.
For these two cpusubtypes they should default to a cortex-a7 CPU
to give proper disassembly without a -mcpu= flag.
rdar://27431703
llvm-svn: 292993
A sequence like this:
v_cmpx_le_f32_e32 vcc, 0, v0
s_branch BB0_30
s_cbranch_execnz BB0_30
; BB#29:
exp null off, off, off, off done vm
s_endpgm
BB0_30:
; %endif110
is likely wrong. The s_branch instruction will unconditionally jump
to BB0_30 and the skip block (exp done + endpgm) inserted for
performing the kill instruction will never be executed. This results
in a GPU hang with Star Ruler 2.
The s_branch instruction is added during the "Control Flow Optimizer"
pass which seems to re-organize the basic blocks, and we assume
that SI_KILL_TERMINATOR is always the last instruction inside a
basic block. Thus, after inserting a skip block we just go to the
next BB without looking at the subsequent instructions after the
kill, and the s_branch op is never removed.
Instead, we should remove the unconditional out branches and
skip the two instructions only if the exec mask is non-zero.
This patch fixes the GPU hang and doesn't introduce any regressions
with "make check".
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=99019
Patch by Samuel Pitoiset <samuel.pitoiset@gmail.com>
llvm-svn: 292985
This switches to the workaround that HSA defaults to
for the mesa path.
This should be applied to the 4.0 branch.
Patch by Vedran Miletić <vedran@miletic.net>
llvm-svn: 292982
Summary: In iterative sample PGO, where the profile is collected from a PGO'ed binary, we may see indirect call targets promoted and inlined in the profile. Before profile annotation, we need to make this happen in the IR in order to annotate correctly. This patch explicitly promotes these indirect calls and inlines them before profile annotation.
Reviewers: xur, davidxl
Reviewed By: davidxl
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D29040
llvm-svn: 292979
When demangling a CV-qualified function type with a final reference type
parameter, we would accidentally treat the reference type parameter as an
r-value reference. This would result in the improper decoration of the
function type itself.
Resolves PR31741!
llvm-svn: 292976
Reason: broke ASAN bots with a global buffer overflow.
http://lab.llvm.org:8011/builders/sanitizer-x86_64-linux-fast/builds/2291
Each test contains 20-30K test cases but takes only several seconds (from
4 to 10) to complete on an average machine. The tests cover the majority of
AMDGPU Gfx7/Gfx8 instructions, including many dark corners, and are intended
to quickly find out if something is broken.
llvm-svn: 292974
Summary:
GVNHoist performs all the optimizations that MLSM does to loads, in a
more general way, and in a faster time bound (MLSM is N^3 in most
cases, N^4 in a few edge cases).
This disables the load portion.
Note that the way ld_hoist_st_sink.ll is written makes one think that
the loads should be moved to the while.preheader block, but
1. Neither MLSM nor GVNHoist does it (they both move the loads to identical places).
2. MLSM couldn't possibly do it anyway, as the while.preheader block
is not the head of the diamond, while.body is. (GVNHoist could do it
if it was legal).
3. At a glance, it's not legal anyway because the in-loop load
conflicts with the in-loop store, so the loads must stay in-loop.
I am happy to update the test to use update_test_checks so that
checking is tighter; I was just going to do that as a followup.
Note that I can find no particular benefit to the store portion on any
real testcase/benchmark I have (even size-wise). If we really still
want it, I am happy to commit to writing a targeted store sinker, just
taking the code from the MemorySSA port of MergedLoadStoreMotion
(which is N^2 worst case, and N most of the time).
We can do what it does in a much better time bound.
We also should be both hoisting and sinking stores, not just sinking
them, anyway, since whether we should hoist or sink to merge depends
basically on luck of the draw of where the blockers are placed.
Nonetheless, i have left it alone for now.
Reviewers: chandlerc, davide
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D29079
llvm-svn: 292971
When demangling a CV-qualified function type with a final parameter with
a reference type, we would insert the CV qualification on the parameter
rather than the function, and in the process move the insertion point
one position too far, splitting the type name. This avoids doing so, even
though the attribution is still incorrect.
llvm-svn: 292965
Summary: As per title. This will add the instructions we are interested in to the worklist.
Reviewers: mehdi_amini, majnemer, andreadb
Differential Revision: https://reviews.llvm.org/D29081
llvm-svn: 292957
Regalloc creates COPY instructions which do not formally use the VALU.
That results in v_mov instructions displaced after exec mask modification.
One pass which does this is SIOptimizeExecMasking, but potentially it can
be done by other passes too.
This patch adds a pass immediately after regalloc to add implicit exec
use operand to all VGPR copy instructions.
Differential Revision: https://reviews.llvm.org/D28874
llvm-svn: 292956
In order to follow the pattern of the existing 'slow-misaligned-128store'
option, rename the option 'no-quad-ldst-pairs' to 'slow-paired-128'.
llvm-svn: 292954
Summary:
This is in keeping with LLVM convention. The classes are InstPrinters, but the library is ${target}AsmPrinter.
This patch is in response to bryant pointing out to me that Lanai was the only backend deviating from convention here. Thanks!
Reviewers: jpienaar, bryant
Subscribers: mgorny, jgosnell, llvm-commits
Differential Revision: https://reviews.llvm.org/D29043
llvm-svn: 292953
Also fixes a much worse bug where we emitted the wrong gap size for the
def range uncovered by the test for this issue.
Fixes PR31726.
llvm-svn: 292949
The comment talked about replacing vpmovzxwd+vpslld+vpsrad with vpmovsxwd, which isn't valid as we're sign extending a <8 x i1> bool vector, not an all/nobits <8 x i16>.
llvm-svn: 292948
Summary:
When conditional branches with complex conditions are split into
multiple branches in SelectionDAGBuilder::FindMergedConditions, also
handle inverted conditions. These may sometimes appear without having
been optimized by InstCombine when CodeGenPrepare decides to sink and
duplicate cmp instructions, causing them to have only one use. This
problem can be increased by e.g. GVNHoist hiding more cmps from
InstCombine by combining equivalent cmps from different blocks.
For example, codegen X & !(Y | Z) as:
jmp_if_X TmpBB
jmp FBB
TmpBB:
jmp_if_notY Tmp2BB
jmp FBB
Tmp2BB:
jmp_if_notZ TBB
jmp FBB
Reviewers: bogner, MatzeB, qcolombet
Subscribers: llvm-commits, hiraditya, mcrosier, sebpop
Differential Revision: https://reviews.llvm.org/D28380
llvm-svn: 292944
and UNSUPPORTED"
After r292904 llvm-lit fails to emit the test results in the XML format for
Apple's internal buildbots.
rdar://30164800
llvm-svn: 292942
The comment in ISDOpcodes.h for EXTRACT_VECTOR_ELT now explains that the high
bits are undefined if the result is extended.
Review: Hal Finkel
llvm-svn: 292933
a lazy-asserting PoisoningVH.
AssertingVH is fundamentally incompatible with cache-invalidation of
analysis results. The invalidation happens after the AssertingVH has
already fired. Instead, use a PoisoningVH that will assert if the
dangling handle is ever used, rather than merely assigned to or
destroyed.
This patch also removes all of the (numerous) doomed attempts to work
around this fundamental incompatibility. It is a pretty significant
simplification IMO.
The most interesting change is in the Inliner where we still do some
clearing because we don't want to rely on the coarse grained
invalidation strategy of the containing pass manager. However, I prefer
the approach that contains this logic to the cleanup phase of the
Inliner, and I think we could enhance the CGSCC analysis management
layer to make this even better in the future if desired.
The rest is straight cleanup.
I've also added a test for one of the harder cases to work around: when
a *module analysis* contains many AssertingVHes pointing at functions.
Differential Revision: https://reviews.llvm.org/D29006
llvm-svn: 292928
AssertingVH that delays any reported error until the handle is *used*.
This allows data structures to contain handles which become dangling
provided the data structure is cleaned up afterward rather than used for
anything interesting.
The implementation is moderately horrible in part because it works to
leave AssertingVH in place, undisturbed. If at some point there is
consensus that this is simply how AssertingVH should be used, it can be
substantially simplified.
This remains a boring pointer in a non-asserts build as you would
expect. The only place we pay cost is in asserts builds.
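A minimal sketch of the intended semantics, assuming the interface described above (V points at some instruction):
```
#include "llvm/IR/Instruction.h"
#include "llvm/IR/ValueHandle.h"
using namespace llvm;

void contrast(Instruction *V) {
  PoisoningVH<Instruction> P(V);
  V->eraseFromParent();   // P is now dangling ("poisoned"), but no assert;
                          // an AssertingVH would have fired right here
  // Instruction *I = P;  // actually *using* P here would assert
  P = nullptr;            // merely assigning or destroying it is fine
}
```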
I plan to use this as a basis for replacing the asserting VHs that
currently dangle in the new PM until invalidation occurs in both LVI and
SCEV.
Differential Revision: https://reviews.llvm.org/D29061
llvm-svn: 292925
Each test contains 20-30K test cases but takes only several seconds (from
4 to 10) to complete on an average machine. The tests cover the majority of
AMDGPU Gfx7/Gfx8 instructions, including many dark corners, and are intended
to quickly find out if something is broken.
llvm-svn: 292922
The test fails when there is a symlink on the path because then the path
returned by current_path will not match the one we have set. Instead of
doing a string match, check the unique IDs of the two files.
llvm-svn: 292916
Added an early out for a single undef input; we were already supporting
(and testing) this in the constant folding code, we just do it more quickly now.
Dropped undef handling from the demanded elts code now that we handle it
fully in InstCombiner::visitCallInst.
llvm-svn: 292913
Summary:
Use the O_CLOEXEC flag only when it is available. Some old systems (e.g.
SLES10) do not support this flag. POSIX explicitly guarantees that this
flag can be checked for using #if, so there is no need for a CMake
check.
In case O_CLOEXEC is not supported, fall back to fcntl(FD_CLOEXEC)
instead.
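The shape of the fallback, as a standalone POSIX sketch (the actual patch applies this inside LLVM's file-opening code):
```
#include <fcntl.h>

int openCloexec(const char *Path) {
#ifdef O_CLOEXEC
  return open(Path, O_RDONLY | O_CLOEXEC); // atomic, race-free
#else
  int FD = open(Path, O_RDONLY);
  if (FD >= 0)
    fcntl(FD, F_SETFD, FD_CLOEXEC); // fallback: set the flag afterwards
  return FD;
#endif
}
```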
Reviewers: rnk, rafael, mgorny
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D28894
llvm-svn: 292912
Summary:
This adds a cross-platform way of setting the current working directory
analogous to the existing current_path() function used for retrieving
it. The function will be used in lldb.
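Hypothetical usage next to the existing getter (the setter's signature is assumed from the description; error handling elided):
```
#include "llvm/ADT/SmallString.h"
#include "llvm/Support/FileSystem.h"
using namespace llvm;

void workInTmp() {
  SmallString<128> Old;
  sys::fs::current_path(Old);        // existing getter
  sys::fs::set_current_path("/tmp"); // new setter added by this patch
  // ... do work relative to /tmp ...
  sys::fs::set_current_path(Old);    // restore
}
```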
Reviewers: rafael, silvas, zturner
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D29035
llvm-svn: 292907
A `lit` condition line is now a comma-separated list of boolean expressions.
Comma-separated expressions act as if each expression were on its own
condition line:
For REQUIRES, if every expression is true then the test will run.
For UNSUPPORTED, if every expression is false then the test will run.
For XFAIL, if every expression is false then the test is expected to succeed.
As a special case "XFAIL: *" expects the test to fail.
Examples:
# Test is expected to fail on 64-bit Apple simulators and pass everywhere else
XFAIL: x86_64 && apple && !macosx
# Test is unsupported on Windows and on non-Ubuntu Linux
# and supported everywhere else
UNSUPPORTED: linux && !ubuntu, system-windows
Syntax:
* '&&', '||', '!', '(', ')'. 'true' is true. 'false' is false.
* Each test feature is a true identifier.
* Substrings of the target triple are true identifiers for UNSUPPORTED
and XFAIL, but not for REQUIRES. (This matches the current behavior.)
* All other identifiers are false.
* Identifiers are [-+=._a-zA-Z0-9]+
Differential Revision: https://reviews.llvm.org/D18185
llvm-svn: 292904
Removed the data members ReduxWidth and MinVecRegSize, plus some C++11
style improvements.
Differential Revision: https://reviews.llvm.org/D29010
llvm-svn: 292899
A `lit` condition line is now a comma-separated list of boolean expressions.
Comma-separated expressions act as if each expression were on its own
condition line:
For REQUIRES, if every expression is true then the test will run.
For UNSUPPORTED, if every expression is false then the test will run.
For XFAIL, if every expression is false then the test is expected to succeed.
As a special case "XFAIL: *" expects the test to fail.
Examples:
# Test is expected to fail on 64-bit Apple simulators and pass everywhere else
XFAIL: x86_64 && apple && !macosx
# Test is unsupported on Windows and on non-Ubuntu Linux
# and supported everywhere else
UNSUPPORTED: linux && !ubuntu, system-windows
Syntax:
* '&&', '||', '!', '(', ')'. 'true' is true. 'false' is false.
* Each test feature is a true identifier.
* Substrings of the target triple are true identifiers for UNSUPPORTED
and XFAIL, but not for REQUIRES. (This matches the current behavior.)
* All other identifiers are false.
* Identifiers are [-+=._a-zA-Z0-9]+
Differential Revision: https://reviews.llvm.org/D18185
llvm-svn: 292896
With this change, the dominator tree remains in sync after each step of loop
peeling.
Differential Revision: https://reviews.llvm.org/D29029
llvm-svn: 292895
This refactor allows parallel calls to be made via an arbitrary async call
dispatcher. In particular, this allows ParallelCallGroup to be used with
derived RPC classes that expose custom async RPC call operations.
llvm-svn: 292891
Verifications of the dominator tree and loop info are expensive operations,
so they are disabled by default. They can be enabled by the command line
options -verify-dom-info and -verify-loop-info. These options, however,
enable checks only in the files Dominators.cpp and LoopInfo.cpp. If some
transformation changes the dominator tree and/or loop info, it would be
convenient to place similar checks in the files implementing the
transformation.
This change makes the corresponding flags global, so they can be used in
any file to optionally turn verification on.
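For example, a transformation could now guard its own checks like this (a sketch; assumes the flag is declared in the corresponding header and uses the verifyDomTree() entry point of the time):
```
#include "llvm/IR/Dominators.h"

void verifyAfterTransform(llvm::DominatorTree &DT) {
  if (llvm::VerifyDomInfo) // global flag made visible by this change
    DT.verifyDomTree();    // run the expensive check only when requested
}
```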
llvm-svn: 292889
The GeneralShuffle::add() method used to have an assert that made sure that
source elements were at least as big as the destination elements. This was
wrong, since it is actually expected that an EXTRACT_VECTOR_ELT node with a
smaller source element type than the return type gets extended.
Therefore, instead of asserting this, it is just checked, and if this is
the case, 'false' is returned from the GeneralShuffle::add() method. This
case should be very rare and is not handled further by the backend.
Review: Ulrich Weigand.
llvm-svn: 292888
Summary:
This seemed to be an oversight seeing as DenseMap has these conversions.
This patch does the following:
- Adds a default constructor to the iterators.
- Allows DenseSet::ConstIterators to be copy constructed from DenseSet::Iterators
- Allows mutual comparison between Iterators and ConstIterators.
All of these are available in the DenseMap implementation, so the implementation here is trivial.
Reviewers: dblaikie, dberris
Reviewed By: dberris
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D28999
llvm-svn: 292879
Summary:
This teaches getNode to simplify extracting from Undef. This is similar to what is done for EXTRACT_VECTOR_ELT. It also adds support for extracting from CONCAT_VECTOR when we can reuse one of the inputs to the concat. These seem like simple non-target specific optimizations.
For X86 we currently handle undef in extractSubvector, but not all EXTRACT_SUBVECTOR creations go through there.
Ultimately, my motivation here is to simplify extractSubvector and remove custom lowering for EXTRACT_SUBVECTOR since we don't do anything but handle undef and BUILD_VECTOR optimizations, but those should be DAG combines.
Reviewers: RKSimon, delena
Reviewed By: RKSimon
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D29000
llvm-svn: 292876
Summary:
There's a comment in XorSlowCase that says "0^0==1" which isn't true. 0 xored with 0 is still 0. So I don't think we need to clear any unused bits here.
Now there is no difference between XorSlowCase and AndSlowCase/OrSlowCase other than the operation being performed
Reviewers: majnemer, MatzeB, chandlerc, bkramer
Reviewed By: MatzeB
Subscribers: chfast, llvm-commits
Differential Revision: https://reviews.llvm.org/D28986
llvm-svn: 292873
A register unit may be allocatable and non-reserved but some of the
register(tuples) built with it are reserved. We still need to calculate
liveness in this case.
Note to out of tree targets: If you start seeing machine verifier errors
with this commit, it probably means that you do not properly mark super
registers of reserved registers as reserved. See r292836 or r292870 for
examples of how to fix that.
rdar://29996737
Differential Revision: https://reviews.llvm.org/D28881
llvm-svn: 292871
When a register like R1 is reserved, X1 should be reserved as well. This
was already done "manually" when 64-bit code was enabled; however, using
the markSuperRegs() function on the base register is more convenient and
allows us to use the checksAllSuperRegsMarked() function even in 32-bit
mode to avoid accidental breakage in the future.
This is also necessary to allow https://reviews.llvm.org/D28881
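A sketch of the resulting shape of getReservedRegs() (illustrative; condensed from the description rather than copied from the patch):
```
BitVector PPCRegisterInfo::getReservedRegs(const MachineFunction &MF) const {
  BitVector Reserved(getNumRegs());
  // Reserving the base register via markSuperRegs() also reserves its
  // super-registers (R1 -> X1), in 32-bit and 64-bit mode alike.
  markSuperRegs(Reserved, PPC::R1);
  assert(checksAllSuperRegsMarked(Reserved) &&
         "reserved set contains unmarked super-registers");
  return Reserved;
}
```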
Differential Revision: https://reviews.llvm.org/D29056
llvm-svn: 292870
Running non-LCSSA-preserving LoopSimplify followed by LCSSA on (roughly) the
same loop is incorrect, since LoopSimplify may break LCSSA arbitrarily higher
in the loop nest. Instead, run LCSSA first, and then run LCSSA-preserving
LoopSimplify on the result.
This fixes PR31718.
Differential Revision: https://reviews.llvm.org/D29055
llvm-svn: 292854
Summary: promoteIndirectCall should be a utility function that could be invoked by other optimization passes.
Reviewers: davidxl
Reviewed By: davidxl
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D29051
llvm-svn: 292850
Summary:
The LibFunc::Func enum holds enumerators named for libc functions.
Unfortunately, there are real situations, including libc implementations, where
function names are actually macros (musl uses "#define fopen64 fopen", for
example; any other transitively visible macro would have similar effects).
Strictly speaking, a conforming C++ Standard Library should provide any such
macros as functions instead (via <cstdio>). However, there are some "library"
functions which are not part of the standard, and thus not subject to this
rule (fopen64, for example). So, in order to be both portable and consistent,
the enum should not use the bare function names.
The old enum naming used a namespace LibFunc and an enum Func, with bare
enumerators. This patch changes LibFunc to be an enum with enumerators prefixed
with "LibFFunc_". (Unfortunately, a scoped enum is not sufficient to override
macros.)
There are additional changes required in clang.
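A standalone illustration of why bare enumerator names break (not the actual header):
```
// Simulate a libc that implements fopen64 as a macro (as musl does):
#define fopen64 fopen

enum LibFunc {
  LibFunc_fopen,   // prefixed names are immune to the macro
  LibFunc_fopen64, // fine: 'LibFunc_fopen64' is not the macro 'fopen64'
  // fopen64,      // would expand to 'fopen' -> duplicate enumerator error
};
```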
Reviewers: rsmith
Subscribers: mehdi_amini, mzolotukhin, nemanjai, llvm-commits
Differential Revision: https://reviews.llvm.org/D28476
llvm-svn: 292848
When calculating kills, a register may be considered live because a part
of it is live, but if there is a use of that (whole) register, the whole
register (and its subregisters) need to be added to the live set.
llvm-svn: 292845
Summary:
This patch changes the layout of DoubleAPFloat, and adjust all
operations to do either:
1) (IEEEdouble, IEEEdouble) -> (uint64_t, uint64_t) -> PPCDoubleDoubleImpl,
then run the old algorithm.
2) Do the right thing directly.
1) includes multiply, divide, remainder, mod, fusedMultiplyAdd, roundToIntegral,
convertFromString, next, convertToInteger, convertFromAPInt,
convertFromSignExtendedInteger, convertFromZeroExtendedInteger,
convertToHexString, toString, getExactInverse.
2) includes makeZero, makeLargest, makeSmallest, makeSmallestNormalized,
compare, bitwiseIsEqual, bitcastToAPInt, isDenormal, isSmallest,
isLargest, isInteger, ilogb, scalbn, frexp, hash_value, Profile.
I could split this into two patches, e.g. use
1) for all operations first, then incrementally change some of them to
2). I didn't do that, because 1) involves code that converts data between
PPCDoubleDoubleImpl and (IEEEdouble, IEEEdouble) back and forth, and may
pessimize the compiler. Instead, I find easy functions and use
approach 2) for them directly.
The next step is to move multiply and divide from 1) to 2). I don't
have plans for other functions in 1).
Differential Revision: https://reviews.llvm.org/D27872
llvm-svn: 292839
In llvm-objdump, for Mach-O files, add the printing of the
x86_thread_state32_t in the same format as
otool-classic(1) on Darwin.
To do this, the 32-bit x86 general thread state
needed to be defined in include/llvm/Support/MachO.h.
rdar://30110111
llvm-svn: 292829
Since we're now avoiding operations using narrow scalar integer types,
we have to legalize the integer side of the FP conversions.
This requires teaching the legalizer how to do that.
llvm-svn: 292828
Since r279760, we've been marking as legal operations on narrow integer
types that have wider legal equivalents (for instance, G_ADD s8).
Compared to legalizing these operations, this reduced the amount of
extends/truncates required, but was always a weird legalization decision
made at selection time.
So far, we haven't been able to formalize it in a way that permits the
selector generated from SelectionDAG patterns to be sufficient.
Using a wide instruction (say, s64), when a narrower instruction exists
(s32) would introduce register class incompatibilities (when one narrow
generic instruction is selected to the wider variant, but another is
selected to the narrower variant).
It's also impractical to limit which narrow operations are matched for
which instruction, as restricting "narrow selection" to ranges of types
clashes with potentially incompatible instruction predicates.
Concerns were also raised regarding MIPS64's sign-extended register
assumptions, as well as wrapping behavior.
See discussions in https://reviews.llvm.org/D26878.
Instead, legalize the operations.
Should we ever revert to selecting these narrow operations, we should
try to represent this more accurately: for instance, by separating
a "concrete" type on operations, and an "underlying" type on vregs, we
could move the "this narrow-looking op is really legal" decision to the
legalizer, and let the selector use the "underlying" vreg type only,
which would be guaranteed to map to a register class.
In any case, we eventually should mitigate:
- the performance impact by selecting no-op extract/truncates to COPYs
(which we currently do), and the COPYs to register reuses (which we
don't do yet).
- the compile-time impact by optimizing away extract/truncate sequences
in the legalizer.
llvm-svn: 292827
This is a series of patches to make adding machine sched
models for ARM processors easier and more compact. They define new
sched-readwrites for groups of ARM instructions. This has been
missing so far, and as a consequence, machine scheduler models
for individual sub-targets have tended to be larger than they
needed to be.
The current patch focuses on floating-point instructions.
Reviewers: Diana Picus (rovka), Renato Golin (rengolin)
Differential Revision: https://reviews.llvm.org/D28194
llvm-svn: 292825
Summary:
Add a new load command, LC_BUILD_VERSION. It is a generic version of the
LC_*_VERSION_MIN load commands used on Apple platforms. Instead of having
a separate load command for each platform, LC_BUILD_VERSION records the
platform info as an enum. It also records the SDK version, min_os, and the
tools that were used to build the binary.
rdar://problem/29781291
Reviewers: enderby
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D29044
llvm-svn: 292824
Summary:
Next round of extra tests for MSSA.
I have a prototype invariant.group handling implementation
that fixes all the FIXMEs, and I think it will be
easier to see the difference if I post this
first and only then fix the FIXMEs.
Reviewers: george.burgess.iv, dberlin
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D29022
llvm-svn: 292797
Vector immediate load instructions should have the isAsCheapAsAMove, isMoveImm
and isReMaterializable flags set. With them, these instructions will get
hoisted out of loops.
Review: Ulrich Weigand
llvm-svn: 292790
bots ever since d0k fixed the CHECK lines so that it did something at
all.
It isn't actually testing SCEV directly but LSR, so move it into LSR and
the x86-specific tree of tests that already exists there. Target
dependence is common and unavoidable with the current design of LSR.
llvm-svn: 292774
invalidation of deleted functions in GlobalDCE.
This was always testing a bug really triggered in GlobalDCE. Right now
we have analyses with asserting value handles into IR. As long as those
remain, when *deleting* an IR unit, we cannot wait for the normal
invalidation scheme to kick in even though it was designed to work
correctly in the face of these kinds of deletions. Instead, the pass
needs to directly handle invalidating the analysis results pointing at
that IR unit.
I've taught the Inliner about this, and this patch teaches GlobalDCE.
This will handle the asserting VH case in the existing test as well as
other issues of the same fundamental variety. I've moved the test into
the GlobalDCE directory and added a comment explaining what is going on.
Note that we cannot simply require LVI here because LVI is too lazy.
llvm-svn: 292773
While this is covered by a clang test case, we should have something
local to LLVM that immediately checks that the inliner doesn't leave
analyses pointing at dangling IR bodies.
llvm-svn: 292772
clearing its body. This is essential to avoid triggering asserting value
handles in analyses on the function's body.
I'm working on a test case for this behavior in LLVM, but Clang has
a great one that managed to trigger this on all of the bots already.
llvm-svn: 292770
become unavailable.
The AssumptionCache is now immutable but it still needs to respond to
DomTree invalidation if it ended up caching one.
This lets us remove one of the explicit invalidates of LVI but the
other one continues to avoid hitting a latent bug.
llvm-svn: 292769
Add the `--strip-underscore` option to llvm-cxxfilt to strip the leading
underscore. This is useful when dealing with targets which add a
leading underscore.
llvm-svn: 292759
new PM's inliner.
The bug happens when we refine an SCC after having computed a proxy for
the FunctionAnalysisManager, and then proceed to compute fresh analyses
for functions in the *new* SCC using the manager provided by the old
SCC's proxy. *And* when we manage to mutate a function in this new SCC
in a way that invalidates those analyses. This can be... challenging to
reproduce.
I've managed to contrive a set of functions that trigger this and added
a test case, but it is a bit brittle. I've directly checked that the
passes run in the expected ways to help avoid the test just becoming
silently irrelevant.
This gets the new PM back to passing the LLVM test suite after the PGO
improvements landed.
llvm-svn: 292757
logging pass and analyses information.
This is particularly useful when filtering the debug log for
a particular function or loop where something got inappropriately
cached.
llvm-svn: 292755
I noticed that this function got called twice in compiled code to create the succ_begin and succ_end iterators. Adding this directive helps the compiler share the call.
Ideally we'd just make this method available for inlining since it's quite simple, but the current header file arrangements don't allow that.
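For illustration, assuming the directive is LLVM_READONLY-style (GCC/Clang's pure attribute), the effect looks like this standalone sketch:
```
#if defined(__GNUC__)
#define LLVM_READONLY __attribute__((__pure__))
#else
#define LLVM_READONLY
#endif

// 'pure' promises the result depends only on the arguments and readable
// memory, so the compiler may CSE repeated calls with the same argument.
LLVM_READONLY unsigned getNumSuccessors(const void *Term);

unsigned touchesBoth(const void *Term) {
  // Mirrors succ_begin()/succ_end() both needing the successor count:
  return getNumSuccessors(Term) + getNumSuccessors(Term); // one call emitted
}
```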
llvm-svn: 292754