The option is introduced with only one possible value,
-polly-stmt-granularity=bb, which represents the current behaviour; that
behaviour is outlined into the new function buildSequentialBlockStmts().
More options will be added in future commits.
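A rough sketch of how such an option can be declared with LLVM's cl::opt (the enum and variable names here are illustrative, not necessarily Polly's actual code):
  #include "llvm/Support/CommandLine.h"

  namespace {
  enum class GranularityChoice { BasicBlocks };
  }

  static llvm::cl::opt<GranularityChoice> StmtGranularity(
      "polly-stmt-granularity",
      llvm::cl::desc("Algorithm to use for splitting basic blocks into "
                     "statements"),
      llvm::cl::values(clEnumValN(GranularityChoice::BasicBlocks, "bb",
                                  "One statement per basic block "
                                  "(current behaviour)")),
      llvm::cl::init(GranularityChoice::BasicBlocks));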
llvm-svn: 314900
We make sure that the final reload of an invariant scalar memory access uses the
same stack slot into which the invariant memory access was stored originally.
Earlier, this was broken as we introduced a new stack slot alongside the
preload stack slot, which remained uninitialized and caused our escaping
loads to contain garbage. This happened because we cleared the
pre-populated values in EscapeMap after kernel code generation. We
address this issue by preserving
the original host values and restoring them after kernel code generation.
EscapeMap is not expected to be used during kernel code generation, hence we
clear it during kernel generation to make sure that any unintended uses are
noticed.
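A minimal, self-contained sketch of the save/clear/restore pattern (the map type and function below are stand-ins, not the actual GPU node builder):
  #include <map>
  #include <string>
  #include <utility>

  // Stand-in for EscapeMap: escaping instruction name -> preload stack slot.
  using EscapeMapT = std::map<std::string, std::string>;

  void runKernelCodeGen(EscapeMapT &EscapeMap) {
    // Preserve the original host values before generating the kernel.
    EscapeMapT SavedHostValues = std::move(EscapeMap);
    EscapeMap.clear(); // any use during kernel codegen now stands out

    // ... generate kernel code here; EscapeMap must stay unused ...

    // Restore, so the final reloads reuse the original preload stack slots.
    EscapeMap = std::move(SavedHostValues);
  }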
llvm-svn: 314894
This patch XFAILs two tests that start to fail when verifying DT's
DFS numbers, as per Tobias' suggestion.
Related VerifyDFSNumbers patch: D38331.
llvm-svn: 314800
Iterate over statement instructions instead of basic block instructions
when creating MemoryAccesses. This makes the creation
of MemoryAccesses independent of how the basic blocks are split into
multiple ScopStmts.
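A sketch of the new iteration, assuming ScopStmt::getInstructions() as the accessor for the per-statement instruction list (the helper below is hypothetical):
  #include "polly/ScopInfo.h"

  // Hypothetical helper standing in for the actual access-creation code.
  void buildMemoryAccess(llvm::Instruction &Inst, polly::ScopStmt &Stmt);

  void buildAccessFunctions(polly::ScopStmt &Stmt) {
    // Walk the statement's own instruction list instead of its basic block,
    // so access creation stays correct when one block is split into several
    // ScopStmts.
    for (llvm::Instruction *Inst : Stmt.getInstructions())
      buildMemoryAccess(*Inst, Stmt);
  }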
llvm-svn: 314665
Create the MemoryAccesses of invariant loads separately and before
all other MemoryAccesses.
Invariant loads are classified as synthesizable and therefore are not
contained in any statement. When iterating over all instructions of all
statements, the invariant loads are consequently not processed and
iterating over them separately becomes necessary.
This patch can change the order in which MemoryAccesses are created, but
otherwise has no functional change.
Some temporary code is introduced to ensure correctness, but will be
removed in the next commit.
llvm-svn: 314664
Instructions that compute escaping values might be synthesizable and
therefore not contained in any ScopStmt. When buildAccessFunctions is
changed to only iterate over the instruction list of a statement, such
"free" instructions still need to be written. We do this after the
main MemoryAccesses have been created.
This can change the order in which MemoryAccesses are created, but has
otherwise no functional change.
llvm-svn: 314663
Decouple the handling of exit block PHIs from that of other
MemoryAccesses. Exit PHIs only need the PHI handling part of
buildAccessFunctions, but required code for skipping them while creating
other MemoryAccesses.
This change will make it easier to modify how statement MemoryAccesses
are created without considering the exit block special case.
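A rough sketch of the resulting structure (the two helpers are hypothetical names; Region::blocks() and Region::getExit() are LLVM's Region API):
  #include "llvm/Analysis/RegionInfo.h"

  // Hypothetical helpers; the real code lives in ScopBuilder.
  void buildAccessFunctions(llvm::BasicBlock &BB);     // no exit-block checks
  void buildExitPHIAccesses(llvm::BasicBlock &ExitBB); // PHI handling only

  void buildScop(llvm::Region &R) {
    for (llvm::BasicBlock *BB : R.blocks())
      buildAccessFunctions(*BB);
    // The exit block is not part of R.blocks(); handle its PHIs separately.
    if (llvm::BasicBlock *ExitBB = R.getExit())
      buildExitPHIAccesses(*ExitBB);
  }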
llvm-svn: 314662
Loads before the SCoP are always invariant within the SCoP and
therefore are not "required invariant loads". An assertion fails in
ScopBuilder when it finds such an invariant load.
Fix this by not adding such loads to the required invariant load list.
This will likely cause the region not to be considered a valid SCoP.
We may want to unconditionally accept instructions defined before
the region as valid invariant conditions instead of rejecting them.
This fixes a compilation crash of SPEC CPU2006 453.povray's
render.cpp.
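A sketch of the guard (the container name is hypothetical; Region::contains(Instruction*) is LLVM's API):
  #include "llvm/ADT/SetVector.h"
  #include "llvm/Analysis/RegionInfo.h"
  #include "llvm/IR/Instructions.h"

  // Hypothetical stand-in for the required-invariant-load list.
  static llvm::SetVector<llvm::LoadInst *> RequiredInvariantLoads;

  void addRequiredInvariantLoad(llvm::LoadInst *LI, const llvm::Region &R) {
    // A load defined before the region is invariant inside it by construction
    // and must not be recorded; doing so trips an assertion in ScopBuilder.
    if (!R.contains(LI))
      return;
    RequiredInvariantLoads.insert(LI);
  }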
llvm-svn: 314636
This matches the behavior we already have in lib/Codegen/CodeGeneration.cpp and
makes sure that we fall back to the original code. It seems that when
invariant load hoisting was introduced to the GPGPU backend we failed to
reset the RTC flag, such that kernels where invariant load hoisting
failed executed the 'optimized' SCoP, which however is set to a simple
'unreachable'. Unsurprisingly, this results in hard-to-debug issues that
are a lot of fun to debug.
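A sketch of the fallback in the GPGPU code-generation path (a fragment; NodeBuilder, Builder, and RTC name the node builder, IR builder, and run-time-check flag and are assumptions):
  // If preloading the invariant loads fails, force the run-time condition to
  // false so execution always takes the original code path instead of the
  // 'optimized' version, which is just an 'unreachable'.
  if (!NodeBuilder.preloadInvariantLoads())
    RTC = Builder.getFalse();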
llvm-svn: 314624
These functions print a multi-line and sorted representation of unions
of polyhedra. Each polyhedron (basic_{set/map}) has its own line.
The first sort key is the polyhedron's hierarchical space structure.
The secondary sort key is the lower bound of the polyhedron, which should
ensure that the polyhedra are printed in approximately ascending order.
Example output of dumpPw():
[p_0, p_1, p_2] -> {
Stmt0[0] -> [0, 0];
Stmt0[i0] -> [i0, 0] : 0 < i0 <= 5 - p_2;
Stmt1[0] -> [0, 2] : p_1 = 1 and p_0 = -1;
Stmt2[0] -> [0, 1] : p_1 >= 3 + p_0;
Stmt3[0] -> [0, 3];
}
In contrast, dumpExpanded() prints each point in the sets, unless there
is an unbounded dimension that cannot be expanded.
This is useful for reduced test cases where the loop counts are set to
some constant in order to understand a bug.
Example output of dumpExpanded(
{ [MemRef_A[i0] -> [i1]] : (exists (e0 = floor((1 + i1)/3): i0 = 1 and
3e0 <= i1 and 3e0 >= -1 + i1 and i1 >= 15 and i1 <= 25)) or (exists (e0
= floor((i1)/3): i0 = 0 and 3e0 < i1 and 3e0 >= -2 + i1 and i1 > 0 and
i1 <= 11)) }):
{
[MemRef_A[0] ->[1]];
[MemRef_A[0] ->[2]];
[MemRef_A[0] ->[4]];
[MemRef_A[0] ->[5]];
[MemRef_A[0] ->[7]];
[MemRef_A[0] ->[8]];
[MemRef_A[0] ->[10]];
[MemRef_A[0] ->[11]];
[MemRef_A[1] ->[15]];
[MemRef_A[1] ->[16]];
[MemRef_A[1] ->[18]];
[MemRef_A[1] ->[19]];
[MemRef_A[1] ->[21]];
[MemRef_A[1] ->[22]];
[MemRef_A[1] ->[24]];
[MemRef_A[1] ->[25]]
}
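A typical use from throw-away debugging code or a debugger session, assuming the new functions live in polly/Support/ISLTools.h and have overloads for the common isl union types (both assumptions):
  #include "polly/Support/ISLTools.h"

  void debugDump(isl::union_map Schedule, isl::union_map Accesses) {
    polly::dumpPw(Schedule);       // one basic_map per line, sorted as above
    polly::dumpExpanded(Accesses); // one point per line (bounded dims only)
  }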
Differential Revision: https://reviews.llvm.org/D38349
llvm-svn: 314525
Summary:
Add a document which describes:
- GEMM performance comparison.
- An experiment that measures the compile time impact
of enabling Polly when compiling LLVM+Clang+Polly.
Contributed-by: Theodoros Theodoridis <theodoros.theodoridis@inf.ethz.ch>
Differential Revision: https://reviews.llvm.org/D38330
llvm-svn: 314419
In order for debuggers to be able to call an inline method, it must have
been instantiated somewhere. The dump() methods are usually not used, so
add an instantiation in debug builds.
This allows calling .dump() on any isl++ object from the gcc/gdb and
Visual Studio debuggers in debug builds with assertions enabled.
In optimized builds, even with assertions enabled, the dump() methods
are inlined even in GICHelper.cpp, so no externally visible symbols
will be available either.
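A self-contained illustration of the underlying C++ issue (Wrapper is a stand-in class, not the isl++ bindings):
  #include <cstdio>

  // An inline member that is never called from normal code gets no
  // out-of-line definition, so a debugger cannot evaluate `Obj.dump()`.
  struct Wrapper {
    int Value = 42;
    void dump() const { std::printf("Wrapper(%d)\n", Value); }
  };

  #ifndef NDEBUG
  // Odr-using dump() in one .cpp file forces the compiler to emit a
  // definition in debug builds, making the symbol callable from a debugger.
  [[maybe_unused]] static void (Wrapper::*const ForceDumpEmission)() const =
      &Wrapper::dump;
  #endif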
Differential Revision: https://reviews.llvm.org/D38198
llvm-svn: 314395
In case a PHI node follows an error block we can assume that the incoming value
can only come from the node that is not an error block. As a result, conditions
that seemed non-affine before are now in fact affine.
This is a recommit of r312663 after fixing
test/Isl/CodeGen/phi_after_error_block_outside_of_scop.ll
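A sketch of the idea for a two-predecessor PHI (isErrorBlock is Polly's helper; the exact signature used here is an assumption):
  #include "polly/Support/ScopHelper.h"
  #include "llvm/Analysis/LoopInfo.h"
  #include "llvm/Analysis/RegionInfo.h"
  #include "llvm/IR/Dominators.h"
  #include "llvm/IR/Instructions.h"
  #include <cassert>

  // The incoming value from an error block can never reach the PHI at run
  // time, so only the other edge matters when checking for affine conditions.
  llvm::Value *getReachingIncomingValue(llvm::PHINode &PHI,
                                        const llvm::Region &R,
                                        llvm::LoopInfo &LI,
                                        const llvm::DominatorTree &DT) {
    assert(PHI.getNumIncomingValues() == 2 && "sketch handles two edges only");
    if (polly::isErrorBlock(*PHI.getIncomingBlock(0), R, LI, DT))
      return PHI.getIncomingValue(1);
    return PHI.getIncomingValue(0);
  }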
llvm-svn: 314075
Such RTCs may introduce integer wrapping intrinsics with more than 64 bits,
which are translated to library calls on AOSP that are not part of the
runtime and will consequently cause linker errors.
Thanks to Eli Friedman for reporting this issue and reducing the test case.
llvm-svn: 314065
Remove an assertion that tests the injectivity of the
PHIRead -> PHIWrite relation. That is, allow a single PHI write to be
used by multiple PHI reads. This may happen when statements containing
the PHI write lack the statement instances that would overwrite the
previous incoming value, due to (assumed/invalid) contexts.
This results in a PHI write that is mapped to multiple targets, which is
not supported. Codegen will select one of the targets using
getAddressFunction(). However, the runtime check should protect us from
this case ever being executed.
We therefore allow non-injective PHI relations. Additional calculations
to detect/sanitize this case would probably not be worth the
computational effort.
This fixes llvm.org/PR34485
llvm-svn: 313902
Before this patch, ScopInfo::getValueDef(SAI) used
getStmtFor(Instruction*) to find the MemoryAccess that writes a
MemoryKind::Value. In cases where the value is synthesizable within the
statement that defines it, the instruction is not added to the statement's
instruction list, which means getStmtFor() won't return anything.
If the synthesizable instruction is not synthesizable in a different
statement (e.g. because it is defined in a loop whose escape value
ScalarEvolution cannot derive), we still need a MemoryKind::Value
and a write to it that makes it available in the other statements.
Introduce a separate map for this purpose.
This fixes MultiSource/Benchmarks/MallocBench/cfrac where
-polly-simplify could not find the writing MemoryAccess for a use. The
write was not marked as required and consequently was removed.
Because this could in principle happen as well for PHI scalars,
add such a map for PHI reads as well.
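A sketch of the added bookkeeping (member and accessor names approximate the idea rather than the exact code):
  #include "polly/ScopInfo.h"
  #include "llvm/ADT/DenseMap.h"

  // Map each scalar array directly to its defining write / PHI read access,
  // so lookups no longer go through getStmtFor(Instruction *).
  llvm::DenseMap<const polly::ScopArrayInfo *, polly::MemoryAccess *> ValueDefAccs;
  llvm::DenseMap<const polly::ScopArrayInfo *, polly::MemoryAccess *> PHIReadAccs;

  polly::MemoryAccess *getValueDef(const polly::ScopArrayInfo *SAI) {
    return ValueDefAccs.lookup(SAI);
  }

  polly::MemoryAccess *getPHIRead(const polly::ScopArrayInfo *SAI) {
    return PHIReadAccs.lookup(SAI);
  }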
llvm-svn: 313881
Since -polly-codegen reports itself to preserve DependenceInfo and IslAstInfo,
we might get analyses that were computed by a different ScopInfo for a
different Scop structure. This would be unfortunate because DependenceInfo and
IslAstInfo hold references to resources allocated by
ScopInfo/ScopBuilder/Scop (e.g. isl_id). If -polly-codegen and
DependenceInfo/IslAstInfo do not agree on which Scop to use, unpredictable
things can happen.
When the ScopInfo/Scop object is freed, there is a high probability that the
new ScopInfo/Scop object will be created at the same heap position with the
same address. Comparing whether the Scop or ScopInfo address is the
expected one is therefore unreliable.
Instead, we compare the address of the isl_ctx object. Both DependenceInfo
and IslAstInfo must hold a reference to the isl_ctx object to ensure it is
not freed before the destruction of those analyses, which might happen after
the destruction of the Scop/ScopInfo they refer to. Hence, the isl_ctx
will not be freed and its address not reused as long as there is a
DependenceInfo or IslAstInfo around.
This fixes llvm.org/PR34441
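A sketch of the check (the accessor names are assumptions based on the description above):
  // Refuse dependence information that was computed for a SCoP living in a
  // different isl_ctx, i.e. for a different Scop/ScopInfo instance.
  if (D.getSharedIslCtx() != S.getSharedIslCtx())
    return false; // behave as if no usable dependences were available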
llvm-svn: 313842
Fix walking over the schedule tree to collect its properties
(Number of permutable bands etc.).
Also add regression tests for these statistics.
llvm-svn: 313750
Computing the reaching definition in forwardTree() can take a long time
if the coefficients are large. When the forwarding is
carried out (doIt==true), forwardTree() must execute entirely or not at
all to get a consistent output, which means we cannot just allow
out-of-quota errors to happen in the middle of the processing.
We introduce the class IslQuotaScope, which allows opting in code that
is conformant to, and has been tested with, out-of-quota events. In case of
ForwardOpTree, out-of-quota is allowed during the operand tree
examination, but not during the transformation. The same forwardTree()
recursion is used for examination and execution, meaning that the
reaching definition has already been computed in the examination tree
walk and cached for reuse in the transformation tree walk.
This should fix the time-out of grtestutils.ll on the aosp buildbot. If
the compilation still takes too long, we can reduce the max-operations
allowed for -polly-optree.
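For reference, a self-contained sketch of what such a quota scope boils down to in the raw isl C API (IslQuotaScope wraps this in RAII form; the driver function is illustrative):
  #include <isl/ctx.h>
  #include <isl/options.h>

  void runExaminationWithQuota(isl_ctx *Ctx, unsigned long MaxOps,
                               void (*Examine)(isl_ctx *)) {
    int OldOnError = isl_options_get_on_error(Ctx);
    isl_options_set_on_error(Ctx, ISL_ON_ERROR_CONTINUE); // do not abort
    isl_ctx_reset_operations(Ctx);
    isl_ctx_set_max_operations(Ctx, MaxOps);

    Examine(Ctx); // may run out of quota; results are only cached

    // The transformation phase must not be quota-limited.
    isl_ctx_set_max_operations(Ctx, 0);
    if (isl_ctx_last_error(Ctx) == isl_error_quota)
      isl_ctx_reset_error(Ctx);
    isl_options_set_on_error(Ctx, OldOnError);
  }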
Differential Revision: https://reviews.llvm.org/D37984
llvm-svn: 313690
cl::opt<unsigned long> is not specialized and hence the option
-polly-optree-max-ops is impossible to use.
Replace it with the supported cl::opt<unsigned>.
Also check for an error state when computing the written value, which
happens when the quota runs out.
llvm-svn: 313546
In r301670 IR verification was disabled. Since then, CodeGen writing
malformed IR would only be noticed by unpredictable behavior in
follow-up passes (e.g. segfaults, infinite loops) or by IR verification
in the backend in assert builds.
Re-enable -polly-codegen-verify for the regression tests to ensure
that malformed IR is detected where Polly generated malformed IR in the
past and that changes in CodeGen are at least partially covered by
check-polly
(otherwise malformed IR may only get noticed when the buildbots run the
test-suite).
Differential Revision: https://reviews.llvm.org/D37969
llvm-svn: 313527
This is a resubmission of r313270. It broke standalone builds of
compiler-rt because we were not correctly generating the llvm-lit
script in the standalone build directory.
The fixes incorporated here attempt to find llvm/utils/llvm-lit
from the source tree returned by llvm-config. If present, it
will generate llvm-lit into the output directory. Regardless,
the user can specify -DLLVM_EXTERNAL_LIT to point to a specific
lit.py on their file system. This supports the use case of
someone installing lit via a package manager. If it cannot find
a source tree, and -DLLVM_EXTERNAL_LIT is either unspecified or
invalid, then we print a warning that tests will not be able
to run.
Differential Revision: https://reviews.llvm.org/D37756
llvm-svn: 313407
This patch is still breaking several multi-stage compiler-rt bots.
I already know what the fix is, but I want to get the bots green
for now and then try re-applying in the morning.
llvm-svn: 313335
This patch simplifies LLVM's lit infrastructure by enforcing an ordering
that a site config is always run before a source-tree config.
A significant amount of the complexity from lit config files arises from
the fact that inside of a source-tree config file, we don't yet know if
the site config has been run. However it is *always* required to run
a site config first, because it passes various variables down through
CMake that the main config depends on. As a result, every config
file has to do a bunch of magic to try to reverse-engineer the location
of the site config file if they detect (heuristically) that the site
config file has not yet been run.
This patch solves the problem by emitting a mapping from source tree
config file to binary tree site config file in llvm-lit.py. Then, during
discovery when we find a config file, we check to see if we have a
target mapping for it, and if so we use that instead.
This mechanism is generic enough that it does not affect external users
of lit. They will just not have a config mapping defined, and everything
will work as normal.
On the other hand, for us it allows us to make many simplifications:
* We are guaranteed that a site config will be executed first
* Inside of a main config, we no longer have to assume that attributes
might not be present and use getattr everywhere.
* We no longer have to pass parameters such as --param llvm_site_config=<path>
on the command line.
* It is future-proof, meaning you don't have to edit llvm-lit.in to add
support for new projects.
* All of the duplicated logic of trying various fallback mechanisms of
finding a site config from the main config are now gone.
One potentially noteworthy thing that was required to implement this
change is that whereas the ninja check targets previously used the first
method to spawn lit, they now use the second. In particular, you can no
longer run lit.py against the source tree while specifying the various
`foo_site_config=<path>` parameters. Instead, you need to run
llvm-lit.py.
Differential Revision: https://reviews.llvm.org/D37756
llvm-svn: 313270
The remaining parts produced by the full partial tile isolation can contain
hot spots that are worth optimizing. Currently, we rely on the simple
loop unrolling pass, LICM and the SLP vectorizer to optimize such parts.
However, this approach can suffer from the lack of aliasing information
that Polly provides using additional alias metadata, and/or the lack of
the information required by the simple loop unrolling pass.
This patch is the first step to optimizing the remaining parts. To do so,
we unroll and separate them. On, for instance, Intel Kaby Lake, this helps
to increase the performance of the generated code from 39.87 GFlop/s to
49.23 GFlop/s.
The next possible step is to avoid the unrolling performed by Polly for
the isolated and remaining parts and rely only on the simple loop
unrolling pass and the loop vectorizer.
Reviewed-by: Tobias Grosser <tobias@grosser.es>
Differential Revision: https://reviews.llvm.org/D37692
llvm-svn: 312929
Update CodegenCleanup using the function-level passes added by
populatePassManager that run between EP_EarlyAsPossible and
EP_VectorizerStart in -O3.
The changes in particular are:
- Added pass creation arguments, e.g. ExpensiveCombines for InstCombine
  (see the sketch after the notes).
- Remove reroll pass. The option -reroll-loops is disabled by default.
- Add passes run with UnitAtATime, which is the default.
- Add instances of LibCallsShrinkWrap, TailCallElimination, SCCP
(sparse conditional constant propagation), Float2Int
that did not run before.
- Add instances of GVN as in the default pipeline.
Notes:
- GVNHoist, GVNSink, NewGVN are still disabled in the -O3 pipeline.
- The optimization level and other optimization parameters are not
accessible outside of PassManagerBuilder, hence we cannot add passes
depending on these.
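A sketch of a few of the newly mirrored function passes, using the legacy pass manager (not the literal contents of CodegenCleanup.cpp):
  #include "llvm/IR/LegacyPassManager.h"
  #include "llvm/Transforms/InstCombine/InstCombine.h"
  #include "llvm/Transforms/Scalar.h"

  static void addNewCleanupPasses(llvm::legacy::FunctionPassManager &FPM) {
    FPM.add(llvm::createInstructionCombiningPass(/*ExpensiveCombines=*/true));
    FPM.add(llvm::createTailCallEliminationPass());
    FPM.add(llvm::createSCCPPass()); // sparse conditional constant propagation
    FPM.add(llvm::createFloat2IntPass());
  }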
Differential Revision: https://reviews.llvm.org/D37571
llvm-svn: 312875
The type of NewValue might change due to ScalarEvolution
looking through bitcasts. The synthesized NewValue therefore
gets the type from before the bitcast.
llvm-svn: 312718
This reverts commit
r312410 - [ScopDetect/Info] Look through PHIs that follow an error block
The commit caused generation of invalid IR due to accessing a parameter
that does not dominate the SCoP.
llvm-svn: 312663
Up to now, ZoneAlgo considered array elements accessed by something
other than a LoadInst or StoreInst as not analyzable. This patch removes
that restriction by using the unknown ValInst to describe the written
content, or the element type's null value in the case of memset.
Differential Revision: https://reviews.llvm.org/D37362
llvm-svn: 312630
Since r312249, instructions of an entry block of region statements are
not marked as roots anymore and hence can theoretically be removed
if unused. Theoretically, because the instruction list was not changed.
Still, MemoryAccesses for unused instructions were removed. This led
to a failed assertion in the code generator when the MemoryAccess for
the still-listed instruction was not found.
This should fix the
Assertion failed: ArrayAccess && "No array access found for instruction!",
file ScopInfo.h, line 1494
compiler crashes.
llvm-svn: 312566