In case a PHI node follows an error block we can assume that the incoming value
can only come from the node that is not an error block. As a result, conditions
that seemed non-affine before are now in fact affine.
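A purely hypothetical sketch of the pattern (variable names made up): the block ending in
abort() is an error block, so the value merged after the branch can only come from the
non-error predecessor and the loop bound stays affine.
```cpp
#include <cstdlib>

// Hedged illustration only: the abort() block is an error block, so the PHI
// merging 'Bound' can ignore the non-affine incoming value from that block.
void foo(float *A, long N, long M) {
  long Bound;
  if (N < 0) {
    Bound = N * M; // non-affine incoming value on the error path
    std::abort();  // marks this block as an error block
  } else {
    Bound = N;     // affine incoming value on the normal path
  }
  for (long I = 0; I < Bound; ++I) // loop bound is affine after all
    A[I] = A[I] + 1.0f;
}
```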
This is a recommit of r312663 after fixing
test/Isl/CodeGen/phi_after_error_block_outside_of_scop.ll
llvm-svn: 314075
Such RTCs may introduce integer wrapping intrinsics of more than 64 bits,
which are translated to library calls on AOSP that are not part of the
runtime and will consequently cause linker errors.
Thanks to Eli Friedman for reporting this issue and reducing the test case.
llvm-svn: 314065
Remove an assertion that tests the injectivity of the
PHIRead -> PHIWrite relation. That is, allow a single PHI write to be
used by multiple PHI reads. This may happen when, due to (assumed/invalid)
contexts, the statements containing the PHI write lack the statement
instances that would overwrite the previous incoming value.
This results in the PHI write being mapped to multiple targets, which is not
supported. Codegen will select one of the targets using
getAddressFunction(). However, the runtime check should protect us from
this case ever being executed.
We therefore allow non-injective PHI relations. Additional calculations to
detect/sanitize this case would probably not be worth the computational
effort.
This fixes llvm.org/PR34485
llvm-svn: 313902
Before this patch, ScopInfo::getValueDef(SAI) used
getStmtFor(Instruction*) to find the MemoryAccess that writes a
MemoryKind::Value. In cases where the value is synthesizable within the
statement that defines it, the instruction is not added to the statement's
instruction list, which means getStmtFor() won't return anything.
If the synthesizable instruction is not synthesizable in a different
statement (because it is defined in a loop whose escape value
ScalarEvolution cannot derive), we still need a MemoryKind::Value
and a write to it that makes it available in the other statements.
Introduce a separate map for this purpose.
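As a hedged sketch of the kind of code affected (names invented, not taken from cfrac):
within its defining statement the value is synthesizable, but after the data-dependent
exit ScalarEvolution cannot express it, so the MemoryKind::Value write must be kept.
```cpp
// Hypothetical example: 'V' is synthesizable inside the loop statement that
// defines it (it is affine in I), but the data-dependent break means
// ScalarEvolution cannot derive its escape value, so the statement using 'V'
// still needs a MemoryKind::Value and a write that makes it available.
long findScaled(const long *A, long N) {
  long V = 0;
  for (long I = 0; I < N; ++I) {
    V = 2 * I;       // synthesizable within this statement
    if (A[I] == 0)
      break;         // data-dependent exit: no closed-form escape value
  }
  return V;          // use in another statement
}
```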
This fixes MultiSource/Benchmarks/MallocBench/cfrac where
-polly-simplify could not find the writing MemoryAccess for a use. The
write was not marked as required and consequently was removed.
Because this could in principle happen as well for PHI scalars,
add such a map for PHI reads as well.
llvm-svn: 313881
Since -polly-codegen reports itself to preserve DependenceInfo and IslAstInfo,
we might get those analyses that were computed by a different ScopInfo for a
different Scop structure. This would be unfortunate because DependenceInfo and
IslAstInfo hold references to resources allocated by
ScopInfo/ScopBuilder/Scop (e.g. isl_id). If -polly-codegen and
DependenceInfo/IslAstInfo do not agree on which Scop to use, unpredictable
things can happen.
When the ScopInfo/Scop object is freed, there is a high probability that the
new ScopInfo/Scop object will be created at the same heap position with the
same address. Comparing whether the Scop or ScopInfo address is the expected
one is therefore unreliable.
Instead, we compare the address of the isl_ctx object. Both DependenceInfo
and IslAstInfo must hold a reference to the isl_ctx object to ensure it is
not freed before the destruction of those analyses which might happen after
the destruction of the Scop/ScopInfo they refer to. Hence, the isl_ctx
will not be freed and its address not reused as long as there is a
DependenceInfo or IslAstInfo around.
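A minimal sketch of the idea, with invented type and member names (the actual code lives
in Polly's DependenceInfo and IslAstInfo):
```cpp
#include <isl/ctx.h>
#include <memory>

// Hedged sketch: the analysis result keeps the isl_ctx alive through a shared
// reference, so comparing context addresses is reliable, unlike comparing
// Scop/ScopInfo addresses that may be reused after reallocation.
struct AnalysisResult {
  std::shared_ptr<isl_ctx> IslCtx; // keeps the context from being freed
};

static bool isForCurrentScop(const AnalysisResult &R, isl_ctx *CurrentCtx) {
  // While 'R' exists, its isl_ctx cannot be freed and its address cannot be
  // recycled, so pointer equality identifies the Scop it was computed for.
  return R.IslCtx.get() == CurrentCtx;
}
```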
This fixes llvm.org/PR34441
llvm-svn: 313842
Fix walking over the schedule tree to collect its properties
(number of permutable bands, etc.).
Also add regression tests for these statistics.
llvm-svn: 313750
cl::opt<unsigned long> is not specialized and hence the option
-polly-optree-max-ops is impossible to use.
Replace it with the supported cl::opt<unsigned>.
Also check for an error state when computing the written value, which
happens when the quota runs out.
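For illustration, a declaration along the following lines now works (description text and
default value are made up), because cl::opt<unsigned> has a command-line parser
specialization while cl::opt<unsigned long> does not:
```cpp
#include "llvm/Support/CommandLine.h"
using namespace llvm;

// Sketch only; the real option is defined in Polly's operand tree forwarding.
static cl::opt<unsigned> MaxOps(
    "polly-optree-max-ops",
    cl::desc("Maximum number of isl operations to invest for forwarding "
             "(0 = no limit)"),
    cl::init(1000000), cl::Hidden);
```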
llvm-svn: 313546
In r301670 IR verification was disabled. Since then, CodeGen writing
malformed IR would only be noticed by unpredictable behavior in
follow-up passes (e.g. segfaults, infinite loops) or IR verification in
the backend assert builds.
Re-enable -polly-codegen-verify for the regression tests to ensure
that malformed IR is detected where Polly generated malformed IR in the
past and that changes in CodeGen are at least partially covered by
check-polly (otherwise malformed IR may only get noticed when the
buildbots run the test-suite).
Differential Revision: https://reviews.llvm.org/D37969
llvm-svn: 313527
This is a resubmission of r313270. It broke standalone builds of
compiler-rt because we were not correctly generating the llvm-lit
script in the standalone build directory.
The fixes incorporated here attempt to find llvm/utils/llvm-lit
from the source tree returned by llvm-config. If present, it
will generate llvm-lit into the output directory. Regardless,
the user can specify -DLLVM_EXTERNAL_LIT to point to a specific
lit.py on their file system. This supports the use case of
someone installing lit via a package manager. If it cannot find
a source tree, and -DLLVM_EXTERNAL_LIT is either unspecified or
invalid, then we print a warning that tests will not be able
to run.
Differential Revision: https://reviews.llvm.org/D37756
llvm-svn: 313407
This patch is still breaking several multi-stage compiler-rt bots.
I already know what the fix is, but I want to get the bots green
for now and then try re-applying in the morning.
llvm-svn: 313335
This patch simplifies LLVM's lit infrastructure by enforcing an ordering
that a site config is always run before a source-tree config.
A significant amount of the complexity from lit config files arises from
the fact that inside of a source-tree config file, we don't yet know if
the site config has been run. However it is *always* required to run
a site config first, because it passes various variables down through
CMake that the main config depends on. As a result, every config
file has to do a bunch of magic to try to reverse-engineer the location
of the site config file if they detect (heuristically) that the site
config file has not yet been run.
This patch solves the problem by emitting a mapping from source tree
config file to binary tree site config file in llvm-lit.py. Then, during
discovery when we find a config file, we check to see if we have a
target mapping for it, and if so we use that instead.
This mechanism is generic enough that it does not affect external users
of lit. They will just not have a config mapping defined, and everything
will work as normal.
On the other hand, for us it allows us to make many simplifications:
* We are guaranteed that a site config will be executed first
* Inside of a main config, we no longer have to assume that attributes
might not be present and use getattr everywhere.
* We no longer have to pass parameters such as --param llvm_site_config=<path>
on the command line.
* It is future-proof, meaning you don't have to edit llvm-lit.in to add
support for new projects.
* All of the duplicated logic of trying various fallback mechanisms of
finding a site config from the main config are now gone.
One potentially noteworthy thing that was required to implement this
change is that whereas the ninja check targets previously used the first
method to spawn lit, they now use the second. In particular, you can no
longer run lit.py against the source tree while specifying the various
`foo_site_config=<path>` parameters. Instead, you need to run
llvm-lit.py.
Differential Revision: https://reviews.llvm.org/D37756
llvm-svn: 313270
The remaining parts produced by the full partial tile isolation can contain
hot spots that are worth optimizing. Currently, we rely on the simple
loop unrolling pass, LICM and the SLP vectorizer to optimize such parts.
However, that approach can suffer from the lack of information about
aliasing that Polly provides using additional alias metadata and/or the lack
of information required by the simple loop unrolling pass.
This patch is the first step toward optimizing the remaining parts. To do so,
we unroll and separate them. On Intel Kaby Lake, for instance, this helps
to increase the performance of the generated code from 39.87 GFlop/s to
49.23 GFlop/s.
The next possible step is to avoid the unrolling performed by Polly for
isolated and remaining parts and rely only on the simple loop unrolling pass
and the Loop Vectorizer.
Reviewed-by: Tobias Grosser <tobias@grosser.es>
Differential Revision: https://reviews.llvm.org/D37692
llvm-svn: 312929
The type of NewValue might change because ScalarEvolution
looks through bitcasts. The synthesized NewValue therefore
gets the type from before the bitcast.
llvm-svn: 312718
This reverts commit
r312410 - [ScopDetect/Info] Look through PHIs that follow an error block
The commit caused generation of invalid IR due to accessing a parameter
that does not dominate the SCoP.
llvm-svn: 312663
Up to now ZoneAlgo considered array elements accessed by something other
than a LoadInst or StoreInst as not analyzable. This patch removes that
restriction by using the unknown ValInst to describe the written
content, respectively the element type's null value in the case of a memset.
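A hedged illustration (code made up): the content written by a memset is the element
type's null value, so the array stays analyzable.
```cpp
#include <cstring>

// Hypothetical example: the memset writes double's null value (0.0) to every
// element, which ZoneAlgo can now describe instead of bailing out because the
// write is not a StoreInst.
void zeroThenAccumulate(double *A, const double *B, long N) {
  std::memset(A, 0, N * sizeof(double)); // written content: 0.0 per element
  for (long I = 0; I < N; ++I)
    A[I] += B[I];
}
```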
Differential Revision: https://reviews.llvm.org/D37362
llvm-svn: 312630
Since r312249, instructions of the entry block of region statements are
not marked as root anymore and hence can theoretically be removed
if unused. Theoretically, because the instruction list was not changed.
Still, MemoryAccesses for unused instructions were removed. This led
to a failed assertion in the code generator when the MemoryAccess for
the still listed instruction was not found.
This should fix the
Assertion failed: ArrayAccess && "No array access found for instruction!",
file ScopInfo.h, line 1494
compiler crashes.
llvm-svn: 312566
Before this patch, OpTree did not consider forwarding an operand tree consisting
of only a single LoadInst as useful. The motivation was that, like an access to a
read-only variable, it would just replace one MemoryAccess by another. However,
in contrast to read-only accesses, this would replace a scalar access by an
array access, which is something worth doing.
In addition, leaving the scalar MemoryAccess in place is problematic in that
VirtualUse prioritizes inter-Stmt use over intra-Stmt use. It was possible that the same LLVM
value has a MemoryAccess for accessing the remote Stmt's LoadInst as well as
having the same LoadInst in its own instruction list (due to being forwarded
from another operand tree).
With this patch we ensure that if a LoadInst is forwarded in any operand tree,
the operand tree containing just the LoadInst is forwarded as well, which
effectively removes the scalar MemoryAccess such that only the array access
remains, not both.
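A hypothetical sketch of the situation (code invented for illustration; the two statements
are assumed to be modeled separately): forwarding the lone load lets the using statement
re-load from the array, so the scalar MemoryAccess disappears.
```cpp
// Sketch only: 'X' is defined by a single LoadInst. Forwarding that operand
// tree means the consumer statement re-loads A[I] itself instead of receiving
// 'X' through a scalar MemoryAccess.
void scale(const double *A, double *B, long N) {
  for (long I = 0; I < N; ++I) {
    double X = A[I]; // operand tree consisting of just one LoadInst
    B[I] = 3.0 * X;  // after forwarding: re-loads A[I], no scalar dependence
  }
}
```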
Thanks Michael for the detailed explanation.
Reviewers: Meinersbur, bellu, singam-sanjay, gareevroman
Subscribers: hfinkel, pollydev, llvm-commits
Tags: #polly
Differential Revision: https://reviews.llvm.org/D37424
llvm-svn: 312456
In certain situations, the context in the isl_ast_build could cause the
min/max locations of our alias sets to become empty, which would cause an
internal error in isl, which is then unable to derive a value for these
expressions. Check these conditions before code generating expressions and
instead assume that the alias check succeeded. This is valid, as the corresponding
memory accesses will not be executed under any valid context.
This fixes llvm.org/PR34432. Thanks to Qirun Zhang for reporting.
llvm-svn: 312455
In case a PHI node follows an error block we can assume that the incoming value
can only come from the node that is not an error block. As a result, conditions
that seemed non-affine before are now in fact affine.
llvm-svn: 312410
In Polly, we specifically add a parameter to represent the outermost dimension
size of Fortran arrays. We do this because this information is statically
available from the Fortran metadata generated by dragonegg.
However, we were only materializing these parameters (meaning, creating an
llvm::Value to back the isl_id) from *memory accesses*. This is wrong;
we should materialize parameters from *scop array info*.
It is wrong because if there is a case where we detect two Fortran arrays,
but only one of them is accessed, we may not materialize the other array's
dimensions at all.
We fix this by looping over all
`polly::ScopArrayInfo` in a scop, rather than just all `polly::MemoryAccess`.
Differential Revision: https://reviews.llvm.org/D37379
llvm-svn: 312350
Mark scalar dependences for different statements belonging to same BB
as 'Inter'.
Contributed-by: Nandini Singhal <cs15mtech01004@iith.ac.in>
Differential Revision: https://reviews.llvm.org/D37147
llvm-svn: 312324
Summary:
After region statements now also have instruction lists, this is a
straightforward extension.
Reviewers: Meinersbur, bollu, singam-sanjay, gareevroman
Reviewed By: Meinersbur
Subscribers: hfinkel, pollydev, llvm-commits
Tags: #polly
Differential Revision: https://reviews.llvm.org/D37298
llvm-svn: 312249
This is useful when we face certain intrinsics such as `llvm.exp.*`
which cannot be lowered by the NVPTX backend while other intrinsics can.
So, we would need to keep blacklists of intrinsics that cannot be
handled by the NVPTX backend. It is much simpler to try and promote
all intrinsics to libdevice versions.
This patch makes function/intrinsic handling very uniform, and will always try to use
a libdevice version if it exists.
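A hedged sketch of the mapping idea (the helper and its table are invented; __nv_expf and
__nv_exp are actual libdevice functions):
```cpp
#include <string>

// Sketch only: translate an intrinsic name to a libdevice counterpart so the
// NVPTX backend never has to lower the intrinsic itself.
static std::string getLibdeviceEquivalent(const std::string &Name) {
  if (Name == "llvm.exp.f32")
    return "__nv_expf";
  if (Name == "llvm.exp.f64")
    return "__nv_exp";
  return Name; // no known replacement: keep the original callee
}
```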
Differential Revision: https://reviews.llvm.org/D37056
llvm-svn: 312239
This adds code generation support for the previous commit.
This patch has been re-applied, after the memory issue in the previous patch
has been fixed.
llvm-svn: 312211
By using statement lists in the entry blocks of region statements, instruction
level analyses also work on region statements.
We currently only model the entry block of a region statement, as this is
sufficient for most transformations the known passes currently execute. Modeling
instructions in the presence of control flow (e.g. infinite loops) is left
out to not increase code complexity too much. It can be added when good use
cases are found.
This change set is reapplied, after a memory corruption issue had been fixed.
llvm-svn: 312210
By using statement lists in the entry blocks of region statements, instruction
level analyses also work on region statements.
We currently only model the entry block of a region statement, as this is
sufficient for most transformations the known passes currently execute. Modeling
instructions in the presence of control flow (e.g. infinite loops) is left
out to not increase code complexity too much. It can be added when good use
cases are found.
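A hypothetical example of such a region statement (code made up): the data-dependent
branch forms a non-affine subregion that is modeled as one statement, and only the
instructions of its entry block are put on the instruction list.
```cpp
// Sketch only: the condition on 'V' is data-dependent, so the if/else becomes
// a region statement; its entry block contains the load of A[I].
void clampSquares(const double *A, double *B, long N) {
  for (long I = 0; I < N; ++I) {
    double V = A[I]; // entry block of the region statement
    if (V > 4.2)     // non-affine, data-dependent control flow
      B[I] = V * V;
    else
      B[I] = V + 1.0;
  }
}
```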
llvm-svn: 312128
This patch allows annotating metadata on an IR instruction
(with "polly_split_after"), which specifies where to split a particular
scop statement.
Contributed-by: Nandini Singhal <cs15mtech01004@iith.ac.in>
Differential Revision: https://reviews.llvm.org/D36402
llvm-svn: 312107
ZoneAlgo used to bail out for the complete SCoP if it encountered
something violating its assumptions. This meant that neither OpTree could
forward any load nor DeLICM do anything in such cases, even if their
transformations were unrelated to the violations.
This patch adds a list of compatible elements (currently with the
granularity of entire arrays) that can be used for analysis. OpTree
and DeLICM can then check whether their transformations only concern
compatible elements, and skip non-compatible ones.
This will be useful for e.g. Polybench's benchmarks covariance,
correlation, bicg, doitgen, durbin, gramschmidt, adi that have
assumption violations that are not necessarily relevant
for all transformations.
Differential Revision: https://reviews.llvm.org/D37219
llvm-svn: 311929
In cases where the entry block of a scop was not contained in a loop that was
part of the scop region and at the same time there was a loop surrounding the
scop, we failed to count the loops in the scop and consequently did not consider
the scop profitable. We correct this by only moving to the loop parent in case
the current loop is contained in the scop.
This increases the number of loops in COSMO which we assume to be profitable
from 3974 to 4981.
llvm-svn: 311863
Whether a partial write is tautological/unsatisfiable not only
depends on the access domain, but also on the domain covered
by its node in the AST.
In the example below, there are two instances of Stmt_cond_false. It may have a partial write access that is not executed in instance Stmt_cond_false(0).
for (int c0 = 0; c0 < tmp5; c0 += 1) {
  Stmt_for_body344(c0);
  if (tmp5 >= c0 + 2)
    Stmt_cond_false(c0);
  Stmt_cond_end(c0);
}
if (tmp5 <= 0) {
  Stmt_for_body344(0);
  Stmt_cond_false(0);
  Stmt_cond_end(0);
}
Isl cannot derive a subscript for an array element that is never accessed.
This caused an error in that no subscript expression has been generated
in IslNodeBuilder::createNewAccesses, but BlockGenerator expected one
to exist because there is an execution of that write, just not in that
ast node.
Fixed by inspecting whether isl generated a constant "false" ast
expression in the current ast node, instead of determining whether the
access domain is empty.
This should fix a compiler crash of the AOSP buildbot.
llvm-svn: 311663
Summary:
This patch comes directly after https://reviews.llvm.org/D34982 which allows fully indexed expansion of MemoryKind::Array. This patch allows expansion for MemoryKind::Value and MemoryKind::PHI.
MemoryKind::Value seems to be working with no major modifications of D34982. A test case has been added. Unfortunately, no "run time" checks can be done for now because, as @Meinersbur explains in a comment on D34982, DependenceInfo needs to be cleared and reset to take expansion into account in the remaining part of the Polly pipeline. There is no way to do that in Polly for now.
MemoryKind::PHI is not working. The test case is in place, but not working. To expand MemoryKind::Array, we first expand the write and then the reads. For MemoryKind::PHI, the idea of the current implementation is to exchange the "roles" of the read and write and first expand the read according to its domain and then the writes.
But with this strategy, I still encounter the problem of a union_map in the new access map.
For example with the following source code (source code of the test case):
```
void mse(double A[Ni], double B[Nj]) {
  int i, j;
  double tmp = 6;
  for (i = 0; i < Ni; i++) {
    for (int j = 0; j < Nj; j++) {
      tmp = tmp + 2;
    }
    B[i] = tmp;
  }
}
```
Polly gives us the following statements and memory accesses:
```
Statements {
    Stmt_for_body
        Domain :=
            { Stmt_for_body[i0] : 0 <= i0 <= 9999 };
        Schedule :=
            { Stmt_for_body[i0] -> [i0, 0, 0] };
        ReadAccess := [Reduction Type: NONE] [Scalar: 1]
            { Stmt_for_body[i0] -> MemRef_tmp_04__phi[] };
        MustWriteAccess := [Reduction Type: NONE] [Scalar: 1]
            { Stmt_for_body[i0] -> MemRef_tmp_11__phi[] };
        Instructions {
            %tmp.04 = phi double [ 6.000000e+00, %entry.split ], [ %add.lcssa, %for.end ]
        }
    Stmt_for_inc
        Domain :=
            { Stmt_for_inc[i0, i1] : 0 <= i0 <= 9999 and 0 <= i1 <= 9999 };
        Schedule :=
            { Stmt_for_inc[i0, i1] -> [i0, 1, i1] };
        MustWriteAccess := [Reduction Type: NONE] [Scalar: 1]
            { Stmt_for_inc[i0, i1] -> MemRef_tmp_11__phi[] };
        ReadAccess := [Reduction Type: NONE] [Scalar: 1]
            { Stmt_for_inc[i0, i1] -> MemRef_tmp_11__phi[] };
        MustWriteAccess := [Reduction Type: NONE] [Scalar: 1]
            { Stmt_for_inc[i0, i1] -> MemRef_add_lcssa__phi[] };
        Instructions {
            %tmp.11 = phi double [ %tmp.04, %for.body ], [ %add, %for.inc ]
            %add = fadd double %tmp.11, 2.000000e+00
            %exitcond = icmp ne i32 %inc, 10000
        }
    Stmt_for_end
        Domain :=
            { Stmt_for_end[i0] : 0 <= i0 <= 9999 };
        Schedule :=
            { Stmt_for_end[i0] -> [i0, 2, 0] };
        MustWriteAccess := [Reduction Type: NONE] [Scalar: 1]
            { Stmt_for_end[i0] -> MemRef_tmp_04__phi[] };
        ReadAccess := [Reduction Type: NONE] [Scalar: 1]
            { Stmt_for_end[i0] -> MemRef_add_lcssa__phi[] };
        MustWriteAccess := [Reduction Type: NONE] [Scalar: 0]
            { Stmt_for_end[i0] -> MemRef_B[i0] };
        Instructions {
            %add.lcssa = phi double [ %add, %for.inc ]
            store double %add.lcssa, double* %arrayidx, align 8
            %exitcond5 = icmp ne i64 %indvars.iv.next, 10000
        }
}
```
and the following dependences:
```
{ Stmt_for_inc[i0, 9999] -> Stmt_for_end[i0] : 0 <= i0 <= 9999;
Stmt_for_inc[i0, i1] -> Stmt_for_inc[i0, 1 + i1] : 0 <= i0 <= 9999 and 0 <= i1 <= 9998;
Stmt_for_body[i0] -> Stmt_for_inc[i0, 0] : 0 <= i0 <= 9999;
Stmt_for_end[i0] -> Stmt_for_body[1 + i0] : 0 <= i0 <= 9998 }
```
When trying to expand this memory access:
```
{ Stmt_for_inc[i0, i1] -> MemRef_tmp_11__phi[] };
```
The new access map would look like this:
```
{ Stmt_for_inc[i0, 9999] -> MemRef_tmp_11__phi_exp[i0] : 0 <= i0 <= 9999; Stmt_for_inc[i0, i1] ->MemRef_tmp_11__phi_exp[i0, 1 + i1] : 0 <= i0 <= 9999 and 0 <= i1 <= 9998 }
```
The idea to implement the expansion for PHI accesses comes from @Meinersbur, and I don't understand why my implementation does not work. I must have missed something in the understanding of the idea.
Contributed by: Nicolas Bonfante <nicolas.bonfante@gmail.com>
Reviewers: Meinersbur, simbuerg, bollu
Reviewed By: Meinersbur
Subscribers: llvm-commits, pollydev, Meinersbur
Differential Revision: https://reviews.llvm.org/D36647
llvm-svn: 311619
Loops with zero iterations are, syntactically, still loops. They have been
excluded from the loop counters even for the non-profitable counters.
This seems to be unintentional, as even the sentinel value of '0' minimal
iterations does exclude such loops.
Fix by never considering the iteration count when the sentinel
value of 0 is found.
This makes the recently added NumTotalLoops counter redundant
with NumLoopsOverall, which now is equivalent. Hence, NumTotalLoops
is removed as well.
Note: The test case 'ScopDetect/statistics.ll' effectively does not
check profitability, because -polly-process-unprofitable is passed
to all test cases.
llvm-svn: 311551
Summary:
ScopDetection used to check if a loop within a region was infinite and emitted a diagnostic in such cases. After r310940 there's no point checking against that situation, as infinite loops don't appear in regions anymore.
The test failure was observed on these two polly buildbots:
http://lab.llvm.org:8011/builders/polly-arm-linux/builds/8368
http://lab.llvm.org:8011/builders/polly-amd64-linux/builds/10310
This patch XFAILs `ReportLoopHasNoExit.ll` and turns infinite loop detection into an assert.
Reviewers: grosser, sanjoy, bollu
Reviewed By: grosser
Subscribers: efriedma, aemerson, kristof.beyls, dberlin, llvm-commits
Tags: #polly
Differential Revision: https://reviews.llvm.org/D36776
llvm-svn: 311503
Summary:
There is no need to emit alias metadata for scalars, as basicaa will easily
distinguish them from arrays. This reduces the size of the metadata we generate.
This is especially useful after we moved to -polly-position=before-vectorizer,
where a lot more scalar dependences are introduced, which increased the size of
the alias analysis metadata and made us commonly reach the limits, introduced
to prevent quadratic growth of this metadata, after which we do not emit
alias metadata at all.
This improves 2mm performance from 1.5 seconds to 0.17 seconds.
Reviewers: Meinersbur, bollu, singam-sanjay
Reviewed By: Meinersbur
Subscribers: pollydev, llvm-commits
Tags: #polly
Differential Revision: https://reviews.llvm.org/D37028
llvm-svn: 311498
Currently, in case of GEMM and the pattern matching based optimizations, we
use only the SLP Vectorizer out of two LLVM vectorizers. Since the Loop
Vectorizer can get in the way of optimal code generation, we disable the Loop
Vectorizer for the innermost loop using mark nodes and emitting the
corresponding metadata.
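For illustration only, and not Polly's actual mechanism: the effect on the innermost loop
is comparable to what the following Clang pragma requests in source code, while Polly
attaches the corresponding llvm.loop metadata through isl mark nodes during code
generation.
```cpp
// Sketch: keep the Loop Vectorizer away from the innermost loop of a
// pattern-matched GEMM kernel so the SLP vectorizer handles it instead.
void gemm(const double *A, const double *B, double *C, long N) {
  for (long I = 0; I < N; ++I)
    for (long J = 0; J < N; ++J) {
#pragma clang loop vectorize(disable)
      for (long K = 0; K < N; ++K)
        C[I * N + J] += A[I * N + K] * B[K * N + J];
    }
}
```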
Reviewed-by: Tobias Grosser <tobias@grosser.es>
Differential Revision: https://reviews.llvm.org/D36928
llvm-svn: 311473
Dragonegg generates most function parameters as pointers to the actual
parameters. However, it does not mark these parameters with the
dereferenceable attribute.
Polly is conservative when it comes to invariant load
hoisting, thus we add runtime checks to invariant-load-hoisted pointers
when we do not know that the pointers are dereferenceable. This is correct
behaviour, but it is a performance penalty.
Add a flag that allows assuming all pointer parameters are dereferenceable. That
way, Polly can speculatively load-hoist parameters to functions without
runtime checks.
Differential Revision: https://reviews.llvm.org/D36461
llvm-svn: 311329