Error blocks may contain arbitrary instructions, among them some which we
cannot model correctly. As we do not generate ScopStmts for error blocks
anyway, there is no point in trying to generate access functions for them.
This fixes llvm.org/PR25494
llvm-svn: 252794
For complex inputs our current approach of constructing the boundary context
may in rare cases become computationally so expensive that it is better to
abort. This change adds a compute-out check that bounds the computations we
spend on boundary context construction and bails out if this limit is reached.
We can probably make our boundary construction algorithm more efficient, but
this requires some more investigation and probably also some additional changes
to isl. Until these have been added, we bound the compile time to ensure our
buildbots are green.
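A minimal sketch of such a compute-out guard using isl's operation quota; the
function name and the concrete limit are illustrative assumptions, not taken
from this patch:
  #include <isl/ctx.h>
  #include <isl/options.h>
  #include <isl/set.h>

  /* Run an expensive isl computation under an operation budget and bail out
   * (return NULL) if the budget is exhausted. */
  static isl_set *buildBoundaryBounded(isl_ctx *Ctx, isl_set *Domain) {
    isl_options_set_on_error(Ctx, ISL_ON_ERROR_CONTINUE);
    isl_ctx_reset_operations(Ctx);
    isl_ctx_set_max_operations(Ctx, 300000); /* illustrative compute-out limit */

    isl_set *Boundary = isl_set_coalesce(Domain); /* stands in for the real work */

    if (isl_ctx_last_error(Ctx) == isl_error_quota) {
      isl_set_free(Boundary);
      Boundary = NULL; /* caller bails out and drops the boundary context */
    }
    isl_ctx_set_max_operations(Ctx, 0); /* 0 = no limit */
    return Boundary;
  }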
llvm-svn: 252758
In certain rare cases (mostly -polly-process-unprofitable on large sequences
of conditions - often without any loop), we see compile-time timeouts due
to the construction of an overly complex assumption context. This change limits
the number of disjuncts to 150 (adjustable), to prevent us from creating
assumption contexts that are too large for even the compilation to finish.
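A rough sketch of the problematic input shape (made-up code; the actual
reports stem from much larger, generated functions):
  void f(float *A, long a, long b, long c, long d) {
    /* Many independent, affine conditions without a surrounding loop; their
     * combinations let the number of disjuncts in the derived assumption
     * context grow quickly. */
    if (a > 0) A[0] = 1;
    if (b > 0) A[1] = 1;
    if (c > 0) A[2] = 1;
    if (d > 0) A[3] = 1;
    /* ... dozens more conditions of this kind ... */
  }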
The limit has been chosen as large as possible to make sure we do not
unnecessarily drop test coverage. If such cases also appear in
-polly-process-unprofitable=false mode we may need to think about this again,
as the current limitations may still allow assumptions that are way too complex
to be checked profitably at run-time.
There is also certainly room for improvement regarding how (and how efficiently)
we construct an assumed context, but this requires some more thinking.
This completes llvm.org/PR25458
llvm-svn: 252750
Basic blocks that are always executed cannot be error blocks, as their execution
cannot possibly be an unlikely event. In this commit we tighten the error-block
check for basic blocks that do not dominate the exit condition, but that do
dominate all exiting blocks of the scop.
llvm-svn: 252726
r252713 introduced a couple of regressions due to later basic blocks referring
to instructions defined in error blocks which have not yet been modeled.
This commit is currently just encoding limitations of our modeling and code
generation backends to ensure correctness. In theory, we should be able to
generate and optimize such regions, as everything that is dominated by an error
region is assumed to not be executed anyhow. We currently just lack the code
to make this happen in practice.
llvm-svn: 252725
Thinking more about the last commit I came to realize that for testing the
new functionality it is sufficient to verify that the iteration domains
we construct for a simple test case do not contain any of the complexity that
caused compile time issues for larger inputs.
llvm-svn: 252714
Previously, we just skipped error blocks during scop construction. With
this change we make sure we can construct domains for error blocks such that
these domains can be forwarded to subsequent basic blocks.
This change ensures that basic blocks that post-dominate and are dominated by
a basic block that branches to an error condition have the very same iteration
domain as the branching basic block. Before this change, we would construct
a domain that excludes all error conditions. Such domains could become _very_
complex and were undesirable to build.
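A made-up example of the situation described above: the statement after the
error branch post-dominates and is dominated by the branching block, and now
simply inherits its iteration domain:
  #include <stdlib.h>

  void f(float *A, long n) {
    for (long i = 0; i < n; i++) {
      if (A[i] < 0.0f)
        abort();    /* error block: assumed not to be executed */
      A[i] += 1.0f; /* gets the same iteration domain as the branching block */
    }
  }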
Another solution would have been to drop these constraints using a
dominance/post-dominance check instead of modeling the error blocks. Such
a solution could also work in case of unreachable statements or infinite
loops in the scop. However, as we currently (to my belief, incorrectly) model
unreachable basic blocks in the post-dominance tree, such a solution is not
yet feasible and requires first a change to LLVM's post-dominance tree
construction.
This commit addresses the most severe compile time issue reported in:
http://llvm.org/PR25458
llvm-svn: 252713
Especially for structs, the SAI object of a base pointer does not
describe all the types that the user might expect when loading from
that base pointer. While we will still cast integers and pointers, we
will now reload the value with the correct type if floating-point and
non-floating-point values are involved. However, there are now TODOs
where we use bitcasts instead of a proper conversion or reload.
This fixes bug 25479.
llvm-svn: 252706
We now create all invariant equivalence classes for required invariant loads
instead of creating them on-demand. This way we can check if a parameter
references an invariant load that is actually not executed and was therefore
not materialized. If that happens, the parameter is not materialized either.
This fixes bug 25469.
llvm-svn: 252701
In case we also model scalar reads it can happen that a pointer appears in both
a scalar read access as well as the base pointer of an array access. As this
is a little surprising, we add a specific test case to document this behaviour.
To my understanding it should be OK to have a read from an array A[] and
read/write accesses to A[...]. isl treats these arrays as unrelated, as
their dimensionality differs. This seems to be correct as A[] remains constant
throughout the execution of the scop and is not affected by the reads/writes to
A[...]. If this causes confusion, it might make sense to make this behaviour
more obvious by using different names (e.g., A_scalar[], A[...]).
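One way such a situation can arise (hypothetical sketch; the committed test
case may differ): the same pointer value is read as a plain scalar and also
used as the base pointer of array accesses:
  float *P; /* pointer value available to the scop */

  void f(float **Out, long n) {
    for (long i = 0; i < n; i++) {
      Out[i] = P;  /* P read as a scalar value, i.e. the "P[]" access   */
      P[i] = 0.0f; /* P used as the base pointer of an access "P[...]"  */
    }
  }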
llvm-svn: 252615
Memory references are now printed as follows:
             Old                              New
  Scalars:   i64 MemRef_val[*]                i64 MemRef_val;
  Arrays:    i64 MemRef_A[*][%m][%o][8]       i64 MemRef_A[*][%m][%o];
We no longer print information about the element size in the type. Such
information has already been available in a comment after the scalar/array
declaration. It was redundant and did not match well with what people are used
to from C.
llvm-svn: 252602
Scalar reloads in the generated entering block were not recognized as
dominating the subregion's blocks when there were multiple entering
nodes. This resulted in values defined there not being copied.
As a fix, we unconditionally add the BBMap of the generated entering
node to the generated entry. This fixes part of llvm.org/PR25439.
This reverts 252449 and reapplies r252445. Its test was failing
indeterministically due to r252375 which was reverted in r252522.
llvm-svn: 252540
The dominance of the generated non-affine subregion block was based on
the scop's merge block and therefore resulted in an invalid DominanceTree.
As a result, some values were assumed to be unusable in the actual
generated exit block.
We detect the case that the exit block has been moved and decide
dominance using the BB at the original exit. If we create another exit
node, that exit node is dominated by the one generated from where the
original exit resides. This fixes llvm.org/PR25438 and part of
llvm.org/PR25439.
llvm-svn: 252526
It introduced non-determinism as it was iterating over an address-indexed
hashtable. The corresponding bug PR25438 will be fixed in a successive
commit.
llvm-svn: 252522
This reverts commit 9775824b265e574fc541e975d64d3e270243b59d due to a
failing unit test.
Please check and correct the unit test and commit again.
llvm-svn: 252449
Scalar reloads in the generated entering block were not recognized as
dominating the subregion's blocks when there were multiple entering
nodes. This resulted in values defined there not being copied.
As a fix, we unconditionally add the BBMap of the generated entering
node to the generated entry. This fixes part of llvm.org/PR25439.
llvm-svn: 252445
If a SCoP contains error blocks we cannot use the domain constraints
to simplify the assumptions as the domain is already influenced by the
assumptions we took. Before this patch we did that and some assumptions
became self-fulfilling as they were implied by the domain constraints.
llvm-svn: 252424
Even if a scalar and a memory access have the same base pointer, we cannot use
one SAI object for both, as not only the type but also the number of dimensions
would be wrong. For the attached test case this caused a crash in the invariant
load hoisting, though it could cause various other problems too.
This fixes bug 25428 and an execution-time bug in MallocBench/cfrac.
Reported-by: Jeremy Huddleston Sequoia <jeremyhu@apple.com>
llvm-svn: 252422
When we bail out early we make the partially built new code path
practically dead, though it was not unreachable. To avoid dominance
problems we now make it not only dead but also prevent the control
flow from joining the original code path, thus allowing the use of
original values after the SCoP without any PHI nodes.
This fixes bug 25447.
llvm-svn: 252420
While the program cannot cause a dependence cycle between invariant
loads, additional constraints (e.g., to ensure finite loops) can
introduce them. It is hard to detect them in the SCoP description,
thus we will only check for them at code generation time. If such a
recursion is detected we bail out of code generation and place a
"false" runtime check to guarantee the original code is used.
This fixes bug 25443.
llvm-svn: 252412
After loop versioning, a dominance check against a non-affine subregion's
exit node always fails for any block in the subregion if the subregion
shares its exit block with the scop. The subregion's exit block has become
polly_merge_new_and_old, which also receives the control flow of the
generated code. This would cause any value for implicit stores to be
assumed not to originate from the scop.
We now check dominance against the generated exit node instead.
This fixes llvm.org/PR25438
llvm-svn: 252375
We were adding all generated values in non-affine subregions to be used
for the subregion's generated exit block. The assumption was that only
values that dominate the original exit block can be used there.
But it is possible for synthesizable values to be expanded in any
block. If the same value is also used for implicit writes, it would
try to reuse already synthesized values even if they do not dominate the
exit block.
The fix is to add a value to the list of values usable in the exit
block only if it dominates the exit block. This fixes
llvm.org/PR25412.
llvm-svn: 252301
Before this commit memory reference identifiers have only been unique per
basic block, but not per (non-affine) ScopStmt. This commit now uses the
MemoryAccess base pointer to uniquely identify each Memory access.
llvm-svn: 252200
For generating scalar writes of non-affine subregions, all writes except phi
writes are generated in the exit block. The phi writes are generated in
the incoming block, for which we erroneously used the same BBMap. This
can conflict if a value for one block is synthesized and then reused
for another block which is not dominated by the first block. This is
fixed by using block-specific BBMaps for phi writes.
llvm-svn: 252172
An incoming value from a block that is not inside the scop is an
external use, even if the phi is inside the scop. A previous fix in
r251208 did not apply if the phi is inside a non-affine subregion. We
move the check for this phi case before the non-affine subregion check.
llvm-svn: 252157
To simplify and correct the preloading of a base pointer origin, e.g.,
the base pointer for the current indirect invariant load, we now just
check if there is an invariant access class that involves the base
pointer of the current class.
llvm-svn: 251962
We do not need to model read-only statements in the SCoP as they will
not cause any side effects that are visible to the outside anyway.
Removing them should save us time and might even simplify the ASTs we
generate.
Differential Revision: http://reviews.llvm.org/D14272
llvm-svn: 251948
If a base pointer of a preloaded value has a base pointer origin, thus it is
an indirect invariant load, we have to make sure the base pointer origin is
preloaded first.
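A hypothetical example of such an indirect invariant load; the outer load
provides the base pointer of the inner one, so it must be preloaded first:
  float **Tbl; /* not written inside the scop */

  void f(float *A, long n) {
    for (long i = 0; i < n; i++)
      A[i] = Tbl[0][5]; /* Tbl[0] is an invariant load; the invariant load
                         * Tbl[0][5] uses its result as base pointer, so
                         * Tbl[0] has to be preloaded before Tbl[0][5]. */
  }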
llvm-svn: 251946
ScalarEvolution doesn't allow the operands of an AddRec to be variant in the
loop of the AddRec. When we rewrite parameter SCEVs it might seem as if the
new SCEV violates this property and ScalarEvolution will trigger an
assertion. To avoid this we move the start part out of an AddRec when we
rewrite it, thus avoiding possibly variant operands altogether.
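Illustratively (a made-up example in SCEV notation, not taken from the patch),
the rewrite pulls the start expression out of the recurrence:
  /* For the access A[p + 4*i] in loop <L> the address is the AddRec
   *   {p,+,4}<L>.
   * Moving the (possibly rewritten) start 'p' out of the recurrence,
   *   p + {0,+,4}<L>,
   * leaves an AddRec whose operands are invariant in <L>. */
  void f(float *A, long p, long n) {
    for (long i = 0; i < n; i++)
      A[p + 4 * i] += 1.0f;
  }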
llvm-svn: 251945
If a base pointer load is preloaded, we have to change the base pointer of
the derived SAI. However, as the derived SAI relationship is
coarse-grained, we need to check if we actually preloaded the base
pointer or a different element of the base pointer SAI array.
llvm-svn: 251881
In some cases different memory accesses access the very same array using a
different multi-dimensional array layout where the same dimensions have
different sizes. Instead of asserting when encountering this issue, we
gracefully bail out for this scop.
This fixes llvm.org/PR25252
llvm-svn: 251791
Volatile or atomic memory accesses are currently not supported. Neither have
we thought about any special handling they might need, nor do we support the
unknown instructions the alias set tracker sometimes turns them into. Before
this patch, our lack of support for unknown instructions in an alias set
caused the following assertion failure:
Assertion `AG.size() > 1 && "Alias groups should contain at least two accesses"'
failed
llvm-svn: 251234
When verifying if a scop is still valid we rerun all analyses, but did not
update the DetectionContextMap. This change ensures that information, e.g.
about non-affine regions, is correctly updated.
llvm-svn: 251227
the size expression.
We previously only checked if the size expression is 'undef', but allowed size
expressions of the form 'undef * undef' by accident. After this change we now
require size expressions to be affine, which implies no 'undef' appears anywhere
in the expression.
llvm-svn: 251225
of the Region are external.
During code generation we split off the parts of the PHI nodes in the entry
block, which have incoming blocks that are not part of the region. As these
split-off PHI nodes then are external uses, we consequently also need to model
these uses in ScopInfo.
llvm-svn: 251208
Such PHI nodes cannot only appear in the ExitBlock of the Scop: any scalar
PHI node above the scop that is used in the scop is modeled as a scalar
read access.
llvm-svn: 251198
We isolate full tiles from partial tiles to be able to, for example, vectorize
loops with parametric lower and/or upper bounds.
If we use -polly-vectorizer=stripmine, we can see execution-time improvements:
correlation from 1m7361s to 0m5720s (-67.05 %), covariance from 1m5561s to
0m5680s (-63.50 %), ary3 from 2m3201s to 1m2361s (-46.72 %), CrystalMk from
8m5565s to 7m4285s (-13.18 %).
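Conceptually, full/partial tile separation for a loop with a parametric bound
looks roughly like the following hand-written sketch (vector width 4 assumed;
not the code Polly generates):
  void f(float *A, long n) {
    long full = n - n % 4;
    /* Full tiles: always exactly 4 iterations, vectorizable without a tail. */
    for (long i = 0; i < full; i += 4)
      for (long j = 0; j < 4; j++)
        A[i + j] += 1.0f;
    /* Partial tile: the remaining 0 to 3 iterations. */
    for (long i = full; i < n; i++)
      A[i] += 1.0f;
  }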
The current full/partial tile separation increases compile time more than
necessary. As a result, we see compile-time regressions, for example, for 3mm
from 0m6320s to 0m9881s (56.34%). Some of this compile-time increase is expected
as we generate more IR and consequently more time is spent in the LLVM backends.
However, a first investigation has shown that a larger portion of compile time
is unnecessarily spent inside Polly's parallelism detection and could be
eliminated by propagating existing knowledge about vector loop parallelism.
Before enabling -polly-vectorizer=stripmine by default, it is necessary to
address this compile-time issue.
Contributed-by: Roman Gareev <gareevroman@gmail.com>
Reviewers: jdoerfert, grosser
Subscribers: grosser, #polly
Differential Revision: http://reviews.llvm.org/D13779
llvm-svn: 250809
New values were always synthesized in the block of the instruction
that needed them. This is incorrect for PHI nodes, whose values must be
defined in the respective incoming block. This patch temporarily moves
the builder's insert point to the incoming block while synthesizing phi
node arguments.
This fixes PR25241 (http://llvm.org/bugs/show_bug.cgi?id=25241)
llvm-svn: 250693
There are several different kinds of constants that could occur in a
branch condition; however, we can only handle the most interesting one,
namely constant integers. To this end we have to treat the others as
non-affine.
This fixes bug 25244.
llvm-svn: 250669
We build the schedule based on a traversal of the region and accumulate
information for each loop in it. The total schedule is associated with the
loop surrounding the SCoP, though it can happen that there are blocks in the
SCoP which are part of loops that are only partially in the SCoP. Instead of
associating information with them (they are not part of the SCoP and
consequently are not modeled) we have to associate the schedule information
with the surrounding loop if any.
This fixes bug 25240.
llvm-svn: 250668
Accesses that have a relative offset (in bytes) that is not divisible
by the type size (in bytes) will be represented as empty in the SCoP
description. This is not good on its own, but it also crashed the
invariant load hoisting. This patch fixes the latter problem, while
the former should be addressed too.
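A made-up example of such a misaligned access (element size 4 bytes, relative
offset 4*i + 2):
  void f(float *A, long n) {
    char *C = (char *)A;
    for (long i = 0; i < n; i++)
      *(float *)(C + 4 * i + 2) = 0.0f; /* offset not divisible by sizeof(float) */
  }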
This fixes bug 25236.
llvm-svn: 250664
If the base pointer of a load is invariant and defined in the SCoP but
not loaded we cannot hoist the load as we would not hoist the base
pointer definition.
This fixes bug 25237.
llvm-svn: 250663
Sorting is replaced by demand-driven code generation that will preload a
value when it is needed or, if it was not needed before, at some point
determined by the order of invariant accesses in the program. Only in very
few cases will this demand-driven preloading kick in, though it will
prevent us from generating faulty code. An example where it is needed is
shown in:
test/ScopInfo/invariant_loads_complicated_dependences.ll
Invariant loads that appear in parameters but are not on the top-level (e.g.,
the parameter is not a SCEVUnknown) will now be treated correctly.
Differential Revision: http://reviews.llvm.org/D13831
llvm-svn: 250655
Polly can now be used as an analysis-only tool as long as the code
generation is disabled. However, we do not have an alternative to the
independent blocks pass in place yet, though in the relevant cases
this does not seem to impact the performance much. Nevertheless, a
virtual alternative that allows the same transformations without
changing the input region will follow shortly.
llvm-svn: 250652
Expressing this in terms of BlockGenerator::getOrCreateAlloca(const
ScopArrayInfo *Array) does not work, as in the case of invariant load hoisting
the MemoryAccess BasePtr differs from the ScopArrayInfo BasePtr. Until this is
investigated and fixed, we move back to code that just uses the base pointer of
the MemoryAccess.
llvm-svn: 250637
Instead of generating implicit loads within basic blocks, put them
before the instructions of the statement itself, including for non-affine
subregions. The region's entry node dominates all blocks in the
region and therefore the loaded value will be available there.
Implicit writes in block statements were already stored back at the end of
the block. Now, we also generate the stores of non-affine subregions when
leaving the statement, i.e. in the exiting block.
This change is required for array-mapped implicits ("De-LICM") to
ensure that there are no dependencies on demoted scalars within
statements. Statements load all required values, operate on copies in
registers, and then write back the changed values to the demoted memory.
Lifetime analysis within statements becomes unnecessary.
Differential Revision: http://reviews.llvm.org/D13487
llvm-svn: 250625
Accesses for exit node phis will be handled separately by
buildPHIAccesses if there is more than one exiting edge,
buildScalarDependences does not need to create additional SCALAR
accesses.
This is a corrected version of r250517, which was reverted in r250607.
Differential Revision: http://reviews.llvm.org/D13848
llvm-svn: 250622
In r250408 'CHECK-NEXT: br' lines were removed as they also matched a
'%polly.subregion.iv.inc' instruction and did consequently not check what they
were supposed to check. However, without these lines we cannot test that the
.s2a instructions that are no longer generated since r250411 really are not
emitted. Hence, we add back the CHECK-NEXT lines to ensure no instructions are
generated between the store that we check for and the branch at the end of the
basic block. To ensure we do not match too early, we now check for
'br i1' or 'br label'.
llvm-svn: 250435
When pulling an llvm::Value to be written as a PHI write, the former
code only checked whether it is within the same basic block, but it
could also be within the same non-affine subregion. In that case an
unnecessary pair of MemoryAccesses would have been created.
Two unit tests were explicitly checking for the unnecessary writes,
including the comments stating that the writes are unnecessary.
llvm-svn: 250411
The removed 'CHECK-NEXT: br' lines happen to match the 'br' substring inside
%polly.subregion.iv.inc = add i32 %polly.subregion.iv, 1
(in 'subregion'), that is, they are misleading in what they actually check.
llvm-svn: 250408
When sharing the same map from old to new value, CodeGeneration would
reuse the same new value for each basic block. However, the SCEV
expander might emit code in a basic block that does not dominate a use
of the SCEV in another basic block. This test checks whether both such
blocks have their own expanded new values.
llvm-svn: 250389
We harden one test case by ensuring no additional stores may possibly be
introduced between the stores we check for and the basic block terminator
statements.
We also add a test case for the situation where a value that is passed from
a non-affine region to a PHI node does not dominate the exit of the non-affine
region. This case has come up in patch reviews, so we make sure it is properly
handled today and in the future.
llvm-svn: 250217
We also allow such products for cases where 'Parameter' is loaded within the
scop, but where we can dynamically verify that the value of 'Parameter' remains
unchanged during the execution of the scop.
This change relies on Polly's new RequiredILS tracking infrastructure recently
contributed by Johannes.
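A hypothetical example of such a product: the 'Parameter' factor is loaded
inside the scop, and a run-time check verifies that the loaded value does not
change while the scop executes:
  void f(float *A, long *NPtr, long m) {
    for (long i = 0; i < m; i++)
      for (long j = 0; j < m; j++)
        A[(*NPtr) * i + j] = 0.0f; /* subscript contains Parameter * i with
                                    * Parameter = *NPtr loaded in the scop */
  }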
llvm-svn: 250019
The domain generation can handle lazy && and || by default, but eagerly
evaluated expressions were dismissed as non-affine. With this patch we
allow arbitrary combinations of and/or bit-operations in the
conditions of branches.
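An illustrative (made-up) example of an eagerly evaluated condition that is now
accepted:
  void f(float *A, long n, long m) {
    for (long i = 0; i < n; i++)
      if ((i > 3) & (i < m)) /* bitwise '&' of two affine conditions */
        A[i] = 0.0f;
  }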
Differential Revision: http://reviews.llvm.org/D13624
llvm-svn: 249971
If an (assumed) invariant location is loaded multiple times we
generated a parameter for each of these loads. However, this caused compile-
time problems for several benchmarks (e.g., 445_gobmk in SPEC2006 and
BT in the NAS benchmarks). Additionally, the code we generate is
suboptimal as we preload the same location multiple times and perform
the same checks on all the parameters that refer to the same value.
With this patch we consolidate the invariant loads in three steps:
1) During SCoP initialization required invariant loads are put in
equivalence classes based on their pointer operand. One
representing load is used to generate a parameter for the whole
class, thus we never generate multiple parameters for the same
location.
2) During the SCoP simplification we remove invariant memory
accesses that are in the same equivalence class. While doing so
we build the union of all execution domains as it is only
important that the location is at least accessed once.
3) During code generation we only preload one element of each
equivalence class with the unified execution domain. All others
are mapped to that preloaded value.
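A made-up example in which the same invariant location is loaded twice; both
loads end up in one equivalence class, yielding a single parameter and a single
preload:
  long Bound; /* not written inside the scop */

  void f(float *A, float *B) {
    for (long i = 0; i < Bound; i++) /* load of Bound (1) */
      A[i] = 0.0f;
    for (long i = 0; i < Bound; i++) /* load of Bound (2): same class */
      B[i] = 1.0f;
  }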
Differential Revision: http://reviews.llvm.org/D13338
llvm-svn: 249853
Drop an unused flag polly-allow-non-scev-backedge-taken-count and also
its occurrences from the tests.
Contributed-by: Chris Jenneisch <chrisj@codeaurora.org>
Differential Revision: http://reviews.llvm.org/D13400
llvm-svn: 249675
This replaces the support for user-defined error functions by a
heuristic that tries to determine if a call to a non-pure function
should be considered "an error". If so, the block is assumed not to be
executed at runtime. While treating all non-pure function calls as
errors will allow a lot more regions to be analyzed, it will also
cause us to dismiss a lot of them again due to an infeasible runtime context.
This patch tries to limit that effect. A non-pure function call is
considered an error only if it is executed conditionally, according
to a cheap but simple heuristic.
llvm-svn: 249611
This patch allows invariant loads to be used in the SCoP description,
e.g., as loop bounds, conditions or in memory access functions.
First we collect "required invariant loads" during SCoP detection that
would otherwise make an expression we care about non-affine. To this
end a new level of abstraction was introduced before
SCEVValidator::isAffineExpr() namely ScopDetection::isAffine() and
ScopDetection::onlyValidRequiredInvariantLoads(). Here we can decide
if we want a load inside the region to be optimistically assumed
invariant or not. If we do, it will be marked as required and in the
SCoP generation we bail out if it is actually not invariant. If we don't,
it will be a non-affine expression as before. At the moment we
optimistically assume all "hoistable" (namely non-loop-carried) loads
to be invariant. This causes us to expand some SCoPs and dismiss them
later, but it also allows us to detect a lot we would dismiss directly
if we asked, e.g., AliasAnalysis::canBasicBlockModify(). We also
allow potential aliases between optimistically assumed invariant loads
and other pointers, as our runtime alias checks are sound in case the
loads are actually invariant. Together with the invariant checks this
combination allows us to handle a lot more than LICM can.
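A hypothetical example of a required invariant load: the loop bound is read
from memory and is optimistically assumed to be invariant, turning it into a
preloaded parameter of the scop:
  struct desc { long size; };

  void f(float *A, struct desc *D) {
    for (long i = 0; i < D->size; i++) /* D->size: hoistable, assumed invariant */
      A[i] = 0.0f;
  }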
The code generation of the invariant loads had to be extended as we
can now have dependences between parameters and invariant (hoisted)
loads as well as the other way around, e.g.,
test/Isl/CodeGen/invariant_load_parameters_cyclic_dependence.ll
First, it is important to note that we cannot have real cycles but
only dependences from a hoisted load to a parameter and from another
parameter to that hoisted load (and so on). To handle such cases we
materialize llvm::Values for parameters that are referred to by a hoisted
load on demand and then materialize the remaining parameters. Second,
there are new kinds of dependences between hoisted loads caused by the
constraints on their execution. If a hoisted load is conditionally
executed it might depend on the value of another hoisted load. To deal
with such situations we sort them already in the ScopInfo such that
they can be generated in the order they are listed in the
Scop::InvariantAccesses list (see compareInvariantAccesses). The
dependences between hoisted loads caused by indirect accesses are
handled the same way as before.
llvm-svn: 249607
This single option replaces -polly-detect-unprofitable and -polly-no-early-exit
and is supposed to be the only option that disables compile-time heuristics that
aim to bail out early on scops that are believed to not benefit from Polly
optimizations.
Suggested-by: Johannes Doerfert
llvm-svn: 249426
These flags are now always passed to all tests and need to be disabled if
not needed. Disabling these flags, rather than passing them to almost all
tests, significantly simplifies our RUN: lines.
llvm-svn: 249422
Polly's profitability heuristic saves compile time by skipping trivial scops or
scops where we know no good optimization can be applied. For almost all our
tests this heuristic makes little sense as we aim for minimal test cases when
testing functionality. Hence, in almost all cases this heuristic is better
disabled.
In preparation for disabling Polly's compile-time heuristic by default in the
test suite, we first explicitly enable it in the couple of test cases that
really use it (or run with/without the heuristic side-by-side).
llvm-svn: 249418
This test case was XFAILed under the assumption Polly is unable to detect the
scop. However, disabling Polly's profitability heuristics is sufficient to
detect this scop.
llvm-svn: 249414
A statement with an empty domain complicates the invariant load
hoisting and does not help any subsequent analysis or transformation.
In fact it might introduce parameter dimensions or increase the
schedule dimensionality. To this end, we remove statements with an
empty domain early in the SCoP simplification.
llvm-svn: 249276
Before, isValidCFG() could hide the fact that a loop is non-affine by
over-approximation. This is problematic if a subregion of the loop contains
an exit/latch block and is over-approximated. Now we do not over-approximate
in the isValidCFG function if we check loop control. If such control is
non-affine the whole loop is over-approximated, not only a subregion.
llvm-svn: 249273
We have to skip accesses in non-affine subregions during hoisting as
they might not be executed under the same condition as the entry of
the non-affine subregion.
llvm-svn: 249139
This moves the construction of ScopStmt to the beginning of the
ScopInfo pass. The late creation was a result of the earlier separation
of ScopInfo and TempScopInfo. This will avoid introducing more
ScopStmt-like maps in future commits. The AccFuncMap will also be
removed in some future commit. DomainMap might also be included into
ScopStmt.
The order in which ScopStmts are created changes; initially empty
statements are created, which are removed again in a simplification step.
Differential Revision: http://reviews.llvm.org/D13341
llvm-svn: 249132
If a value is globally mapped (IslNodeBuilder::ValueMap) and
referenced in the code that will be put into a subfunction, we hand
down the new value to the subfunction.
This patch also removes code that handed down all invariant loads to
the subfunction. Instead, only needed invariant loads are given to the
subfunction. There are two possible reasons for an invariant load to
be handed down:
1) The invariant load is used in a block that is placed in the
subfunction but which is not the parent of the load. In this
case, the scalar access that will read the loaded value, will
cause its base pointer (the preloaded value) to be handed down to
the subfunction.
2) The invariant load is defined and used in a block that is placed
in the subfunction. With this patch we will hand down the
preloaded value to the subfunction as the invariant load is
globally mapped to that value.
llvm-svn: 249126
When error blocks are not terminated by an unreachable they have successors
that might only be reachable via error blocks. Additionally, branches in
error blocks are not checked during SCoP detection, thus we might not be able
to handle them. With this patch we do not try to model error block exit
conditions. Anything that is only reachable via error blocks is ignored too,
as it will not be executed in the optimized version of the SCoP anyway.
llvm-svn: 249099
The user can provide function names with
-polly-error-functions=name1,name2,name3
that will be treated as error functions. Any call to them is assumed
not to be executed.
This feature is mainly for developers to play around with the new
"error block" feature.
llvm-svn: 249098
Instructions which we can synthesize from a SCEV expression are not
generated directly, but only when they are used as an operand of
another instruction. This avoids generating unnecessary instructions
and works more reliably than first inserting them and then deleting
them later on.
This commit was reverted in r248860 due to a remaining miscompile, where
we forgot to synthesize the operand values that were referenced from scalar
writes. test/Isl/CodeGen/scalar-store-from-same-bb.ll tests that we do this
now correctly.
llvm-svn: 248900
Before, we unconditionally forced all users outside the SCoP to use
the preloaded value. However, if the SCoP is not executed due to the
runtime checks, we need to use the original value because it might not
be invariant in the first place.
llvm-svn: 248881
As a first step in the direction of assumed invariant loads (loads
that are not written in some context) we now detect and hoist
definitively invariant loads. These invariant loads will be preloaded
in the code generation and used in the optimized version of the SCoP.
If the load is only conditionally executed the preloaded version will
also only be executed under the same condition, hence we will never
access memory that wouldn't have been accessed otherwise. This is also
the most distinguishing feature compared to LICM.
As hoisting can make statements empty we will simplify the SCoP and
remove empty statements that would otherwise cause artifacts in the
code generation.
Differential Revision: http://reviews.llvm.org/D13194
llvm-svn: 248861
This reverts commit 07830c18d789ee72812d5b5b9b4f8ce72ebd4207.
The commit broke at least one test in LNT:
MultiSource/Benchmarks/Ptrdist/bc/number.c
was miscompiled and the test produced a wrong result.
One Polly test case that was added later was adjusted too.
llvm-svn: 248860
Every once in a while we see code that accesses memory with different types,
e.g. to perform operations on a piece of memory using type 'float', but to copy
data to this memory using type 'int'. Modeled in C, such code looks like:
  void foo(float A[], float B[]) {
    for (long i = 0; i < 100; i++)
      *(int *)(&A[i]) = *(int *)(&B[i]);
    for (long i = 0; i < 100; i++)
      A[i] += 10;
  }
We already used the correct types during normal operations, but fall back to our
detected type as soon as we import changed memory access functions. For these
memory accesses we may generate invalid IR due to a mismatch between the element
type of the array we detect and the actual type used in the memory access. To
address this issue, we always cast the newly created address of a memory access
back to the type of the memory access where the address will be used.
llvm-svn: 248781
Instructions which we can synthesize from a SCEV expression are not generated
directly, but only when they are used as an operand of another instruction. This
avoids generating unnecessary instructions and works more reliably than first
inserting them and then deleting them later on.
Suggested-by: Johannes Doerfert <doerfert@cs.uni-saarland.de>
Differential Revision: http://reviews.llvm.org/D13208
llvm-svn: 248712
This patch allows switch instructions with affine conditions in the
SCoP. Also switch instructions in non-affine subregions are allowed.
Both did not require many changes to the code, though there was some
refactoring needed to integrate them without code duplication.
In the llvm-test suite the number of profitable SCoPs increased from
135 to 139 but more importantly we can handle more benchmarks and user
inputs without preprocessing.
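A made-up example of a switch with an affine condition that can now be part of
a SCoP:
  void f(float *A, long n) {
    for (long i = 0; i < n; i++) {
      switch (i) { /* affine switch condition */
      case 0:
        A[i] = 1.0f;
        break;
      default:
        A[i] = 2.0f;
        break;
      }
    }
  }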
Differential Revision: http://reviews.llvm.org/D13200
llvm-svn: 248701
This check was needed at some point but seems not useful anymore. Only
one adjustment in the domain generation was needed to cope with the
cases this check prevented from happening before.
llvm-svn: 248695
We now only delete trivially dead instructions in the BB we copy (copyBB), but
not in any other BB. Only for copyBB do we know that there will _never_ be any
future uses of instructions that have no use after copyBB has been generated.
Other instructions in the AST that have been generated by IslNodeBuilder may
look dead at the moment, but may possibly still be referenced by GlobalMaps. If
we delete them now, later uses would break surprisingly.
We do not have a test case that breaks due to us deleting too many instructions.
This issue was found by inspection.
llvm-svn: 248688
After having generated a new user statement a couple of inefficient or
trivially dead instructions may remain. This commit runs instruction
simplification over the newly generated blocks to ensure unneeded
instructions are removed right away.
This commit also adds simplification for non-affine subregions, which was not
yet part of r248681.
llvm-svn: 248683
Otherwise, part of the computation will be just simplified away when we add
instruction simplification support to the RegionGenerator.
llvm-svn: 248682
After having generated a new user statement a couple of inefficient or trivially
dead instructions may remain. This commit runs instruction simplification over
the newly generated blocks to ensure unneeded instructions are removed right
away.
This commit does not yet add simplification for non-affine subregions.
llvm-svn: 248681
This commit basically reverts r246427 but still solves the issue
tackled by that commit. Instead of emitting initialization code in the
beginning of the start block we now generate parallel code in its own
block and thereby guarantee separation. This is necessary as we cannot
generate code for hoisted loads prior to the start block but it still
needs to be placed prior to everything else.
llvm-svn: 248674
The new domain construction algorithm now correctly models this test case (and
derives an empty run-time condition). Add this test case to ensure we do not
regress.
llvm-svn: 248669
When the whole SCoP is a non-affine region we need to use the
surrounding loop in the construction of the schedule as that is
the one that will be looked up after the schedule generation.
This fixes bug 24947.
llvm-svn: 248667
When recovering multi-dimensional memory accesses, it may happen that different
accesses to the same base array are recovered with different dimensionality.
This patch ensures that the dimensionalities are unified by adding zero valued
dimensions to accesses with lower dimensionality. When we started to model
fixed-size arrays as multi-dimensional in r247906, this was not taken
care of.
llvm-svn: 248662
This change addresses three issues:
- Read only scalars that enter a PHI node through an edge that comes from
outside the scop are not modeled any more, as such PHI nodes will always
be initialized to this initial value right before the SCoP is entered.
- For PHI nodes that depend on a scalar value that is defined outside the
scop, but where the scalar value is passed through an edge that itself
comes from a BB that is part of the region, we introduce in this basic
block a read of the out-of-scop value to ensure its value is available
to write it into the PHI node's alloca location.
- Read only uses of scalars by PHI nodes are ignored in the general read only
handling code, as they are taken care of by the general PHI node modeling
code.
llvm-svn: 248535
After the merge of TempScopInfo into ScopInfo the analysis output
remained because of the existing unit tests. These remains are removed
and the unit tests converted to match the equivalent output of
ScopInfo's analysis. The unit tests are also moved into the
directory of ScopInfo tests.
Differential Revision: http://reviews.llvm.org/D13116
llvm-svn: 248485
A missing return statement that previously did not have a visibly negative
effect caused, after some data-structure changes in r248024, multi-dimensional
accesses to be modeled both as multi-dimensional and as linearized. This
commit adds the missing return to avoid the incorrect double modeling as
well as the compile-time increase it caused.
llvm-svn: 248171
If we encounter a <nsw> tagged AddRec for a loop we know the trip count of
that loop has to be bounded, or the semantics is undefined anyway. Hence, we
only need to add the unboundedness assumptions if no such AddRec is known.
llvm-svn: 248128
So far we ignored the unbounded parts of the iteration domain, however
we need to assume they do not occur at all to remain sound if they do.
llvm-svn: 248126
We now add loop carried information during the second traversal of the
region instead of in an intermediate step in between. This makes the
generation simpler, removes code and should even be faster.
llvm-svn: 248125
In order to allow multiple back edges we:
- compute the conditions under which each back edge is taken
- build the union over all these conditions, thus the condition that
any back edge is taken
- apply the same logic to the union we applied to a single back edge
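A made-up example of a loop with two back edges; the header's domain is derived
from the union of both back-edge conditions:
  void f(long *A, long n, long m) {
    long i = 0;
  header:
    A[i] = i;
    i++;
    if (i < n)
      goto header; /* first back edge  */
    if (i < m)
      goto header; /* second back edge */
  }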
llvm-svn: 248120
As we currently do not perform any optimizations that target (or are
even aware of) small trip counts, we will skip such loops when we count the
loops in a region.
llvm-svn: 248119
All MemoryAccess objects will be owned by ScopInfo::AccFuncMap which
previously stored the IRAccess objects. Instead of creating new
MemoryAccess objects, the already created ones are reused, but their
order might be different now. Some fields of IRAccess and MemoryAccess
had the same meaning and are merged.
This is the last step of merging TempScopInfo.{h|cpp} into
ScopInfo.{h|cpp}. Some refactoring might still make sense.
Differential Revision: http://reviews.llvm.org/D12843
llvm-svn: 248024
If the GEP instructions give us enough insights, model scalar accesses as
multi-dimensional (and generate the relevant run-time checks to ensure
correctness). This will allow us to simplify the dependence computation in
a subsequent commit.
llvm-svn: 247906
This will allow us to generate non-wrap assumptions for integer expressions
that are part of the SCoP. We compare the common isl representation of
the expression with one computed with modulo semantics. For all parameter
combinations for which they are not equal we can have integer overflows.
The nsw flags are respected when the modulo representation is computed;
nuw and nw flags are ignored for now.
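A hypothetical example: with 32-bit arithmetic the expression 'p + i' may wrap;
comparing its plain representation with the one computed modulo 2^32 yields the
parameter values for which the two differ, and a no-wrap assumption on p
excluding them is added to the boundary context:
  void f(float *A, int p) {
    for (int i = 0; i < 100; i++)
      A[p + i] = 0.0f; /* p + i must not overflow for the model to be exact */
  }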
In order not to increase compile time too much, the non-wrap assumptions
are collected in a separate boundary context instead of the assumed
context. This helps compile time as the boundary context can become
complex and it is therefore not advised to use it in operations other
than runtime check generation. However, the assumed context is, e.g.,
used to tighten dependences. While the boundary context might help to
tighten the assumed context, it is doubtful that it will help in practice
(it does not affect lnt much) as the boundary (or no-wrap) assumptions
only restrict the very end of the possible value range of parameters.
PET uses a different approach to compute the no-wrap context, though lnt runs
have shown that this version performs slightly better for us.
llvm-svn: 247732
Add polly-check-format as dependency of check-polly if clang-format is
available in the same build.
Differential Revision: http://reviews.llvm.org/D12850
llvm-svn: 247600
At some point we built loop trip counts using this method. It was replaced by
a simpler trick that works only for affine (e.g., not modulo) constraints and
relies on the removal of unbounded parts. In order to allow modulo constraints
again we go back to the former, more accurate method.
llvm-svn: 247540
The function use_after_scop would iterate from 0 to 1024, accessing element
A[1024], while A only has valid indexes from 0 to 1023. Polly detects this
situation of unconditionally undefined behavior and bails out in ScopInfo,
treating the scop as non-feasible for optimization.
Other tests add impossible context assumptions as well, hence they might show
the same problem.
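A minimal sketch of the problematic pattern (names made up, not the exact test
case):
  void use_after_scop_like(float A[1024]) {
    for (long i = 0; i <= 1024; i++) /* last iteration accesses A[1024] */
      A[i] = i;                      /* unconditionally out of bounds   */
  }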
llvm-svn: 247412
Hoist runtime checks in the loop nest if they guard an "error" like event.
Such events are recognized as blocks with an unreachable terminator or a call
to the ubsan function that deals with out-of-bounds accesses. Other "error"
events can be added easily.
We will ignore these blocks when we detect/model/optimize and code generate
SCoPs, but we will make sure, using the assumption framework, that they would
not have been executed.
llvm-svn: 247310
As we do not rely on ScalarEvolution any more we do not need to get
the backedge taken count. Additionally, our domain generation handles
everything that is affine and has one latch and our ScopDetection will
over-approximate everything else.
This change will therefore allow loops with:
- one latch
- exiting conditions that are affine
Additionally, it will not check for structured control flow anymore.
Hence, loops and conditionals are not necessarily single entry single
exit regions any more.
Differential Revision: http://reviews.llvm.org/D12758
llvm-svn: 247289
The TempScopInfo (-polly-analyze-ir) pass is removed and its work taken
over by ScopInfo (-polly-scops). Several tests that depended on
-polly-analyze-ir now use -polly-scops instead, which for the moment
prints the output of both passes. This again is not expected by some
other tests, especially those with negative searches, which have been
adapted.
Differential Revision: http://reviews.llvm.org/D12694
llvm-svn: 247288
This patch replaces the last legacy part of the domain generation, namely the
ScalarEvolution part that was used to obtain loop bounds. We now iterate over
the loops in the region and propagate the back edge condition to the header
blocks. Afterwards we propagate the new information once through the whole
region. In this process we simply ignore unbounded parts of the domain and
thereby assume the absence of infinite loops.
+ This patch already identified a couple of broken unit tests we had for
years.
+ We allow more loops already and the step to multiple exit and multiple back
edges is minimal.
+ It allows us to model the overflow checks properly as we actually visit
every block in the SCoP and know where which condition is evaluated.
- It is currently not compatible with modulo constraints in the
domain.
Differential Revision: http://reviews.llvm.org/D12499
llvm-svn: 247279
The support for modulo expressions is not complete and makes the new
domain generation harder. As the currently broken domain generation
needs to be replaced, we will first swap in the new, fixed domain
generation and make it compatible with modulo expressions later.
llvm-svn: 247278