Check the ModRefBehavior of functions to decide whether or not a call
instruction might be acceptable.
Differential Revision: http://reviews.llvm.org/D5227
llvm-svn: 261866
The generated dedicated subregion exit block was assumed to have the same
dominance relation as the original exit block. This is incorrect if the exit
block receives edges not only from the subregion, in which case e.g. the
subregion's entry block does not dominate the exit block.
llvm-svn: 261865
Replace Scop::getStmtForBasicBlock and Scop::getStmtForRegionNode, and
add overloads for llvm::Instruction and llvm::RegionNode.
getStmtFor and overloads become the common interface to get the Stmt
that contains something. Named after LoopInfo::getLoopFor and
RegionInfo::getRegionFor.
llvm-svn: 261791
This would also be caught by the function verifier, but disconnected from
the place that produced it. Catch it already at creation to be able to
reason more directly about the cause.
llvm-svn: 261790
This allows other passes and transformations to use some of the existing AST
building infrastructure. This is not yet used in Polly itself.
llvm-svn: 261496
We now always print the reason why the code did not pass the LLVM verifier,
and we also allow disabling verification with -polly-codegen-verify=false.
Before this change the first assertion generally carried no information about
why or what might have gone wrong, and it was also impossible to -view-cfg
without recompiling. This change makes debugging bugs that result in
incorrect IR a lot easier.
llvm-svn: 261320
After we moved isl_ctx into Scop, we need to free the isl_ctx after
freeing all isl objects, which requires the ScopInfo pass to be freed
last. But this is not guaranteed by the PassManager, so we need
extra code to free the isl_ctx at the right time.
We introduce a shared pointer to manage the isl_ctx and distribute
it to all analyses that create isl objects. Whenever we free an
analysis holding the shared_ptr (after freeing the isl objects
created by that analysis), we decrease the (shared) reference
counter of the shared_ptr by 1. When the reference counter reaches
0 in the releaseMemory function of an analysis, that analysis is
the last one holding any isl objects, and we can safely free the
isl_ctx with it.
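A minimal sketch of this ownership scheme, assuming the standard isl C API
(isl_ctx_alloc/isl_ctx_free); the names IslCtxPtr and SomeIslAnalysis are
illustrative only, not Polly's actual types:

  #include <isl/ctx.h>
  #include <memory>

  // Shared ownership of the isl_ctx; the deleter runs isl_ctx_free only
  // when the last analysis holding isl objects drops its reference.
  using IslCtxPtr = std::shared_ptr<isl_ctx>;

  IslCtxPtr makeSharedIslCtx() {
    return IslCtxPtr(isl_ctx_alloc(), isl_ctx_free);
  }

  struct SomeIslAnalysis {
    IslCtxPtr Ctx; // keeps the context alive while isl objects exist

    void releaseMemory() {
      // Free all isl objects of this analysis first, then drop the
      // reference; if it was the last one, isl_ctx_free runs here.
      Ctx.reset();
    }
  };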
Differential Revision: http://reviews.llvm.org/D17241
llvm-svn: 261100
We now distinguish invariant loads to the same memory location if they
have different types. This will cause us to pre-load an invariant
location once for each type that is used to access it. However, we can
thereby avoid invalid casting, especially if an array is accessed
through differently typed/sized invariant loads.
This basically reverts the changes in r260023 but keeps the test
cases.
llvm-svn: 260045
We also disable this feature by default, as there are still some issues in
combination with invariant load hoisting that slipped through my initial
testing.
llvm-svn: 260025
Always use the access-instruction pointer type to load the invariant values.
Otherwise, mismatches between the ScopArrayInfo element type and the memory
access element type will result in invalid casts. After r259784 these type
mismatches are a lot more common and also arise with types of different size,
which had not been handled before.
Interestingly, this change actually simplifies the code, as we now have only
one code path that is always taken, rather than a standard code path for the
common case and a "fixup" code path that replaces the standard code path in
case of mismatching types.
llvm-svn: 260009
This allows code such as:
  void multiple_types(char *Short, char *Float, char *Double) {
    for (long i = 0; i < 100; i++) {
      Short[i] = *(short *)&Short[2 * i];
      Float[i] = *(float *)&Float[4 * i];
      Double[i] = *(double *)&Double[8 * i];
    }
  }
To model such code, we use the smallest element type of all original array
accesses as the canonical element type of the modeled array, if the type
allocation sizes are multiples of each other. Otherwise, we use a newly
created iN type, where N is the gcd of the allocation sizes of the types used
in the accesses to this array. Accesses with types larger than the canonical
element type are modeled as multiple accesses with the smaller type.
For example the second load access is modeled as:
{ Stmt_bb2[i0] -> MemRef_Float[o0] : 4i0 <= o0 <= 3 + 4i0 }
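A sketch of the size computation, under the assumption that the allocation
sizes of all access types are available as plain integers; the helper name
canonicalElementSize is illustrative, not Polly's actual interface:

  #include <cstdint>
  #include <numeric>
  #include <vector>

  // Returns the canonical element size in bytes for one array: the gcd
  // of all access-type allocation sizes. If all sizes are multiples of
  // the smallest one, the gcd equals that smallest size; otherwise it
  // corresponds to a newly created iN integer type.
  uint64_t canonicalElementSize(const std::vector<uint64_t> &AccessSizes) {
    uint64_t GCD = 0;
    for (uint64_t Size : AccessSizes)
      GCD = std::gcd(GCD, Size); // std::gcd(0, s) == s
    return GCD;
  }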
To support code-generating these memory accesses, we introduce a new method
getAccessAddressFunction that assigns each statement instance a single memory
location, the address we load from/store to. Currently we obtain this address by
taking the lexmin of the access function. We may consider keeping track of the
memory location more explicitly in the future.
We currently do _not_ handle multi-dimensional arrays and also keep the
restriction of not supporting accesses where the offset expression is not a
multiple of the access element type size. This patch adds tests that ensure
we correctly invalidate a scop in case these accesses are found. Both types of
accesses can be handled using the very same model, but are left to be added in
the future.
We also move the initialization of the scop-context into the constructor to
ensure it is already available when invalidating the scop.
Finally, we add this as a new item to the 2.9 release notes.
Reviewers: jdoerfert, Meinersbur
Differential Revision: http://reviews.llvm.org/D16878
llvm-svn: 259784
We now support code such as:
  void multiple_types(char *Short, char *Float, char *Double) {
    for (long i = 0; i < 100; i++) {
      Short[i] = *(short *)&Short[2 * i];
      Float[i] = *(float *)&Float[4 * i];
      Double[i] = *(double *)&Double[8 * i];
    }
  }
To support such code, we use the smallest element type of all original array
accesses as the element type of the modeled array. Accesses with larger types
are modeled as multiple accesses with the smaller type.
For example the second load access is modeled as:
{ Stmt_bb2[i0] -> MemRef_Float[o0] : 4i0 <= o0 <= 3 + 4i0 }
To support jscop-rewritable memory accesses we need each statement instance to
only be assigned a single memory location, which will be the address at which
we load the value. Currently we obtain this address by taking the lexmin of
the access function. We may consider keeping track of the memory location more
explicitly in the future.
llvm-svn: 259587
MemAccInst wraps the common members of LoadInst and StoreInst. Also use
this class in:
- ScopInfo::buildMemoryAccess
- BlockGenerator::generateLocationAccessed
- ScopInfo::addArrayAccess
- Scop::buildAliasGroups
- Replace every use of polly::getPointerOperand
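An illustrative sketch of such a wrapper (the real MemAccInst offers more
members; this only shows the unified pointer/value operand access):

  #include "llvm/IR/Instructions.h"

  class MemAccInst {
    llvm::Instruction *I;

  public:
    MemAccInst(llvm::LoadInst &LI) : I(&LI) {}
    MemAccInst(llvm::StoreInst &SI) : I(&SI) {}

    llvm::Value *getPointerOperand() const {
      if (auto *LI = llvm::dyn_cast<llvm::LoadInst>(I))
        return LI->getPointerOperand();
      return llvm::cast<llvm::StoreInst>(I)->getPointerOperand();
    }

    llvm::Value *getValueOperand() const {
      // For a load, the "value" is the result of the load itself.
      if (auto *LI = llvm::dyn_cast<llvm::LoadInst>(I))
        return LI;
      return llvm::cast<llvm::StoreInst>(I)->getValueOperand();
    }
  };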
Reviewers: jdoerfert, grosser
Differential Revision: http://reviews.llvm.org/D16530
llvm-svn: 258947
Ensure that there is at most one PHI write access per PHINode and
ScopStmt. Previously, multiple such write accesses were possible for
non-affine subregions with multiple exiting blocks. We replace these
multiple MAY_WRITE accesses by one MUST_WRITE access. The written value
is constructed using a PHINode of all exiting blocks. The interpretation
of the PHI WRITE's "accessed value" changes from the incoming value to
the PHI itself, as for PHI READs, since there is no unique incoming
value.
Because region simplification shuffles around PHI nodes -- particularly
with exit node PHIs -- the PHINodes present at analysis time do not
always exist anymore in the code generation pass. We instead remember
the incoming block/value pair in the MemoryAccess.
Differential Revision: http://reviews.llvm.org/D15681
llvm-svn: 258809
Both functions implement the same functionality, with the difference that
getNewScalarValue assumes that globals and out-of-scop scalars can be directly
reused without loading them from their corresponding stack slot. This is correct
for sequential code generation, but causes issues with outlining code e.g. for
OpenMP code generation. getNewValue handles such cases correctly.
Hence, we can replace getNewScalarValue with getNewValue. This is not only more
future proof, but also eliminates a bunch of code.
The only functionality that was available in getNewScalarValue that is lost
is the on-demand creation of scalar values. However, this is not necessary any
more as scalars are always loaded at the beginning of each basic block and will
consequently always be available when scalar stores are generated. As this was
not the case in older versions of Polly, it seems the on-demand loading is just
some older code that has not yet been removed.
Finally, generateScalarLoads also generated loads for values that are loop
invariant and available in GlobalMap, where the GlobalMap values are
preferred over the loaded ones. Hence, we can just skip the code generation
of such scalar values, avoiding the generation of dead code.
Differential Revision: http://reviews.llvm.org/D16522
llvm-svn: 258799
In Polly, after hoisting loop-invariant loads out of their loops, the
alignment information of the hoisted loads was missing; this patch restores
it.
Contributed-by: Lawrence Hu <lawrence@codeaurora.org>
Differential Revision: http://reviews.llvm.org/D16160
llvm-svn: 258105
should perform loop interchanges itself.
This also fixes a bug we see due to the "loop-interchange" pass producing
incorrect IR when compiling linpack-pc.c from the LLVM test-suite with
"-polly-position=before-vectorizer".
Reviewed-by: Tobias Grosser <tobias@grosser.es>
llvm-svn: 257495
getAccessFor does not guarantee a certain access to be returned in case an
instruction is related to multiple accesses. However, in the vector code
generation we want to know the stride of the array access of a store
instruction. By using getArrayAccessFor we ensure we always get the correct
memory access.
This patch fixes a potential bug, but I was unable to produce a failing test
case. Several existing test cases cover this code, but all of them passed
just by luck (or due to the specific, but not guaranteed, order in which we
build memory accesses).
llvm-svn: 255715
When generating scalar loads/stores separately, the vector code had not been
updated. This commit adds code to generate scalar loads for vector code, as
well as code to assert in case scalar stores are encountered within a vector
loop.
llvm-svn: 255714
When rewriting the access functions of load/store statements, we are only
interested in the actual array memory location. The current code just took
the very first memory access, which could be a scalar or an array access. As
a result, we failed to update access functions even though this was requested
via .jscop.
llvm-svn: 255713
This change should not change the behavior of Polly today, but it allows
external constants to be remapped, e.g. when targeting multiple LLVM modules.
llvm-svn: 255506
Over time different vocabulary has been introduced to describe the different
memory objects in Polly, resulting in different - often inconsistent - naming
schemes in different parts of Polly. We now standardize this to the following
scheme:

  KindArray, KindValue, KindPHI, KindExitPHI
             |-------- isScalar ------------|

In most cases this naming scheme has already been used previously (this
minimizes changes and ensures we remain consistent with previous
publications). The main change is that we remove KindScalar to clarify the
difference between a scalar as a memory object of kind Value, PHI or ExitPHI
and a value (former KindScalar), which is a memory object modeling an
llvm::Value.
We also move all documentation to the Kind* enum in the ScopArrayInfo class,
remove the second enum in the MemoryAccess class and update documentation to be
formulated from the perspective of the memory object, rather than the memory
access. The terms "Implicit"/"Explicit", formerly used to describe memory
accesses, have been dropped. From the perspective of memory accesses they
described the different memory kinds well - especially from the perspective of
code generation - but just from the perspective of a memory object it seems more
straightforward to talk about scalars and arrays, rather than explicit and
implicit arrays. The last comment is clearly subjective, though. A less
subjective reason to go for these terms is the historic use both in mailing list
discussions and publications.
llvm-svn: 255467
When introducing separate control flow for the original and optimized code we
now introduce a special 'ExitingBlock':

       \   /
     EnteringBB
         |
     SplitBlock---------\
    _____|_____          |
   /  EntryBB  \     StartBlock
   |  (region) |         |
   \_ExitingBB_/    ExitingBlock
         |               |
     MergeBlock---------/
         |
       ExitBB
       /    \
This 'ExitingBlock' contains code such as the final_reloads for scalars, which
previously were just added to whichever statement/loop_exit/branch-merge block
had been generated last. Having an explicit basic block makes it easier to
find these constructs when looking at the CFG.
llvm-svn: 255107
Re-run canonicalization passes after Polly's code generation.
The set of passes currently added here is nearly all the passes between
--polly-position=early and --polly-position=before-vectorizer, i.e. all
passes that would usually run after Polly.
In order to run these only if Polly actually modified the code, we add a
function attribute "polly-optimized" to functions that contain
generated code. The cleanup pass is skipped if the function does not
have this attribute.
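A hedged sketch of the attribute handshake; the helper names are
illustrative, only the string attribute mechanism itself is from LLVM:

  #include "llvm/IR/Function.h"

  // Code generation side: mark functions containing generated code.
  void markPollyOptimized(llvm::Function &F) {
    F.addFnAttr("polly-optimized");
  }

  // Cleanup side: only clean up functions Polly actually changed.
  bool shouldRunCleanup(const llvm::Function &F) {
    return F.hasFnAttribute("polly-optimized");
  }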
The (legacy) PassManager has no support for running passes only under
some conditions. One could have wrapped all transformation passes to run
only when CodeGeneration changed the code, but the analyses would run
anyway. This patch creates an independent pass manager instead. The
disadvantages are that all analyses have to re-run even if preserved, and
that it does not honor compiler switches like the PassManagerBuilder does.
Differential Revision: http://reviews.llvm.org/D14333
llvm-svn: 254150
Previously, accesses that originate from PHI nodes in the exit block
were registered as SCALAR. In some contexts they are treated as scalars,
but it makes a difference in others. We used to check whether the
AccessInstruction is a terminator to differentiate the cases.
This patch introduces a MemoryAccess origin EXIT_PHI and a
ScopArrayInfo kind KIND_EXIT_PHI to make this case more explicit. No
behavioural change intended.
Differential Revision: http://reviews.llvm.org/D14688
llvm-svn: 254149
IVs of loops whose header is in the subregion, but whose body is not entirely
contained in it, may be incremented outside of the subregion and can
consequently not be kept private to the subregion. Instead, they need to be -
and are - modeled as virtual loops in the iteration domains. As this is the
case, generating new subregion induction variables for such loops is not
needed and indeed wrong, as they would hide the virtual induction variables
modeled in the scop.
This fixes a miscompile in MultiSource/Benchmarks/Ptrdist/bc and
MultiSource/Benchmarks/nbench/. Thanks to Michael and Johannes for their
investigations and helpful observations regarding this bug.
llvm-svn: 252860
Especially for structs, the SAI object of a base pointer does not
describe all the types that the user might expect when loading from
that base pointer. While we will still cast integers and pointers, we
will now reload the value with the correct type if floating point and
non-floating point values are involved. However, there are now TODOs
where we use bitcasts instead of a proper conversion or reloading.
This fixes bug 25479.
llvm-svn: 252706
We now create all invariant equivalence classes for required invariant loads
instead of creating them on demand. This way we can check if a parameter
references an invariant load that is actually not executed and was therefore
not materialized. If that happens, the parameter is not materialized either.
This fixes bug 25469.
llvm-svn: 252701
Since r252422 we do not only distinguish two ScopArrayInfo kinds, PHI nodes
and others, but work with three kinds of ScopArrayInfo objects: SCALAR, PHI
and ARRAY objects. Instead of keeping two boolean flags isPHI and isScalar
and wondering what a ScopArrayInfo object of kind (!isScalar && isPHI) is,
we now explicitly list the three different possible types of memory objects.
This change also allows us to remove the confusing nested pairs that have
been used in ArrayInfoMapTy.
llvm-svn: 252620
In Polly the first dimensions of an array as well as all scalars do not carry
any size information. This commit makes this explicit in the interface of
getDimensionSize. Before this commit getDimensionSize(0) returned the size of
the first dimension that carried a size. After this commit getDimensionSize(i)
will either return the size of dimension 'i' or assert in case 'i' does not
carry a size or does not exist at all.
This very same behaviour was already present in getDimensionSizePw(). This
commit also adds assertions that ensure getDimensionSizePw() is called
appropriately.
llvm-svn: 252607
Scalar reloads in the generated entering block were not recognized as
dominating the subregion's blocks when there were multiple entering
nodes. This resulted in values defined there not being copied.
As a fix, we unconditionally add the BBMap of the generated entering
node to the generated entry. This fixes part of llvm.org/PR25439.
This reverts r252449 and reapplies r252445. Its test was failing
non-deterministically due to r252375, which was reverted in r252522.
llvm-svn: 252540
The dominance of the generated non-affine subregion block was based on
the scop's merge block and therefore resulted in an invalid DominanceTree.
This caused some values to be assumed unusable in the actual
generated exit block.
We now detect the case that the exit block has been moved and decide
dominance using the BB at the original exit. If we create another exit
node, that exit node is dominated by the one generated from where the
original exit resides. This fixes llvm.org/PR25438 and part of
llvm.org/PR25439.
llvm-svn: 252526
It introduced non-determinism as it iterated over an address-indexed
hashtable. The corresponding bug, PR25438, will be fixed in a subsequent
commit.
llvm-svn: 252522
This reverts commit 9775824b265e574fc541e975d64d3e270243b59d due to a
failing unit test.
Please check and correct the unit test and commit again.
llvm-svn: 252449
Scalar reloads in the generated entering block were not recognized as
dominating the subregion's blocks when there were multiple entering
nodes. This resulted in values defined there not being copied.
As a fix, we unconditionally add the BBMap of the generated entering
node to the generated entry. This fixes part of llvm.org/PR25439.
llvm-svn: 252445
Even if a scalar and a memory access have the same base pointer, we cannot
use one SAI object for both, as not only the type but also the number of
dimensions would be wrong. For the attached test case this caused a crash in
the invariant load hoisting, though it could cause various other problems
too.
This fixes bug 25428 and an execution-time bug in MallocBench/cfrac.
Reported-by: Jeremy Huddleston Sequoia <jeremyhu@apple.com>
llvm-svn: 252422
When we bail out early, we make the partially built new code path
practically dead, though it was not unreachable. To remove dominance
problems we now make it not only dead but also prevent the control
flow from joining with the original code path, thus allowing the use of
original values after the SCoP without any PHI nodes.
This fixes bug 25447.
llvm-svn: 252420
The bail-out in r252412 left the code generation without verifying the
(so far) generated IR. This now changes, ensuring we always run the
verifier.
Suggested-by: Tobias Grosser <tobias@grosser.es>
llvm-svn: 252419
While the program itself cannot cause a dependence cycle between invariant
loads, additional constraints (e.g., to ensure finite loops) can introduce
them. They are hard to detect in the SCoP description, thus we only check
for them at code generation time. If such a recursion is detected, we bail
out of code generation and place a "false" runtime check to guarantee the
original code is used.
This fixes bug 25443.
llvm-svn: 252412
After loop versioning, a dominance check of a non-affine subregion's
exit node always failed on any block in the subregion if the subregion
shared its exit block with the scop: the subregion's exit block has
become polly_merge_new_and_old, which also receives the control flow of
the generated code. This caused any value for implicit stores to be
assumed not to originate from the scop.
We check dominance with the generated exit node instead.
This fixes llvm.org/PR25438
llvm-svn: 252375
Remove all the implicit ilist iterator conversions from polly, in
preparation for making them illegal in ADT. There was one oddity I came
across: at line 95 of lib/CodeGen/LoopGenerators.cpp, there was a
post-increment `Builder.GetInsertPoint()++`.
Since it was a no-op, I removed it, but I admit I wonder if it might be
a bug (both before and after this change)? Perhaps it should be a
pre-increment?
llvm-svn: 252357
We were adding all values generated in non-affine subregions to the list of
values usable in the subregion's generated exit block. The thought was that
only values that dominate the original exit block can be used there. But it
is possible for synthesizable values to be expanded in any block. If the same
value is also used for implicit writes, it would try to reuse
already-synthesized values even if they do not dominate the exit block.
The fix is to add a value to the list of values usable in the exit block
only if it dominates the exit block. This fixes llvm.org/PR25412.
llvm-svn: 252301
For generating scalar writes of non-affine subregions, all writes except PHI
writes are generated in the exit block. The PHI writes are generated in
the incoming blocks, for which we erroneously used the same BBMap. This
can conflict if a value for one block is synthesized and then reused
for another block that is not dominated by the first block. This is
fixed by using block-specific BBMaps for PHI writes.
llvm-svn: 252172
To simplify and correct the preloading of a base pointer origin, e.g.,
the base pointer for the current indirect invariant load, we now just
check if there is an invariant access class that involves the base
pointer of the current class.
llvm-svn: 251962
If the base pointer of a preloaded value has a base pointer origin, i.e. it
is an indirect invariant load, we have to make sure the base pointer origin
is preloaded first.
llvm-svn: 251946
If a base pointer load is preloaded, we have to change the base pointer of
the derived SAI. However, as the derived-SAI relationship is
coarse-grained, we need to check if we actually preloaded the base
pointer or a different element of the base pointer's SAI array.
llvm-svn: 251881
We remove -polly-detect-unprofitable and -polly-no-early-exit. Both have
been superseded by -polly-process-unprofitable and were only kept as aliases
to keep our buildbots working. As all buildbots have been moved to the new
option, we can now remove the old ones for good.
llvm-svn: 251787
These maps are only needed during the construction of a single region statement.
Clearing them is important, as we otherwise get an assert in case some of the
referenced values are erased before the RegionGenerator is deleted.
llvm-svn: 251341
Such PHI nodes cannot appear only in the ExitBlock of the Scop: indeed, any
scalar PHI node defined above the scop and used inside the scop is modeled
as a scalar read access.
llvm-svn: 251198
This change adds code to directly code-generate multi-exit PHI nodes, instead
of trying to reuse the EscapeMap infrastructure for this. Using escape maps
adds a level of indirection that is hard to understand and - more importantly -
breaks in certain cases.
Specifically, the original code relied on simplifyRegion() to split the
original PHI node into two PHI nodes, one merging the values coming from
within the scop and a second that merges the first PHI node with the values
that come from outside the scop. To generate code, the first PHI node is then
just handled like any other in-scop value that is used somewhere outside the
scop. This fails for the case where all values coming from inside the scop
are identical, as the first PHI node is in such cases automatically
simplified and eliminated by LLVM right at construction. As a result, there
is no instruction that can be passed to the EscapeMap handling, which means
the references in the second PHI node are not updated and may still reference
values from within the original scop that do not dominate it.
Our new code iterates directly over all modeled ScopArrayInfo objects that
represent multi-exit PHI nodes and generates code for them without relying on
the EscapeMap infrastructure. Hence, it works also for the case where the first
PHI node is eliminated.
llvm-svn: 251191
New values were always synthesized in the block of the instruction
that needed them. This is incorrect for PHI nodes, whose values must be
defined in the respective incoming blocks. This patch temporarily moves
the builder's insert point to the incoming block while synthesizing PHI
node arguments.
This fixes PR25241 (http://llvm.org/bugs/show_bug.cgi?id=25241)
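A sketch of the insert-point dance; synthesizeValue stands in for Polly's
SCEV-based value synthesis and is hypothetical:

  #include "llvm/IR/IRBuilder.h"

  llvm::Value *synthesizeIncomingValue(llvm::IRBuilder<> &Builder,
                                       llvm::BasicBlock *IncomingBB) {
    auto SavedIP = Builder.saveIP();
    // Emit before the incoming block's terminator so the definition
    // dominates its use in the PHI node.
    Builder.SetInsertPoint(IncomingBB->getTerminator());
    llvm::Value *V = synthesizeValue(Builder); // hypothetical helper
    Builder.restoreIP(SavedIP);
    return V;
  }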
llvm-svn: 250693
Sorting is replaced by demand-driven code generation that will pre-load a
value when it is needed or, if it was not needed before, at some point
determined by the order of invariant accesses in the program. Only in very
few cases will this demand-driven pre-loading kick in, though it will
prevent us from generating faulty code. An example where it is needed is
shown in:
test/ScopInfo/invariant_loads_complicated_dependences.ll
Invariant loads that appear in parameters but are not on the top-level (e.g.,
the parameter is not a SCEVUnknown) will now be treated correctly.
Differential Revision: http://reviews.llvm.org/D13831
llvm-svn: 250655
Polly can now be used as an analysis-only tool as long as code
generation is disabled. However, we do not have an alternative to the
independent blocks pass in place yet, though in the relevant cases
this does not seem to impact performance much. Nevertheless, a
virtual alternative that allows the same transformations without
changing the input region will follow shortly.
llvm-svn: 250652
Expressing this in terms of BlockGenerator::getOrCreateAlloca(const
ScopArrayInfo *Array) does not work, as the MemoryAccess BasePtr differs, in
case of invariant load hoisting, from the ScopArrayInfo BasePtr. Until this
is investigated and fixed, we move back to code that just uses the base
pointer of the MemoryAccess.
llvm-svn: 250637
This allows the caller to get the alloca locations of an array without the
need to check whether Array is a PHI or a non-PHI array. We directly make use
of this in BlockGenerator::getOrCreateAlloca(MemoryAccess &Access).
llvm-svn: 250628
Instead of generating implicit loads within basic blocks, put them
before the instructions of the statement itself; this includes
non-affine subregions. The region's entry node dominates all blocks in
the region, and therefore the loaded value will be available there.
Implicit writes in block-stmts were already stored back at the end of
the block. Now, also generate the stores of non-affine subregions when
leaving the statement, i.e. in the exiting block.
This change is required for array-mapped implicits ("De-LICM") to
ensure that there are no dependencies of demoted scalars within
statements. Statements load all required values, operate on copies in
registers, and then write back the changed values to the demoted
memory. Lifetime analysis within statements becomes unnecessary.
Differential Revision: http://reviews.llvm.org/D13487
llvm-svn: 250625
Instead of checking at code generation time for each ScopStmt whether a
scalar has external uses, we just iterate over the ScopArrayInfo
descriptions we have and check each of these for possible external uses.
Besides being somewhat clearer, this approach has the benefit that we will
always create valid LLVM-IR, even in case we disable the code generation of
ScopStmt bodies, e.g. for testing purposes.
llvm-svn: 250608
Helper functions in the BlockGenerators.h/cpp introduce dependences
from the frontend to the backend of Polly. As they are used in
ScopDetection, ScopInfo, etc. we move them to the ScopHelper file.
llvm-svn: 249919
If an (assumed) invariant location is loaded multiple times, we
generated a parameter for each load. However, this caused compile-time
problems for several benchmarks (e.g., 445_gobmk in SPEC2006 and
BT in the NAS benchmarks). Additionally, the code we generated was
suboptimal, as we preloaded the same location multiple times and
performed the same checks on all the parameters that refer to the same
value.
With this patch we consolidate the invariant loads in three steps
(step 1 is sketched below):
1) During SCoP initialization required invariant loads are put in
   equivalence classes based on their pointer operand. One
   representative load is used to generate a parameter for the whole
   class, thus we never generate multiple parameters for the same
   location.
2) During the SCoP simplification we remove invariant memory
   accesses that are in the same equivalence class. While doing so
   we build the union of all execution domains, as it is only
   important that the location is accessed at least once.
3) During code generation we only preload one element of each
   equivalence class with the unified execution domain. All others
   are mapped to that preloaded value.
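A sketch of step 1, grouping required invariant loads by their pointer
operand; the container choice and helper name are illustrative:

  #include "llvm/ADT/DenseMap.h"
  #include "llvm/ADT/SmallVector.h"
  #include "llvm/IR/Instructions.h"

  // One equivalence class per pointer operand; the first load added
  // acts as the representative used to create the single parameter.
  using InvariantClassesT =
      llvm::DenseMap<llvm::Value *, llvm::SmallVector<llvm::LoadInst *, 4>>;

  void addRequiredInvariantLoad(InvariantClassesT &Classes,
                                llvm::LoadInst *LI) {
    Classes[LI->getPointerOperand()].push_back(LI);
  }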
Differential Revision: http://reviews.llvm.org/D13338
llvm-svn: 249853
This patch allows invariant loads to be used in the SCoP description,
e.g., as loop bounds, conditions or in memory access functions.
First we collect "required invariant loads" during SCoP detection that
would otherwise make an expression we care about non-affine. To this
end a new level of abstraction was introduced before
SCEVValidator::isAffineExpr(), namely ScopDetection::isAffine() and
ScopDetection::onlyValidRequiredInvariantLoads(). Here we can decide
whether we want a load inside the region to be optimistically assumed
invariant or not. If we do, it will be marked as required, and in the
SCoP generation we bail if it is actually not invariant. If we don't,
it will be a non-affine expression as before. At the moment we
optimistically assume all "hoistable" (namely non-loop-carried) loads
to be invariant. This causes us to expand some SCoPs and dismiss them
later, but it also allows us to detect a lot we would dismiss directly
if we asked e.g. AliasAnalysis::canBasicBlockModify(). We also
allow potential aliases between optimistically assumed invariant loads
and other pointers, as our runtime alias checks are sound in case the
loads are actually invariant. Together with the invariant checks, this
combination allows us to handle a lot more than LICM can.
The code generation of the invariant loads had to be extended as we
can now have dependences between parameters and invariant (hoisted)
loads as well as the other way around, e.g.,
test/Isl/CodeGen/invariant_load_parameters_cyclic_dependence.ll
First, it is important to note that we cannot have real cycles, but
only dependences from a hoisted load to a parameter and from another
parameter to that hoisted load (and so on). To handle such cases we
materialize llvm::Values for parameters that are referenced by a hoisted
load on demand, and then materialize the remaining parameters. Second,
there are new kinds of dependences between hoisted loads caused by the
constraints on their execution. If a hoisted load is conditionally
executed, it might depend on the value of another hoisted load. To deal
with such situations we sort them already in ScopInfo such that
they can be generated in the order they are listed in the
Scop::InvariantAccesses list (see compareInvariantAccesses). The
dependences between hoisted loads caused by indirect accesses are
handled the same way as before.
llvm-svn: 249607
Value maps are created and used in many places, and it is not always
possible to include CodeGen/BlockGenerators.h. To this end, ValueMapT
now lives in ScopHelper.h, which does not have any dependences itself.
This patch also replaces uses of different other value map types with
the ValueMapT.
llvm-svn: 249606
Do not use "Map[Key] == nullptr" to check if a Key is in the map, but use
"Map.find(Key) == Map.end()". Map[Key] always adds Key into the map, a
side-effect we do not want.
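Spelled out as a sketch:

  #include "llvm/ADT/DenseMap.h"
  #include "llvm/IR/Value.h"

  bool isUnmapped(llvm::DenseMap<llvm::Value *, llvm::Value *> &Map,
                  llvm::Value *Key) {
    // Wrong: operator[] default-constructs and *inserts* a nullptr
    // entry for a missing Key, mutating the map on a pure lookup:
    //   return Map[Key] == nullptr;
    // Right: find() leaves the map untouched.
    return Map.find(Key) == Map.end();
  }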
Found by inspection. This is hard to test outside of a targeted unit test,
which seems too much overhead for this individual issue.
llvm-svn: 249544
This single option replaces -polly-detect-unprofitable and -polly-no-early-exit
and is supposed to be the only option that disables compile-time heuristics that
aim to bail out early on scops that are believed to not benefit from Polly
optimizations.
Suggested-by: Johannes Doerfert
llvm-svn: 249426
When the ScopAnnotator was a class member variable, some of the maps it
contains were not properly cleared. As a result we had dangling pointers to
llvm::Value(s), which got detected by the AssertingVH we recently added.
No test case, as this issue is hard to reproduce reliably: subsequent
optimizations need to delete some of the llvm::Values we still keep in our
lists.
llvm-svn: 249269
The use of const qualified Value pointers prevents the use of AssertingVH. We
could probably think of adding const support to AssertingVH, but as const
correctness seems to currently provide limited benefit in Polly, we do not do
this yet.
llvm-svn: 249266
There have been various places where llvm::DenseMap<const llvm::Value *,
llvm::Value *> types have been defined, but all types have been expected to be
identical. We make this clearer by consolidating the different types and by
using BlockGenerator::ValueMapT wherever there is a need for types to match
BlockGenerator::ValueMapT.
llvm-svn: 249264
By using asserting value handles, we will get assertions when we forget to
clear any of the Value maps, instead of difficult-to-debug undefined
behavior.
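A sketch of such a map type; with AssertingVH, deleting an llvm::Value that
a stale entry still references asserts at the point of deletion instead of
leaving a dangling pointer (the alias name is illustrative):

  #include "llvm/ADT/DenseMap.h"
  #include "llvm/IR/ValueHandle.h"

  using ValueMapT = llvm::DenseMap<llvm::AssertingVH<llvm::Value>,
                                   llvm::AssertingVH<llvm::Value>>;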
llvm-svn: 249237
If a value is globally mapped (IslNodeBuilder::ValueMap) and
referenced in the code that will be put into a subfunction, we hand
down the new value to the subfunction.
This patch also removes code that handed down all invariant loads to
the subfunction. Instead, only needed invariant loads are given to the
subfunction. There are two possible reasons for an invariant load to
be handed down:
1) The invariant load is used in a block that is placed in the
   subfunction but which is not the parent of the load. In this
   case, the scalar access that will read the loaded value will
   cause its base pointer (the preloaded value) to be handed down
   to the subfunction.
2) The invariant load is defined and used in a block that is placed
   in the subfunction. With this patch we will hand down the
   preloaded value to the subfunction, as the invariant load is
   globally mapped to that value.
llvm-svn: 249126
Because we handle more than SCEV does, it is not possible to rewrite an
expression at the top level using the SCEVParameterRewriter only. With
this patch we do the rewriting on demand only and also recursively, thus
not only at the top level.
llvm-svn: 248916
Instructions which we can synthesize from a SCEV expression are not
generated directly, but only when they are used as an operand of
another instruction. This avoids generating unnecessary instructions
and works more reliably than first inserting them and then deleting
them later on.
This commit was reverted in r248860 due to a remaining miscompile, where
we forgot to synthesize the operand values that were referenced from scalar
writes. test/Isl/CodeGen/scalar-store-from-same-bb.ll tests that we now do
this correctly.
llvm-svn: 248900
Before, we unconditionally forced all users outside the SCoP to use
the preloaded value. However, if the SCoP is not executed due to the
runtime checks, we need to use the original value because it might not
be invariant in the first place.
llvm-svn: 248881
As a first step in the direction of assumed invariant loads (loads
that are not written in some context), we now detect and hoist
definitively invariant loads. These invariant loads will be preloaded
in the code generation and used in the optimized version of the SCoP.
If the load is only conditionally executed, the preloaded version will
also only be executed under the same condition; hence, we will never
access memory that wouldn't have been accessed otherwise. This is also
the most distinguishing feature compared to LICM.
As hoisting can make statements empty, we will simplify the SCoP and
remove empty statements that would otherwise cause artifacts in the
code generation.
Differential Revision: http://reviews.llvm.org/D13194
llvm-svn: 248861
This reverts commit 07830c18d789ee72812d5b5b9b4f8ce72ebd4207, as it
broke at least one test in LNT:
MultiSource/Benchmarks/Ptrdist/bc/number.c
was miscompiled and the test produced a wrong result.
One Polly test case that was added later has been adjusted too.
llvm-svn: 248860
Every once in a while we see code that accesses memory with different types,
e.g. code that performs operations on a piece of memory using type 'float',
but copies data to this memory using type 'int'. Modeled in C, such code
looks like:
  void foo(float A[], float B[]) {
    for (long i = 0; i < 100; i++)
      *(int *)(&A[i]) = *(int *)(&B[i]);

    for (long i = 0; i < 100; i++)
      A[i] += 10;
  }
We already used the correct types during normal operations, but fell back to
our detected type as soon as we imported changed memory access functions. For
these memory accesses we could generate invalid IR due to a mismatch between
the element type of the array we detected and the actual type used in the
memory access. To address this issue, we always cast the newly created
address of a memory access back to the type of the memory access where the
address will be used.
llvm-svn: 248781
Instructions which we can synthesize from a SCEV expression are not generated
directly, but only when they are used as an operand of another instruction.
This avoids generating unnecessary instructions and works more reliably than
first inserting them and then deleting them later on.
Suggested-by: Johannes Doerfert <doerfert@cs.uni-saarland.de>
Differential Revision: http://reviews.llvm.org/D13208
llvm-svn: 248712
This patch allows switch instructions with affine conditions in the
SCoP. Switch instructions in non-affine subregions are allowed as well.
Neither required many changes to the code, though some refactoring was
needed to integrate them without code duplication.
In the llvm-test suite the number of profitable SCoPs increased from
135 to 139 but more importantly we can handle more benchmarks and user
inputs without preprocessing.
Differential Revision: http://reviews.llvm.org/D13200
llvm-svn: 248701
We now only delete trivially dead instructions in the BB we copy (copyBB),
but not in any other BB. Only for copyBB do we know that there will _never_
be any future uses of instructions that have no use after copyBB has been
generated. Other instructions in the AST that have been generated by
IslNodeBuilder may look dead at the moment, but may still be referenced by
GlobalMaps. If we deleted them now, later uses would break surprisingly.
We do not have a test case that breaks due to us deleting too many instructions.
This issue was found by inspection.
llvm-svn: 248688
The changes affect methods that are part of the Pass interface and
include:
- Comments that describe the method's purpose.
- A consistent use of the keywords override and virtual.
Additionally, the printScop method is now optional and has been removed
from SCoP passes that do not implement it.
llvm-svn: 248685
After having generated a new user statement, a couple of inefficient or
trivially dead instructions may remain. This commit runs instruction
simplification over the newly generated blocks to ensure unneeded
instructions are removed right away.
This commit adds the simplification for non-affine subregions, which was not
yet part of r248681.
llvm-svn: 248683
After having generated a new user statement a couple of inefficient or trivially
dead instructions may remain. This commit runs instruction simplification over
the newly generated blocks to ensure unneeded instructions are removed right
away.
This commit does not yet add simplification for non-affine subregions.
llvm-svn: 248681
This commit basically reverts r246427 but still solves the issue
tackled by that commit. Instead of emitting initialization code at the
beginning of the start block, we now generate parallel code in its own
block and thereby guarantee separation. This is necessary as we cannot
generate code for hoisted loads prior to the start block, yet this code
still needs to be placed before everything else.
llvm-svn: 248674
There are three possible reasons to add a memory access: for explicit loads
and stores, for llvm::Value defs/uses, and to emulate PHI nodes (the latter
two called implicit accesses). Previously MemoryAccess only stored IsPHI.
Register accesses could be identified through the isScalar() method if it
was not IsPHI. isScalar() determined the number of dimensions of the
underlying array, with scalars represented by zero dimensions.
For the work on De-LICM, implicit accesses can have more than zero
dimensions, making this use of isScalar() useless; hence, the property is
now stored explicitly in the MemoryAccess. We replace it by isImplicit()
and avoid the term "scalar" for zero-dimensional arrays, as it might be
confused with llvm::Values, which are also often referred to as scalars
(or, alternatively, as registers).
No behavioral change intended, under the condition that it was impossible
to create explicit accesses to zero-dimensional "arrays".
llvm-svn: 248616
All MemoryAccess objects will be owned by ScopInfo::AccFuncMap, which
previously stored the IRAccess objects. Instead of creating new
MemoryAccess objects, the already created ones are reused, though their
order might be different now. Some fields of IRAccess and MemoryAccess
had the same meaning and have been merged.
This is the last step of fusing TempScopInfo.{h|cpp} and
ScopInfo.{h|cpp}. Some refactoring might still make sense.
Differential Revision: http://reviews.llvm.org/D12843
llvm-svn: 248024
The TempScopInfo (-polly-analyze-ir) pass is removed and its work taken
over by ScopInfo (-polly-scops). Several tests depended on
-polly-analyze-ir and now use -polly-scops instead, which for the moment
prints the output of both passes. This again is not expected by some
other tests, especially those with negative checks, which have been
adapted.
Differential Revision: http://reviews.llvm.org/D12694
llvm-svn: 247288
This prepares for a series of patches that merge TempScopInfo into ScopInfo
to reduce Polly's code complexity. Only ScopInfo.{cpp|h} will be left
thereafter. Moving the code of TempScopInfo in one commit makes the main
diffs simpler to understand.
In detail, merging the following classes is planned:
  TempScopInfo into ScopInfo
  TempScop into Scop
  IRAccess into MemoryAccess
Only moving code, no functional changes intended.
Differential Revision: http://reviews.llvm.org/D12693
llvm-svn: 247274
Not all users of our IslNodeBuilder will attach scheduling information to the
AST in the same way IslAstInfo is doing it today. By going through a virtual
function when extracting the schedule of an AST node, other users can provide
their own functions for extracting scheduling information in case they attach
it to the AST nodes in a different way.
No functional change for Polly itself intended.
llvm-svn: 247126
While we do not need to model PHI nodes in the region exit (as it is not part
of the SCoP), we need to prepare for the case that the exit block is split
during code generation to create a single exiting block. If this happens, i.e.
if the region did not have a single exiting block before, we will model the
operands of the PHI nodes as escaping scalars in the SCoP.
Differential Revision: http://reviews.llvm.org/D12051
llvm-svn: 247078
Certain backends, e.g. NVPTX, do not support '.' in function names. Hence,
we ensure all '.' are replaced by '_' when generating function names for
subfunctions. For the current OpenMP code generation this is not strictly
necessary, but future use cases (e.g. GPU offloading) need this issue to be
fixed.
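A minimal illustrative helper for the renaming:

  #include <algorithm>
  #include <string>

  // NVPTX and similar backends reject '.' in function names; replace
  // every '.' with '_' when forming subfunction names.
  std::string sanitizeFunctionName(std::string Name) {
    std::replace(Name.begin(), Name.end(), '.', '_');
    return Name;
  }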
llvm-svn: 246980
When this option is enabled, Polly will emit printf calls for each scalar
load and store which dump the scalar value loaded/stored at run time.
This patch also refactors the RuntimeDebugBuilder to use variadic templates
when generating CPU printfs. As a result, it now becomes easier to print
strings that consist of a set of arguments. Also, as a single printf
call is emitted, it is more likely for such strings to be printed atomically
if executed multi-threaded.
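An illustrative sketch of the variadic-template idea; note the real
RuntimeDebugBuilder emits LLVM-IR calls to printf rather than formatting
host-side strings like this:

  #include <sstream>
  #include <string>

  inline void appendPieces(std::ostringstream &) {}

  template <typename T, typename... Ts>
  void appendPieces(std::ostringstream &OS, T First, Ts... Rest) {
    OS << First;
    appendPieces(OS, Rest...);
  }

  // Assemble arbitrarily many pieces into one message so it can be
  // emitted with a single printf call, which is more likely to stay
  // atomic when executed multi-threaded.
  template <typename... Ts> std::string buildMessage(Ts... Pieces) {
    std::ostringstream OS;
    appendPieces(OS, Pieces...);
    return OS.str();
  }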
llvm-svn: 246941
Some of the structures are renamed, subfunctions are introduced to clarify
the individual steps, and comments are added describing their functionality.
llvm-svn: 246929
In the common case, the access functions are not modified, hence there is no
need to obtain the IslAstBuild context at all. This should not only be
minimally faster, but also allows the IslNodeBuilder to work on ASTs that are
not annotated with isl_ast_builds, as long as the memory accesses are not
modified.
llvm-svn: 246928
By inspection, the update of the GlobalMaps in the RegionGenerator seems
unneeded and is removed, as no test cases fail when dropping it. Johannes
Doerfert confirmed that this is indeed safe:
"I think that code was needed when we did not use the scalar codegen by
default. Now everything defined in a non-affine region should be communicated
via memory and reloaded in the user block. Hence, we should be good removing
this code."
llvm-svn: 246926
When computing the index expressions for new, multi-dimensional memory
accesses, these new index expressions may reference original llvm::Values
that are not transferred into the OpenMP subfunction. Using GlobalMap we now
replace references to such values with the rewritten values that have e.g.
been passed to the OpenMP subfunction.
llvm-svn: 246923
The GlobalMap variable used in BlockGenerator should always reference the
same list throughout the entire code generation, hence we can make it a
member variable to avoid passing it around through every function call.
History: Before we switched to the SCEV-based code generation, the GlobalMap
also contained a mapping from old to new induction variables, hence it was
different for each ScopStmt, which is why we passed it as a function argument
to copyStmt. The new SCEV-based code generation uses a separate mapping
called LTS -> LoopToSCEV that maps each original loop to a new loop iteration
variable provided as a SCEVExpr. The GlobalMap is currently mostly used for
OpenMP code generation, where references to parameters in the original
function need to be rewritten to the locations of these variables after they
have been passed to the subfunction.
Suggested-by: Johannes Doerfert <doerfert@cs.uni-saarland.de>
llvm-svn: 246920
Our OpenMP code generation generated part of its launching code directly into
the start basic block, and without this change the scalar initialization was
run _after_ the OpenMP threads had been launched. This resulted in
uninitialized scalar values being used.
llvm-svn: 246427
Scalar dependences between scop statements have caused trouble during
parallel code generation, as we did not pass on the new stack allocations
created for such scalars to the parallel subfunctions. This change now
detects all scalar reads/writes in parallel subfunctions, creates the allocas
for these scalar objects, passes the resulting memory locations to the
subfunctions, and ensures that within the subfunction requests for these
memory locations will return the rewritten values.
Johannes suggested privatizing some of the scalars in the subfunction as a
future optimization.
llvm-svn: 246414
We already modeled read-only dependences to scalar values defined outside the
scop as memory reads and also generated read accesses from the corresponding
alloca instructions that have been used to pass these scalar values around
during code generation. However, except for PHI nodes, which have already
been handled, we failed to store the original read-only scalar values into
these allocas. This commit extends the initialization of scalar values to all
read-only scalar values used within the scop.
llvm-svn: 246394
The current code really tries hard to use getNewScalarValue(), which checks
whether not the original value, but a possible copy or demoted value, needs
to be stored. In this calling context it seems that we _always_ use the
ScalarValue that comes from the incoming PHI node, but never any other value.
As also no test cases fail, it seems right to just drop this call to
getNewScalarValue and remove the parameters that are no longer needed.
Johannes suggested that code like this might be needed for parallel code
generation with offloading, but it was still unclear if/what exactly would
be needed. As the parallel code generation currently does not support scalars
at all, we will remove this code for now and add the relevant code back when
completing the support of scalars in the parallel code generation.
Reviewers: jdoerfert
Subscribers: pollydev, llvm-commits
Differential Revision: http://reviews.llvm.org/D12470
llvm-svn: 246389
Our code generation currently does not support scalar references to metadata
values. Hence, it would crash if we try to model scalar dependences to metadata
values. Fortunately, for one of the common uses, debug information, we can
for now just ignore the relevant intrinsics and consequently the issue of how
to model scalar dependences to metadata.
llvm-svn: 246388
This commit drops some dead code. Specifically, there is no need to
initialize the virtual memory locations of scalars in
BlockGenerator::handleOutsideUsers, the function that initializes the escape
map that keeps track of out-of-scop uses of scalar values. We already model
instructions inside the scop that are used outside the scop (escaping
instructions) as scalar memory writes at the position of the instruction. As
a result, the virtual memory location of these instructions is already
initialized when code-generating the corresponding virtual scalar write and
consequently does not need to be initialized later on when generating the
set of escaping values.
Code references:
In TempScopInfo::buildScalarDependences we detect scalar cross-statement
dependences for all instructions (including PHIs) that have uses outside of the
scop's region:
  // Check whether or not the use is in the SCoP.
  if (!R->contains(UseParent)) {
    AnyCrossStmtUse = true;
    continue;
  }
We use this information in TempScopInfo::buildAccessFunctions were we build
scalar write memory accesses for all these instructions:
  if (!isa<StoreInst>(Inst) &&
      buildScalarDependences(Inst, &R, NonAffineSubRegion)) {
    // If the Instruction is used outside the statement, we need to build
    // the write access.
    IRAccess ScalarAccess(IRAccess::MUST_WRITE, Inst, ZeroOffset, 1, true,
                          Inst);
    Functions.push_back(std::make_pair(ScalarAccess, Inst));
  }
Reviewers: jdoerfert
Subscribers: pollydev, llvm-commits
Differential Revision: http://reviews.llvm.org/D12472
llvm-svn: 246383
For external users, the memory locations into which we generate scalar values
may be of interest. This change introduces two functions that allow obtaining
(or creating) the alloca instructions for a given BasePointer.
We use this change to simplify the code in BlockGenerators.
llvm-svn: 246285
This change allows the BlockGenerator to be reused in contexts where we want to
provide different/modified isl_ast_expressions, which are not only changed to
a different access relation than the original statement, but which may indeed
be different for each code-generated instance of the statement.
We ensure testing of this feature by moving Polly's support for importing
changed access functions through a jscop file to use the BlockGenerator's
support for generating arbitrary access functions if provided.
This commit should not change the behavior of Polly for now. The diff is rather
large, but most changes are due to us passing the NewAccesses hash table through
functions. This style, even though rather verbose, matches what is done
throughout the BlockGenerator with other per-statement properties.
llvm-svn: 246144
Instead of generating code for an empty assumed context we bail out
early. As the number of assumptions we generate increases this becomes
more and more important. Additionally, this change will allow us to
hide internal contexts that are only used in runtime checks e.g., a
boundary context with constraints not suited for simplifications.
llvm-svn: 245540
To make alias scope metadata generation work in OpenMP mode we now provide
the ScopAnnotator with information about the base pointer rewrite that happens
when passing arrays into the OpenMP subfunction.
llvm-svn: 245451
executeScopConditionally would destroy a predecessor region if the
scop's entry was the region's exit block, by forking it to polly.start
and thus creating a second exit out of the region. This patch "shrinks"
the predecessor region s.t. polly.split_new_and_old is not the
region's exit anymore.
llvm-svn: 245294
The SCEVExpander cannot deal with all SCEVs Polly allows in all kinds
of expressions. To this end we introduce a ScopExpander that handles
the additional expressions separately and falls back to the
SCEVExpander for everything else.
Reviewers: grosser, Meinersbur
Subscribers: #polly
Differential Revision: http://reviews.llvm.org/D12066
llvm-svn: 245288
The new field in the MemoryAccess allows us to track a value related
to that access:
- For real memory accesses the value is the loaded result or the
  stored value.
- For straight-line scalar accesses it is the access instruction
  itself.
- For PHI operand accesses it is the operand value.
We use this value to simplify code which deduced information about the
value later in the Polly pipeline and was known to be error-prone.
Reviewers: grosser, Meinersbur
Subscribers: #polly
Differential Revision: http://reviews.llvm.org/D12062
llvm-svn: 245213
This allows the code generation to continue working even if a needed
value (that is reloaded anyway) was not yet demoted. Instead of
failing, it will now create the location for future demotion to memory
and load from that location. The stores will use the same location and
by construction execute before the load, even if the textual order in
the generated AST suggests otherwise.
Reviewers: grosser, Meinersbur
Subscribers: #polly
Differential Revision: http://reviews.llvm.org/D12072
llvm-svn: 245203
The July issue of TOPLAS contains a 50-page discussion of the AST generation
techniques used in Polly. This discussion not only gives an in-depth
description of how we (re)generate an imperative AST from our polyhedral-based
mathematical program description, but also provides interesting insights
about:
- Schedule trees: A tree-based mathematical program description that enables
  us to perform loop transformations on an abstract level, while issues like
  the generation of the correct loop structure and loop bounds are taken care
  of by our AST generator.
- Polyhedral unrolling: We discuss techniques that allow the unrolling of
  non-trivial loops in the context of parametric loop bounds, complex tile
  shapes and conditionally executed statements. Such unrolling support
  enables the generation of predicated code, e.g. in the context of GPGPU
  computing.
- Isolation for full/partial tile separation: We discuss native support for
  handling full/partial tile separation and -- in general -- native support
  for isolation of boundary cases to enable smooth code generation for core
  computations.
- AST generation with modulo constraints: We discuss how modulo mappings are
  lowered to efficient C/LLVM code.
- User-defined constraint sets for run-time checks: We discuss how arbitrary
  sets of constraints can be used to automatically create run-time checks
  that ensure a set of constraints actually holds. This feature is very
  useful to verify at run time various assumptions that have been made during
  program optimization.

Polyhedral AST generation is more than scanning polyhedra
Tobias Grosser, Sven Verdoolaege, Albert Cohen
ACM Transactions on Programming Languages and Systems (TOPLAS), 37(4), July
2015
llvm-svn: 245157
This modifies the order in which Polly passes are executed.
Assuming a function has two scops (A and B), the order before was:
FunctionPassManager
ScopDetection
IndependentBlocks
TempScopInfo for A and B
RegionPassManager
ScopInfo for A
DependenceInfo for A
IslScheduleOptimizer for A
IslAstInfo for A
CodeGeneration for A
ScopInfo for B
DependenceInfo for B
IslScheduleOptimizer for B
IslAstInfo for B
CodeGeneration for B
After this patch:
FunctionPassManager
  ScopDetection
  IndependentBlocks
  RegionPassManager
    TempScopInfo for A
    ScopInfo for A
    DependenceInfo for A
    IslScheduleOptimizer for A
    IslAstInfo for A
    CodeGeneration for A
    TempScopInfo for B
    ScopInfo for B
    DependenceInfo for B
    IslScheduleOptimizer for B
    IslAstInfo for B
    CodeGeneration for B
TempScopInfo for B might store information and references to the IR
that CodeGeneration for A might modify. Changing the order ensures that
the IR is not modified between the analysis of a region and its code
generation.
Reviewers: grosser
Differential Revision: http://reviews.llvm.org/D12014
llvm-svn: 245091
This change extends the BlockGenerator to not only allow Instructions as
base elements of scalar dependences, but any llvm::Value. This allows
us to code-generate scalar dependences which reference function arguments, as
they arise when modeling read-only scalar dependences.
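For example (a plain C-style illustration of the situation, not taken from
the test suite), the loop below forms a scop with a read-only scalar
dependence on the function argument 'factor', which is an llvm::Value but
not an llvm::Instruction:

  // 'factor' reaches the scop as a function argument.
  void scale(float *A, long N, float factor) {
    for (long i = 0; i < N; i++)
      A[i] *= factor; // read-only scalar use inside the scop
  }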
llvm-svn: 244874
In order to have a valid region analysis, we assign all newly created blocks to the parent of the scop's region. This is correct for any pre-existing regions (including the scop's region and its parent), but does not discover any region inside the generated code. For Polly this is not necessary because we do not want to re-run Polly on its own generated code anyway.
Reviewers: grosser
Part of Differential Revision: http://reviews.llvm.org/D11867
llvm-svn: 244608
The previous code had several problems:
For newly created BasicBlocks it did not (always) call RegionInfo::setRegionFor in order to update its analysis. At the moment RegionInfo does not verify its BBMap, but will in the future. This is fixed by determining the region the new BBs belong to and setting it accordingly. The new executeScopConditionally() requires accurate getRegionFor information.
Which block is created by SplitEdge depends on the incoming and outgoing edges of the blocks it connects, which makes handling its output more difficult than it needs to be, especially for finding which block has been created and for assigning a region to it to address the setRegionFor problem above. This patch uses an implementation for splitEdge that always creates a block between the predecessor and successor. simplifyRegion has also been simplified by using SplitBlockPredecessors instead of SplitEdge. Isolating the entries and exits has been refactored into individual functions.
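A minimal sketch of such an always-splitting edge split, assuming current
LLVM APIs (the helper name is hypothetical; Polly's actual implementation
differs in detail):

  #include "llvm/IR/BasicBlock.h"
  #include "llvm/IR/Instructions.h"
  using namespace llvm;

  // Always insert a new block between Pred and Succ, independent of the
  // other incoming and outgoing edges of the two blocks.
  BasicBlock *splitEdgeAlways(BasicBlock *Pred, BasicBlock *Succ) {
    BasicBlock *New = BasicBlock::Create(Pred->getContext(),
                                         Pred->getName() + ".split",
                                         Pred->getParent(), Succ);
    // Route the Pred->Succ edge through the new block.
    Pred->getTerminator()->replaceSuccessorWith(Succ, New);
    BranchInst::Create(Succ, New);
    // PHIs in Succ now receive their values via New instead of Pred.
    Succ->replacePhiUsesWith(Pred, New);
    // The caller still has to register New via RegionInfo::setRegionFor.
    return New;
  }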
Previously, simplifyRegion did more than just ensuring that there is only one entering and one exiting edge. It also ensured that the entering block had no other outgoing edge, which was necessary for executeScopConditionally(). Now the latter uses the alternative splitEdge implementation, which can handle this situation, so simplifyRegion really only needs to simplify the region.
Also, executeScopConditionally assumed that there can be no PHI nodes in blocks with one incoming edge. This is wrong; LCSSA deliberately produces such PHIs. However, previous passes ensured that there can be no such PHIs in exit nodes, which will no longer hold in the future.
The new code has the property that it preserves the identity of region blocks (i.e., the memory address of the BasicBlock containing the instructions remains the same; new blocks only contain PHI nodes and a terminator), especially for the entry block. As a result, there is no need to update a ScopStmt's reference to the BasicBlock containing its instructions, because the instructions are not moved to other basic blocks.
Reviewers: grosser
Part of Differential Revision: http://reviews.llvm.org/D11867
llvm-svn: 244606
Even though read-only accesses to scalars outside of a scop do not need to be
modeled to derive valid transformations or to generate valid sequential code,
information about them is useful when considering memory footprint
analysis and/or kernel offloading.
llvm-svn: 243981
We use the branch instruction as the location at which a PHI-node write takes
place, instead of the PHI-node itself. This allows us to identify the
basic-block in a region statement which is on the incoming edge of the PHI-node
and for which the write access was originally introduced. As a result we can,
during code generation, avoid generating PHI-node write accesses for basic
blocks that do not precede the PHI node without having to look at the IR
again.
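A minimal sketch of the idea (addPHIWriteAccess stands in for Polly's
internal bookkeeping and is hypothetical):

  #include "llvm/IR/Instructions.h"
  using namespace llvm;

  void addPHIWriteAccess(Instruction *At, Value *V); // hypothetical stand-in

  // Attach one write access per incoming edge, located at the branch of
  // the incoming block, so code generation can identify the edge without
  // re-inspecting the IR.
  void buildPHIWrites(PHINode *PHI) {
    for (unsigned u = 0, e = PHI->getNumIncomingValues(); u != e; ++u) {
      BasicBlock *IncomingBB = PHI->getIncomingBlock(u);
      addPHIWriteAccess(IncomingBB->getTerminator(),
                        PHI->getIncomingValue(u));
    }
  }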
This change fixes a bug which was introduced in r243420, when we started to
explicitly model PHI-node reads and writes, but dropped some additional checks
that were still necessary during code generation to not emit PHI-node writes
for basic-blocks that are not on incoming edges of the original PHI node.
Compared to the code before r243420 the new code does not need to inspect the IR
any more and we also do not generate multiple redundant writes.
llvm-svn: 243852
SCEVExpander, which we are using during code generation, only allows
instructions as insert locations, but breaks if BasicBlock->end() iterators
are passed to it, because it tries to obtain the basic block in which code
should be generated by calling Instruction->getParent(), which is not defined
for ->end() iterators.
This change adds an assert to Polly that ensures we only pass valid instructions
to SCEVExpander and it fixes one case, where we used IRBuilder->SetInsertBlock()
to set an ->end() insert location which was later passed to SCEVExpander.
In general, Polly is always trying to build up the CFG first, before we actually
insert instructions into the CFG skeleton. As a result, each basic block should
already have at least one branch instruction before we start adding code. Hence,
requiring the IRBuilder insert location to be set to a real instruction
should always be possible.
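A minimal sketch of the kind of check this change adds (simplified, with a
hypothetical helper name):

  #include "llvm/IR/IRBuilder.h"
  #include <cassert>
  using namespace llvm;

  // Never pass a BasicBlock::end() iterator on to SCEVExpander, which
  // would call ->getParent() on it.
  void setValidInsertPoint(IRBuilder<> &Builder, BasicBlock *BB,
                           BasicBlock::iterator IP) {
    assert(IP != BB->end() &&
           "Insert location must be a real instruction for SCEVExpander");
    Builder.SetInsertPoint(BB, IP);
  }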
Thanks to Utpal Bora <cs14mtech11017@iith.ac.in> for his help with test case
reduction.
llvm-svn: 243830
It is common practice to keep constructors lightweight. The reasons
include:
- During a constructor's execution, the vtable refers to the class
currently being constructed, not to the most derived class. That is,
virtual method calls behave differently in constructors and in ordinary
methods. This makes it possible to call unimplemented (pure virtual)
methods of abstract classes, which typically terminates the program.
- If an exception is thrown in the constructor, the destructor is not
called, potentially leaking memory.
- Code in constructors cannot be called in a regular way, e.g. from
non-constructor methods of derived classes.
- Because it is common practice, people may not expect a constructor to
do more than initialize data and may skip constructors when looking for bugs.
Not all of these are applicable to LLVM (e.g. exceptions are disabled).
This patch refactors out the computational work in the constructors of
Scop and IslAst into regular init functions and introduces static
create-functions as replacement.
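A minimal sketch of the pattern (member lists and parameters elided; the
class name is generic, not Polly's):

  #include <memory>

  class Analysis {
    Analysis() = default; // lightweight: only member initialization
    void init() { /* expensive computation; virtual calls are safe now */ }

  public:
    static std::unique_ptr<Analysis> create() {
      std::unique_ptr<Analysis> A(new Analysis());
      A->init(); // runs on the fully constructed object
      return A;
    }
  };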
Differential Revision: http://reviews.llvm.org/D11491
Reviewers: grosser, jdoerfert
llvm-svn: 243677
Summary:
When translating PHI nodes into memory dependences during code generation we
require two kinds of memory: 'normal memory', as used for all scalar
dependences, and 'PHI node memory' to store the incoming values of the PHI
node. With this
patch we now mark and track these two kinds of memories, which we previously
incorrectly marked as a single memory object.
Being aware of PHI node storage makes code generation easier, as we do not need
to guess what kind of storage a scalar reference requires. This simplifies the
code nicely.
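The distinction boils down to two kinds of scalar storage, roughly (the
enumerator names are illustrative, not necessarily Polly's exact ones):

  enum class ScalarMemoryKind {
    Value, // 'normal memory': holds the result of a scalar definition
    PHI    // 'PHI node memory': holds the incoming values of a PHI node
  };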
Reviewers: jdoerfert
Subscribers: pollydev, llvm-commits
Differential Revision: http://reviews.llvm.org/D11554
llvm-svn: 243420
We hoist statements that are used on both branches of an if-condition, shorten
and unify some variable names and fold some variable declarations into their
only uses. We also drop a comment which just describes the elements the loop
iterates over.
No functional change intended.
llvm-svn: 243291
As specified in PR23888, run-time alias check generation is expensive
in terms of compile-time. This reduces the compile time by computing
the minimal/maximal accesses only once for each base pointer.
Contributed-by: Pratik Bhatu <cs12b1010@iith.ac.in>
llvm-svn: 243024
Instead of flat schedules, we now use so-called schedule trees to represent the
execution order of the statements in a SCoP. Schedule trees make it a lot easier
to analyze, understand and modify properties of a schedule, as specific nodes
in the tree can be chosen and possibly replaced.
This patch does not yet fully move our DependenceInfo pass to schedule trees,
as some additional performance analysis is needed here. (In general schedule
trees should be faster in compile-time, as the more structured representation
is generally easier to analyze and work with.) We also cannot yet perform the
reduction analysis on schedule trees.
For more information regarding schedule trees, please see Section 6 of
https://lirias.kuleuven.be/handle/123456789/497238
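As a rough illustration, assuming isl's C API, the following parses a
small schedule tree: a domain node whose band child nests the statement
instances S[i, j] first by i, then by j. Loop transformations operate by
replacing or modifying such band nodes:

  #include <isl/ctx.h>
  #include <isl/schedule.h>

  int main() {
    isl_ctx *Ctx = isl_ctx_alloc();
    isl_schedule *Sched = isl_schedule_read_from_str(Ctx,
        "{ domain: \"{ S[i, j] : 0 <= i < 32 and 0 <= j < 32 }\", "
        "child: { schedule: \"[{ S[i, j] -> [(i)] }, "
        "{ S[i, j] -> [(j)] }]\" } }");
    isl_schedule_free(Sched);
    isl_ctx_free(Ctx);
    return 0;
  }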
llvm-svn: 242130
LLVM's instcombine already translates power-of-two sdivs that are known to be
exact to fast ashr instructions. Hence, there is no need to add this logic
ourselves.
Pointed-out-by: Johannes Doerfert
llvm-svn: 239025
floord(a,b) === a ashr log_2 (b) holds for positive and negative a's, but
shifting only makes sense for positive values of b. The previous patch did
not consider this as isl currently always produces positive b's. To avoid future
surprises, we check that b is positive and only then apply the optimization.
We also now correctly check the return value of the dyn-cast.
No additional test case, as isl currently does not produce negative
denominators.
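A small worked example of the identity, assuming arithmetic right shift
for signed integers (guaranteed since C++20 and universal in practice):

  #include <cassert>

  int main() {
    // Shifting by log_2(4) = 2 rounds toward negative infinity, exactly
    // like floored division.
    assert((-5 >> 2) == -2); // floord(-5, 4) = floor(-1.25) = -2
    assert(( 5 >> 2) ==  1); // floord( 5, 4) = floor( 1.25) =  1
    return 0;
  }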
Reported-by: David Majnemer <david.majnemer@gmail.com>
llvm-svn: 238927
isl marks known non-negative numerators in modulo (and soon also division)
operations. We now exploit this by generating unsigned operations. This is
beneficial as unsigned operations with power-of-two denominators will be
translated by isl to fast bitshift or bitwise and operations.
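For example, with a known non-negative numerator the compiler can lower
modulo and division by a power of two to a bitwise and and a logical
shift, respectively:

  unsigned rem8(unsigned X) { return X % 8; } // lowers to X & 7
  unsigned div8(unsigned X) { return X / 8; } // lowers to X >> 3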
llvm-svn: 238577
David Blaikie suggested this as an alternative to the use of owningptr(s) for our
memory management, as value semantics allow us to avoid the additional interface
complexity caused by owningptr while still providing similar memory consistency
guarantees. We could also have used a std::vector, but the use of std::vector
would yield possibly changing pointers which currently causes problems as for
example the memory accesses carry pointers to their parent statements. Such
pointers should not change.
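A short illustration of the pointer-stability concern (not Polly code):

  #include <vector>

  int main() {
    std::vector<int> V{1, 2, 3};
    int *P = &V[0];   // pointer into the vector's current storage
    V.resize(4096);   // may reallocate: P is now potentially dangling
    (void)P;          // dereferencing P here would be undefined behavior
    return 0;
  }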
Reviewer: jblaikie, jdoerfert
Differential Revision: http://reviews.llvm.org/D10041
llvm-svn: 238290