The original AccFuncMap in ScopInfo is used only by the underlying Scop
and must stay alive until we delete the Scop. It is therefore better to
simply move AccFuncMap from ScopInfo into the Scop class.
llvm-svn: 260820
Make Scop more portable such that we can use it in a CallGraphSCC pass.
The first step is to drop the analyses that are only used during Scop construction.
This patch drops LoopInfo from Scop.
llvm-svn: 260819
Make Scop more portable such that we can use it in a CallGraphSCC pass.
The first step is to drop the analyses that are only used during Scop construction.
This patch drops DominatorTree from Scop.
llvm-svn: 260818
Make Scop more portable such that we can use it in a CallGraphSCC pass.
The first step is to drop the analyses that are only used during Scop construction.
This patch drops ScopDetection from Scop.
llvm-svn: 260817
We now distinguish invariant loads to the same memory location if they
have different types. This will cause us to pre-load an invariant
location once for each type that is used to access it. However, we can
thereby avoid invalid casting, especially if an array is accessed
through differently typed/sized invariant loads.
This basically reverts the changes in r260023 but keeps the test
cases.
llvm-svn: 260045
We also disable this feature by default, as there are still some issues in
combination with invariant load hoisting that slipped through my initial
testing.
llvm-svn: 260025
Invariant load hoisting of memory accesses with non-canonical element
types lacks support for equivalence classes that contain elements of
different width/size. This support should be added, but to get our buildbots
back to green, we disable load hoisting for memory accesses with non-canonical
element size for now.
llvm-svn: 260023
Always use access-instruction pointer type to load the invariant values.
Otherwise mismatches between ScopArrayInfo element type and memory access
element type will result in invalid casts. After r259784 these type mismatches
are a lot more common and also arise with types of different size, which
had not been handled before.
Interestingly, this change actually simplifies the code, as we now have only
one code path that is always taken, rather than a standard code path for the
common case and a "fixup" code path that replaces the standard code path in
case of mismatching types.
llvm-svn: 260009
The previously implemented approach is to follow value definitions and
create write accesses ("push defs") while searching for uses. This
requires the same validity and requirement conditions to be
replicated at multiple locations (PHI instructions, other instructions,
uses by PHIs).
We replace this by iterating over the uses in a SCoP ("pull in
requirements"), and add writes only when at least one read has been
added. It turns out to be simpler code because each use is only iterated
over once and writes are added for the first access that reads it. We
need another iteration to identify escaping values (uses not in the
SCoP), which also makes the difference between such accesses more
obvious. As a side-effect, the order of scalar MemoryAccess can change.
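A rough sketch of the pull-based direction, using hypothetical helper names
(instructions(), getStmtFor, ensureValueRead, ensureValueWrite); the actual
ScopInfo interfaces differ:

  // Illustrative only: walk all uses inside the SCoP and pull in the scalar
  // accesses they require. Writes are added only once a read exists.
  void buildScalarDependences(Scop &S) {
    for (ScopStmt &UserStmt : S)
      for (Instruction &User : UserStmt.instructions())    // hypothetical
        for (Value *Op : User.operand_values()) {
          auto *Def = dyn_cast<Instruction>(Op);
          if (!Def)
            continue;                  // constants/arguments: nothing to do
          ScopStmt *DefStmt = S.getStmtFor(Def);            // hypothetical
          if (DefStmt == &UserStmt)
            continue;                  // same statement: no transfer needed
          UserStmt.ensureValueRead(Def);                    // hypothetical
          if (DefStmt)
            DefStmt->ensureValueWrite(Def);                 // hypothetical
        }
    // A separate iteration over uses outside the SCoP adds the writes needed
    // for escaping values.
  }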
Differential Revision: http://reviews.llvm.org/D15706
llvm-svn: 259987
This allows code such as:
  void multiple_types(char *Short, char *Float, char *Double) {
    for (long i = 0; i < 100; i++) {
      Short[i] = *(short *)&Short[2 * i];
      Float[i] = *(float *)&Float[4 * i];
      Double[i] = *(double *)&Double[8 * i];
    }
  }
To model such code we use, as the canonical element type of the modeled array,
the smallest element type of all original array accesses, if the type allocation
sizes are multiples of each other. Otherwise, we use a newly created iN type,
where N is the gcd of the allocation sizes of the types used in the accesses to
this array. Accesses with types larger than the canonical element type are
modeled as multiple accesses with the smaller type.
For example the second load access is modeled as:
{ Stmt_bb2[i0] -> MemRef_Float[o0] : 4i0 <= o0 <= 3 + 4i0 }
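As a standalone illustration of the fallback rule (an assumed helper, not Polly
code), N can be derived from the allocation sizes of all access types:

  #include <cstdint>
  #include <numeric>
  #include <vector>

  // Returns the canonical element size in bytes: the gcd of all allocation
  // sizes. If every size is a multiple of the smallest one, this gcd equals
  // that smallest size and the smallest type itself can be used; otherwise a
  // new iN type with N = 8 * gcd would be created.
  uint64_t canonicalElementSize(const std::vector<uint64_t> &AllocSizes) {
    uint64_t G = 0;
    for (uint64_t Size : AllocSizes)
      G = std::gcd(G, Size); // std::gcd(0, x) == x
    return G;
  }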
To support code-generating these memory accesses, we introduce a new method
getAccessAddressFunction that assigns each statement instance a single memory
location, the address we load from/store to. Currently we obtain this address by
taking the lexmin of the access function. We may consider keeping track of the
memory location more explicitly in the future.
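A minimal sketch of the lexmin idea using the isl C API (the real
getAccessAddressFunction may differ in detail):

  #include <isl/map.h>

  // Assign each statement instance a single address: the lexicographically
  // smallest element of its access relation.
  __isl_give isl_map *getAddressFunction(__isl_keep isl_map *AccessRelation) {
    return isl_map_lexmin(isl_map_copy(AccessRelation));
  }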
We currently do _not_ handle multi-dimensional arrays and also keep the
restriction of not supporting accesses where the offset expression is not a
multiple of the access element type size. This patch adds tests that ensure
we correctly invalidate a scop in case these accesses are found. Both types of
accesses can be handled using the very same model, but are left to be added in
the future.
We also move the initialization of the scop-context into the constructor to
ensure it is already available when invalidating the scop.
Finally, we add this as a new item to the 3.9 release notes.
Reviewers: jdoerfert, Meinersbur
Differential Revision: http://reviews.llvm.org/D16878
llvm-svn: 259784
Even though the commands still work, dragonegg has not been updated to work with
recent versions of LLVM. Consequently, only very old Polly versions (which we
do not support any more) can be used in this way.
This simplifies our documentation page.
llvm-svn: 259758
This includes some (optional) improvements to the isl scheduler, which we do not
use yet, as well as a fix for a bug previously also affecting Polly:
commit 662ee9b7d45ebeb7629b239d3ed43442e25bf87c
Author: Sven Verdoolaege <skimo@kotnet.org>
Date: Mon Jan 25 16:59:32 2016 +0100
isl_basic_map_realign: perform Gaussian elimination on result
Many parts of isl assume that Gaussian elimination has been
applied to the equality constraints. In particular singleton_extract_point
makes this assumption. The input to singleton_extract_point
may have undergone parameter alignment. This parameter alignment
(ultimately performed by isl_basic_map_realign) therefore
needs to make sure the result preserves this property
llvm-svn: 259757
We remove information for older versions of Polly and also shorten the overall
text. This should make it a lot easier for people to get to the important code
right away.
llvm-svn: 259658
We support now code such as:
  void multiple_types(char *Short, char *Float, char *Double) {
    for (long i = 0; i < 100; i++) {
      Short[i] = *(short *)&Short[2 * i];
      Float[i] = *(float *)&Float[4 * i];
      Double[i] = *(double *)&Double[8 * i];
    }
  }
To support such code we use the smallest element type of all original array
accesses as the element type of the modeled array. Accesses with larger types
are modeled as multiple accesses with the smaller type.
For example the second load access is modeled as:
{ Stmt_bb2[i0] -> MemRef_Float[o0] : 4i0 <= o0 <= 3 + 4i0 }
To support jscop-rewritable memory accesses we need each statement instance to
only be assigned a single memory location, which will be the address at which
we load the value. Currently we obtain this address by taking the lexmin of
the access function. We may consider keeping track of the memory location more
explicitly in the future.
llvm-svn: 259587
We create separate functions for fixed-size multi-dimensional, parametric-sized
multi-dimensional, as well as single-dimensional memory accesses to reduce the
complexity of a large monolithic function.
Suggested-by: Michael Kruse <llvm@meinersbur.de>
llvm-svn: 259522
There is no need to pass the size of the elements as the last size dimension
to ScopArrayInfo. This information is already available through the ElementType.
Tracking it twice is not only redundant but may result in inconsistencies.
llvm-svn: 259521
For schedule generation we assumed that the reverse post order traversal used by
the domain generation is sufficient; however, it is not. Once a loop is
discovered, we have to completely traverse it, before we can generate the
schedule for any block/region that is only reachable through a loop exiting
block.
To this end, we add a "loop stack" that will keep track of loops we
discovered during the traversal but have not yet traversed completely.
We will never visit a basic block (or region) outside the most recent
(thus smallest) loop in the loop stack but instead queue such blocks
(or regions) in a waiting list. If the waiting list is not empty and
(might) contain blocks from the most recent loop in the loop stack the
next block/region to visit is drawn from there, otherwise from the
reverse post order iterator.
We exploit the new property of loops being always completed before additional
loops are processed, by removing the LoopSchedules map and instead keep all
information in LoopStack. This clarifies that we indeed always only keep a
stack of in-process loops, but will never keep incomplete schedules for an
arbitrary set of loops. As a result, we can simplify some of the existing code.
This patch also adds some more documentation about how our schedule construction
works.
This fixes http://llvm.org/PR25879
This patch is a modified version of Johannes Doerfert's initial fix.
Differential Revision: http://reviews.llvm.org/D15679
llvm-svn: 259354
In https://llvm.org/svn/llvm-project/polly/trunk@251870 code was committed to
avoid a failure in the presence of infinite loops, but the test case committed
along with this change passes without the actual change. I looked back into the
code and also checked with the original committer (Johannes), but could not find
the reason why the code is needed. The introduction of LoopStacks for
buildSchedule in one of the next commits will make it even more clear that this
code is not needed, but I remove this ahead of time to facilitate bisecting in
case I missed something.
llvm-svn: 259347
darwin requires the additional linkages of...
LLVMBitReader
LLVMMCParser
LLVMObject
LLVMProfileData
LLVMTarget
LLVMVectorize
as darwin requires all of the weak undefined symbols in a library to be
resolved when linking it against an executable (unless
-Wl,-undefined,dynamic_lookup is used to override the default behavior of
-Wl,-undefined,error).
Contributed-by: Jack Howarth
llvm-svn: 259332
The autotools build system is based on and requires LLVM's autotools
build system to work, which has been deprecated and finally removed in
r258861. Consequently we also remove the autotools build system from
Polly.
Differential Revision: http://reviews.llvm.org/D16655
llvm-svn: 259041
Before adding a MK_Value READ MemoryAccess, check whether the read is
necessary or synthesizable. Synthesizable values are later generated by
the SCEVExpander and therefore do not need to be transferred
explicitly. This can happen because the check for synthesizability has
presumably been forgotten in the case where a phi's incoming value has
been defined in a different statement.
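Sketch of the added check; the signatures below are hypothetical stand-ins
(Polly's real canSynthesize takes additional analysis arguments):

  // Hypothetical signatures, for illustration only.
  bool canSynthesize(const llvm::Value *V);
  void ensureValueRead(ScopStmt &Stmt, llvm::Value *V);

  void addValueReadIfNeeded(ScopStmt &UserStmt, llvm::Value *V) {
    // Synthesizable values are regenerated by the SCEVExpander during code
    // generation, so no explicit MK_Value READ access is required for them.
    if (canSynthesize(V))
      return;
    ensureValueRead(UserStmt, V);
  }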
Differential Revision: http://reviews.llvm.org/D15687
llvm-svn: 258998
MemAccInst wraps the common members of LoadInst and StoreInst. This class is
now also used in:
- ScopInfo::buildMemoryAccess
- BlockGenerator::generateLocationAccessed
- ScopInfo::addArrayAccess
- Scop::buildAliasGroups
- Replace every use of polly::getPointerOperand
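The core idea of the wrapper, as a hedged sketch (the real MemAccInst has a
richer interface):

  #include "llvm/IR/Instructions.h"
  #include <cassert>
  using namespace llvm;

  // Thin view over a LoadInst or StoreInst that exposes their common members.
  class MemAccInstSketch {
    Instruction *I;

  public:
    explicit MemAccInstSketch(Instruction *Inst) : I(Inst) {
      assert((isa<LoadInst>(I) || isa<StoreInst>(I)) && "not a load or store");
    }

    Value *getPointerOperand() const {
      if (auto *LI = dyn_cast<LoadInst>(I))
        return LI->getPointerOperand();
      return cast<StoreInst>(I)->getPointerOperand();
    }

    Value *getValueOperand() const {
      if (auto *LI = dyn_cast<LoadInst>(I))
        return LI; // the loaded value is the load instruction itself
      return cast<StoreInst>(I)->getValueOperand();
    }
  };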
Reviewers: jdoerfert, grosser
Differential Revision: http://reviews.llvm.org/D16530
llvm-svn: 258947
Ensure that there is at most one phi write access per PHINode and
ScopStmt. In particular, this would be possible for non-affine
subregions with multiple exiting blocks. We replace multiple MAY_WRITE
accesses by one MUST_WRITE access. The written value is constructed
using a PHINode of all exiting blocks. The interpretation of the PHI
WRITE's "accessed value" changed from the incoming value to the PHI like
for PHI READs since there is no unique incoming value.
Because region simplification shuffles around PHI nodes -- particularly
with exit node PHIs -- the PHINodes present at analysis time do not always
exist anymore in the code generation pass. We instead remember the
incoming block/value pair in the MemoryAccess.
Differential Revision: http://reviews.llvm.org/D15681
llvm-svn: 258809
Keep at most one value read MemoryAccess per value and statement;
multiple generated loads do not have any additional effect. As one such
MemoryAccess can serve multiple uses within the statement, the
AccessInstruction property is no longer unique and is set to nullptr.
Differential Revision: http://reviews.llvm.org/D15510
llvm-svn: 258808
Ensure there is at most one write access per definition of an
llvm::Value. Keep track of already created value write accesses by using
a (dense) map.
Replace addValueWriteAccess by ensureValueStore, which can be used more
liberally without worrying about adding redundant accesses. It will be used,
e.g., in a logical counterpart for value reads -- ensureValueReload --
to ensure that the expected definition has been written when loading it.
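The deduplication idea in a few lines, with hypothetical member names
(ValueWrites, createValueWrite):

  #include "llvm/ADT/DenseMap.h"

  // One write access per definition: look it up first, create it only once.
  llvm::DenseMap<llvm::Instruction *, MemoryAccess *> ValueWrites;

  MemoryAccess *ensureValueStore(llvm::Instruction *Def, ScopStmt &DefStmt) {
    MemoryAccess *&Acc = ValueWrites[Def];
    if (!Acc)
      Acc = createValueWrite(DefStmt, Def); // hypothetical creation helper
    return Acc;
  }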
Differential Revision: http://reviews.llvm.org/D15483
llvm-svn: 258807
Both functions implement the same functionality, with the difference that
getNewScalarValue assumes that globals and out-of-scop scalars can be directly
reused without loading them from their corresponding stack slot. This is correct
for sequential code generation, but causes issues with outlining code e.g. for
OpenMP code generation. getNewValue handles such cases correctly.
Hence, we can replace getNewScalarValue with getNewValue. This is not only more
future proof, but also eliminates a bunch of code.
The only functionality that was available in getNewScalarValue that is lost
is the on-demand creation of scalar values. However, this is not necessary any
more as scalars are always loaded at the beginning of each basic block and will
consequently always be available when scalar stores are generated. As this was
not the case in older versions of Polly, it seems the on-demand loading is just
some older code that has not yet been removed.
Finally, generateScalarLoads also generated loads for values that are loop
invariant, available in GlobalMap and which are preferred over the ones loaded
in generateScalarLoads. Hence, we can just skip the code generation of such
scalar values, avoiding the generation of dead code.
Differential Revision: http://reviews.llvm.org/D16522
llvm-svn: 258799
Polly currently does not support irreducible control and it is probably not
worth supporting. This patch adds code that checks for irreducible control
and refuses regions containing irreducible control.
Polly traditionally had rather restrictive checks on the control flow structure
which would have refused irregular control, but within the last couple of months
most of the control flow restrictions have been removed. As part of this
generalization we accidentally allowed irregular control flow.
Contributed-by: Karthik Senthil and Ajith Pandel
llvm-svn: 258497
The test case we look at does not necessarily require irreducible control flow,
but a normal loop is sufficient to create a non-affine region containing more
than one basic block that dominates the exit node. We replace this irreducible
control flow with a normal loop for the following reasons:
1) This is easier to understand
2) We will subsequently commit a patch that ensures Polly does not process
irreducible control flow.
Within non-affine regions, we could possibly handle irreducible control flow.
llvm-svn: 258496
Polly recently got its own product in LLVM's bug tracker, which will make it
easier for people to file Polly bugs. This change updates the bugtracker links
on the Polly website.
llvm-svn: 258494
In Polly, after hoisting loop-invariant loads out of a loop, the alignment
information for the hoisted loads is missing; this patch restores it.
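A sketch of the fix, assuming the LLVM 3.8-era unsigned-alignment API (newer
LLVM uses llvm::Align); Builder, PointerValue and OriginalLoad are assumed names:

  // When materializing the hoisted load, carry the original load's alignment
  // over to the newly created pre-load.
  LoadInst *Preload = Builder.CreateLoad(PointerValue, "polly.preload");
  Preload->setAlignment(OriginalLoad->getAlignment());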
Contributed-by: Lawrence Hu <lawrence@codeaurora.org>
Differential Revision: http://reviews.llvm.org/D16160
llvm-svn: 258105
When importing a schedule, do not verify the load/store alignment of
scalar accesses. Scalar loads/store are always created newly in code
generation with no alignment restrictions. Previously, scalar alignment
was checked if the access instruction happened to be a LoadInst or
StoreInst, but only its array (MK_Array) access is relevant.
This will be implicitly unit-tested when the access instruction of a
value read can be nullptr.
Differential Revision: http://reviews.llvm.org/D15680
llvm-svn: 257904
ISL 0.16 will change how sets are printed which breaks 117 unit tests
that text-compare printed sets. This patch re-formats most of these unit
tests using a script and small manual editing on top of that. When
actually updating ISL, most work is done by just re-running the script
to adapt to the changed output.
Some tests that compare IR and tests with single CHECK-lines that can be
easily updated manually are not included here.
The re-format script will also be committed afterwards. The per-test
formatter invocation command-line options will not be added in the near
future because they are ad hoc and would overwrite the manual edits.
Ideally it also shouldn't be required anymore because ISL's set printing
has become more stable in 0.16.
Differential Revision: http://reviews.llvm.org/D16095
llvm-svn: 257851
should perform loop interchanges itself.
This also fixes a bug we see due to the "loop-interchange" pass producing
incorrect IR when compiling linpack-pc.c from the LLVM test-suite with
"-polly-position=before-vectorizer".
Reviewed-by: Tobias Grosser <tobias@grosser.es>
llvm-svn: 257495
Call assumeNoOutOfBound only in updateDimensionality to process situations
when new dimensions are added and new bounds checks are required.
Contributed-by: Tobias Grosser, Gareev Roman
llvm-svn: 257170
This change clarifies that for Not-NonAffine-SubRegions we actually iterate over
the subnodes and for both NonAffine-SubRegions and BasicBlocks, we perform the
schedule construction. As a result, the tree traversal becomes trivial, the
special case for a scop consisting just of a single non-affine region
disappears and the indentation of the code is reduced.
No functional change intended.
llvm-svn: 256940
Providing an explicit PointerLikeTypeTraits implementation became necessary
since LLVM started in https://llvm.org/svn/llvm-project/llvm/trunk@256620
to automatically derive the pointer alignment from the pointer element type,
which does not work for incomplete types as used by isl. To ensure our code
still compiles, we provide an instantiation of PointerLikeTypeTraits for isl_id
which assumes no minimal alignment. isl pointers are likely to have a "higher"
alignment. We can exploit this later in case this becomes performance relevant.
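Such a specialization looks roughly as follows (a sketch; the exact form and
location in Polly may differ):

  #include "llvm/Support/PointerLikeTypeTraits.h"

  struct isl_id; // incomplete type, as isl only exposes opaque pointers

  namespace llvm {
  template <> struct PointerLikeTypeTraits<isl_id *> {
    static void *getAsVoidPointer(isl_id *P) { return P; }
    static isl_id *getFromVoidPointer(void *P) {
      return static_cast<isl_id *>(P);
    }
    enum { NumLowBitsAvailable = 0 }; // conservatively assume no alignment
  };
  } // namespace llvm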
llvm-svn: 256650
At code generation, scalar reads are generated before the other
statement's instructions, respectively scalar writes after them, in
contrast to array accesses which are "executed" with the instructions
they are linked to. Therefore it makes sense to not map the scalar
accesses to a place of execution. Follow-up patches will also remove
some of the direct links from a scalar access to a single instruction,
such that only having array accesses in InstructionToAccess ensures
consistency.
Differential Revision: http://reviews.llvm.org/D13676
llvm-svn: 256298
We clarify that certain code is only executed if LSchedule is != nullptr.
Previously some of these functions have been executed, but they only passed
a nullptr through. This caused some confusion when reading the code.
llvm-svn: 256209
Besides improving the documentation and the code we now assert in case the input
is invalid (N < 0) and also no longer return a nullptr in case USet is
empty. This should make the code more readable.
llvm-svn: 256208
If a loop has a sufficiently large number of compute instructions in its loop
body, it is unlikely that our rewrite of the loop iterators introduces large
performance changes. As Polly can also apply beneficial optimizations (such
as parallelization) to such loop nests, we mark them as profitable.
This option is currently "disabled" by default, but can be used to run
experiments. If enabled by setting it e.g. to 40 instructions, we currently
see some compile-time increases on LNT without any significant run-time
changes.
llvm-svn: 256199
.. and add some documentation. We also simplify the code by dropping an early
check that is also covered by the later checks. This might have a small
compile time impact, but as the scops that are skipped are small we should
probably only add this back in the unlikely case that this has a notable
compile-time cost.
No functional change intended.
llvm-svn: 256149
As we already log an error when calling invalid, unprofitable scops are in
any case marked invalid, but returning immediately saves (a tiny bit of) compile
time and is consistent with our use of 'invalid' in the remainder of the file.
Found by inspection.
llvm-svn: 256140
Without this return we still log the incorrect array size (and do not detect
this scop), but we would unnecessarily continue to verify that access functions
are affine. As we do not need to do this, we can return right away and
consequently save compile time.
This issue was found by inspection.
llvm-svn: 256139
Instead of counting all array memory accesses associated with a load
instruction, we now explicitly check that the single array access that could
(potentially) be associated with a load instruction does not exist. This helps
to document the current behavior of Polly where load instructions can indeed
have at most one associated array access. In the unlikely case this changes
in the future, we add an assert for the case where two load accesses would
prevent us from returning a single memory access, but we still should communicate
that not all array memory accesses have been removed.
This addresses post-commit comments from Johannes Doerfert for commit 255776.
llvm-svn: 256136
Scops that contain many complex branches are likely to result in complex domain
conditions that consist of a large (> 100) number of conjuncts. Transforming
such domains is expensive and unlikely to result in efficient code. To avoid
long compile times we detect this case and skip such scops. In the future we may
improve this by either using non-affine subregions to hide such complex
condition structures or by exploiting in certain cases properties (e.g.,
dominance) that allow us to construct the domains of a scop in a way that
results in a smaller number of conjuncts.
Example of a code that results in complex iteration spaces:
           loop.header
          /    |    \         \
        A0    A2    A4         \
          \  /  \  /            \
           A1    A3              \
          /  \  /  \              |
        B0    B2    B4            |
          \  /  \  /              |
           B1    B3               ^
          /  \  /  \              |
        C0    C2    C4            |
          \  /  \  /             /
           C1    C3             /
              \  /             /
           loop backedge
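A hedged sketch of such a bail-out using the isl C API; the number of disjuncts
(basic sets) serves as a simple stand-in here, while the actual heuristic counts
conjuncts and its threshold may differ:

  #include <isl/set.h>

  // Reject domains that consist of too many pieces; transforming them is
  // expensive and unlikely to yield efficient code.
  static bool isTooComplex(__isl_keep isl_set *Domain, int MaxPieces) {
    return isl_set_n_basic_set(Domain) > MaxPieces;
  }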
llvm-svn: 256123
The patch fixes Bug 25759 produced by inappropriate handling of unsigned
maximum SCEV expressions by SCEVRemoveMax. Without a fix, we get an infinite
loop and a segmentation fault, if we try to process, for example,
'((-1 + (-1 * %b1)) umax {(-1 + (-1 * %yStart)),+,-1}<%.preheader>)'.
It also fixes a potential issue related to signed maximum SCEV expressions.
Tested-by: Roman Gareev <gareevroman@gmail.com>
Fixed-by: Tobias Grosser <tobias@grosser.es>
Differential Revision: http://reviews.llvm.org/D15563
llvm-svn: 255922
When running 'clang -O3 -mllvm -polly -mllvm -polly-show' we now only show the
CFGs of functions with at least one detected scop. For larger files/projects
this reduces the number of graphs printed significantly and is likely what
developers want to see. The new option -polly-view-all enforces all graphs to be
printed and the existing option -polly-view-only limits the graph printing to
functions that match a certain pattern.
This patch requires https://llvm.org/svn/llvm-project/llvm/trunk@255889 (and
vice versa) to compile correctly.
llvm-svn: 255891
Load instructions may possibly be related to multiple memory accesses, but we
are only interested in the array read access that describes the memory location
the load instruction loads from. By using getArrayAccessFor we ensure that we
always obtain the right memory access.
This issue was found by inspection without having a failing test case.
llvm-svn: 255716
getAccessFor does not guarantee a certain access to be returned in case an
instruction is related to multiple accesses. However, in the vector code
generation we want to know the stride of the array access of a store
instruction. By using getArrayAccessFor we ensure we always get the correct
memory access.
This patch fixes a potential bug, but I was unable to produce a failing test
case. Several existing test cases cover this code, but all of them already
passed by luck (or due to the specific, but not guaranteed, order in which we
build memory accesses).
llvm-svn: 255715
When generating scalar loads/stores separately the vector code has not been
updated. This commit adds code to generate scalar loads for vector code as well
as code to assert in case scalar stores are encountered within a vector loop.
llvm-svn: 255714
When rewriting the access functions of load/store statements, we are only
interested in the actual array memory location. The current code just took
the very first memory access, which could be a scalar or an array access. As
a result, we failed to update access functions even though this was requested
via .jscop.
llvm-svn: 255713
This change should not change the behavior of Polly today, but it allows
external constants to be remapped e.g. when targetting multiple LLVM modules.
llvm-svn: 255506
This reverts commit r255471.
Johannes raised in the post-commit review of r255471 the concern that PHI
writes in non-affine regions with two exiting blocks are not really MUST_WRITE,
but we just know that at least one out of the set of all possible PHI writes
will be executed. Modeling all PHI nodes as MUST_WRITEs is probably safe, but
adding the needed documentation for such a special case is probably not worth
the effort. Michael will be proposing a new patch that ensures only a single
PHI_WRITE is created for non-affine regions, which - besides other benefits -
should also allow us to use a single well-defined MUST_WRITE for such PHI
writes.
(This is not a full revert, but the condition and documentation have been
slightly extended)
llvm-svn: 255503
Before this commit, only the region's entry block was assumed to always
execute in a non-affine subregion. We replace this by a test whether it
dominates the exit block (this necessarily includes the entry block)
which should be more accurate.
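The new test in a sketch (standard LLVM APIs; variable names assumed):

  #include "llvm/Analysis/RegionInfo.h"
  #include "llvm/IR/Dominators.h"
  using namespace llvm;

  // A block of a non-affine subregion is executed on every path through the
  // subregion exactly if it dominates the subregion's exit block.
  static bool isAlwaysExecuted(BasicBlock *BB, Region *NonAffineSubRegion,
                               DominatorTree &DT) {
    return DT.dominates(BB, NonAffineSubRegion->getExit());
  }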
llvm-svn: 255473
LLVM's IR guarantees that a value definition occurs before any use, and
also the value of a PHI must be one of the incoming values, "written"
in one of the incoming blocks. Hence, such writes are never conditional
in the context of a non-affine subregion.
llvm-svn: 255471
Over time different vocabulary has been introduced to describe the different
memory objects in Polly, resulting in different - often inconsistent - naming
schemes in different parts of Polly. We now standardize this to the following
scheme:
  KindArray, KindValue, KindPHI, KindExitPHI
             |---------- isScalar ----------|
In most cases this naming scheme has already been used previously (this
minimizes changes and ensures we remain consistent with previous publications).
The main change is that we remove KindScalar to clarify the difference between
a scalar as a memory object of kind Value, PHI or ExitPHI and a value (former
KindScalar), which is a memory object modeling an llvm::Value.
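The scheme, written out as a sketch of the corresponding enum (illustrative
only; the real definitions live in ScopArrayInfo):

  enum MemoryKind { KindArray, KindValue, KindPHI, KindExitPHI };

  // Everything except KindArray models a scalar memory object.
  inline bool isScalarKind(MemoryKind Kind) { return Kind != KindArray; }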
We also move all documentation to the Kind* enum in the ScopArrayInfo class,
remove the second enum in the MemoryAccess class and update documentation to be
formulated from the perspective of the memory object, rather than the memory
access. The terms "Implicit"/"Explicit", formerly used to describe memory
accesses, have been dropped. From the perspective of memory accesses they
described the different memory kinds well - especially from the perspective of
code generation - but just from the perspective of a memory object it seems more
straightforward to talk about scalars and arrays, rather than explicit and
implicit arrays. The last comment is clearly subjective, though. A less
subjective reason to go for these terms is the historic use both in mailing list
discussions and publications.
llvm-svn: 255467
Use it to print "null" if a MemoryAccess's access relation is not
available instead of printing nothing.
Suggested-by: Johannes Doerfert
llvm-svn: 255466
Introduce a function getStmtForRegionNode() that returns the corresponding
ScopStmt of a RegionNode. We can use it to call the existing
ScopStmt::isEmpty() function instead of searching for accesses.
llvm-svn: 255465
When introducing separate control flow for the original and optimized code we
now introduce a special 'ExitingBlock':
               \   /
             EnteringBB
                 |
             SplitBlock---------\
            _____|_____         |
           /  EntryBB  \    StartBlock
           |  (region) |        |
           \_ExitingBB_/   ExitingBlock
                 |              |
             MergeBlock---------/
                 |
               ExitBB
               /    \
This 'ExitingBlock' contains code such as the final_reloads for scalars, which
previously were just added to whichever statement/loop_exit/branch-merge block
had been generated last. Having an explicit basic block makes it easier to
find these constructs when looking at the CFG.
llvm-svn: 255107
This update brings in improvements to isl's 'isolate' option that reduce the
number of code versions generated. This results in both code-size and compile
time reduction for outer loop vectorization.
Thanks to Roman Gareev and Sven Verdoolaege for working on this improvement.
llvm-svn: 254706
The motivation is to fix a compilation error with Visual Studio 2013.
See http://reviews.llvm.org/D14886.
Thanks to Sumanth Gundapaneni for finding the issue and suggesting a
patch.
llvm-svn: 254498
The script will checkout the most recent master from
http://repo.or.cz/isl.git into /tmp, create a distribution tarball, and
extract it as replacement of lib/External/isl. After that it can be
committed to the Polly repository.
llvm-svn: 254497
Acc==MA implies Acc->getAccessInstruction() == MA->getAccessInstruction().
Suggested as post-commit review for 254305 by Michael Kruse.
llvm-svn: 254327
The use of C++'s high-level iterator functionality instead of two while loops
and explicit iterator handling improves readability of this code.
Proposed-by: Michael Kruse <llvm@meinersbur.de>
Differential Revision: http://reviews.llvm.org/D15068
llvm-svn: 254305
Re-run canonicalization passes after Polly's code generation.
The set of passes currently added here are nearly all the passes between
--polly-position=early and --polly-position=before-vectorizer, i.e. all
passes that would usually run after Polly.
In order to run these only if Polly actually modified the code, we add a
function attribute "polly-optimized" to a function that contains
generated code. The cleanup pass is skipped if the function does not
have this attribute.
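A sketch of the mechanism, assuming the attribute spelling above:

  #include "llvm/IR/Function.h"

  // Set by the code generation pass on functions it changed ...
  void markFunctionAsOptimized(llvm::Function &F) {
    F.addFnAttr("polly-optimized");
  }

  // ... and queried by the cleanup pass to decide whether to run at all.
  bool needsCleanup(const llvm::Function &F) {
    return F.hasFnAttribute("polly-optimized");
  }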
The (legacy) PassManager has no support for running passes only under
some conditions. One could have wrapped all transformation passes to run
only when CodeGeneration changed the code, but the analyses would run
anyway. This patch creates an independent pass manager. The
disadvantages are that all analyses have to re-run even if preserved and
it does not honor compiler switches like the PassManagerBuilder does.
Differential Revision: http://reviews.llvm.org/D14333
llvm-svn: 254150
Previously, accesses that originate from PHI nodes in the exit block
were registered as SCALAR. In some contexts they are treated as scalars,
but it makes a difference in others. We used to check whether the
AccessInstruction is a terminator to differentiate the cases.
This patch introduces a MemoryAccess origin EXIT_PHI and a
ScopArrayInfo kind KIND_EXIT_PHI to make this case more explicit. No
behavioural change intended.
Differential Revision: http://reviews.llvm.org/D14688
llvm-svn: 254149
gfortran (and fortran in general?) does not compute the address of an array
element directly from the array sizes (e.g., %s0, %s1), but takes first the
maximum of the sizes and 0 (e.g., max(0, %s0)) before multiplying the resulting
value with the per-dimension array subscript expressions. To successfully
delinearize index expressions as we see them in fortran, we first filter 'smax'
expressions out of the SCEV expression, use them to guess array size parameters
and only then continue with the existing delinearization.
llvm-svn: 253995
Trying to build up access functions for any of these blocks is likely to fail,
as error blocks may contain invalid/non-representable instructions, and blocks
dominated by error blocks may reference such instructions, which will also cause
failures. As all of these blocks are anyhow assumed to not be executed, we can
just remove them early on.
This fixes http://llvm.org/PR25596
llvm-svn: 253818
The most interesting change for Polly in this isl update is 4d5654af which
in certain cases can speed up the construction of run-time checks from an isl
set consisting of several disjuncts significantly.
llvm-svn: 253794
At some point we enforced lcssa for the loop surrounding the entry block.
This is not only questionable, as it does not check any other loop, but also
no longer needed.
llvm-svn: 253789
In case the original parameter instruction does not have a name, but it comes
from a load instruction where the base pointer has a name we used the name of
the load instruction to give some more intuition of where the parameter came
from. To ensure this works also through GEPs which may have complex offsets,
we originally just dropped the offsets and _only_ used the base pointer name.
As this can result in multiple parameters getting the same name, we now prefix
the parameter ID to ensure parameter names are unique. This will make it easier
to understand debug output.
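The naming idea as a small standalone sketch (the exact prefix format Polly
uses may differ):

  #include <string>

  // Prefix the parameter ID so two parameters derived from loads of the same
  // base pointer still get distinct names, e.g. "p_0_A" and "p_3_A".
  std::string getParameterName(unsigned ParamId, const std::string &BaseName) {
    return "p_" + std::to_string(ParamId) + "_" + BaseName;
  }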
This change does not affect correctness, as parameter IDs (even of the same
name) can always be distinguished through the SCEV pointer stored inside them.
llvm-svn: 253330
Without this change we may start to refuse scops in larger compilation units
just because a lot of code has already been compiled earlier.
Found by inspection. I do not yet have a good test case for this.
llvm-svn: 253050
Only when we check for wrapping do we want to use the store size; in all
other cases we now use the alloc size.
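In terms of the DataLayout API this amounts to the following (a sketch; names
are assumed):

  #include "llvm/IR/DataLayout.h"
  #include "llvm/IR/Type.h"

  // Store size: bytes actually written by a store of this type. Alloc size:
  // size including padding, i.e. the distance between two array elements.
  uint64_t getSizeInBytes(const llvm::DataLayout &DL, llvm::Type *ElementType,
                          bool CheckForWrapping) {
    return CheckForWrapping ? DL.getTypeStoreSize(ElementType)
                            : DL.getTypeAllocSize(ElementType);
  }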
Suggested by: Tobias Grosser <tobias@grosser.es>
llvm-svn: 252941