This patch cleans up the rejection log handling during the
ScopDetection. It consists of two interconnected parts:
- We keep all detection contexts for a function in order to provide
more information to the user, e.g., about the rejection of
extended/intermediate regions.
- We remove the mutable "RejectLogs" member as the information is
available through the detection contexts.
llvm-svn: 269323
Regions with one affine loop can be profitable if the loop is
distributable. To this end we will allow them to be treated as
profitable if they contain at least two non-trivial basic blocks.
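A hypothetical example of the kind of region this targets (function and variable
names are made up): a single affine loop whose body spans two non-trivial basic
blocks, which loop distribution could later split apart.

void split_me(float *A, float *B, long N) {
  for (long i = 0; i < N; i++) {
    A[i] = A[i] * 2.0f;   /* first non-trivial block */
    if (B[i] > 0.0f)      /* the conditional introduces a second block */
      B[i] = A[i] + 1.0f;
  }
}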
llvm-svn: 269064
With this patch we will optimistically assume that the result of an unsigned
comparison is the same as the result of the same comparison interpreted as
signed.
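A minimal, hypothetical sketch of the situation: the loop condition is an
unsigned comparison, which is now optimistically modeled as if both operands
were signed.

void zero_fill(float *A, unsigned N) {
  for (unsigned i = 0; i < N; i++)   /* unsigned '<' treated as signed */
    A[i] = 0.0f;
}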
llvm-svn: 267559
In r247147 we disabled pointer expressions because the IslExprBuilder did not
fully support them. This patch reintroduces them by simply treating them as
integers. The only special handling for pointers that is left detects the
comparison of two address_of operands and uses an unsigned compare.
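A hypothetical example of the remaining special case: the loop condition
compares two address-of expressions, which is modeled as an unsigned integer
comparison.

void clear(char *A, long N) {
  for (char *P = &A[0]; P < &A[N]; P++)   /* compare of two address_of operands */
    *P = 0;
}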
llvm-svn: 265894
If a loop has no exiting blocks, the covering region we use during
schedule generation might not cover that loop properly. For now we bail
out, as we would not optimize these loops anyway.
llvm-svn: 265280
If a loop has no exiting blocks, the covering region we use during
schedule generation might not cover that loop properly. For now we bail
out, as we would not optimize these loops anyway.
llvm-svn: 265260
... instead of hardcoding something that happened to be free at some point. This
fixes a crash triggered by r265084, where the diagnostic IDs were shifted in a
way that left our hardcoded ID without any implementation. Our ID was likely
already wrong earlier on, but this time we really crashed nicely.
llvm-svn: 265114
This fixes PR27035. While we now exclude MemIntrinsics from the
polyhedral model if they would access "null", we could exploit this
even more, e.g., by removing all parameter combinations that would lead
to the execution of this statement from the context.
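A hypothetical illustration (the precise condition under which the access is
considered to touch "null" is not spelled out here): a memset whose modeled
access range may include the null address is now excluded from the polyhedral
model.

#include <string.h>

void init(char *A, long N) {
  memset(A, '$', N);   /* excluded if the access range would include "null" */
}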
llvm-svn: 264284
This might be useful to evaluate the benefit of us handling modref function
calls. Also, a new bug triggered by modref function calls was recently
reported (http://llvm.org/PR27035). To ensure the same issue does not cause
trouble for other people, we temporarily disable this until the bug is
resolved.
llvm-svn: 264140
The scope will be required in the following fix. This commit separates
the large changes that do not change behaviour from the small, but
functional change.
llvm-svn: 262664
Polly recognizes affine loops that ScalarEvolution does not, in
particular those with loop conditions that depend on hoisted invariant
loads. Check for SCEVAddRec dependencies on such loops and do not
consider their exit values as synthesizable because SCEVExpander would
generate them as expressions that depend on the original induction
variables. These are not available in generated code.
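A hypothetical example: the loop bound is loaded from memory. With invariant
load hoisting Polly can model the loop as affine even though ScalarEvolution
alone cannot, and the loop's exit value must then not be treated as
synthesizable.

void accumulate(float *A, long *BoundPtr) {
  for (long i = 0; i < *BoundPtr; i++)   /* bound is a hoisted invariant load */
    A[i] += 1.0f;
}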
llvm-svn: 262404
removeCachedResults deletes the DetectionContext from
DetectionContextMap such that it cannot be used anymore.
Unfortunately, invalid<ReportUnprofitable> and RejectLogs.insert still
use it. Because the memory is part of a map and not returned to the OS
immediately, the only observable effect was a memory leak caused by
reference counters not being decreased: the second call to
removeCachedResults does not remove the DetectionContext because it has
already been removed.
Fix by not removing the DetectionContext prematurely. The second call to
removeCachedResults will handle it anyway.
llvm-svn: 262235
Check the ModRefBehaviour of functions in order to decide whether or
not a call instruction might be acceptable.
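A hypothetical example, assuming 'norm' is annotated as read-only: alias
analysis then reports that the call does not modify memory, so a region
containing it may still be accepted.

float norm(const float *V, long N) __attribute__((pure));

void scale(float *A, const float *V, long N) {
  for (long i = 0; i < N; i++)
    A[i] *= norm(V, N);   /* the call only reads memory */
}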
Differential Revision: http://reviews.llvm.org/D5227
llvm-svn: 261866
From now on we bail out only if a non-trivial alias group contains a non-affine
access, not whenever we discover aliasing while non-affine accesses are allowed.
llvm-svn: 261863
This patch adds support for memcpy, memset and memmove intrinsics. They are
represented as one (memset) or two (memcpy, memmove) memory accesses in the
polyhedral model. These accesses have an access range that describes the
summarized effect of the intrinsic, i.e.,
memset(&A[i], '$', N);
is represented as a write access from A[i] to A[i+N].
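Sketched in isl-style map notation (statement and array names are illustrative,
assuming N bytes are written starting at A[i]):
[N] -> { Stmt_body[i0] -> MemRef_A[o0] : i0 <= o0 < i0 + N }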
Differential Revision: http://reviews.llvm.org/D5226
llvm-svn: 261489
First support for this feature was committed in r259784. Support for
loop invariant load hoisting with different types was added by
Johannes Doerfert in r260045 and r260886.
llvm-svn: 260965
Eliminate the global variable "InsnToMemAcc" to make Scop/ScopInfo more
portable, such that we can safely use them in a CallGraphSCC pass.
Differential Revision: http://reviews.llvm.org/D17238
llvm-svn: 260863
This reverts commit https://llvm.org/svn/llvm-project/polly/trunk@260853
We unfortunately still have two bugs left which only show up with
-polly-process-unprofitable and which I forgot to test before committing.
llvm-svn: 260854
First support for this feature was committed in r259784. Support for
loop invariant load hoisting with different types was added by Johannes
Doerfert in r260045. This fixed the last known bug.
llvm-svn: 260853
We also disable this feature by default, as there are still some issues in
combination with invariant load hoisting that slipped through my initial
testing.
llvm-svn: 260025
This allows code such as:
void multiple_types(char *Short, char *Float, char *Double) {
  for (long i = 0; i < 100; i++) {
    Short[i] = *(short *)&Short[2 * i];
    Float[i] = *(float *)&Float[4 * i];
    Double[i] = *(double *)&Double[8 * i];
  }
}
To model such code we use, as the canonical element type of the modeled array,
the smallest element type of all original array accesses, if the type allocation
sizes are multiples of each other. Otherwise, we use a newly created iN type,
where N is the gcd of the allocation sizes of the types used in the accesses to
this array. Accesses with types larger than the canonical element type are
modeled as multiple accesses with the smaller type.
For example the second load access is modeled as:
{ Stmt_bb2[i0] -> MemRef_Float[o0] : 4i0 <= o0 <= 3 + 4i0 }
To support code-generating these memory accesses, we introduce a new method
getAccessAddressFunction that assigns each statement instance a single memory
location, the address we load from/store to. Currently we obtain this address by
taking the lexmin of the access function. We may consider keeping track of the
memory location more explicitly in the future.
We currently do _not_ handle multi-dimensional arrays and also keep the
restriction of not supporting accesses where the offset expression is not a
multiple of the access element type size. This patch adds tests that ensure
we correctly invalidate a scop in case these accesses are found. Both types of
accesses can be handled using the very same model, but are left to be added in
the future.
We also move the initialization of the scop-context into the constructor to
ensure it is already available when invalidating the scop.
Finally, we add this as a new item to the 2.9 release notes.
Reviewers: jdoerfert, Meinersbur
Differential Revision: http://reviews.llvm.org/D16878
llvm-svn: 259784
We now support code such as:
void multiple_types(char *Short, char *Float, char *Double) {
  for (long i = 0; i < 100; i++) {
    Short[i] = *(short *)&Short[2 * i];
    Float[i] = *(float *)&Float[4 * i];
    Double[i] = *(double *)&Double[8 * i];
  }
}
To support such code, we use the smallest element type of all original array
accesses as the element type of the modeled array. Accesses with larger types
are modeled as multiple accesses with the smaller type.
For example the second load access is modeled as:
{ Stmt_bb2[i0] -> MemRef_Float[o0] : 4i0 <= o0 <= 3 + 4i0 }
To support jscop-rewritable memory accesses we need each statement instance to
only be assigned a single memory location, which will be the address at which
we load the value. Currently we obtain this address by taking the lexmin of
the access function. We may consider keeping track of the memory location more
explicitly in the future.
llvm-svn: 259587
MemAccInst wraps the common members of LoadInst and StoreInst. We also use
this class in:
- ScopInfo::buildMemoryAccess
- BlockGenerator::generateLocationAccessed
- ScopInfo::addArrayAccess
- Scop::buildAliasGroups
- Replace every use of polly::getPointerOperand
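A rough sketch (not the actual Polly class) of what such a wrapper over the
shared LoadInst/StoreInst interface can look like:

#include "llvm/IR/Instructions.h"
using namespace llvm;

class MemAccInstSketch {   // illustrative only
  Instruction *Inst;

public:
  MemAccInstSketch(LoadInst &LI) : Inst(&LI) {}
  MemAccInstSketch(StoreInst &SI) : Inst(&SI) {}

  // The pointer operand both instruction kinds have in common.
  Value *getPointerOperand() const {
    if (auto *LI = dyn_cast<LoadInst>(Inst))
      return LI->getPointerOperand();
    return cast<StoreInst>(Inst)->getPointerOperand();
  }

  // The value read (the load itself) or written (the store's value operand).
  Value *getValueOperand() const {
    if (auto *LI = dyn_cast<LoadInst>(Inst))
      return LI;
    return cast<StoreInst>(Inst)->getValueOperand();
  }
};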
Reviewers: jdoerfert, grosser
Differential Revision: http://reviews.llvm.org/D16530
llvm-svn: 258947
Polly currently does not support irreducible control flow and it is probably not
worth supporting. This patch adds code that checks for irreducible control flow
and refuses regions containing it.
Polly traditionally had rather restrictive checks on the control flow structure
which would have refused irregular control, but within the last couple of months
most of the control flow restrictions have been removed. As part of this
generalization we accidentally allowed irregular control flow.
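A hypothetical example of irreducible control flow: the cycle formed by the two
labels can be entered at either label, so it has no unique loop header, and a
region containing it is now refused.

void irreducible(int A, int B) {
  if (A)
    goto L1;
  goto L2;
L1:
  if (B-- > 0)
    goto L2;
  return;
L2:
  goto L1;
}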
Contributed-by: Karthik Senthil and Ajith Pandel
llvm-svn: 258497
If a loop has a sufficiently large number of compute instructions in its loop
body, it is unlikely that our rewrite of the loop iterators introduces large
performance changes. As Polly can also apply beneficial optimizations (such
as parallelization) to such loop nests, we mark them as profitable.
This option is currently "disabled" by default, but can be used to run
experiments. If enabled by setting it e.g. to 40 instructions, we currently
see some compile-time increases on LNT without any significant run-time
changes.
llvm-svn: 256199
.. and add some documentation. We also simplify the code by dropping an early
check that is also covered by the later checks. This might have a small
compile time impact, but as the scops that are skipped are small we should
probably only add this back in the unlikely case that this has a notable
compile-time cost.
No functional change intended.
llvm-svn: 256149
As we already log an error when calling invalid, unprofitable scops are in
any case marked invalid, but returning immediately saves (a tiny bit of) compile
time and is consistent with our use of 'invalid' in the remainder of the file.
Found by inspection.
llvm-svn: 256140
Without this return we still log the incorrect array size (and do not detect
this scop), but we would unnecessarily continue to verify that access functions
are affine. As we do not need to do this, we can return right away and
consequently save compile time.
This issue was found by inspection.
llvm-svn: 256139
The patch fixes Bug 25759 produced by inappropriate handling of unsigned
maximum SCEV expressions by SCEVRemoveMax. Without a fix, we get an infinite
loop and a segmentation fault, if we try to process, for example,
'((-1 + (-1 * %b1)) umax {(-1 + (-1 * %yStart)),+,-1}<%.preheader>)'.
It also fixes a potential issue related to signed maximum SCEV expressions.
Tested-by: Roman Gareev <gareevroman@gmail.com>
Fixed-by: Tobias Grosser <tobias@grosser.es>
Differential Revision: http://reviews.llvm.org/D15563
llvm-svn: 255922
gfortran (and Fortran in general?) does not compute the address of an array
element directly from the array sizes (e.g., %s0, %s1), but takes first the
maximum of the sizes and 0 (e.g., max(0, %s0)) before multiplying the resulting
value with the per-dimension array subscript expressions. To successfully
delinearize index expressions as we see them in Fortran, we first filter 'smax'
expressions out of the SCEV expression, use them to guess array size parameters
and only then continue with the existing delinearization.
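A hypothetical sketch of the address computation this refers to (the exact
subscript layout is made up): the per-dimension size is first clamped with a
maximum, which shows up as an 'smax' in the resulting SCEV expression.

void fortran_like(double *Base, long s0, long i, long j) {
  long Size0 = s0 > 0 ? s0 : 0;   /* appears as smax(0, %s0) */
  Base[j * Size0 + i] = 42.0;
}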
llvm-svn: 253995
At some point we enforced lcssa for the loop surrounding the entry block.
This is not only questionable, as it does not check any other loop, but also
no longer needed.
llvm-svn: 253789
r252713 introduced a couple of regressions due to later basic blocks referring
to instructions defined in error blocks which have not yet been modeled.
This commit is currently just encoding limitations of our modeling and code
generation backends to ensure correctness. In theory, we should be able to
generate and optimize such regions, as everything that is dominated by an error
region is assumed to not be executed anyhow. We currently just lack the code
to make this happen in practice.
llvm-svn: 252725
Previously, we just skipped error blocks during scop construction. With
this change we make sure we can construct domains for error blocks such that
these domains can be forwarded to subsequent basic blocks.
This change ensures that basic blocks that post-dominate and are dominated by
a basic block that branches to an error condition have the very same iteration
domain as the branching basic block. Before this change, we would construct
a domain that excludes all error conditions. Such domains could become _very_
complex and were undesirable to build.
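A hypothetical example of the structure described above: the block with the
error check dominates, and is post-dominated by, the statement following it, so
with this change that statement gets the very same iteration domain as the
branching block, instead of a domain with the error condition subtracted.

#include <stdlib.h>

void guarded(float *A, long N, long M) {
  for (long i = 0; i < N; i++) {
    if (i >= M)
      abort();            /* error block */
    A[i] = A[i] + 1.0f;   /* same domain as the branching block */
  }
}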
Another solution would have been to drop these constraints using a
dominance/post-dominance check instead of modeling the error blocks. Such
a solution could also work in case of unreachable statements or infinite
loops in the scop. However, as we currently (I believe incorrectly) model
unreachable basic blocks in the post-dominance tree, such a solution is not
yet feasible and first requires a change to LLVM's post-dominance tree
construction.
This commit addresses the most severe compile-time issue reported in:
http://llvm.org/PR25458
llvm-svn: 252713