To compute domain conditions for conditionals, we now traverse the region
once in ScopInfo and build the domains for each block in the region. The
SCoP statements can then use these constraints when they build their
domains.
The reason behind this change is twofold:
1) This removes a big chunk of preprocessing logic from the
TempScopInfo, namely the Conditionals we used to build there.
In addition to being moved, this logic is also simplified: instead
of walking up the dominance tree for each basic block in the
region (as we did before), we now traverse the region only
once to collect the domain conditions (see the sketch below).
2) This is the first step towards isl-based domain creation. The
second step will traverse the region in a similar way, but it will
propagate back-edge conditions. Once both are in place, this
conditional handling will allow loops with multiple exits, given
some additional logic.
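Below is a minimal sketch of the traversal, assuming a toy CFG
representation (struct Block, buildDomains and the symbolic condition
strings are illustrative and do not correspond to Polly's actual classes):
visiting the blocks of the region once, in reverse post-order, is enough to
derive each block's domain from the already-computed domains of its
predecessors.

#include <stdio.h>

#define MAX_PREDS 4

struct Block {
  const char *Name;
  int NumPreds;
  int Preds[MAX_PREDS];             /* indices of predecessor blocks         */
  const char *EdgeConds[MAX_PREDS]; /* branch condition on the incoming edge */
};

/* Blocks are assumed to be stored in reverse post-order, so every
 * predecessor is visited before its successors and a single pass over the
 * region suffices. The domain of a block is the union, over all incoming
 * edges, of the predecessor's domain intersected with that edge's branch
 * condition. Here the domains are only printed symbolically. */
static void buildDomains(const struct Block Blocks[], int NumBlocks) {
  for (int BB = 0; BB < NumBlocks; BB++) {
    printf("domain(%s) =", Blocks[BB].Name);
    if (Blocks[BB].NumPreds == 0) {
      printf(" true (region entry)\n");
      continue;
    }
    for (int P = 0; P < Blocks[BB].NumPreds; P++)
      printf("%s (domain(%s) and %s)", P ? " or" : "",
             Blocks[Blocks[BB].Preds[P]].Name, Blocks[BB].EdgeConds[P]);
    printf("\n");
  }
}

int main(void) {
  /* A diamond-shaped region for "if (x > 0) { then } else { else }". */
  const struct Block Blocks[] = {
      {"entry", 0, {0}, {0}},
      {"then", 1, {0}, {"x > 0"}},
      {"else", 1, {0}, {"x <= 0"}},
      {"merge", 2, {1, 2}, {"true", "true"}},
  };
  buildDomains(Blocks, 4);
  return 0;
}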
Reviewers: grosser
Differential Revision: http://reviews.llvm.org/D12428
llvm-svn: 246398
I just learned that target triples prevent test cases from being run on other
architectures. Polly's test cases have so far been sufficiently target
independent to not require any target triples. Hence, we drop them.
llvm-svn: 235384
In Polly we used both the term 'scattering' and the term 'schedule' to describe
the execution order of a statement without actually distinguishing between them.
We now uniformly use the term 'schedule' for the execution order. This
corresponds to the terminology of isl.
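For illustration (a made-up example, not from this patch), an isl schedule
maps statement instances to multi-dimensional execution times:

{ Stmt[i, j] -> [i, j] }

executes the instances of Stmt in lexicographic order of (i, j); mapping them
to [j, i] instead would interchange the two loops.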
History: CLooG introduced the term scattering as the generated code can be used
as a sequential execution order (schedule) or as a parallel dimension
enumerating different threads of execution (placement). In Polly and/or isl the
term placement was never used, but we uniformly refer to an execution order as a
schedule and only later introduce parallelism. When doing so we do not talk
about specific placement dimensions.
llvm-svn: 235380
Scops that only read are generally uninteresting, and scops that only write
are most likely initializations, where there is also little to optimize. To
avoid wasting compile time, we bail out early.
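For illustration (a made-up example, not from this patch), a write-only scop
is typically a plain initialization loop:

/* Every access in this scop is a store; there is little the polyhedral
 * optimizer could improve, so bailing out early saves compile time. */
void init(long n, double A[n]) {
  for (long i = 0; i < n; i++)
    A[i] = 0.0;
}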
Differential Revision: http://reviews.llvm.org/D7735
llvm-svn: 229820
Schedule dimensions that have the same constant value across all statements do
not carry any information, but, due to the increased dimensionality of the
schedule, they still cost compile time. To avoid paying this cost, we remove
such constant dimensions where possible.
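As a made-up illustration, consider a schedule whose outermost dimension has
the constant value 0 for every statement:

{ Stmt_A[i] -> [0, i, 0]; Stmt_B[i] -> [0, i, 1] }

The outermost dimension is identical across all statements and carries no
information, so it can be dropped:

{ Stmt_A[i] -> [i, 0]; Stmt_B[i] -> [i, 1] }

The innermost dimension, in contrast, distinguishes the two statements and
has to stay.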
llvm-svn: 225067
SCEV-based code generation has been the default for two weeks after having
been tested for a long time. We now drop support for the non-SCEV-based code
generation.
llvm-svn: 222978
As our delinearization works optimistically, we sometimes need run-time
checks that verify our optimistic assumptions. A simple example is the
following code:
void foo(long n, long m, long o, double A[n][m][o]) {
  for (long i = 0; i < 100; i++)
    for (long j = 0; j < 150; j++)
      for (long k = 0; k < 200; k++)
        A[i][j][k] = 1.0;
}
After clang has linearized the access to A and we have delinearized it again
to A[i][j][k], we need to ensure that we do not access the delinearized array
out of bounds (this information is not available in LLVM-IR). Hence, we
need to verify the following constraints at run-time:
CHECK: Assumed Context:
CHECK: [o, m] -> { : m >= 150 and o >= 200 }
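Conceptually (a sketch only, not the code Polly actually emits;
foo_optimized and foo_original are hypothetical names), such a run-time
check selects between the optimistically delinearized version and the
unmodified code:

/* Execute the delinearized version only if the assumed context holds. */
if (m >= 150 && o >= 200)
  foo_optimized(n, m, o, A); /* uses the A[i][j][k] accesses  */
else
  foo_original(n, m, o, A);  /* conservative, linearized code */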
llvm-svn: 212198
We currently do this only for test cases where integer offsets clearly access
array dimensions out of bounds.
-;   for (long i = 0; i < n; i++)
-;     for (long j = 0; j < m; j++)
-;       for (long k = 0; k < o; k++)
+;   for (long i = 0; i < n - 3; i++)
+;     for (long j = 4; j < m; j++)
+;       for (long k = 0; k < o - 7; k++)
;         A[i+3][j-4][k+7] = 1.0;
This will be helpful if we later want to simplify the access functions under the
assumption that they do not access memory out of bounds.
llvm-svn: 210179
SCoP-invariant parameters that differ only in their start value would prevent
parameter sharing. For example, when compiling the following C code:
void foo(float *input) {
  for (long j = 0; j < 8; j++) {
    // SCoP begin
    for (long i = 0; i < 8; i++) {
      float x = input[j * 64 + i + 1];
      input[j * 64 + i] = x * x;
    }
  }
}
Polly would create two parameters for these memory accesses:
p_0: {0,+,256}
p_1: {4,+,256}
[j * 64 + i + 1] => MemRef_input[o0] : 4o0 = p_1 + 4i0
[j * 64 + i] => MemRef_input[o0] : 4o0 = p_0 + 4i0
These parameters differ only in their start value. To enable parameter sharing,
we split the start value off the SCEVAddRecExpr, so both accesses share a
single parameter that always has a zero start value:
p_0: {0,+,256}<%for.cond1.preheader>
[j * 64 + i + 1] => MemRef_input[o0] : 4o0 = 4 + p_0 + 4i0
[j * 64 + i] => MemRef_input[o0] : 4o0 = p_0 + 4i0
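The underlying rewrite is the standard add-rec identity (shown here for the
old second parameter):

{4,+,256}<%for.cond1.preheader> = 4 + {0,+,256}<%for.cond1.preheader>

The constant start value 4 moves into the access function, and both accesses
can then refer to the same zero-start parameter.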
Such a translation can make the polly-dependence analysis much faster.
Contributed-by: Star Tan <tanmx_star@yeah.net>
llvm-svn: 187728