To compute domain conditions for conditionals, we now traverse the region
in ScopInfo once and build the domain for each block in the region. The
SCoP statements can then use these constraints when they build their own
domains.
The reason behind this change is twofold:
1) This removes a big chunk of preprocessing logic from the
   TempScopInfo, namely the Conditionals we used to build there.
   In addition to being moved, this logic is also simplified: instead
   of walking up the dominance tree for each basic block in the
   region (as we did before), we now traverse the region only
   once to collect the domain conditions.
2) This is the first step towards isl-based domain creation. The
   second step will traverse the region similarly to this one,
   but it will propagate back-edge conditions. Once both are in
   place, this conditional handling will allow us to support
   multiple-exit loops with little additional logic.
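A small, self-contained sketch of this single-pass idea is given below. It is
not Polly's actual implementation: Block, Cond, and Domain are hypothetical
stand-ins for the LLVM and isl types used in ScopInfo, and conditions are kept
as plain symbolic strings.

#include <map>
#include <set>
#include <string>
#include <utility>
#include <vector>

using Cond = std::string;      // a symbolic branch condition, e.g. "i < N"
using Domain = std::set<Cond>; // conjunction of conditions guarding a block

struct Block {
  std::string Name;
  // Successor edges, each annotated with the condition under which it is taken.
  std::vector<std::pair<Block *, Cond>> Successors;
};

// Build the domain of every block in a single traversal. RPO is assumed to
// list the region's blocks in reverse post-order, so every predecessor is
// visited before its successors.
std::map<Block *, Domain> buildDomains(const std::vector<Block *> &RPO) {
  std::map<Block *, Domain> Domains;
  if (RPO.empty())
    return Domains;
  Domains[RPO.front()] = Domain(); // the region entry executes unconditionally

  for (Block *BB : RPO) {
    Domain Dom = Domains[BB];
    for (const auto &Edge : BB->Successors) {
      Block *Succ = Edge.first;
      Domain EdgeDom = Dom;
      EdgeDom.insert(Edge.second);   // restrict by the edge's branch condition
      auto It = Domains.find(Succ);
      if (It == Domains.end()) {
        Domains[Succ] = EdgeDom;     // first incoming edge seen so far
      } else {
        // Several incoming edges: keep only the conditions that hold on all
        // of them, i.e. over-approximate the union of the edge domains.
        Domain Joined;
        for (const Cond &C : It->second)
          if (EdgeDom.count(C))
            Joined.insert(C);
        It->second = Joined;
      }
    }
  }
  return Domains;
}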
Reviewers: grosser
Differential Revision: http://reviews.llvm.org/D12428
llvm-svn: 246398
I just learned that target triples prevent test cases from being run on other
architectures. Polly's test cases have so far been sufficiently target
independent not to require any target triples. Hence, we drop them.
llvm-svn: 235384
In Polly we used both the term 'scattering' and the term 'schedule' to describe
the execution order of a statement without actually distinguishing between them.
We now uniformly use the term 'schedule' for the execution order. This
corresponds to the terminology of isl.
History: CLooG introduced the term scattering as the generated code can be used
as a sequential execution order (schedule) or as a parallel dimension
enumerating different threads of execution (placement). In Polly and isl the
term placement was never used; instead, we uniformly refer to an execution
order as a schedule and only later introduce parallelism. When doing so we do
not talk about specific placement dimensions.
llvm-svn: 235380
Scops that only read seem generally uninteresting, and scops that only write
are most likely initializations where there is also little to optimize. To
avoid wasting compile time, we bail out early.
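A minimal sketch of this bail-out criterion, under the assumption that the
SCoP's accesses are available as a flat list of read/write kinds (AccessKind
and isInterestingScop are illustrative names, not Polly's API):

#include <vector>

// AccessKind is a hypothetical stand-in for the classification Polly derives
// from its memory accesses.
enum class AccessKind { Read, Write };

// A SCoP is only worth optimizing if it contains both reads and writes:
// read-only regions have nothing to transform and write-only regions are
// typically plain initializations.
bool isInterestingScop(const std::vector<AccessKind> &Accesses) {
  bool HasRead = false, HasWrite = false;
  for (AccessKind K : Accesses) {
    HasRead = HasRead || K == AccessKind::Read;
    HasWrite = HasWrite || K == AccessKind::Write;
  }
  return HasRead && HasWrite;
}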
Differential Revision: http://reviews.llvm.org/D7735
llvm-svn: 229820
Schedule dimensions that have the same constant value across all statements do
not carry any information, but due to the increased dimensionality of the
schedule they cost compile time. To avoid paying this cost, we remove constant
dimensions where possible.
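The transformation can be sketched without isl by assuming, for illustration
only, that each statement's schedule is a plain integer vector of equal
length; Polly performs the equivalent operation on isl schedule maps:

#include <cstddef>
#include <vector>

// One schedule vector per statement; entry D holds the statement's value in
// schedule dimension D.
using Schedule = std::vector<long>;

// Drop every dimension whose value is the same constant for all statements:
// such a dimension does not affect the relative execution order.
void removeConstantDimensions(std::vector<Schedule> &Schedules) {
  if (Schedules.empty())
    return;
  std::size_t NumDims = Schedules.front().size();
  std::vector<Schedule> Pruned(Schedules.size());

  for (std::size_t D = 0; D < NumDims; ++D) {
    bool IsConstant = true;
    for (const Schedule &S : Schedules)
      IsConstant = IsConstant && S[D] == Schedules.front()[D];
    if (IsConstant)
      continue;                // identical value everywhere: drop dimension D
    for (std::size_t I = 0; I < Schedules.size(); ++I)
      Pruned[I].push_back(Schedules[I][D]);
  }
  Schedules = Pruned;
}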
llvm-svn: 225067
As our delinearization works optimistically, we sometimes need run-time
checks that verify our optimistic assumptions. A simple example is the
following code:
void foo(long n, long m, long o, double A[n][m][o]) {
  for (long i = 0; i < 100; i++)
    for (long j = 0; j < 150; j++)
      for (long k = 0; k < 200; k++)
        A[i][j][k] = 1.0;
}
After Clang has linearized the access to A and we have delinearized it again
to A[i][j][k], we need to ensure that we do not access the delinearized array
out of bounds (this information is not available in the LLVM-IR). Hence, we
need to verify the following constraints at run time:
CHECK: Assumed Context:
CHECK: [o, m] -> { : m >= 150 and o >= 200 }
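Conceptually, this assumed context becomes a run-time versioning condition. A
sketch of what that looks like at source level (the names below are
illustrative; Polly emits the actual check and both code versions in LLVM-IR):

// foo_optimized and foo_original are hypothetical names for the transformed
// and the unmodified version of the code.
void foo_optimized(long n, long m, long o, double *A);
void foo_original(long n, long m, long o, double *A);

void foo_versioned(long n, long m, long o, double *A) {
  if (m >= 150 && o >= 200)
    foo_optimized(n, m, o, A); // assumed context holds: every delinearized
                               // access A[i][j][k] stays within bounds
  else
    foo_original(n, m, o, A);  // otherwise execute the unmodified code
}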
llvm-svn: 212198
At the moment we can handle such arrays only by conservatively assuming that
each access may touch any element of the array. It would be great if we could
improve Polly/LLVM at some point so that we can recover the
multi-dimensionality of these accesses.
llvm-svn: 163619