Add a command line switch to set the
isl_options_set_schedule_outer_coincidence option. ISL then tries to
build schedules where the outer member of a band satisfies the
coincidence constraints.
In practice this allows loop skewing for more parallelism in inner
loops.
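To illustrate the effect (a hand-written sketch, not code taken from this
commit), skewing a simple stencil makes the inner loop free of loop-carried
dependences; N, M and A are assumed to be declared elsewhere:

    /* Original: both loops carry dependences. */
    for (int i = 1; i < N; i++)
      for (int j = 1; j < M; j++)
        A[i][j] = A[i-1][j] + A[i][j-1];

    /* Skewed with t = i + j: for a fixed t, all iterations of the inner
     * i-loop are independent and can be vectorized or run in parallel. */
    for (int t = 2; t <= N + M - 2; t++) {
      int lo = (t - (M - 1) > 1) ? t - (M - 1) : 1;
      int hi = (t - 1 < N - 1) ? t - 1 : N - 1;
      for (int i = lo; i <= hi; i++) {
        int j = t - i;
        A[i][j] = A[i-1][j] + A[i][j-1];
      }
    }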
llvm-svn: 268222
ISL 0.16 will change how sets are printed, which breaks 117 unit tests that
text-compare printed sets. This patch re-formats most of these unit tests
using a script, plus some small manual editing on top of that. When actually
updating ISL, most of the work is done by simply re-running the script to
adapt to the changed output.
Tests that compare IR, and tests with single CHECK lines that can easily be
updated manually, are not included here.
The re-format script will also be committed afterwards. The per-test formatter
invocation command lines will not be added in the near future, because they are
ad hoc and re-running them would overwrite the manual edits. Ideally, they also
should not be required anymore, because ISL's set printing has become more
stable in 0.16.
Differential Revision: http://reviews.llvm.org/D16095
llvm-svn: 257851
This update brings in improvements to isl's 'isolate' option that reduce the
number of code versions generated. This results in both code-size and
compile-time reductions for outer loop vectorization.
Thanks to Roman Gareev and Sven Verdoolaege for working on this improvement.
llvm-svn: 254706
We isolate full tiles from partial tiles to be able to, for example, vectorize
loops with parametric lower and/or upper bounds.
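As an illustration only (hand-written C, not output of this commit), isolating
full tiles for a loop with a parametric upper bound n and a tile size of 4
means separating the tiles with a known, constant trip count from the rest:

    /* Full tiles: always exactly 4 iterations, easy to vectorize. */
    for (int ii = 0; ii + 4 <= n; ii += 4)
      for (int i = ii; i < ii + 4; i++)
        A[i] += B[i];

    /* Partial tile: at most 3 remaining iterations, kept as scalar code. */
    for (int i = n - (n % 4); i < n; i++)
      A[i] += B[i];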
If we use -polly-vectorizer=stripmine, we can see execution-time improvements:
correlation from 1m7361s to 0m5720s (-67.05 %), covariance from 1m5561s to
0m5680s (-63.50 %), ary3 from 2m3201s to 1m2361s (-46.72 %), CrystalMk from
8m5565s to 7m4285s (-13.18 %).
The current full/partial tile separation increases compile time more than
necessary. As a result, we see compile-time regressions, for example for 3mm,
from 0m6320s to 0m9881s (+56.34 %). Some of this compile-time increase is
expected, as we generate more IR and consequently more time is spent in the
LLVM backends. However, a first investigation has shown that a larger portion
of the compile time is unnecessarily spent inside Polly's parallelism detection
and could be eliminated by propagating existing knowledge about vector loop
parallelism.
Before enabling -polly-vectorizer=stripmine by default, it is necessary to
address this compile-time issue.
Contributed-by: Roman Gareev <gareevroman@gmail.com>
Reviewers: jdoerfert, grosser
Subscribers: grosser, #polly
Differential Revision: http://reviews.llvm.org/D13779
llvm-svn: 250809
These flags are now always passed to all tests and need to be disabled if they
are not wanted. Disabling these flags, rather than passing them to almost all
tests, significantly simplifies our RUN: lines.
llvm-svn: 249422
If nothing is executed, we can bail out early. Otherwise, we can use the
constraints that ensure at least one statement is executed for further
simplification.
llvm-svn: 245585
Register tiling in Polly is for now just an additional level of tiling which
is fully unrolled. It is disabled by default. To make this useful for more than
experiments, we still need a cost function as well as possibly further
optimizations that teach LLVM to actually put some of the values we got into
scalar registers.
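As a rough sketch (illustrative C, not the code Polly generates; N is assumed
to be a multiple of the tile sizes), register tiling an innermost loop with a
register-tile size of 2 and fully unrolling that tile looks like:

    for (int ii = 0; ii < N; ii += 32)         /* cache-level tile        */
      for (int i = ii; i < ii + 32; i += 2) {  /* register tile, unrolled */
        C[i]     += A[i]     * B[i];
        C[i + 1] += A[i + 1] * B[i + 1];
      }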
llvm-svn: 245564
By default we only use one level of tiling for loops, but in general tiling
for multiple levels is trivial for us. Hence, we add a set of options that
allow people to play with a second level of tiling. If this is profitable for
some cases we can work on heuristics that allow us to identify these cases
and use two-level tiling for them.
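Purely for illustration (the tile sizes and the loop are made up, and N is
assumed to be a multiple of both sizes), tiling a single loop at two levels
yields a structure like:

    for (int ii = 0; ii < N; ii += 256)          /* first-level tile  */
      for (int jj = ii; jj < ii + 256; jj += 32) /* second-level tile */
        for (int i = jj; i < jj + 32; i++)       /* point loop        */
          A[i] = 2 * B[i];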
llvm-svn: 245563
Polly uses 'prevectorization' to enable outer loop vectorization. When
vectorizing an outer loop, we strip-mine <number-of-prevec-dims> loop
iterations, which are then interchanged to the innermost level such that LLVM's
inner loop vectorizer (or Polly's simple vectorizer) can easily vectorize this
loop. The number of loop iterations to strip-mine is now configurable with the
option -polly-prevect-width=<number-of-prevec-dims>.
This is mostly a debugging option. We should probably add a heuristic that
derives the number of prevectorization dimensions from the target data and
the data types used.
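To illustrate the idea (simplified hand-written C, not the actual generated
IR; S stands for the loop body and N is assumed to be a multiple of 4), with a
prevectorization width of 4 a parallel outer i-loop is transformed roughly as
follows:

    /* Before: the outer i-loop is parallel, the inner j-loop is not. */
    for (int i = 0; i < N; i++)
      for (int j = 0; j < M; j++)
        S(i, j);

    /* After: 4 iterations of i are strip-mined and interchanged to the
     * innermost position, giving the vectorizer a parallel loop with a
     * constant trip count of 4. */
    for (int ii = 0; ii < N; ii += 4)
      for (int j = 0; j < M; j++)
        for (int i = ii; i < ii + 4; i++)
          S(i, j);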
llvm-svn: 245424
Schedule trees are a lot easier to work with, for both humans and machines. For
humans, the more structured schedule representation is easier to reason about.
Together with the more abstract isl programming interface, this can result in a
lot cleaner code (see this changeset). For machines, the structured schedule and
the fact that we now use explicit piecewise affine expressions instead of
integer maps make it easier to generate code from this schedule tree. As a
result, we can already see a slight compile-time improvement -- for 3mm from
0m0.593s to 0m0.551s (-7 %). More importantly, future optimizations such
as full-partial tile separation will most likely result in more streamlined code
to be generated.
Contributed-by: Roman Gareev <gareevroman@gmail.com>
llvm-svn: 243458
Instead of flat schedules, we now use so-called schedule trees to represent the
execution order of the statements in a SCoP. Schedule trees make it a lot easier
to analyze, understand and modify properties of a schedule, as specific nodes
in the tree can be chosen and possibly replaced.
This patch does not yet fully move our DependenceInfo pass to schedule trees,
as some additional performance analysis is needed here. (In general, schedule
trees should be faster in compile time, as the more structured representation
is generally easier to analyze and work with.) We also cannot yet perform the
reduction analysis on schedule trees.
For more information regarding schedule trees, please see Section 6 of
https://lirias.kuleuven.be/handle/123456789/497238
llvm-svn: 242130
I just learned that target triples prevent test cases from being run on other
architectures. Polly's test cases have so far been sufficiently target
independent not to require any target triples. Hence, we drop them.
llvm-svn: 235384
Replacing the old band_tree-based code with code based on the new schedule
tree [1] interface makes applying complex schedule transformations a lot more
straightforward. We no longer need to reason about the meaning of flat
schedules, but can instead use a simpler tree structure. We do not yet
exploit this a lot in the current code, but hopefully we will be able to do so
soon.
This change also allows us to drop some code, as isl now provides some higher
level interfaces to apply loop transformations such as tiling.
This change causes some small test case changes as isl uses a slightly different
way to perform loop tiling, but no significant functional changes are intended.
[1] http://impact.gforge.inria.fr/impact2014/papers/impact2014-verdoolaege.pdf
llvm-svn: 232911
These test cases did not verify their CHECK lines at all. We add the missing
FileCheck calls and also fix some broken CHECK lines. While here, we extend the
checks to cover the whole loop structure.
llvm-svn: 232710
Scops that only read seem generally uninteresting, and scops that only write
are most likely initializations, where there is also little to optimize. To
avoid wasting compile time, we bail out early.
Differential Revision: http://reviews.llvm.org/D7735
llvm-svn: 229820
This allows us to skip AST and code generation if we did not optimize
a SCoP and will not generate parallel or alias annotations. The
initial heuristic for exiting early is simple but allows improvements later on.
All failing test cases have been modified to disable the early exit, in
order to keep their coverage.
Differential Revision: http://reviews.llvm.org/D7254
llvm-svn: 228851
We introduce a new flag -polly-parallel and use it to annotate the for-nodes in
the isl AST that we want to execute thread-parallel (e.g., using OpenMP). We
previously already emitted OpenMP annotations, but we did this for various
kinds of parallel loops, including some which we cannot run in parallel.
With this patch we now have three annotations:
1) #pragma known-parallel [reduction]
2) #pragma omp for
3) #pragma simd
meaning:
1) loop has no loop carried dependences
2) loop will be executed thread-parallel
3) loop can possibly be vectorized
This patch introduces 1) and reduces the use of 2) to only those cases where we
will actually generate thread-parallel code.
It is in preparation for OpenMP code generation in our isl backend.
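For illustration, with -polly-parallel the printed isl AST could look roughly
like this (schematic output, statement and bound names made up); without the
flag, the outer loop would only carry the #pragma known-parallel annotation:

    #pragma omp for                      /* will be executed thread-parallel */
    for (int c0 = 0; c0 < n; c0 += 1) {
      #pragma simd                       /* can possibly be vectorized       */
      for (int c1 = 0; c1 < m; c1 += 1)
        Stmt_body(c0, c1);
    }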
Legacy:
- We also have a command line option -enable-polly-openmp. This option controls
the OpenMP code generation in CLooG. It will become an alias of
-polly-parallel after the CLooG code generation has been dropped.
http://reviews.llvm.org/D6142
llvm-svn: 221479
In Polly we used to have a mix of test cases, some that used 'opt %s' and
others that used 'opt < %s'. We now change all of them to use 'opt < %s'.
Piping in test files is preferable, as it prevents temporary files from being
written to disk. This brings us in line with what is customary in LLVM.
llvm-svn: 216816
+ CL-option --polly-tile-sizes=<int,...,int>
    The i'th value is used as the tile size for dimension i. If there is no
    i'th value, the value of --polly-default-tile-size is used (see the
    example after this list).
+ CL-option --polly-default-tile-size=int
    Used if no tile size is given for a dimension i.
+ 3 simple test cases
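For example, with --polly-tile-sizes=32,16 and --polly-default-tile-size=64, a
three-dimensional loop nest is tiled with sizes 32, 16 and 64 for dimensions
0, 1 and 2, respectively; dimension 2 falls back to the default because only
two sizes were given.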
llvm-svn: 209753
In case we are at the innermost band, we try to prepare for vectorization. This
means we look for the innermost parallel loop and strip-mine it to the innermost
level using a strip-mine factor corresponding to the number of vector
iterations.
For whatever reason, the code that implemented this feature was broken. We have
now added a comment, a test case and, obviously, also the right code.
llvm-svn: 203544
In case we do not have valid dependences, we do not run dead code elimination or
the schedule optimizer. This fixes an infinite loop in the dead code
elimination (PR12110).
llvm-svn: 201982
We now support regions with multiple entries and multiple exits natively.
Regions no longer need to be simplified to single-entry, single-exit form.
We need to XFAIL two test cases, as this change increases the SCoP coverage
and uncovers two failures in the independent blocks pass. The first failure
will be fixed in a subsequent commit; the second one is in the non-default
-polly-codegen-scev mode and still needs to be fixed.
Contributed-by: Star Tan <tanmx_star@yeah.net>
llvm-svn: 179673
Statements with an empty iteration domain may not have a schedule assigned by
the isl schedule optimizer. As Polly expects each statement to have a schedule,
we keep the old schedule for such statements.
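For illustration (a hypothetical example, not the code from the bug report), a
statement ends up with an empty iteration domain when its guard can never be
satisfied:

    for (int i = 0; i < n; i++)
      if (i > n)        /* never true for any n */
        A[i] = 0;       /* this statement's iteration domain is empty */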
This fixes http://llvm.org/PR15645
Reported-by: Johannes Doerfert <johannesdoerfert@gmx.de>
llvm-svn: 179233
The bug was within isl. To fix it, we simply update the isl version that
is used by Polly. We still have some changes within Polly to be able to
write a proper test case.
Reported-by: Sameer Sahasrabuddhe <Sameer.Sahasrabuddhe@amd.com>
llvm-svn: 166021