Register tiling in Polly is for now just an additional level of tiling which
is fully unrolled. It is disabled by default. To make this useful for more than
experiments, we still need a cost function, as well as possibly further
optimizations that teach LLVM to actually keep some of the resulting values in
scalar registers.
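For illustration, a hand-written sketch (not Polly-generated output) of what
such a fully unrolled register tile could look like for a matrix-multiply
kernel, with the tile values held in scalars that LLVM would ideally keep in
registers:

  // Hypothetical 2x2 register tile of a matrix-multiply kernel, fully
  // unrolled.  The accumulators c00..c11 are the values a cost function
  // and later LLVM optimizations should keep in scalar registers.
  // Assumes n is a multiple of 2.
  void matmul(int n, const float *A, const float *B, float *C) {
    for (int i = 0; i < n; i += 2)
      for (int j = 0; j < n; j += 2) {
        float c00 = 0, c01 = 0, c10 = 0, c11 = 0;
        for (int k = 0; k < n; ++k) {
          c00 += A[i * n + k] * B[k * n + j];
          c01 += A[i * n + k] * B[k * n + j + 1];
          c10 += A[(i + 1) * n + k] * B[k * n + j];
          c11 += A[(i + 1) * n + k] * B[k * n + j + 1];
        }
        C[i * n + j] = c00;
        C[i * n + j + 1] = c01;
        C[(i + 1) * n + j] = c10;
        C[(i + 1) * n + j + 1] = c11;
      }
  }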
llvm-svn: 245564
By default we only use one level of tiling for loops, but in general tiling
for multiple levels is trivial for us. Hence, we add a set of options that
allow people to play with a second level of tiling. If this is profitable for
some cases we can work on heuristics that allow us to identify these cases
and use two-level tiling for them.
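As a hand-written sketch (not actual Polly output, and with made-up tile
sizes), two-level tiling of a single loop looks like this:

  // One loop, tiled at two levels: an outer tile of 64 iterations that is
  // itself tiled into inner tiles of 8 iterations.  Assumes n is a
  // multiple of 64.
  void scale(int n, float *A) {
    for (int ii = 0; ii < n; ii += 64)         // first-level tile loop
      for (int jj = ii; jj < ii + 64; jj += 8) // second-level tile loop
        for (int i = jj; i < jj + 8; ++i)      // point loop
          A[i] *= 2.0f;
  }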
llvm-svn: 245563
Polly uses 'prevectorization' to enable outer loop vectorization. When
vectorizing an outer loop, we strip-mine <number-of-prevec-dims> loop
iterations which are then interchanged to the innermost level such that LLVM's
inner loop vectorizer (or Polly's simple vectorizer) can easily vectorize this
loop. The number of loop iterations to strip-mine is now configurable with the
option -polly-prevect-width=<number-of-prevec-dims>.
This is mostly a debugging option. We should probably add a heuristic that
derives the number of prevectorization dimensions from the target data and
the data types used.
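A hand-written sketch (not Polly-generated code) of the effect of
prevectorization with -polly-prevect-width=4 on a loop nest whose inner
dimension is hard to vectorize:

  // Original nest: for (i) for (j) C[i] += A[j * n + i];
  // The i-loop is strip-mined by 4 and the resulting point loop 'ip' is
  // moved to the innermost position, where C[ip] and A[j * n + ip] are
  // consecutive in memory and easy to vectorize.  Assumes n is a
  // multiple of 4.
  void prevect(int n, int m, const float *A, float *C) {
    for (int i = 0; i < n; i += 4)         // strip-mined outer loop
      for (int j = 0; j < m; ++j)          // former inner loop
        for (int ip = i; ip < i + 4; ++ip) // prevectorized dimension
          C[ip] += A[j * n + ip];
  }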
llvm-svn: 245424
The July issue of TOPLAS contains a 50 page discussion of the AST generation
techniques used in Polly. This discussion not only gives an in-depth
description of how we (re)generate an imperative AST from our polyhedral-based
mathematical program description, but also offers interesting insights about:
- Schedule trees: A tree-based mathematical program description that enables us
to perform loop transformations on an abstract level, while issues like the
generation of the correct loop structure and loop bounds will be taken care of
by our AST generator.
- Polyhedral unrolling: We discuss techniques that allow the unrolling of
non-trivial loops in the context of parametric loop bounds, complex tile
shapes and conditionally executed statements. Such unrolling support enables
the generation of predicated code e.g. in the context of GPGPU computing.
- Isolation for full/partial tile separation: We discuss native support for
handling full/partial tile separation and -- in general -- native support for
isolation of boundary cases to enable smooth code generation for core
computations.
- AST generation with modulo constraints: We discuss how modulo mappings are
lowered to efficient C/LLVM code (see the small sketch below).
- User-defined constraint sets for run-time checks: We discuss how arbitrary
sets of constraints can be used to automatically create run-time checks that
ensure a set of constraints actually holds. This feature is very useful to
verify at run time various assumptions that have been made during program
optimization.
Polyhedral AST generation is more than scanning polyhedra
Tobias Grosser, Sven Verdoolaege, Albert Cohen
ACM Transactions on Programming Languages and Systems (TOPLAS), 37(4), July 2015
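As a small, hand-written illustration of the modulo-lowering point above (not
an example taken from the paper): a schedule that separates iterations by
i mod 2 can be emitted as two strided loops instead of per-iteration remainder
computations.

  // A mapping such as { S[i] -> [i mod 2, i] } lowered to strided loops.
  void lowered(int n, float *A) {
    for (int i = 0; i < n; i += 2) // iterations with i mod 2 == 0
      A[i] *= 2.0f;
    for (int i = 1; i < n; i += 2) // iterations with i mod 2 == 1
      A[i] += 1.0f;
  }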
llvm-svn: 245157
Summary: The splitExitBlock function is never called. Going to replace its functionality in successive patches that do not modify the IR.
Reviewers: grosser
Subscribers: pollydev
Projects: #polly
Differential Revision: http://reviews.llvm.org/D11865
llvm-svn: 244404
Schedule trees are a lot easier to work with, for both humans and machines. For
humans the more structured schedule representation is easier to reason about.
Together with the more abstract isl programming interface this can result in a
lot cleaner code (see this changeset). For machines, the structured schedule and
the fact that we now use explicit piecewise affine expressions instead of
integer maps makes it easier to generate code from this schedule tree. As a
result, we can already see a slight compile-time improvement -- for 3mm from
0m0.593s to 0m0.551s (-7 %). More importantly, future optimizations such
as full-partial tile separation will most likely result in more streamlined code
to be generated.
Contributed-by: Roman Gareev <gareevroman@gmail.com>
llvm-svn: 243458
Instead of flat schedules, we now use so-called schedule trees to represent the
execution order of the statements in a SCoP. Schedule trees make it a lot easier
to analyze, understand and modify properties of a schedule, as specific nodes
in the tree can be chosen and possibly replaced.
This patch does not yet fully move our DependenceInfo pass to schedule trees,
as some additional performance analysis is needed here. (In general schedule
trees should be faster in compile-time, as the more structured representation
is generally easier to analyze and work with.) We also cannot yet perform the
reduction analysis on schedule trees.
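As a rough sketch of why node-based analysis is convenient (this assumes a
recent isl with the schedule-node API; the helper below is hypothetical, not
part of this patch):

  #include <isl/schedule.h>
  #include <isl/schedule_node.h>

  // Count the band nodes of a schedule tree by walking its nodes.  With
  // flat schedules, the same information had to be recovered from the
  // dimensions of an integer map.
  static isl_bool countBands(__isl_keep isl_schedule_node *Node, void *User) {
    if (isl_schedule_node_get_type(Node) == isl_schedule_node_band)
      ++*static_cast<int *>(User);
    return isl_bool_true; // keep descending into the children
  }

  static int getNumBands(__isl_keep isl_schedule *Schedule) {
    int NumBands = 0;
    isl_schedule_foreach_schedule_node_top_down(Schedule, countBands,
                                                &NumBands);
    return NumBands;
  }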
For more information regarding schedule trees, please see Section 6 of
https://lirias.kuleuven.be/handle/123456789/497238
llvm-svn: 242130
This removes old code that has been disabled for several weeks and was hidden
behind the flags -disable-polly-intra-scop-scalar-to-array=false and
-polly-model-phi-nodes=false. Earlier, Polly used to translate scalars and
PHI nodes to single element arrays, as this avoided the need for their special
handling in Polly. With Johannes' patches adding native support for such scalar
references to Polly, this code is not needed any more. After this commit both
-polly-prepare and -polly-independent are now mostly no-ops. Only a couple of
simple transformations still remain, but they are scheduled for removal too.
Thanks again to Johannes Doerfert for his nice work in making all this code
obsolete.
llvm-svn: 240766
This version adds the small integer optimization, but it is not active by
default. It will be enabled in a later commit.
The schedule-fuse=min/max option has been replaced by the
serialize-sccs option. Adapting Polly was necessary; the option name
polly-opt-fusion=min/max has been retained.
Differential Revision: http://reviews.llvm.org/D10505
Reviewers: grosser
llvm-svn: 240027
Running indvar before Polly is useful as it eliminates zexts, which commonly
appear when a 32 bit induction variable (type int) was used on a 64 bit system.
These zexts confuse our delinearization and prevent, for example, the successful
delinearization of the nussinov kernel in polybench-c-4.1.
This fixes http://llvm.org/PR23426
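A hand-written example of the kind of code where this matters (not taken from
polybench):

  // With 32-bit (unsigned) induction variables on a 64-bit target, the
  // address computation for A[i * m + j] zero-extends the 32-bit product
  // to 64 bit, which hides the affine expression i * m + j from
  // delinearization.  After the induction variables are widened to
  // 64 bit, the access can again be recognized as A[i][j].
  void scale(unsigned n, unsigned m, double *A) {
    for (unsigned i = 0; i < n; ++i)
      for (unsigned j = 0; j < m; ++j)
        A[i * m + j] *= 2.0;
  }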
Suggested-by: Xing Su <xsu.llvm@outlook.com>
llvm-svn: 238643
David Blaikie suggested this as an alternative to the use of owningptr(s) for our
memory management, as value semantics allow us to avoid the additional interface
complexity caused by owningptr while still providing similar memory consistency
guarantees. We could also have used a std::vector, but the use of std::vector
would yield possibly changing pointers, which currently causes problems as,
for example, the memory accesses carry pointers to their parent statements. Such
pointers should not change.
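A minimal, standalone illustration of the pointer-stability concern (not Polly
code; the types are made up):

  #include <deque>
  #include <vector>

  struct Stmt { int Id; };
  struct Access { Stmt *Parent; }; // back-pointer that must stay valid

  // Growing a std::vector may reallocate and move its elements, so any
  // Stmt* held elsewhere (here: Access::Parent) can be invalidated.
  void vectorProblem() {
    std::vector<Stmt> Stmts;
    Stmts.push_back({0});
    Access A{&Stmts[0]};  // points into the vector's storage
    Stmts.push_back({1}); // may reallocate; A.Parent is now dangling
    (void)A;
  }

  // A container such as std::deque keeps references to existing elements
  // valid across push_back, so the back-pointer stays usable.
  void stableAlternative() {
    std::deque<Stmt> Stmts;
    Stmts.push_back({0});
    Access A{&Stmts[0]};
    Stmts.push_back({1}); // A.Parent remains valid
    (void)A;
  }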
Reviewer: jblaikie, jdoerfert
Differential Revision: http://reviews.llvm.org/D10041
llvm-svn: 238290
The feature itself has been committed by Johannes in r238070. As this is the
way forward, we now enable it to ensure we get test coverage.
Thank you Johannes for this nice work!
llvm-svn: 238088
Instead of explicitly building constraints and adding them to our maps we
now use functions like map_order_le to add the relevant information to the
maps.
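A rough sketch of the difference, assuming isl's isl_map_order_le helper (the
wrapper below is hypothetical, not the code from this change):

  #include <isl/map.h>

  // Constrain "input dimension 0 <= output dimension 0" on a relation in
  // a single call, instead of building an isl_constraint by hand and
  // adding it to the map.
  static __isl_give isl_map *addOrderLE(__isl_take isl_map *Map) {
    return isl_map_order_le(Map, isl_dim_in, 0, isl_dim_out, 0);
  }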
llvm-svn: 237934
Upcoming revisions of isl require us to explicitly include header files that
were previously included only transitively. Before we add them, we sort
the existing includes.
Thanks to Chandler for sort_includes.py. A simple, but very convenient script.
llvm-svn: 236930
In Polly we used both the term 'scattering' and the term 'schedule' to describe
the execution order of a statement without actually distinguishing between them.
We now uniformly use the term 'schedule' for the execution order. This
corresponds to the terminology of isl.
History: CLooG introduced the term scattering as the generated code can be used
as a sequential execution order (schedule) or as a parallel dimension
enumerating different threads of execution (placement). In Polly and/or isl the
term placement was never used, but we uniformly refer to an execution order as a
schedule and only later introduce parallelism. When doing so we do not talk
about specific placement dimensions.
llvm-svn: 235380
We do not have buildbots or anything else that tests this functionality, hence
it will most likely bitrot. People interested in using this functionality can
always recover it from the svn history.
llvm-svn: 233570
Replacing the old band_tree-based code with code based on the new schedule
tree [1] interface makes applying complex schedule transformations a lot
more straightforward. We now do not need to reason about the meaning of flat
schedules, but can use a more straightforward tree structure. We do not yet
exploit this a lot in the current code, but hopefully we will be able to do so
soon.
This change also allows us to drop some code, as isl now provides some higher
level interfaces to apply loop transformations such as tiling.
This change causes some small test case changes as isl uses a slightly different
way to perform loop tiling, but no significant functional changes are intended.
[1] http://impact.gforge.inria.fr/impact2014/papers/impact2014-verdoolaege.pdf
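A rough sketch of what the higher-level interface looks like (this assumes
isl's schedule-node band tiling API; the helper is hypothetical and omits
error handling):

  #include <isl/schedule_node.h>
  #include <isl/space.h>
  #include <isl/val.h>

  // Tile every dimension of a band node with a fixed tile size, instead
  // of constructing and composing a tiling map for a flat schedule.
  static __isl_give isl_schedule_node *
  tileBand(__isl_take isl_schedule_node *Node, int TileSize) {
    isl_space *Space = isl_schedule_node_band_get_space(Node);
    isl_ctx *Ctx = isl_space_get_ctx(Space);
    int Dims = isl_space_dim(Space, isl_dim_set);
    isl_multi_val *Sizes = isl_multi_val_zero(Space);
    for (int i = 0; i < Dims; ++i)
      Sizes =
          isl_multi_val_set_val(Sizes, i, isl_val_int_from_si(Ctx, TileSize));
    return isl_schedule_node_band_tile(Node, Sizes);
  }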
llvm-svn: 232911
The new Dependences struct in the DependenceInfo holds all information
that was formerly part of the DependenceInfo. It also provides the
same interface for the user to access this information.
This is another step toward a more general ScopPass interface that allows
multiple SCoPs to be "in flight".
llvm-svn: 231327
We rename the Dependences pass to DependenceInfo as a first step toward a
caching pass policy. The new DependenceInfo pass will later provide
"Dependences" for a SCoP.
For consistency, the test folder is renamed too.
llvm-svn: 231308
namespace and header rather than the top-level header and using
declarations. These helpers impede modular builds and are going away.
Migrating away from them will also be necessary to start mixing in any
usage of the new pass manager.
llvm-svn: 229091
This allows us to skip AST and code generation if we did not optimize
a SCoP and will not generate parallel or alias annotations. The
initial heuristic for exiting early is simple but allows for improvements later on.
All failing test cases have been modified to disable the early exit and
thus keep their coverage.
Differential Revision: http://reviews.llvm.org/D7254
llvm-svn: 228851
This allows us to model PHI nodes in the polyhedral description
without demoting them. The modeling, however, will result in the
same accesses as the demotion would have introduced.
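A hand-written illustration of what gets modeled (not code from this patch):

  // After mem2reg, 'sum' is a PHI node in the loop header.  Modeling it
  // directly adds a write of 'sum' at the end of each iteration and a
  // read at the beginning of the next one to the polyhedral description,
  // i.e. the same accesses that demoting the PHI to a stack slot would
  // have created, but without changing the IR.
  float reduce(int n, const float *A) {
    float sum = 0.0f;
    for (int i = 0; i < n; ++i)
      sum += A[i];
    return sum;
  }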
Differential Revision: http://reviews.llvm.org/D7415
llvm-svn: 228433