c2fd8b411d
For schedule generation we assumed that the reverse post order traversal used by the domain generation would be sufficient. However, it is not: once a loop is discovered, we have to traverse it completely before we can generate the schedule for any block or region that is only reachable through a loop-exiting block.

To this end, we add a "loop stack" that keeps track of loops we have discovered during the traversal but have not yet traversed completely. We never visit a basic block (or region) outside the most recent (thus smallest) loop on the loop stack, but instead queue such blocks (or regions) in a waiting list. If the waiting list is not empty and (might) contain blocks from the most recent loop on the loop stack, the next block or region to visit is drawn from there; otherwise it is drawn from the reverse post order iterator.

We exploit the new property that loops are always completed before additional loops are processed by removing the LoopSchedules map and instead keeping all information in LoopStack. This clarifies that we only ever keep a stack of in-process loops and never keep incomplete schedules for an arbitrary set of loops. As a result, we can simplify some of the existing code.

This patch also adds some more documentation about how our schedule construction works.

This fixes http://llvm.org/PR25879

This patch is a modified version of Johannes Doerfert's initial fix.

Differential Revision: http://reviews.llvm.org/D15679

llvm-svn: 259354
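To make the traversal order concrete, here is a minimal, self-contained C++ sketch of the scheme described above. The Block and Loop structs, the buildSchedule function, and the example CFG are simplified stand-ins invented for illustration; they are not Polly's actual ScopInfo data structures or its real buildSchedule implementation. Only the loop-stack/waiting-list idea is the same.

```cpp
// Minimal, self-contained sketch of the traversal order described above.
// Block/Loop and buildSchedule are simplified stand-ins, not Polly's actual
// ScopInfo code; only the loop-stack/delay-list scheme is illustrated.
#include <deque>
#include <iostream>
#include <string>
#include <vector>

struct Loop {
  Loop *Parent = nullptr;  // enclosing loop, nullptr at the outermost level
  unsigned NumBlocks = 0;  // blocks belonging to this loop, sub-loops included
  bool contains(const Loop *Other) const {
    for (; Other; Other = Other->Parent)
      if (Other == this)
        return true;
    return false;
  }
};

struct Block {
  std::string Name;
  Loop *L = nullptr;  // innermost surrounding loop, nullptr if none
};

// Visit the blocks of 'RPO' (a reverse post order), but never schedule a block
// that lies outside the innermost in-progress loop; such blocks are parked on
// a delay list and retried once that loop has been traversed completely.
void buildSchedule(std::deque<Block *> RPO) {
  struct StackEntry {
    Loop *L;             // in-progress loop (nullptr = sentinel "no loop")
    unsigned Scheduled;  // blocks of this loop already scheduled
  };
  std::vector<StackEntry> LoopStack{{nullptr, 0}};
  std::deque<Block *> Delayed;
  bool JustDelayed = false;  // last block was delayed; prefer the RPO list next

  while (!RPO.empty() || !Delayed.empty()) {
    Block *B;
    if (Delayed.empty() || (JustDelayed && !RPO.empty())) {
      B = RPO.front();
      RPO.pop_front();
      JustDelayed = false;
    } else {
      B = Delayed.front();
      Delayed.pop_front();
    }

    Loop *Innermost = LoopStack.back().L;
    if (B->L != Innermost) {
      // B is outside the innermost in-progress loop: put it on the delay list.
      if (Innermost && !Innermost->contains(B->L)) {
        Delayed.push_back(B);
        JustDelayed = true;
        continue;
      }
      // B enters a deeper loop: push it onto the loop stack.
      LoopStack.push_back({B->L, 0});
    }

    std::cout << "schedule " << B->Name << "\n";

    // Account for B, then pop every loop that is now fully traversed,
    // folding its block count into the enclosing entry.
    LoopStack.back().Scheduled += 1;
    while (LoopStack.back().L &&
           LoopStack.back().Scheduled == LoopStack.back().L->NumBlocks) {
      unsigned Done = LoopStack.back().Scheduled;
      LoopStack.pop_back();
      LoopStack.back().Scheduled += Done;
    }
  }
}

int main() {
  // Hypothetical CFG: entry -> header; header -> body, exit; body -> header;
  // exit -> end. One reverse post order of it is entry, header, exit, end,
  // body, i.e. the loop exit appears before the loop has been fully visited.
  Loop L{nullptr, 2};  // the loop {header, body}
  Block Entry{"entry"}, Header{"header", &L}, Body{"body", &L}, Exit{"exit"},
      End{"end"};
  buildSchedule({&Entry, &Header, &Exit, &End, &Body});
  // Prints entry, header, body, exit, end: 'exit' and 'end' wait for 'body'.
}
```

Even though the reverse post order reaches the loop-exiting successor before the loop body, the sketch schedules everything that is only reachable through the loop-exiting block after the loop {header, body} has been traversed completely, which is exactly the property the patch establishes.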
Polly - Polyhedral optimizations for LLVM
-----------------------------------------
http://polly.llvm.org/

Polly uses a mathematical representation, the polyhedral model, to represent and transform loops and other control flow structures. Using an abstract representation it is possible to reason about transformations in a more general way and to use highly optimized linear programming libraries to figure out the optimal loop structure. These transformations can be used to do constant propagation through arrays, remove dead loop iterations, optimize loops for cache locality, optimize arrays, apply advanced automatic parallelization, drive vectorization, or they can be used to do software pipelining.