These functions print a multi-line, sorted representation of unions of polyhedra. Each polyhedron (basic_{set/map}) gets its own line. The primary sort key is the polyhedron's hierarchical space structure; the secondary sort key is the polyhedron's lower bound, which should ensure that the polyhedra are printed in approximately ascending order.

Example output of dumpPw():

  [p_0, p_1, p_2] -> {
    Stmt0[0] -> [0, 0];
    Stmt0[i0] -> [i0, 0] : 0 < i0 <= 5 - p_2;
    Stmt1[0] -> [0, 2] : p_1 = 1 and p_0 = -1;
    Stmt2[0] -> [0, 1] : p_1 >= 3 + p_0;
    Stmt3[0] -> [0, 3];
  }

In contrast, dumpExpanded() prints each point of the set, unless there is an unbounded dimension that cannot be expanded. This is useful for reduced test cases where the loop counts have been set to some constant in order to understand a bug.

Example output of dumpExpanded(
  { [MemRef_A[i0] -> [i1]] :
      (exists (e0 = floor((1 + i1)/3): i0 = 1 and 3e0 <= i1 and
        3e0 >= -1 + i1 and i1 >= 15 and i1 <= 25)) or
      (exists (e0 = floor((i1)/3): i0 = 0 and 3e0 < i1 and
        3e0 >= -2 + i1 and i1 > 0 and i1 <= 11)) }):

  {
    [MemRef_A[0] -> [1]];
    [MemRef_A[0] -> [2]];
    [MemRef_A[0] -> [4]];
    [MemRef_A[0] -> [5]];
    [MemRef_A[0] -> [7]];
    [MemRef_A[0] -> [8]];
    [MemRef_A[0] -> [10]];
    [MemRef_A[0] -> [11]];
    [MemRef_A[1] -> [15]];
    [MemRef_A[1] -> [16]];
    [MemRef_A[1] -> [18]];
    [MemRef_A[1] -> [19]];
    [MemRef_A[1] -> [21]];
    [MemRef_A[1] -> [22]];
    [MemRef_A[1] -> [24]];
    [MemRef_A[1] -> [25]]
  }

Differential Revision: https://reviews.llvm.org/D38349

llvm-svn: 314525
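For illustration, below is a minimal sketch, assuming the isl C API, of how printers in this spirit can be built: iterate over the basic sets of a union to dump one polyhedron per line, and iterate over the points of a bounded set for an expanded dump. This is not Polly's actual implementation (the real dumpPw() additionally sorts the pieces by space and lower bound and also handles maps); the names dumpPiecewise and dumpExpandedSet are invented for this example.

  // Sketch only; not Polly's implementation. Error handling is omitted.
  #include <cstdio>

  #include <isl/ctx.h>
  #include <isl/point.h>
  #include <isl/printer.h>
  #include <isl/set.h>
  #include <isl/union_set.h>

  namespace {
  // Print a single basic set (one polyhedron) on its own line.
  isl_stat printBasicSet(__isl_take isl_basic_set *BSet, void *User) {
    isl_printer **P = static_cast<isl_printer **>(User);
    *P = isl_printer_print_basic_set(*P, BSet);
    *P = isl_printer_print_str(*P, "\n");
    isl_basic_set_free(BSet);
    return isl_stat_ok;
  }

  // Split one set of the union into its basic sets.
  isl_stat printSetPieces(__isl_take isl_set *Set, void *User) {
    isl_stat Res = isl_set_foreach_basic_set(Set, printBasicSet, User);
    isl_set_free(Set);
    return Res;
  }

  // Print a single point of a bounded set on its own line.
  isl_stat printPoint(__isl_take isl_point *Pnt, void *User) {
    isl_printer **P = static_cast<isl_printer **>(User);
    *P = isl_printer_print_point(*P, Pnt);
    *P = isl_printer_print_str(*P, "\n");
    isl_point_free(Pnt);
    return isl_stat_ok;
  }
  } // namespace

  // dumpPw()-like: one line per polyhedron of a union of sets (unsorted).
  void dumpPiecewise(__isl_keep isl_union_set *USet) {
    isl_printer *P = isl_printer_to_file(isl_union_set_get_ctx(USet), stderr);
    isl_union_set_foreach_set(USet, printSetPieces, &P);
    isl_printer_free(P);
  }

  // dumpExpanded()-like: enumerate every point, which is only possible
  // when no dimension is unbounded.
  void dumpExpandedSet(__isl_keep isl_set *Set) {
    if (isl_set_is_bounded(Set) != isl_bool_true)
      return;
    isl_printer *P = isl_printer_to_file(isl_set_get_ctx(Set), stderr);
    isl_set_foreach_point(Set, printPoint, &P);
    isl_printer_free(P);
  }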
Polly - Polyhedral optimizations for LLVM
-----------------------------------------
http://polly.llvm.org/

Polly uses a mathematical representation, the polyhedral model, to represent
and transform loops and other control flow structures. Using an abstract
representation it is possible to reason about transformations in a more
general way and to use highly optimized linear programming libraries to
figure out the optimal loop structure. These transformations can be used to
do constant propagation through arrays, remove dead loop iterations,
optimize loops for cache locality, optimize arrays, apply advanced automatic
parallelization, drive vectorization, or they can be used to do software
pipelining.
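As an illustration (a generic example, not taken from the Polly
documentation), the classic target for such transformations is a
static-control loop nest like the following matrix multiplication, which
Polly can, for example, tile for cache locality or prepare for
parallelization and vectorization without changes to the source:

  // Dense loop nest with affine bounds and subscripts, i.e. a static
  // control part that the polyhedral model can represent exactly.
  constexpr int N = 1024;
  float A[N][N], B[N][N], C[N][N];

  void matmul() {
    for (int i = 0; i < N; ++i)
      for (int j = 0; j < N; ++j)
        for (int k = 0; k < N; ++k)
          C[i][j] += A[i][k] * B[k][j];
  }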