Without this simplification for a loop nest:
  void foo(long n1_a, long n1_b, long n1_c, long n1_d,
           long p1_b, long p1_c, long p1_d,
           float A_1[][p1_b][p1_c][p1_d]) {
    for (long i = 0; i < n1_a; i++)
      for (long j = 0; j < n1_b; j++)
        for (long k = 0; k < n1_c; k++)
          for (long l = 0; l < n1_d; l++)
            A_1[i][j][k][l] += i + j + k + l;
  }
the assumption:
  n1_a <= 0 or (n1_a > 0 and n1_b <= 0) or
  (n1_a > 0 and n1_b > 0 and n1_c <= 0) or
  (n1_a > 0 and n1_b > 0 and n1_c > 0 and n1_d <= 0) or
  (n1_a > 0 and n1_b > 0 and n1_c > 0 and n1_d > 0 and
   p1_b >= n1_b and p1_c >= n1_c and p1_d >= n1_d)
is taken rather than the simpler assumption:
  p1_b >= n1_b and p1_c >= n1_c and p1_d >= n1_d.
The former is less strict, as it allows arbitrary values of p1_* in case the
loop is not executed at all. However, in practice these precise constraints
explode when combined across different accesses and loops. For now it seems
to make more sense to take less precise, but more scalable constraints by
default. In case we find a practical example where more precise constraints
are needed, we can think about allowing such precise constraints in specific
situations where they help.
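That the precise assumption is indeed the weaker one can be checked directly
with isl, the library Polly uses for such sets. Below is a minimal sketch
(assuming the isl C library is available; the set strings transcribe the two
assumptions above):

  #include <assert.h>
  #include <isl/ctx.h>
  #include <isl/set.h>

  int main(void) {
    isl_ctx *Ctx = isl_ctx_alloc();

    const char *Precise =
        "[n1_a, n1_b, n1_c, n1_d, p1_b, p1_c, p1_d] -> { : "
        "n1_a <= 0 or (n1_a > 0 and n1_b <= 0) or "
        "(n1_a > 0 and n1_b > 0 and n1_c <= 0) or "
        "(n1_a > 0 and n1_b > 0 and n1_c > 0 and n1_d <= 0) or "
        "(n1_a > 0 and n1_b > 0 and n1_c > 0 and n1_d > 0 and "
        "p1_b >= n1_b and p1_c >= n1_c and p1_d >= n1_d) }";
    const char *Simple =
        "[n1_a, n1_b, n1_c, n1_d, p1_b, p1_c, p1_d] -> { : "
        "p1_b >= n1_b and p1_c >= n1_c and p1_d >= n1_d }";

    isl_set *P = isl_set_read_from_str(Ctx, Precise);
    isl_set *S = isl_set_read_from_str(Ctx, Simple);

    /* Every parameter valuation allowed by the simple assumption is also
       allowed by the precise one, i.e., the precise one is less strict. */
    assert(isl_set_is_subset(S, P) == isl_bool_true);

    isl_set_free(S);
    isl_set_free(P);
    isl_ctx_free(Ctx);
    return 0;
  }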
This change speeds up the new test case from taking a very long time (we
waited at least a minute; it likely takes much longer) to below a second.
llvm-svn: 296456
The assumption attached to an llvm.assume in the SCoP needs to be
combined with the domain of the surrounding statement but can
nevertheless be used to refine the context.
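As a hypothetical illustration (not the case from PR27067), consider an
assume inside the loop body; Clang lowers __builtin_assume to llvm.assume.
The assumed condition i < m only holds where the statement executes, so it
must be intersected with the statement's domain; combined with that domain,
it still refines the context by implying m >= n whenever the loop runs:

  void f(float *A, long n, long m) {
    for (long i = 0; i < n; i++) {
      __builtin_assume(i < m); /* holds only on the statement's domain */
      A[i] += 1.0f;            /* together: m >= n whenever n > 0 */
    }
  }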
This fixes the problems mentioned in PR27067.
llvm-svn: 269060
A zero-extended value can be interpreted as a piecewise defined signed
value. If the value was non-negative it stays the same, otherwise it
is the sum of the original value and 2^n where n is the bit-width of
the original (or operand) type. Examples:
  zext i8 127 to i32 -> { [127] }
  zext i8 -1  to i32 -> { [256 + (-1)] } = { [255] }
  zext i8 %v  to i32 -> [v] -> { [v] | v >= 0; [256 + v] | v < 0 }
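The rule behind these examples can be written down directly. A small C
sketch of this piecewise interpretation for an 8-bit operand:

  #include <stdio.h>

  /* Map an 8-bit value, read as signed, to its zero-extended value:
     non-negative values stay the same, negative ones gain 2^8 = 256. */
  long zext_i8(signed char v) {
    return v >= 0 ? (long)v : (long)v + 256;
  }

  int main(void) {
    printf("%ld\n", zext_i8(127)); /* 127 */
    printf("%ld\n", zext_i8(-1));  /* 256 + (-1) = 255 */
    return 0;
  }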
However, LLVM/Scalar Evolution uses zero-extend (potentially preceded by a
truncate) to represent some forms of modulo computation. The left-hand side
of the condition in the code below would result in the SCEV
"zext i1 {false,+,true}<for.body>", which is just another description
of the C expression "(i & 1) != 0" or, equivalently, "i % 2 != 0".
  for (i = 0; i < N; i++)
    if ((i & 1) != 0 /* == i % 2 != 0 */)
      /* do something */
If we do not make the modulo explicit but only use the mechanism described
above, we will get the very restrictive assumption "N < 3", because for all
values of N >= 3 the SCEVAddRecExpr operand of the zero-extend would wrap.
Alternatively, we can make the modulo in the operand explicit in the
resulting piecewise function and thereby avoid the assumption on N. For the
example this would result in the following piecewise affine function:
  { [i0] -> [(1)] : 2*floor((-1 + i0)/2) = -1 + i0;
    [i0] -> [(0)] : 2*floor((i0)/2) = i0 }
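These conditions can be evaluated mechanically. Below is a small C sketch of
the two pieces; a floordiv helper is spelled out because C's division
truncates toward zero rather than rounding down:

  /* Floor division for positive divisors (C's / truncates toward zero). */
  long floordiv(long a, long b) {
    return a >= 0 ? a / b : -((-a + b - 1) / b);
  }

  /* Evaluate the piecewise affine function above. */
  long mod2(long i0) {
    if (2 * floordiv(i0 - 1, 2) == i0 - 1)
      return 1; /* [i0] -> [(1)] : i0 is odd */
    return 0;   /* [i0] -> [(0)] : 2*floor(i0/2) = i0, i0 is even */
  }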
To this end we can first determine if the (immediate) operand of the
zero-extend can wrap and, in case it might, use explicit modulo semantics
to compute the result instead of emitting non-wrapping assumptions.
Note that operands with large bit-widths are less likely to be negative,
because a negative value would result in a very large access offset or loop
bound after the zero-extend. Based on this observation, one can
optimistically assume the operand to be positive and avoid the piecewise
definition if the bit-width is bigger than some threshold (here
MaxZextSmallBitWidth).
We choose to go with a hybrid solution of all modeling techniques described
above. For small bit-widths (up to MaxZextSmallBitWidth) we will model the
wrapping explicitly and use a piecewise defined function. However, if the
bit-width is bigger than MaxZextSmallBitWidth we will employ overflow
assumptions and assume the "former negative" piece will not exist.
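Put together, the decision logic looks roughly like the sketch below. The
threshold name MaxZextSmallBitWidth comes from the text above; its value and
the helper names here are made up for illustration:

  enum ZextModel { PiecewiseModulo, NonWrappingAssumption };

  /* Placeholder value; the real threshold lives in the implementation. */
  static const unsigned MaxZextSmallBitWidth = 7;

  enum ZextModel chooseZextModel(unsigned OperandBitWidth,
                                 int OperandMayWrap) {
    if (!OperandMayWrap)
      return NonWrappingAssumption; /* no negative piece can occur */
    if (OperandBitWidth <= MaxZextSmallBitWidth)
      return PiecewiseModulo;       /* model the wrap with explicit modulo */
    /* Wide operands are unlikely to be negative: optimistically assume
       the "former negative" piece does not exist and record an overflow
       assumption instead. */
    return NonWrappingAssumption;
  }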
llvm-svn: 267408
There are three reasons why we want to record assumptions first before we
add them to the assumed/invalid context:
1) If the SCoP is not profitable or otherwise invalid, we do not have to
compute the assumed/invalid context at all.
2) Information about the context is gathered rather late in the SCoP
construction (basically after we know all parameters), thus the user
might see overly complicated assumptions being taken even though they
would have been simplified later on.
3) Currently we cannot take assumptions at arbitrary points but have to
wait, e.g., for the domain generation to finish. This makes wrapping
assumptions much more complicated than they need to be, and it will
have a similar effect on "signed-unsigned" assumptions later.
llvm-svn: 266068
ISL 0.16 will change how sets are printed, which breaks 117 unit tests
that text-compare printed sets. This patch re-formats most of these unit
tests using a script and small manual editing on top of that. When
actually updating ISL, most work is done by just re-running the script
to adapt to the changed output.
Some tests that compare IR and tests with single CHECK-lines that can be
easily updated manually are not included here.
The re-format script will also be committed afterwards. The per-test
formatter invocation command-line options will not be added in the near
future because they are ad hoc and re-applying them would overwrite the
manual edits.
Ideally it also shouldn't be required anymore because ISL's set printing
has become more stable in 0.16.
Differential Revision: http://reviews.llvm.org/D16095
llvm-svn: 257851
Call assumeNoOutOfBound only in updateDimensionality to handle situations
in which new dimensions are added and new bounds checks are required.
Contributed-by: Tobias Grosser, Gareev Roman
llvm-svn: 257170
If an llvm.assume dominates the SCoP entry block and the assumed condition
can be expressed as an affine inequality, we will now add it to the context.
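For example (a hypothetical case; Clang lowers __builtin_assume to
llvm.assume), an assume above the loop nest contributes an affine constraint
on the parameter directly to the context:

  void f(float A[][64], long n) {
    __builtin_assume(n <= 64); /* affine in n; dominates the SCoP entry */
    for (long i = 0; i < n; i++)
      for (long j = 0; j < n; j++)
        A[i][j] = i + j;
  }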
Differential Revision: http://reviews.llvm.org/D14413
llvm-svn: 252851