To do this, we replicate what `CodeGeneration` does. We expose
`markNodeUnreachable` from `CodeGeneration` to `PPCGCodeGeneration`.
Differential Revision: https://reviews.llvm.org/D36457
llvm-svn: 310350
It is possible that partial writes are empty (i.e., the write is never
executed). This happens, for instance, when a PHINode's incoming edge is
never taken, so that the incoming write becomes an empty partial write
(if partial writes are enabled). The issue is that when converting the
union_map to a map, its space cannot be derived from the union_map
itself. Rather, we need to determine its space independently.
This fixes test-suite's MultiSource/Benchmarks/ASC_Sequoia/CrystalMk.
llvm-svn: 310348
In certain cases DeLICM might decide not to leave the original array write in
the loop body, but to remove it and instead leave a transformed PHI node as
write access. This commit teaches the matmul pattern detection to order the
memory accesses according to when the access actually happens and to use this
information to detect the new pattern. This makes pattern-based matmul
optimization work for 2mm and 3mm in polybench 4 after
-polly-position=before-vectorizer has been enabled.
llvm-svn: 310338
Two write statements which write into the very same array slot generally
conflict. However, in case the value that is written is identical, this
does not cause any problem. Hence, allow such write pairs in this specific
situation.
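For illustration, a minimal hypothetical example of such a pair (function and variable names are made up): both statements store the identical value to the same element, so keeping both is harmless.
```
// Both writes store the same value to A[i]; mapping them to the same
// location does not change the program's semantics.
void f(double *A, const double *B, int n) {
  for (int i = 0; i < n; i++) {
    double v = B[i] + 1.0;
    A[i] = v; // first write
    A[i] = v; // second write of the identical value: no conflict
  }
}
```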
llvm-svn: 310311
This commit implements the initial version of fully-indexed static
expansion.
```
for (int i = 0; i < Ni; i++)
  for (int j = 0; j < Ni; j++)
    S: B[j] = j;
  T: A[i] = B[i];
```
After the pass, we want this:
```
for (int i = 0; i < Ni; i++)
  for (int j = 0; j < Ni; j++)
    S: B[i][j] = j;
  T: A[i] = B[i][i];
```
For now we bail (fail) in the following cases:
- Scalar access
- Multiple writes per SAI
- MayWrite Access
- Expansion that leads to an access to the original array
Furthermore, we still miss checks for escaping references to the array
base pointers. A future commit will add the missing escape checks to
stay correct in those cases. The expansion is still guarded by a
command-line option and should not yet be used.
Patch contributed by: Nicholas Bonfante <bonfante.nicolas@gmail.com>
Reviewers: simbuerg, Meinersbur, bollu
Reviewed By: Meinersbur
Subscribers: mgorny, llvm-commits, pollydev
Differential Revision: https://reviews.llvm.org/D34982
llvm-svn: 310304
This is an addition to the -polly-optree pass that reuses the array
content analysis from DeLICM to find array elements that contain the
same value as the value loaded when the target statement instance
is executed.
The analysis is now enabled by default.
The known content analysis could also be used to rematerialize any
llvm::Value that was written to some array element, but currently
only loads are forwarded.
Differential Revision: https://reviews.llvm.org/D36380
llvm-svn: 310279
Summary:
This resolves some "instruction does not dominate use" errors, as we used to
prepare the arrays at the location of the first kernel, which did not
necessarily dominate all other kernel calls.
Reviewers: Meinersbur, bollu, singam-sanjay
Subscribers: nemanjai, pollydev, llvm-commits, kbarton
Differential Revision: https://reviews.llvm.org/D36372
llvm-svn: 310196
A Scop surrounded by a loop outside of it is currently not handled by
PPCGCodeGeneration. The test case is such that the Scop has only one inner loop
that is detected. This currently breaks codegen.
The fix is to reuse the existing mechanism in `IslNodeBuilder` within
`GPUNodeBuilder`.
Differential Revision: https://reviews.llvm.org/D36290
llvm-svn: 310193
Summary:
In case the option -polly-ignore-parameter-bounds is set, not all parameters
will be added to context and domains. This is useful to keep the size of the
sets and maps we work with small. Unfortunately, for AST generation it is
necessary to ensure all parameters are part of the schedule tree. Hence,
we modify the GPGPU code generation to make sure this is the case.
To obtain the necessary information we expose a new function
Scop::getFullParamSpace(). We also make a couple of functions const to be
able to make SCoP::getFullParamSpace() const.
Reviewers: Meinersbur, bollu, gareevroman, efriedma, huihuiz, sebpop, simbuerg
Subscribers: nemanjai, kbarton, pollydev, llvm-commits
Tags: #polly
Differential Revision: https://reviews.llvm.org/D36243
llvm-svn: 309939
When we have `-polly-ignore-parameter-bounds`, `Scop::Context` does not contain
all the parameters present in the program.
The construction of the `isl_multi_pw_aff` requires all the individual `pw_aff`
to have the same parameter dimensions. To achieve this, we used to realign
every `pw_aff` with `Scop::Context`. However, in conjunction with
`-polly-ignore-parameter-bounds`, this is now incorrect, since `Scop::Context`
does not contain all parameters.
We set this up correctly by creating a space that has all the parameters
used by all the `isl_pw_aff`. Then, we realign all `isl_pw_aff` to this space.
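A rough sketch of this idea using the isl C API (the function and variable names are illustrative, not the actual Polly code):
```
/* Sketch only: gather the union of all parameters used by the individual
 * pw_affs into one space, then realign every pw_aff to that space before
 * building the isl_multi_pw_aff from them. */
#include <isl/aff.h>
#include <isl/space.h>

static void alignPwAffParams(isl_pw_aff **PwAffs, unsigned N) {
  isl_space *ParamSpace = isl_pw_aff_get_space(PwAffs[0]);
  for (unsigned i = 1; i < N; i++)
    ParamSpace = isl_space_align_params(ParamSpace,
                                        isl_pw_aff_get_space(PwAffs[i]));
  for (unsigned i = 0; i < N; i++)
    PwAffs[i] = isl_pw_aff_align_params(PwAffs[i],
                                        isl_space_copy(ParamSpace));
  isl_space_free(ParamSpace);
}
```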
llvm-svn: 309934
Summary:
**Remove debug metadata from the instruction to be copied, to prevent the source file's debug metadata from being copied into the GPUModule and eventually failing Module verification and ASM string code generation.**
When copying the instruction onto the Module meant for the GPU, debug metadata attached to an instruction causes all related metadata to be pulled into the Module, including the DICompileUnit, which is not listed in llvm.dbg.cu of the Module. This fails the verification of the Module and generation of the ASM string.
The only debug metadata of the instruction, the DebugLoc, is unset by this patch.
This patch reattempts https://reviews.llvm.org/D35630 by targeting only those instructions that are to end up in a Module meant for the GPU.
Reviewers: grosser, bollu
Reviewed By: grosser
Subscribers: pollydev
Tags: #polly
Differential Revision: https://reviews.llvm.org/D36161
llvm-svn: 309822
The previous algorithm was to search for a write and the source of its value
operand, and to check whether the write just stores the same read value back,
which includes a search for whether there is another write access between
them. This is O(n^2) in the maximal number of accesses in a statement
(+ the complexity of isl comparing the access functions).
The new algorithm is more similar to the one used for searching for
overwrites and coalescable writes. It scans over all accesses in order
of execution while tracking which array elements still have the same
value since it was read. This is O(n), not counting the complexity
within isl. It should be more reliable than trying to catch all
non-conforming cases in the previous approach. It is also less code.
We now also support the case where the write is a partial write of the read's
domain, and, to some extent, non-affine subregions.
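A strongly simplified sketch of the new scanning scheme (all types and names here are invented; the real implementation works on MemoryAccesses and isl maps):
```
// Walk the accesses of a statement in execution order and track, per array
// element, the value it is known to hold. A store that writes back exactly
// the known content is redundant.
#include <map>
#include <vector>

struct Access {
  bool IsRead;
  int Element; // stand-in for the accessed array element
  int Value;   // stand-in for the llvm::Value read or written
};

std::vector<bool> findRedundantStores(const std::vector<Access> &Accs) {
  std::map<int, int> KnownContent; // element -> value it currently holds
  std::vector<bool> Redundant(Accs.size(), false);
  for (size_t i = 0; i < Accs.size(); ++i) {
    const Access &A = Accs[i];
    auto It = KnownContent.find(A.Element);
    if (!A.IsRead && It != KnownContent.end() && It->second == A.Value)
      Redundant[i] = true; // stores back what the element already holds
    else
      KnownContent[A.Element] = A.Value; // reads and real writes update it
  }
  return Redundant;
}
```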
Differential Revision: https://reviews.llvm.org/D36137
llvm-svn: 309734
With a lot of reads and writes to the same array in a statement,
some isl sets that capture the state between accesses can become so
complex that isl takes considerable time and memory
for operations on them.
The problems identified were:
- is_subset() takes considerable time with many disjuncts in the
arguments. We limit the number of disjuncts to 4; any additional
information is thrown away.
- subtract() can lead to many disjuncts. We instead assume that any
array element is possibly accessed, which removes all disjuncts.
- subtract_domain() may lead to considerable processing, even if all
elements are to be removed. Instead, we determine and
remove the affected spaces manually. No behaviour is changed.
llvm-svn: 309728
It is possible that the `HostPtr` that corresponds to an array could be
invariant load hoisted. Make sure we use the invariant load hoisted
value by using `IslNodeBuilder::getLatestValue`.
Differential Revision: https://reviews.llvm.org/D36001
llvm-svn: 309681
This allows -polly-optree to move instructions that depend on
synthesizable values.
The difficulty for synthesizable values is that their value depends on
the location. When it is moved over a loop header, and the SCEV
expression depends on the loop induction variable (SCEVAddRecExpr), it
would use the current induction variable instead of the last one.
At the moment we cannot forward PHI nodes, so crossing the header
of loops referenced by a SCEVAddRecExpr is not possible (assuming the loop
header has at least two incoming blocks, one for entering the loop and one for the
backedge, any instruction to be forwarded would need a PHI between
use and definition).
A remaining issue is when the forwarded value is used after the loop,
but is only synthesizable inside the loop. This happens e.g. if
ScalarEvolution is unable to determine the number of loop iterations or
the initial loop value. We do not forward in this situation.
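A hypothetical C example of this remaining limitation (names are invented): 'val' is synthesizable inside the loop, but since ScalarEvolution cannot determine the trip count, its value after the loop cannot be recomputed there, so it is not forwarded.
```
long unknown_bound(void); // trip count is opaque to ScalarEvolution

void remaining_case(long *A) {
  long val = 0;
  for (long i = 0; i < unknown_bound(); i++) {
    val = 2 * i; // synthesizable inside the loop
    A[i] = val;
  }
  A[0] += val;   // use after the loop: forwarding is not applied here
}
```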
Differential Revision: https://reviews.llvm.org/D36102
llvm-svn: 309609
In addition to array and PHI writes, also allow scalar value writes.
The only kinds of writes not allowed are writes by functions
(including memcpy/memmove/memset).
llvm-svn: 309582
Summary:
This allows us to map functions such as exp, expf, expl, for which no
LLVM intrinsics exist. Instead, we link to NVIDIA's libdevice which provides
high-performance implementations of a wide range of (math) functions. We
currently link only a small subset, the exp, cos and copysign functions. Other
functions will be enabled as needed.
Reviewers: bollu, singam-sanjay
Reviewed By: bollu
Subscribers: tstellar, tra, nemanjai, pollydev, mgorny, llvm-commits, kbarton
Tags: #polly
Differential Revision: https://reviews.llvm.org/D35703
llvm-svn: 309560
This reverts commit r309490 as it triggers on our AOSP buildbot error messages
of the form:
inlinable function call in a function with debug info must have a !dbg location
llvm-svn: 309556
Summary:
**Remove debug metadata from the instruction to be copied, to prevent the source file's debug metadata from being copied into the GPUModule and eventually failing Module verification and ASM string code generation.**
When copying the instruction onto the Module meant for the GPU, debug metadata attached to an instruction causes all related metadata to be pulled into the Module, including the DICompileUnit, which is not listed in llvm.dbg.cu of the Module. This fails the verification of the Module and generation of the ASM string.
The only debug metadata of the instruction, the DebugLoc, is unset by this patch.
Reviewers: grosser, bollu, Meinersbur
Reviewed By: grosser, bollu
Subscribers: pollydev
Tags: #polly
Differential Revision: https://reviews.llvm.org/D35630
llvm-svn: 309490
Write coalescing combines write accesses that
- Write the same llvm::Value.
- Write to the same array.
- Write to the same element, unless they do not write anything in a
statement instance (partial writes).
- There is no other access between them that accesses the same element.
This is particularly useful after DeLICM, which leaves partial writes to
disjoint domains.
Differential Revision: https://reviews.llvm.org/D36010
llvm-svn: 309489
After region exit simplification, the incoming block of a phi node in
the SCoP region's exit block lands outside of the region. Since we
treat SCoPs as if this already happened, we need to account for that
when looking for outside uses of scalars (i.e. escaping scalars).
llvm-svn: 309271
A PHI node's incoming block is the user of its operand, not the PHI's parent.
Assuming that the PHINode's parent is the user led to the removal of a
MemoryAccess because its use was assumed to be inside of the SCoP.
llvm-svn: 309164
In the following loop:
  int i;
  for (i = 0; i < func(); i += 1)
    ;
SCoP:
  for (int j = 0; j < n; j += 1)
    S(i, j)
The value i is synthesizable in the SCoP that includes only the j-loop.
This is because i is fixed within the SCoP; it is irrelevant whether
it originates from another loop.
This fixes a strange case where a PHI was synthesizable in a SCoP,
but not its incoming value, triggering an assertion.
This should fix MultiSource/Applications/sgefa/sgefa of the
perf-x86_64-penryn-O3-polly-before-vectorizer-unprofitable buildbot.
llvm-svn: 309109
Summary:
This consists of instances of two changes:
- Accept checks for a specific loop form in any order, as they appear in a different order in the new PM vs. the legacy PM.
- Remove checks for specific regions.
Reviewers: grosser
Reviewed By: grosser
Subscribers: pollydev, llvm-commits
Tags: #polly
Differential Revision: https://reviews.llvm.org/D35837
llvm-svn: 308976
Invariant load hoisted scalars, and arrays whose size we can statically compute
to be 0 do not need to be allocated as arrays.
Invariant load hoisted scalars are sent to the kernel directly as parameters.
Earlier, we used to allocate `0` bytes of memory for these because our
computation of size from `PPCGCodeGeneration::getArraySize` would result in `0`.
Now, since we don't model invariant loads as arrays in PPCGCodeGeneration, this
problem does not occur anymore.
Differential Revision: https://reviews.llvm.org/D35795
llvm-svn: 308971
Read-only values (values defined before the SCoP) require special
handling with -polly-analyze-read-only-scalars=true (which is the
default). If active, each use of a value requires a read access.
When a copied value uses a read-only value, we must also ensure that
such a MemoryAccess is available or is created.
Differential Revision: https://reviews.llvm.org/D35764
llvm-svn: 308876
Summary:
- We were using `.count` on `StringRef`, which matches substrings.
- We may want to use this for equality as well.
- Generalise this to allow regexes as a parameter to `-polly-only-func`.
Differential Revision: https://reviews.llvm.org/D35728
llvm-svn: 308875
If the access relation's domain is empty, the access will never be
executed. We can just remove it.
We only remove write accesses. Partial read accesses are not yet
supported and instructions in the statement might require the
llvm::Value holding the read's result to be defined.
llvm-svn: 308830
Hoisted loads can be trivially supported because there is no
MemoryAccess to be modified; the loaded value is just available
at code generation.
llvm-svn: 308826
This pass 'forwards' operand trees into statements that use them in
order to avoid scalar dependencies.
This minimal implementation handles only the case of speculatable
instructions. We will successively add support for:
- Hoisted loads
- Read-only values
- Synthesizable values
- Loads
- PHIs
- Forwarding only parts of the tree
Differential Revision: https://reviews.llvm.org/D35754
llvm-svn: 308825
Summary:
For the ScopInfo lit testsuite, this patch removes some dependences on output behaviour of the legacy PM.
In most cases, these tests checked the tool output for labels created by the pass printer in the legacy PM. This doesn't work for the new PM anymore. Untangling the testcases is the first step to porting the testsuite for the new PM infrastructure.
Reviewers: grosser, Meinersbur, bollu
Reviewed By: grosser
Subscribers: llvm-commits, pollydev
Tags: #polly
Differential Revision: https://reviews.llvm.org/D35727
llvm-svn: 308754
Summary:
Added SPIR Code Generation to the PPCG Code Generator. This can be invoked using
the polly-gpu-arch flag value 'spir32' or 'spir64' for 32 and 64 bit code respectively.
In addition to that, runtime support has been added to execute said SPIR code on Intel
GPUs, where the system is equipped with Intel's open source driver Beignet (development
version). This requires the cmake flag 'USE_INTEL_OCL' to be turned on, and the polly-gpu-runtime
flag value to be 'libopencl'.
The transformation of LLVM IR to SPIR is currently quite a hack, consisting in part of regex
string transformations.
Has been tested (working) with Polybench 3.2 on an Intel i7-5500U (integrated graphics chip).
Reviewers: bollu, grosser, Meinersbur, singam-sanjay
Reviewed By: grosser, singam-sanjay
Subscribers: pollydev, nemanjai, mgorny, Anastasia, kbarton
Tags: #polly
Differential Revision: https://reviews.llvm.org/D35185
llvm-svn: 308751
When performing invariant load hoisting we check that invariant load expressions
are not too complex. Up to this commit, we performed this check by counting the
sum of dimensions in the access range as a very simple heuristic. This heuristic
is a little too conservative, as it prevents hoisting for any scops with a
very large number of parameters. Hence, we update the heuristic to only count
existentially quantified dimensions and set dimensions. We expect this to still
detect the problematic expressions in h264 because of which this check was
originally introduced.
For some unknown reason, this complexity check was originally committed in
IslNodeBuilder. It really belongs in ScopInfo, as there is no point in
optimizing a program which we could have known earlier cannot be code generated.
The benefit of running the check early is that we can avoid even hoisting checks
that are expensive to code generate as invariant loads. This can be seen in
the changed tests, where we now indeed detect the scop, but just not invariant
load hoist the complicated access.
We also improve the formatting of the code, document it, and use isl++ to
simplify expressions.
llvm-svn: 308659
Use a mark-and-sweep algorithm to find and remove unused instructions
and MemoryAccesses. This is useful in particular to remove scalar
writes that are never used anywhere. A scalar write in a loop induces
a write-after-write dependency that prevents the loop iterations from being
rescheduled. Such writes can be a result of previous transformations
such as DeLICM and operand tree forwarding.
It adds a new class VirtualInstruction that represents an instruction in
a particular statement. At the moment an instruction can only belong to
the statement that represents a BasicBlock. In the future, instructions
can be in one of multiple statements representing a BasicBlock
(Nandini's work), in different statements than its BasicBlock would
indicate, and even multiple statements at once (by forwarding operand
trees). It also integrates nicely with the VirtualUse class.
ScopStmt::contains(Instruction*) currently uses the instruction's parent
BasicBlock to check whether it contains the instruction. It will need to
check the actual statement list when one of the aforementioned features
becomes possible.
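A very rough sketch of the mark-and-sweep idea (all types and names are invented; the real pass operates on VirtualInstructions and MemoryAccesses of a Scop):
```
#include <set>
#include <vector>

struct Node {
  std::vector<Node *> Operands; // what this instruction/access depends on
  bool Required = false;        // root, e.g. a write whose value escapes
};

void markAndSweep(std::vector<Node *> &Nodes) {
  // Mark phase: everything reachable from a required root stays alive.
  std::set<Node *> Alive;
  std::vector<Node *> Worklist;
  for (Node *N : Nodes)
    if (N->Required)
      Worklist.push_back(N);
  while (!Worklist.empty()) {
    Node *N = Worklist.back();
    Worklist.pop_back();
    if (!Alive.insert(N).second)
      continue; // already marked
    for (Node *Op : N->Operands)
      Worklist.push_back(Op);
  }
  // Sweep phase: drop everything that was never marked.
  std::vector<Node *> Kept;
  for (Node *N : Nodes)
    if (Alive.count(N))
      Kept.push_back(N);
  Nodes.swap(Kept);
}
```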
Differential Revision: https://reviews.llvm.org/D35656
llvm-svn: 308626
This commit *WILL COMPILE*.
1. `PPCG` now uses `isl_multi_pw_aff` instead of an array of `pw_aff`.
This needs us to adjust how we index array bounds and how we construct
array bounds.
2. `PPCG` introduces two new kinds of nodes: `init_device` and `clear_device`.
We should investigate what the correct way to handle these are.
3. `PPCG` has gotten smarter with its use of live range reordering, so some of
the tests have a qualitative improvement.
4. `PPCG` changed its output style, so many test cases need to be updated to
fit the new style for `polly-acc-dump-code` checks.
Differential Revision: https://reviews.llvm.org/D35677
llvm-svn: 308625
This is one possible solution to implement wrap-arounds for integers in
unsigned icmp operations. For example, in
  store i32 -1, i32* %A_addr
  %0 = load i32, i32* %A_addr
  %1 = icmp ult i32 %0, 0
%1 should hold false, because under the assumption of unsigned integers,
-1 wraps around to 2^32-1. However, previously it was assumed
that the MSB (Most Significant Bit, aka the sign bit) was never set for
integers in unsigned operations.
This patch modifies the buildConditionSets function in ScopInfo.cpp to
give better information about the integers in these unsigned
comparisons.
Contributed-by: Annanay Agarwal <cs14btech11001@iith.ac.in>
Differential Revision: https://reviews.llvm.org/D35464
llvm-svn: 308608
Some optimizations (e.g., DeLICM) can modify memory accesses (e.g., change
their MemoryKind). Consequently, the pattern matching should take this into
account.
Reviewed-by: Tobias Grosser <tobias@grosser.es>,
Michael Kruse <llvm@meinersbur.de>
Differential Revision: https://reviews.llvm.org/D33138
llvm-svn: 308494
Utilize the newer LLVM diagnostic remark API in order to enable use of the
opt-viewer tool. Polly diagnostic remarks now also appear in the YAML
remark file.
In this patch, I've added the OptimizationRemarkEmitter to certain
classes where remarks are being emitted and updated the remark emit calls
themselves. I also provide each remark with a BasicBlock or Instruction from where
it is being emitted, in order to compute the hotness of the remark.
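For context, a minimal sketch of how a remark can be emitted through this API; the pass name, remark name, and message below are invented and not the ones Polly uses:
```
#include "llvm/Analysis/OptimizationRemarkEmitter.h"
#include "llvm/IR/Instruction.h"

using namespace llvm;

// Emitting the remark with an Instruction gives it a debug location, which
// allows the remark to be attributed a hotness value.
static void emitExampleRemark(OptimizationRemarkEmitter &ORE,
                              Instruction *InsertPoint) {
  ORE.emit(OptimizationRemark("polly-example", "ExampleRemark", InsertPoint)
           << "example remark text");
}
```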
Patch by Tarun Rajendran!
Differential Revision: https://reviews.llvm.org/D35399
llvm-svn: 308233
Summary:
We do not keep domain constraints on access functions when building the
scop. Hence, for consistency reasons, it makes also sense to not include
them when storing a new access function. This change results in simpler
access functions that make output easier to read.
This patch also helps to make DeLICMed memory accesses to be understood by
our matrix multiplication pattern matching pass. Further changes to the
matrix multiplication pattern matching are needed for this to work, so the
corresponding test case will be added in a future commit.
Reviewers: Meinersbur, bollu, gareevroman, efriedma, huihuiz, sebpop, simbuerg
Subscribers: pollydev, llvm-commits
Tags: #polly
Differential Revision: https://reviews.llvm.org/D35237
llvm-svn: 308215
- We should call `preloadInvariantLoads` to make sure that code is
generated for invariant loads in the kernel.
Differential Revision: https://reviews.llvm.org/D35410
llvm-svn: 308187
This patch makes sure that in case a loop is not fully contained within a region
that later forms a SCoP, none of the loop backedges are allowed to be part of
the region. We currently do not support the situation where only some of a loop's
backedges are part of a scop. Today, this can break both scop modeling and code
generation. One such breaking test case is for example
test/ScopDetectionDiagnostics/loop_partially_in_scop-2.ll, where we totally
forgot to code generate some of the backedges. Fortunately, it is commonly not
necessary to support these partial loops, it is way more common that either
no backedge is included in a region or all loop backedges are included.
This fixes a recent miscompile in
MultiSource/Benchmarks/MiBench/consumer-typeset which was exposed after
r306477.
llvm-svn: 308113
- There is a conditional branch that is used to switch between the old
and new versions of the code.
- If we detect that the build was unsuccessful, `PPCGCodeGeneration` will
change the runtime check to be always set to false.
- To actually *reach* this runtime check instruction, `PPCGCodeGeneration`
was using assumptions about the layout of the BBs.
- However, invariant load hoisting violates this assumption by inserting
an extra basic block in the middle.
- Fix the assumption on the layout by having `createScopConditionally`
return the conditional branch instruction.
- Use this reference to set the condition to always-false.
llvm-svn: 308010
We need to relax constraints on invariant loads so that they do not
create fake RAW dependences. So, we do not consider invariant loads as
scalar dependences in a region.
During these changes, it turned out that we do not consider `llvm::Value`
replacements correctly within `PPCGCodeGeneration` and `ISLNodeBuilder`.
The replacements dictated by `ValueMap` were not being followed in all
places. This was fixed in this commit. There is no clean way to decouple
this change because this bug only seems to arise when the relaxed
version of invariant load hoisting was enabled.
Differential Revision: https://reviews.llvm.org/D35120
llvm-svn: 307907
Summary:
Add a sequence number that identifies a ptx_kernel's parent Scop within a function to its name to differentiate it from other kernels produced from the same function, yet different Scops.
Kernels produced from different Scops can end up having the same name. Consider a function with 2 Scops and each Scop being able to produce just one kernel. Both of these kernels have the name "kernel_0". This can lead to the wrong kernel being launched when the runtime picks a kernel from its cache based on the name alone. This patch supplements D33985, by differentiating kernels across Scops as well.
Previously (even before D33985) while profiling kernels generated through JIT e.g. Julia, [[ https://groups.google.com/d/msg/polly-dev/J1j587H3-Qw/mR-jfL16BgAJ | kernels associated with different functions, and even different SCoPs within a function, would be grouped together due to the common name ]]. This patch prevents this grouping and the kernels are reported separately.
Reviewers: grosser, bollu
Reviewed By: grosser
Subscribers: mehdi_amini, nemanjai, pollydev, kbarton
Tags: #polly
Differential Revision: https://reviews.llvm.org/D35176
llvm-svn: 307814
- `lit.util.capture` was removed in `r306625`.
- Replace `lit.util.capture` with `subprocess.check_output`, as LLVM did.
- LLVM revision of this change: `https://reviews.llvm.org/D35088`.
Differential Revision: https://reviews.llvm.org/D35255
llvm-svn: 307765
Summary:
There is a bug in the current lit configurations for the unittests. If gtest is not available, the site-config for the unit tests won't be generated. Because lit recurses through the test directory, the lit configuration for the unit tests will be discovered nevertheless, leading to a fatal error in lit.
This patch semi-gracefully skips the unittests if gtest is not available. As a result, running lit now prints this: `warning: test suite 'Polly-Unit' contained no test`.
If people think that this is too annoying, the alternative would be to pick apart the test directory, so that the lit testsuite discovery will always only find one configuration. In fact, both of these things could be combined. While it's certainly nice that running a single lit command runs all the tests, I suppose people use the `check-polly` make target over lit most of the time, so the difference might not be noticed.
Reviewers: Meinersbur, grosser
Reviewed By: grosser
Subscribers: mgorny, bollu, pollydev, llvm-commits
Tags: #polly
Differential Revision: https://reviews.llvm.org/D34053
llvm-svn: 307651
Summary:
As of now, Polly uses llvm-config to set up LLVM dependencies in an out-of-tree build.
This is problematic for two reasons:
1) Right now, in-tree and out-of-tree builds in fact do different things. E.g., in an in-tree build, libPolly depends on a handful of LLVM libraries, while in an out-of-tree build it depends on all of them. This means that we often need to treat both paths separately.
2) I'm specifically unhappy with the way libPolly is linked right now, because it just blindly links against all the LLVM libs. That doesn't make a lot of sense. For instance, one of these libs is LLVMTableGen, which contains a command line definition of a -o option. This means that I can not link an out-of-tree libPolly into a tool which might want to offer a -o option as well.
This patch (mostly) drops the use of llvm-config in favor of LLVM's exported cmake package. However, building Polly with unittests requires access to the gtest sources (in the LLVM source tree). If we're building against an LLVM installation, this source tree is unavailable and must be specified. I'm using llvm-config to provide a default in this case.
Reviewers: Meinersbur, grosser
Reviewed By: grosser
Subscribers: tstellar, bollu, chapuni, mgorny, pollydev, llvm-commits
Differential Revision: https://reviews.llvm.org/D33299
llvm-svn: 307650
For the previous commit I accidentally added this change to lit.site.cfg, which
is autogenerated and was consequently not part of the previous commit.
llvm-svn: 307648
When providing the option "-polly-ast-print-accesses" Polly also prints the
memory accesses that are generated:
  #pragma known-parallel
  for (int c0 = 0; c0 <= 1023; c0 += 4)
    #pragma simd
    for (int c1 = c0; c1 <= c0 + 3; c1 += 1)
      Stmt_for_body(
        /* read  */ &MemRef_B[0]
        /* write */ MemRef_A[c1]
      );
This makes writing and debugging memory layout transformations easier.
Based on a patch contributed by Thomas Lang (ETH Zurich)
llvm-svn: 307579
- Check that we have invariant accesses.
- Use `-polly-use-llvm-names` for better names in the test.
- Rename test function to `f` for brevity.
llvm-svn: 307401
- This already works, but add this to ensure that there is no
regressions when I expand the invariant load hoisting ability of
`PPCGCodeGeneration`.
llvm-svn: 307398
- Instead of running with -O0, we enable the highest optimization level, but
then disable optimizations. This ensures that possibly important metadata
is still emitted.
- Update the code for attribute removal to work with latest LLVM
- Do not cut an arbitrary number of lines from the LL file. It is undocumented
why this was needed in the first place, and such a feature is likely to
break with trivial IR changes that may come in the future.
llvm-svn: 307355
- By definition, we can pass something as a `kill` to PPCG if we know
that no data can flow across a kill.
- This is useful for more complex examples where we have scalars that
are local to a scop.
- If the local is only used within a scop, we are free to kill it.
Differential Revision: https://reviews.llvm.org/D35045
llvm-svn: 307260
Summary:
Provide more context to the name of a GPU kernel by prefixing its name with the host function that calls it. E.g. The first kernel called by `gemm` would be `FUNC_gemm_KERNEL_0`.
Kernels currently follow the "kernel_#" (# = 0,1,2,3,...) nomenclature. This patch makes it easier to map host caller and device callee, especially when there are many kernels produced by Polly-ACC.
Reviewers: grosser, Meinersbur, bollu, philip.pfaffe, kbarton!
Reviewed By: grosser
Subscribers: nemanjai, pollydev
Tags: #polly
Differential Revision: https://reviews.llvm.org/D33985
llvm-svn: 307173
Polly did not use PPCG's live range reordering feature. Teach
PPCGCodeGeneration to use this.
Documentation on this is sparse, so much of the code is conservative.
We currently kill all phi nodes in a Scop by appending them to the
must_kill map we pass to PPCG. I do not have a proof of correctness,
but it seems to be intuitively correct.
We also do not handle `array_order`, which, quoting PPCG, is:
PPCG/gpu.h: "Order dependences on non-scalars."
It seems to consist of RAW dependences between arrays. We need to
pass this information for more complex privatization cases.
Differential Revision: https://reviews.llvm.org/D34941
llvm-svn: 307163
Summary: This is a general maintenance update
Reviewers: grosser
Subscribers: srhines, fedor.sergeev, pollydev, llvm-commits
Contributed-by: Maximilian Falkenstein <falkensm@student.ethz.ch>
Differential Revision: https://reviews.llvm.org/D34903
llvm-svn: 307090
This patch aims to implement the option of allocating new arrays created
by Polly on the heap instead of on the stack. To enable this option, a key named
'allocation' must be written in the imported json file with the value
'heap'.
We need such a feature because in a next iteration, we will implement a
mechanism of maximal static expansion which will need a way to allocate
arrays on the heap. Indeed, the expansion is very costly in terms of memory,
and doing the allocation on the stack is not worth considering.
The malloc and the free are added respectively at polly.start and
polly.exiting such that there is no use-after-free (for instance in case
of Scop in a loop) and such that all memory cells allocated with a
malloc are free'd when we don't need them anymore.
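For illustration, a rough sketch of the generated structure (block names shown as comments; the element type and size are invented):
```
#include <stdlib.h>

void scop_wrapper(int n) {
  /* polly.start: */
  double *Expanded = (double *)malloc(sizeof(double) * (size_t)n * (size_t)n);
  /* ... generated (expanded) scop code using Expanded ... */
  /* polly.exiting: */
  free(Expanded);
}
```
Placing the free at polly.exiting ensures the allocation is released on every exit from the optimized region, even when the scop itself sits inside a surrounding loop.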
We also add:
- In the class ScopArrayInfo, we add a boolean member called IsOnHeap
which represents whether the array is allocated on the heap or not.
- A new branch in the method allocateNewArrays in the ISLNodeBuilder for
the case of heap allocation. allocateNewArrays now takes a BBPair
containing polly.start and polly.exiting; it takes these
two blocks and adds the malloc and free calls respectively to
polly.start and polly.exiting.
- As IntPtrTy for the malloc call, we use the one from the DataLayout.
To do that, we have modified:
- createScopArrayInfo and getOrCreateScopArrayInfo such that they return
a non-const SAI, in order to be able to call setIsOnHeap in the
JSONImporter.
- executeScopConditionally such that it returns both the start block and the end
block of the scop, because we need these two blocks to be able to add
the malloc and the free calls at the right position.
Differential Revision: https://reviews.llvm.org/D33688
llvm-svn: 306540
This test fails, if polly is not linked into LLVM's tools. Our
lit site-config already deals with this by not adding the -load
option, if polly is linked into LLVM's tools.
llvm-svn: 306395
- In D33414, if any function call was found within a kernel, we would bail out.
- This is an over-approximation. This patch changes this by allowing the
`llvm.sqrt.*` family of intrinsics.
- This introduces an additional step when creating a separate llvm::Module
for a kernel (GPUModule). We now copy function declarations from the
original module to new module.
- We also populate IslNodeBuilder::ValueMap so it replaces the function
references to the old module to the ones in the new module
(GPUModule).
Differential Revision: https://reviews.llvm.org/D34145
llvm-svn: 306284
The condition that disallowed code generation in PPCGCodeGeneration with
invariant loads is not required. I haven't been able to construct a
counterexample where this generates invalid code.
Differential Revision: https://reviews.llvm.org/D34604
llvm-svn: 306245
This reduces the compilation time of one reduced test case from Android from
16 seconds to 100 milliseconds (we bail out), without negatively impacting any
other test case we currently have.
We still saw occasionally compilation timeouts on the AOSP buildbot. Hopefully,
those will go away with this change.
llvm-svn: 306235
r303971 added an assertion that SCEV addition involving an AddRec
and a SCEVUnknown must involve a dominance relation: either the
SCEVUnknown value dominates the AddRec's loop, or the AddRec's
loop header dominates the SCEVUnknown. This is generally fine
for most usage of SCEV because it isn't possible to write an
expression in IR which would violate it, but it's a bit inconvenient
here for polly.
To solve the issue, just avoid creating a SCEV expression which
triggers the assertion.
I'm not really happy with this solution, but I don't have any better
ideas.
Fixes https://bugs.llvm.org/show_bug.cgi?id=33464.
Differential Revision: https://reviews.llvm.org/D34259
llvm-svn: 305864
Ensure that all array base pointers are assigned before generating
aliasing metadata by allocating new arrays beforehand.
Before this patch, getBasePtr() returned nullptr for new arrays because
the arrays were created at a later point. Nullptr did not match to any
array after the created array base pointers have been assigned and when
the loads/stores are generated.
llvm-svn: 305675
In r304074 we introduced a patch to accept results from side-effect free
functions into SCEV modeling. This causes rejection of cases where the
call is happening outside the SCoP. This patch checks if the call is
outside the Region and treats the results as a parameter (SCEVType::PARAM)
to the SCoP instead of returning SCEVType::INVALID.
Patch by Sameer Abu Asal.
llvm-svn: 305423
In `PPCGCodeGeneration`, we try to take the references of every `Value`
that is used within a Scop to offload to the kernel. This occurs in
`GPUNodeBuilder::createLaunchParameters`.
This breaks if one of the values is a function pointer, since one of
these cases will trigger:
1. We try to take the reference of an intrinsic function, and this
breaks at `verifyModule`, since it is illegal to take the reference of
an intrinsic.
2. We manage to take the reference to a function, but this fails at
`verifyModule` since the function will not be present in the module that
is created in the kernel.
3. Even if `verifyModule` succeeds (which should not occur), we would
then try to call a *host function* from the *device*, which is
illegal runtime behaviour.
So, we disable this entire range of possibilities by simply not allowing
function references within a `Scop` which corresponds to a kernel.
However, note that this is too conservative. We *can* allow intrinsics
within kernels if the backend can lower the intrinsic correctly. For
example, an intrinsic like `llvm.powi.*` can actually be lowered by the `NVPTX`
backend.
We will now gradually whitelist intrinsics which are known to be safe.
Differential Revision: https://reviews.llvm.org/D33414
llvm-svn: 305185
Iterate through memory accesses in execution order (first all implicit reads,
then explicit accesses, then implicit writes).
In the test case this caused an implicit load to be handled as if it were loaded
after the write, that is, the value would be written before it is available.
This fixes llvm.org/PR33323
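A minimal sketch of the intended iteration order (the enum and helper are invented for illustration):
```
#include <initializer_list>
#include <vector>

enum class Kind { ImplicitRead, Explicit, ImplicitWrite };

struct Access { Kind K; /* ... */ };

// Visit a statement's accesses in execution order: all implicit reads first,
// then the explicit accesses, then the implicit writes.
template <typename Fn>
void forEachInExecutionOrder(const std::vector<Access> &Accesses, Fn Visit) {
  for (Kind Phase : {Kind::ImplicitRead, Kind::Explicit, Kind::ImplicitWrite})
    for (const Access &A : Accesses)
      if (A.K == Phase)
        Visit(A);
}
```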
llvm-svn: 304810
Summary:
The RegionGenerator traditionally kept a BlockMap that mapped from original
basic blocks to newly generated basic blocks. With the introduction of partial
writes such a 1:1 mapping is not possible any more, as a single basic block
can be code generated into multiple basic blocks. Hence, depending on the use
case we need to either use the first basic block or the last basic block.
This is intended to address the last four cases of incorrect code generation
in our AOSP buildbot and hopefully should turn it green.
Reviewers: Meinersbur, bollu, gareevroman, efriedma, huihuiz, sebpop, simbuerg
Reviewed By: Meinersbur
Subscribers: pollydev, llvm-commits
Tags: #polly
Differential Revision: https://reviews.llvm.org/D33767
llvm-svn: 304808
This adds test coverage for regions with non-affine loops, which we
unfortunately missed when committing this feature years ago. We will add
more test coverage over time.
llvm-svn: 304672
- Add a counter that is incremented once on exit from a scop.
- Test cases got split into two: one to test the cycles, and another one
to test trip counts.
- Sample output:
```name=sample-output.txt
scop function, entry block name, exit block name, total time, trip count
warmup, %entry.split, %polly.merge_new_and_old, 5180, 1
f, %entry.split, %polly.merge_new_and_old, 409944, 500
g, %entry.split, %polly.merge_new_and_old, 1226, 1
```
Differential Revision: https://reviews.llvm.org/D33822
llvm-svn: 304543
Previously, we would generate one performance counter for all scops.
Now, we generate both the old information, as well as a per-scop
performance counter to generate finer grained information.
This patch needed a way to generate a unique name for a `Scop`.
The start region, end region, and function name combined provides a
unique `Scop` name. So, `Scop` has a new public API to provide its start
and end region names.
Differential Revision: https://reviews.llvm.org/D33723
llvm-svn: 304528
Ignored intrinsics are ignored at code generation, therefore do not
need to be part of the instruction list.
Specifically, llvm.lifetime.* intrinsics are removed before code
generation; referencing them would cause a use-after-free error.
Contributed-by: Nandini Singhal <cs15mtech01004@iith.ac.in>
Differential Revision: https://reviews.llvm.org/D33768
llvm-svn: 304483
A partial write is a write where the domain of the values written is a subset of
the execution domain of the parent statement containing the write. Originally,
we directly checked this subset relation whereas it is indeed only important
that the subset relation holds for the parameter values that are known to be
valid in the execution context of the scop. We update our check to avoid the
unnecessary introduction of partial writes in situations where the write appears
to be partial without context information, but where context information allows
us to understand that a full write can be generated.
This change fixes (hides) a recent regression introduced in r303517, which broke
our AOSP builds. The part that is correctly fixed in this change is that we do
not any more unnecessarily generate a partial write. This is good performance
wise and, as we currently do not yet explicitly introduce partial writes in the
default configuration, this also hides possible bugs in the partial writes
implementation. The crashes that we have originally seen were caused by such
a bug, where partial writes were incorrectly generated in region statements. An
additional patch in a subsequent commit is needed to address this problem.
Reported-by: Eli Friedman <efriedma@codeaurora.org>
Differential Revision: https://reviews.llvm.org/D33759
llvm-svn: 304398
Such instructions are generated on-demand by the CodeGenerator and thus
do not need representation in a statement.
Differential Revision: https://reviews.llvm.org/D33642
llvm-svn: 304151
Certain affine memory accesses which we model today might contain products of
parameters which we might combine into a new parameter to be able to create an
affine expression that represents these memory accesses. Especially in the
context of OpenCL, this approach loses information, as memory accesses such as
A[get_global_id(0) * N + get_global_id(1)] are assumed to be linear. We
correctly recover their multi-dimensional structure by assuming that parameters
that are the result of a function call at IR level likely are not parameters,
but indeed induction variables. The resulting access is now
A[get_global_id(0)][get_global_id(1)] for an array A[][N].
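An illustrative OpenCL-like sketch of such an access (get_global_id is declared here only to keep the snippet self-contained; N is a parameter):
```
extern unsigned long get_global_id(unsigned int dim);

void kernel_like(float *A, unsigned long N) {
  /* Linearized form as seen in the IR; recovered by Polly as the
   * two-dimensional access A[get_global_id(0)][get_global_id(1)]
   * of an array A[][N]. */
  A[get_global_id(0) * N + get_global_id(1)] = 42.0f;
}
```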
llvm-svn: 304075
Side-effect free function calls with only constant parameters can be easily
re-generated and consequently do not prevent us from modeling a SCEV. This
change allows array subscripts to reference function calls such as
'get_global_id()' as used in OpenCL.
We use the function name plus the constant operands to name the parameter. This
is possible as the function name is required and is not dropped in release
builds the same way names of llvm::Values are dropped. We also provide more
readable names for common OpenCL functions, to make it easy to understand the
polyhedral model we generate.
llvm-svn: 304074
Summary: This patch outputs the list of all instructions in BlockStmts.
Reviewers: Meinersbur, grosser, bollu
Subscribers: bollu, llvm-commits, pollydev
Differential Revision: https://reviews.llvm.org/D33163
llvm-svn: 304062
Summary:
My goal is to make the newly added `AllowWholeFunctions` options more usable/powerful.
The changes to ScopBuilder.cpp are exclusively checks to prevent `Region.getExit()` from being dereferenced, since Top Level Regions (TLRs) don't have an exit block.
In ScopDetection's `isValidCFG`, I removed a check that disallowed ReturnInstructions to have return values. This might of course have been intentional, so I would welcome your feedback on this and maybe a small explanation why return values are forbidden. Maybe it can be done but needs more changes elsewhere?
The remaining changes in ScopDetection are simply to consider the AllowWholeFunctions option in more places, i.e. allow TLRs when it is set and once again avoid dereferencing `getExit()` if it doesn't exist.
Finally, in ScopHelper.cpp I extended `polly::isErrorBlock` to handle regions without exit blocks as well: The original check was if a given BasicBlock dominates all predecessors of the exit block. Therefore I do the same for TLRs by regarding all BasicBlocks terminating with a ReturnInst as predecessors of a "virtual" function exit block.
Patch by: Lukas Boehm
Reviewers: philip.pfaffe, grosser, Meinersbur
Reviewed By: grosser
Subscribers: pollydev, llvm-commits, bollu
Tags: #polly
Differential Revision: https://reviews.llvm.org/D33411
llvm-svn: 303790
Enable the use of partial writes for PHI write accesses with a switch.
This simply skips the test for whether a PHI write would be partial.
The analogous test for partial value writes also protects against partial reads,
which we do not support (yet). It is possible to test for partial reads
separately such that we could skip the partial write check as well. In
case this turns out to be useful, I can implement it as well.
Differential Revision: https://reviews.llvm.org/D33487
llvm-svn: 303762
Without this patch, the JSONImporter did not verify whether the data it loads
is correct (Bug llvm.org/PR32543). I added some checks in the
JSONImporter class and some test cases.
Here are the checks (and test cases) I added:
JSONImporter::importContext
- The "context" key does not exist.
- The context was not parsed successfully by ISL.
- The isl_set has the wrong number of parameters.
- The isl_set is not a parameter set.
JSONImporter::importSchedule
- The "statements" key does not exist.
- There is not the right number of statements in the file.
- The "schedule" key does not exist.
- The schedule was not parsed successfully by ISL.
JSONImporter::importAccesses
- The "statements" key does not exist.
- There is not the right number of statements in the file.
- The "accesses" key does not exist.
- There is not the right number of memory accesses in the file.
- The "relation" key does not exist.
- The memory access was not parsed successfully by ISL.
JSONImporter::areArraysEqual
- The "type" key does not exist.
- The "sizes" key does not exist.
- The "name" key does not exist.
JSONImporter::importArrays
/!\ Do not check whether there is a key named "arrays", because its absence
is not considered an error.
All checks are already in place or implemented in
JSONImporter::areArraysEqual.
Contributed-by: Nicolas Bonfante <nicolas.bonfante@insa-lyon.fr>
Differential Revision: https://reviews.llvm.org/D32739
llvm-svn: 303759
This speeds up scop modeling for scops with many redundant existentially
quantified constraints. For the attached test case, this change reduces
scop modeling time from minutes (hours?) to 0.15 seconds.
This change resolves a compilation timeout on the AOSP build.
Thanks Eli for reporting _and_ reducing the test case!
Reported-by: Eli Friedman <efriedma@codeaurora.org>
llvm-svn: 303600
The SCEVs of loops surrounding the escape users of a merge block are
forgotten, so that loop trip counts based on old values can be revoked.
This fixes llvm.org/PR32536
Contributed-by: Baranidharan Mohan <mbdharan@gmail.com>
Differential Revision: https://reviews.llvm.org/D33195
llvm-svn: 303561
Allow the BlockGenerator to generate memory writes that are not defined
over the complete statement domain, but only over a subset of it. It
generates a condition that evaluates to 1 when executing the subdomain,
and only then executes the access.
Only write accesses are supported. Read accesses would require a PHINode
which has a value if the access is not executed.
Partial writes make DeLICM able to apply mappings that are not defined
over the entire domain (for instance, a branch that leaves a loop with
a PHINode in its header; a MemoryKind::PHI write when leaving is never
read by its PHI read).
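A rough sketch of the generated structure for a partial write (the names and the concrete subdomain condition are invented):
```
double compute(int i);

void partial_write_sketch(double *A, int n, int m) {
  for (int i = 0; i < n; i++) {
    double val = compute(i);
    if (i < m)      /* condition derived from the write's subdomain */
      A[i] = val;   /* the access executes only for those instances */
  }
}
```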
Differential Revision: https://reviews.llvm.org/D33255
llvm-svn: 303517
A test case with a GPU runline was added without setting 'REQUIRES=pollyacc'. We
drop the GPU run line, as the basic functionality can already be tested with
the normal code generation.
llvm-svn: 303485
- We use the outermost dimension of arrays since we need this
information to generate GPU transfers.
- In general, if we do not know the outermost dimension of the array
(because the indexing expression is non-affine, for example) then we
simply cannot generate transfer code.
- However, for Fortran arrays, we can use the Fortran array
representation which stores the dimensions of all arrays.
- This patch uses the Fortran array representation to generate code that
computes the outermost dimension size.
Differential Revision: https://reviews.llvm.org/D32967
llvm-svn: 303429
Summary:
- Rename the global / local naming convention, which did not make much sense,
to Visible / Invisible, where Visible refers to whether the ALLOCATE
call to the Fortran array is present in the current module or not.
- This match now works both on cross-Fortran-module globals and on
parameters to functions, since neither of them is necessarily allocated
at the point of their usage.
- Add testcase that matches against both a load and a store against
function parameters.
Differential Revision: https://reviews.llvm.org/D33190
llvm-svn: 303356
- This breaks the previous assumption that Fortran Arrays are `GlobalValue`.
- The names of functions were getting unwieldy. So, I renamed the
Fortran related functions.
Differential Revision: https://reviews.llvm.org/D33075
llvm-svn: 303040
At the time of code generation, an instruction with an llvm intrinsic is ignored
in copyBB. However, if the value of the instruction is used later in the
program, the value needs to be synthesized. This was causing some issues
with the instructions being generated in a hoisted basic block.
Removing llvm.expect from the list of ignored intrinsics fixes this bug.
This resolves http://llvm.org/PR32324.
Contributed-by: Annanay Agarwal <cs14btech11001@iith.ac.in>
Tags: #polly
Differential Revision: https://reviews.llvm.org/D32992
llvm-svn: 303006
Removal of overwritten writes currently encompasses all the cases
of the identical write removal.
There is an observable behavioral change in that the last, instead
of the first, MemoryAccess is kept. This should not affect the
generated code, however.
Differential Revision: https://reviews.llvm.org/D33143
llvm-svn: 302987
Remove memory writes that are overwritten by later writes. This works
for StoreInsts:
  store double 21.0, double* %A
  store double 42.0, double* %A
for scalar writes at the end of a statement, and for mixes of these.
Multiple writes can be the result of DeLICM, which might map multiple
writes to the same location when it knows that these do no conflict
(for instance because they write the same value). Such writes
interfere with pattern-matched optimization such as gemm and may not
get removed by other LLVM passes after code generation.
Differential Revision: https://reviews.llvm.org/D33142
llvm-svn: 302986
- Move the testcases to ScopInfo/ since the processing takes place in
ScopBuilder.
- Cleanup testcases, run -polly-canonicalize on them, find minimal set
of opt parameters.
llvm-svn: 302886
Today Polly generates induction variables in this way:
  polly.indvar = phi 0, polly.indvar.next
  ...
  polly.indvar.next = polly.indvar + stride
  polly.loop_cond = predicate polly.indvar, (UB - stride)
instead of:
  polly.indvar = phi 0, polly.indvar.next
  ...
  polly.indvar.next = polly.indvar + stride
  polly.loop_cond = predicate polly.indvar.next, UB
The way Polly generates induction variables causes some problems in the indvar simplify pass.
This patch makes Polly generate the latter form, by assuming the induction variable never overflows.
Differential Revision: https://reviews.llvm.org/D33089
llvm-svn: 302866
After DeLICM, it is possible to have two writes of the same value to
the same location in the same statement when it determined that those
writes do not conflict (write the same value).
Teach -polly-simplify to remove one of the writes. It interferes with
the pattern matching of matrix-multiplication kernels and also seems
not to be optimized away by LLVM.
The algorithm is simple, has O(n^2) behaviour (n = max number of
MemoryAccesses in a statement) and only matches the most obvious cases,
but seems to be enough to pattern-match Boost ublas gemm.
Cases not handled include:
- StoreInst instructions (a.k.a. explicit writes), since the value might
be loaded or overwritten between the two stores.
- PHINodes, especially LCSSA, when one PHI's value matches another's.
- Partial writes (in preparation)
llvm-svn: 302805
Some isl functions can simplify their __isl_keep arguments. The
argument object after the call uses different constraints to represent
the same set. Different constraints can result in different outputs
when printed to a string.
In assert builds additional isl functions are called (e.g. inside
assert()); these can change the internal representation of their read-only
arguments such that printed strings are different in debug and non-debug
builds.
What happened here is that a call to isl_set_is_equal inside an assert
in getScatterFor normalizes one of its arguments such that one redundant
constraint is removed. The redundant constraint therefore does not appear
in the string representing the domain, which FileCheck notices as a
regression test failure compared to a build with assertions disabled.
This fix removes the redundant constraint from the domain from the start such
that the redundant constraint is removed in assert and non-assert builds.
Isl adds a flag to such sets such that the removal of redundancies is
not done multiple times (here: by isl_set_is_equal).
Thanks to Tobias Grosser for reporting and hinting to the cause.
llvm-svn: 302711
Add the ability to tag certain memory accesses as those belonging to
Fortran arrays. We do this by pattern matching against known patterns
of Dragonegg's LLVM IR output from Fortran code.
Fortran arrays have metadata stored with them in a struct. This struct
is called the "Fortran array descriptor", and a reference to this is
stored in each MemoryAccess.
Differential Revision: https://reviews.llvm.org/D32639
llvm-svn: 302653
Summary:
In case two arrays share base pointers in the same invariant load equivalence
class, we canonicalize all memory accesses to the first of these arrays
(according to their order in the equivalence class).
This enables us to optimize kernels such as boost::ublas by ensuring that
different references to the C array are interpreted as accesses to the same
array. Before this change the runtime alias check for ublas would fail, as it
would assume models of the C array with differing (but identically valued) base
pointers would reference distinct regions of memory whereas the referenced
memory regions were indeed identical.
As part of this change we remove most of the MemoryAccess::get*BaseAddr
interface. We removed already all references to get*BaseAddr in previous
commits to ensure that no code relies on matching base pointers between
memory accesses and scop arrays -- except for three remaining uses where we
need the original base pointer. We document for these situations that
MemoryAccess::getOriginalBaseAddr may return a base pointer that is distinct
to the base pointer of the scop array referenced by this memory access.
Reviewers: sebpop, Meinersbur, zinob, gareevroman, pollydev, huihuiz, efriedma, jdoerfert
Reviewed By: Meinersbur
Subscribers: etherzhhb
Tags: #polly
Differential Revision: https://reviews.llvm.org/D28518
llvm-svn: 302636
Summary: PPCGCodeGeneration now attaches the size of the kernel launch parameters at the end of the parameter list. For the existing CUDA Runtime, this gets ignored, but the OpenCL Runtime knows to check for kernel-argument size at the end of the parameter list. (The resulting parameters list is twice as long. This has been accounted for in the corresponding test cases).
Reviewers: grosser, Meinersbur, bollu
Reviewed By: bollu
Subscribers: nemanjai, yaxunl, Anastasia, pollydev, llvm-commits
Tags: #polly
Differential Revision: https://reviews.llvm.org/D32961
llvm-svn: 302515
Summary:
When compiling for GPU, one can now choose to compile for OpenCL or CUDA,
with the corresponding polly-gpu-runtime flag (libopencl / libcudart). The
GPURuntime library (GPUJIT) has been extended with the OpenCL Runtime library
for that purpose, correctly choosing the library calls corresponding to the
option chosen when compiling (via different initialization calls).
Additionally, a specific GPU Target architecture can now be chosen with -polly-gpu-arch (only nvptx64 implemented thus far).
Reviewers: grosser, bollu, Meinersbur, etherzhhb, singam-sanjay
Reviewed By: grosser, Meinersbur
Subscribers: singam-sanjay, llvm-commits, pollydev, nemanjai, mgorny, yaxunl, Anastasia
Tags: #polly
Differential Revision: https://reviews.llvm.org/D32431
llvm-svn: 302379
Extend the Knowledge class to store information about the contents
of array elements and which values are written. Two knowledges do
not conflict if the known content is the same. The content information
is computed from writes to and loads from the array elements, and
represented by "ValInst": isl spaces that compare equal if the value
represented is the same.
Differential Revision: https://reviews.llvm.org/D31247
llvm-svn: 302339
SCoPs with unfeasible runtime context are thrown away and therefore
do not need their uses verified.
The added test case requires the complexity limit to be exceeded.
Normally, error statements are removed from the SCoP and for that
reason are skipped during the verification. If there is an unfeasible
runtime context (here: because of the complexity limit being reached),
the removal of error statements and other SCoP construction steps are
skipped to not waste time. Error statements are not modeled in SCoPs
and therefore have no requirements on whether the scalars used in
them are available.
llvm-svn: 302234
Since r294891, in MemoryAccess::computeBoundsOnAccessRelation(), we skip
manually bounding the access relation in case the parameter of the load
instruction is already a wrapped set. Later on we assume that the lower
bound on the set is always smaller than or equal to the upper bound on the
set. Bug 32715 manages to construct a sign-wrapped set, in which case
the assertion does not necessarily hold. Fix this by handling a sign-wrapped
set like a normal wrapped set, that is, by skipping the
computation.
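A minimal sketch of the extended bail-out condition (using LLVM's ConstantRange API; the helper name is made up):
```lang=cpp, name=sign-wrapped-skip.cpp
#include "llvm/IR/ConstantRange.h"

// Skip the manual bounding whenever the range gives no usable ordering of
// lower and upper bound: full, wrapped, or (new with this fix) sign-wrapped.
static bool skipManualBounding(const llvm::ConstantRange &Range) {
  return Range.isFullSet() || Range.isWrappedSet() || Range.isSignWrappedSet();
}
```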
Contributed-by: Maximilian Falkenstein <falkensm@student.ethz.ch>
Reviewers: grosser
Subscribers: pollydev, llvm-commits
Tags: #Polly
Differential Revision: https://reviews.llvm.org/D32893
llvm-svn: 302231
This reverts commit 17a84e414adb51ee375d14836d4c2a817b191933.
Patches should have been submitted in the order of:
1. D32852
2. D32854
3. D32431
I mistakenly pushed D32431(3) first. Reverting to push in the correct
order.
llvm-svn: 302217
Summary:
When compiling for GPU, one can now choose to compile for OpenCL or CUDA,
with the corresponding polly-gpu-runtime flag (libopencl / libcudart). The
GPURuntime library (GPUJIT) has been extended with the OpenCL Runtime library
for that purpose, correctly choosing the library calls corresponding to the
option selected at compile time (via different initialization calls).
Additionally, a specific GPU target architecture can now be chosen with -polly-gpu-arch (only nvptx64 implemented thus far).
Reviewers: grosser, bollu, Meinersbur, etherzhhb, singam-sanjay
Reviewed By: grosser, Meinersbur
Subscribers: singam-sanjay, llvm-commits, pollydev, nemanjai, mgorny, yaxunl, Anastasia
Tags: #polly
Differential Revision: https://reviews.llvm.org/D32431
llvm-svn: 302215
The test subdirectory POLLY_TEST_DIRECTORIES was heavily outdated and
only used in out-of-LLVM-tree builds
(to generate polly-test-${subdir} targets).
llvm-svn: 302142
This makes sure we still test the case in which a PHI node cannot be analyzed by
scalar evolution and consequently must be code generated explicitly. As
Michael's optimization triggers only on a very specific "add %iv, %step"
pattern, just changing 'add' to 'mul' adds back test coverage.
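For illustration, a C analogue of the changed pattern (hedged, not the actual test case): a multiplicatively updated induction variable is not an add-recurrence, so scalar evolution cannot analyze the PHI.
```lang=cpp, name=mul-iv-example.cpp
// The 'mul %iv, %step' update prevents ScalarEvolution from modeling %iv as
// an add-recurrence, unlike the 'add %iv, %step' variant it can analyze.
long sumGeometric(long n, long step) {
  long sum = 0;
  for (long iv = 1; iv < n; iv *= step)
    sum += iv;
  return sum;
}
```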
llvm-svn: 302132
LLVM-IR names are commonly available in debug builds, but often not in release
builds. Hence, using LLVM-IR names to identify statements or memory references
makes the behavior of Polly depend on the compile mode. This is
undesirable. Hence, we now just number the statements instead of using LLVM-IR
names to identify them (this issue has previously been brought up by Zino
Benaissa).
However, as LLVM-IR names help in making test cases more readable, we add an
option '-polly-use-llvm-names' to still use LLVM-IR names. This flag is by
default set in the polly tests to make test cases more readable.
This change reduces the time in ScopInfo from 32 seconds to 2 seconds for the
following test case provided by Eli Friedman <efriedma@codeaurora.org> (already
used in one of the previous commits):
struct X { int x; };
void a();
#define SIG (int x, X **y, X **z)
typedef void (*fn)SIG;
#define FN { for (int i = 0; i < x; ++i) { (*y)[i].x += (*z)[i].x; } a(); }
#define FN5 FN FN FN FN FN
#define FN25 FN5 FN5 FN5 FN5
#define FN125 FN25 FN25 FN25 FN25 FN25
#define FN250 FN125 FN125
#define FN1250 FN250 FN250 FN250 FN250 FN250
void x SIG { FN1250 }
For a larger benchmark I have on-hand (10000 loops), this reduces the time for
running -polly-scops from 5 minutes to 4 minutes, a reduction by 20%.
The reason for this large speedup is that our previous use of printAsOperand
had a quadratic cost, as for each printed and unnamed operand the full function
was scanned to find the instruction number that identifies the operand.
We do not need to adjust the way memory reference ids are constructed, as
they do not use LLVM values.
Reviewed by: efriedma
Tags: #polly
Differential Revision: https://reviews.llvm.org/D32789
llvm-svn: 302072
- Fixes breakage from commit 5536f.
- Interference with commit 764f3 caused the test case to fail. Reverting
764f3 allows commit 5536f to succeed.
- The generated kernel code was slightly different due to 764f3, which
caused the test case to fail.
llvm-svn: 302021
Before this change a memory reference identifier had the form:
<STMT>_<ACCESSTYPE><ID>_<MEMREF>, e.g., Stmt_bb9_Write0_MemRef_tmp11
After this change, we use the format:
<STMT>_<ACCESSTYPE><ID>, e.g., Stmt_bb9_Write0
The name of the array that is accessed through a memory reference is not
necessary to uniquely identify a memory reference, but was only added to
provide additional information for debugging. We drop this information now
for the following two reasons:
1) This shortens the names and consequently improves readability
2) This removes a second location where we decide on the name of a scop array,
leaving us only with the location where the actual scop array is created.
After 2), having only a single location where scop arrays are named will allow
us to change their naming convention more easily, which we will do in a future
commit to reduce compilation time.
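A minimal sketch of the shortened naming scheme described above (the helper is hypothetical; the real logic lives in Polly's MemoryAccess):
```lang=cpp, name=access-id-sketch.cpp
#include <string>

std::string makeAccessId(const std::string &StmtName, bool IsWrite,
                         unsigned Idx) {
  // <STMT>_<ACCESSTYPE><ID>, without the trailing _<MEMREF> part.
  return StmtName + "_" + (IsWrite ? "Write" : "Read") + std::to_string(Idx);
}

// makeAccessId("Stmt_bb9", /*IsWrite=*/true, 0) == "Stmt_bb9_Write0"
```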
llvm-svn: 302004
This makes it easier to read and possibly even modify the test cases, as there
is no need to keep the variable increment in steps of one. More importantly, by
using explicit variable names we do not need to rely on the implicit numbering
of statements when dumping the scop information. In a future commit, this
implicit numbering will likely no longer be used to refer to LLVM-IR values, as
it is very expensive to construct.
llvm-svn: 301689
generation.
This needs changes to GPURuntime to expose synchronization between host
and device.
1. Needs better function naming; I want a better name than
"getOrCreateManagedDeviceArray".
2. DeviceAllocations is used by both the managed memory and the
non-managed memory path. This exploits the fact that the two code paths
are never run together. I'm not sure if this is the best design decision.
Reviewed by: PhilippSchaad
Tags: #polly
Differential Revision: https://reviews.llvm.org/D32215
llvm-svn: 301640
When we introduced in r297375 support for hoisting loads that are known
to be dereferenceable without any conditional guard, we forgot to keep the check
that verifies that no other write into the very same location exists. This
change now ensures that dereferenceable loads are allowed to access everything,
but can only be hoisted if no conflicting write exists.
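A small example of the situation this check guards against (illustrative only, not from the test suite):
```lang=cpp, name=conflicting-write-example.cpp
// p is known to be dereferenceable, so the load *p could in principle be
// hoisted unconditionally; the conflicting store in the loop body forbids it.
int accumulate(int *p, int n) {
  int sum = 0;
  for (int i = 0; i < n; i++) {
    *p = i;      // write into the very same location
    sum += *p;   // must not be hoisted above the write
  }
  return sum;
}
```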
This resolves llvm.org/PR32778
Reported-by: Huihui Zhang <huihuiz@codeaurora.org>
llvm-svn: 301582
Added a small change to the way pointer arguments are set in the kernel
code generation. The way the pointer is now retrieved specifically requests
the global address space annotation. This is necessary if the IR is to be
run through NVPTX to generate OpenCL-compatible PTX.
The changes do not affect the PTX Strings generated for the CUDA target
(nvptx64-nvidia-cuda), but are necessary for OpenCL (nvptx64-nvidia-nvcl).
Additionally, the data layout has been updated to what the NVPTX Backend requests/recommends.
Contributed-by: Philipp Schaad
Reviewers: Meinersbur, grosser, bollu
Reviewed By: grosser, bollu
Subscribers: jlebar, pollydev, llvm-commits, nemanjai, yaxunl, Anastasia
Tags: #polly
Differential Revision: https://reviews.llvm.org/D32215
llvm-svn: 301299
Earlier, the call to buildFlow was:
WAR = buildFlow(Write, Read, MustWrite, Schedule).
This meant that Read could block another Read, since must-sources can
block each other.
Fixed the call to buildFlow to compute WAR correctly. The resulting
code needs to do some ISL juggling to get the output we want.
Bug report: https://bugs.llvm.org/show_bug.cgi?id=32623
Reviewers: Meinersbur
Tags: #polly
Differential Revision: https://reviews.llvm.org/D32011
llvm-svn: 301266
The isl unittest modified its PATH variable to point to the LLVM bin dir.
When building out of the LLVM tree, that directory does not contain the
polly-isl-test executable, hence the test fails.
Ensure that the polly-isl-test is written to a bin directory in the
build root, just like it would happen in an inside-LLVM build.
Then, change PATH to include that dir such that the executable in it
is prioritized before any other location.
llvm-svn: 301096
Dimensions of band nodes can be implicitly permuted by the algorithm applied
during the schedule generation.
For example, in case of the following matrix-matrix multiplication,
for (i = 0; i < 1024; i++)
  for (k = 0; k < 1024; k++)
    for (j = 0; j < 1024; j++)
      C[i][j] += A[i][k] * B[k][j];
it can produce the following schedule tree
domain: "{ Stmt_for_body6[i0, i1, i2] : 0 <= i0 <= 1023 and 0 <= i1 <= 1023 and
0 <= i2 <= 1023 }"
child:
schedule: "[{ Stmt_for_body6[i0, i1, i2] -> [(i0)] },
{ Stmt_for_body6[i0, i1, i2] -> [(i1)] },
{ Stmt_for_body6[i0, i1, i2] -> [(i2)] }]"
permutable: 1
coincident: [ 1, 1, 0 ]
The current implementation of the pattern matching optimizations relies on the
initial ordering of dimensions. Otherwise, it can produce miscompilations
(e.g., [1]).
This patch helps to restore the initial ordering of dimensions by recreating
the band node when the corresponding conditions are satisfied.
Refs.:
[1] - https://bugs.llvm.org/show_bug.cgi?id=32500
Reviewed-by: Michael Kruse <llvm@meinersbur.de>
Differential Revision: https://reviews.llvm.org/D31741
llvm-svn: 299662
Because Polly exposes parameters that directly influence tile size
calculations, one can set up situations like a divide-by-zero.
Check against a possible divide-by-zero in getMacroKernelParams
and return early.
Also assert at the end of getMacroKernelParams that the block sizes
computed for matrices are positive (>= 1).
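A hedged sketch of the kind of guard described (struct and parameter names are assumed and simplified, not Polly's actual implementation):
```lang=cpp, name=macro-kernel-guard.cpp
#include <cassert>

struct MacroKernelParamsTy { long Mc, Nc, Kc; };

MacroKernelParamsTy getMacroKernelParams(long CacheSize, long Associativity,
                                         long ElementSize) {
  // Return early instead of dividing by user-provided zero parameters.
  if (CacheSize <= 0 || Associativity <= 0 || ElementSize <= 0)
    return {1, 1, 1};
  long Kc = CacheSize / (Associativity * ElementSize);
  MacroKernelParamsTy Params = {1, 1, Kc < 1 ? 1 : Kc};
  assert(Params.Mc >= 1 && Params.Nc >= 1 && Params.Kc >= 1 &&
         "block sizes must be positive");
  return Params;
}
```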
Tags: #polly
Differential Revision: https://reviews.llvm.org/D31708
llvm-svn: 299633
The current StackColoring algorithm does not correctly handle the
situation when some, but not all paths from a BB to the entry node
cross a llvm.lifetime.start. According to an interpretation of the
language reference at
http://llvm.org/docs/LangRef.html#llvm-lifetime-start-intrinsic
this might be correct, but it would cost too much effort to handle
in StackColoring.
To be on the safe side, remove all lifetime markers even in the original
code version (they have never been copied to the optimized version)
to ensure that no path to the entry block will cross a
llvm.lifetime.start.
The same principle applies to paths between a function return and the
llvm.lifetime.end marker, so we remove those markers as well.
This fixes llvm.org/PR32251.
Also see the discussion at
http://lists.llvm.org/pipermail/llvm-dev/2017-March/111551.html
llvm-svn: 299585
= Change of WAR, WAW generation: =
- `buildFlow(Sink, MustSource, MaySource, Schedule)` treats any flow of the form
`sink <- may source <- must source` as a *may* dependence.
- we used to call:
```lang=cpp, name=old-flow-call.cpp
Flow = buildFlow(MustWrite, MustWrite, Read, Schedule);
WAW = isl_union_flow_get_must_dependence(Flow);
WAR = isl_union_flow_get_may_dependence(Flow);
```
- This caused some WAW dependences to be treated as WAR dependences.
- Incorrect semantics.
- Now, we call WAR and WAW correctly.
== Correct WAW: ==
```lang=cpp, name=new-waw-call.cpp
Flow = buildFlow(Write, MustWrite, MayWrite, Schedule);
WAW = isl_union_flow_get_may_dependence(Flow);
isl_union_flow_free(Flow);
```
== Correct WAR: ==
```lang=cpp, name=new-war-call.cpp
Flow = buildFlow(Write, Read, MustWrite, Schedule);
WAR = isl_union_flow_get_must_dependence(Flow);
isl_union_flow_free(Flow);
```
- We want the "shortest" WAR possible (exact dependences).
- We mark all the *must-writes* as may-sources and reads as must-sources.
- Then, we ask for *must* dependence.
- This removes all the reads that flow through a *must-write*
before reaching a sink.
- Note that we only block earlier writes with *must-writes*. This is
intuitively correct, as we do not want may-writes to block
must-writes.
- Leaves us with direct (R -> W).
- This affects reduction generation since RED is built using WAW and WAR.
= New StrictWAW for Reductions: =
- We used to call:
```lang=cpp,name=old-waw-war-call.cpp
Flow = buildFlow(MustWrite, MustWrite, Read, Schedule);
WAW = isl_union_flow_get_must_dependence(Flow);
WAR = isl_union_flow_get_may_dependence(Flow);
```
- This *is* the right model of WAW we need for reductions, just not in general.
- Reductions need to track only *strict* WAW, without any interfering reductions.
= Explanation: Why the new WAR dependences in tests are correct: =
- We no longer set WAR = WAR - WAW
- Hence, we will have WAR dependences that were originally removed.
- These may look incorrect, but in fact make sense.
== Code: ==
```lang=llvm, name=new-war-dependence.ll
; void manyreductions(long *A) {
; for (long i = 0; i < 1024; i++)
; for (long j = 0; j < 1024; j++)
; S0: *A += 42;
;
; for (long i = 0; i < 1024; i++)
; for (long j = 0; j < 1024; j++)
; S1: *A += 42;
;
```
=== WAR dependence: ===
{ S0[1023, 1023] -> S1[0, 0] }
- Between `S0[1023, 1023]` and `S1[0, 0]`, we will have the dependences:
```lang=cpp, name=dependence-incorrect, counterexample
S0[1023, 1023]:
*-- tmp = *A (load0)--*
WAR 2 add = tmp + 42 |
*-> *A = add (store0) |
WAR 1
S1[0, 0]: |
tmp = *A (load1) |
add = tmp + 42 |
A = add (store1)<-*
```
- One may assume that WAR2 *hides* WAR1 (since store0 happens before
store1). However, within a statement, Polly has no idea about the
ordering of loads and stores.
- Hence, according to Polly, the code may have looked like this:
```lang=cpp, name=dependence-correct
S0[1023, 1023]:
A = add (store0)
tmp = A (load0) ---*
add = A + 42 |
WAR 1
S1[0, 0]: |
tmp = A (load1) |
add = A + 42 |
A = add (store1) <-*
```
- So, Polly generates (correct) WAR dependences. It does not make sense
to remove these dependences, since they are correct with respect to
Polly's model.
Reviewers: grosser, Meinersbur
Tags: #polly
Differential Revision: https://reviews.llvm.org/D31386
llvm-svn: 299429
Add support for -polly-codegen-perf-monitoring. When performance monitoring
is enabled, we emit performance monitoring code during code generation that
prints, at program exit, statistics about the total number of cycles executed
as well as the number of cycles spent in scops. This gives an estimate of how
useful polyhedral optimizations might be for a given program.
Example output:
Polly runtime information
-------------------------
Total: 783110081637
Scops: 663718949365
In the future, we might also add functionality to measure how much time is spent
in optimized scops and how many cycles are spent in the fallback code.
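Conceptually, the emitted instrumentation behaves like the following sketch (illustrative C++, not the IR Polly generates; names are made up):
```lang=cpp, name=perf-monitor-sketch.cpp
#include <cstdint>
#include <cstdio>
#include <cstdlib>
#include <x86intrin.h>

static uint64_t ProgramStart;
static uint64_t CyclesInScops;

static void printStats() {
  std::printf("Polly runtime information\n");
  std::printf("-------------------------\n");
  std::printf("Total: %llu\n",
              (unsigned long long)(__rdtsc() - ProgramStart));
  std::printf("Scops: %llu\n", (unsigned long long)CyclesInScops);
}

int main() {
  ProgramStart = __rdtsc();
  std::atexit(printStats);         // print statistics at program exit
  uint64_t ScopStart = __rdtsc();  // entering an optimized scop
  // ... scop body ...
  CyclesInScops += __rdtsc() - ScopStart; // leaving the scop
  return 0;
}
```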
Reviewers: bollu, sebpop
Tags: #polly
Differential Revision: https://reviews.llvm.org/D31599
llvm-svn: 299359
Trivial fix for two testcases. When Polly isn't linked into opt,
independent of whether it's built in-tree or not, these testcases forget
to load the appropriate library.
Contributed-by: Philip Pfaffe <philip.pfaffe@gmail.com>
Differential Revision: https://reviews.llvm.org/D31596
llvm-svn: 299357
Introduce another level of alias metadata to distinguish the individual
non-aliasing accesses that have inter iteration alias-free base pointers
marked with "Inter iteration alias-free" mark nodes. It can be used to,
for example, distinguish different stores (loads) produced by unrolling of
the innermost loops and, subsequently, sink (hoist) them by LICM.
Reviewed-by: Tobias Grosser <tobias@grosser.es>
Differential Revision: https://reviews.llvm.org/D30606
llvm-svn: 298510
Map the new load to the base pointer of the invariant-load-hoisted load
so that we can find the alias information for it.
Reviewed-by: Tobias Grosser <tobias@grosser.es>
Differential Revision: https://reviews.llvm.org/D30605
llvm-svn: 298507
When not adding constraints on parameters using -polly-ignore-parameter-bounds,
the context may not necessarily list all parameter dimensions. To support code
generation in this situation, we now always iterate over the actual parameter
list, rather than relying on the context to list all parameter dimensions.
llvm-svn: 298197
In commit r219005 lifetime markers have been introduced to mark the lifetime of
the OpenMP context data structure. However, their use seems incorrect and
recently caused a miscompile in ASC_Sequoia/CrystalMk after r298053 which was
not at all related to r298053. r298053 only caused a change in the loop order,
as this change resulted in a different isl internal representation which caused
the scheduler to derive a different schedule. This change then caused the IR to
change, which apparently created a pattern in which LLVM exploits the lifetime
markers. It seems we are using the OpenMP context outside of the lifetime
markers. Even though CrystalMk could probably be fixed by expanding the scope of
the lifetime markers, it is not clear what happens in case the OpenMP function
call is in a loop which will cause a sequence of starting and ending lifetimes.
As it is unlikely that the lifetime markers give any performance benefit, we
just drop them to remove complexity.
llvm-svn: 298192
ScopInfo's normal profitability heuristic considers SCoPs where all
statements have scalar writes as not profitably optimizable and
invalidate the SCoP in that case. However, -polly-delicm and
-polly-simplify may be able to remove some of the scalar writes such
that the flag -polly-unprofitable-scalar-accs=false allows disabling
that part of the heuristic.
In cases where DeLICM (or other passes after ScopInfo) are not
successful in removing scalar writes, the SCoP is still not profitably
optimizable. The schedule optimizer would again try computing another
schedule, resulting in slower compilation.
The -polly-prune-unprofitable pass applies the profitability heuristic
again before the schedule optimizer, so Polly can still bail out even with
-polly-unprofitable-scalar-accs=false.
Differential Revision: https://reviews.llvm.org/D31033
llvm-svn: 298080
Dependences::calculateDependences.
This ensures that we handle may-writes correctly when building
dependence information. Also add a test case checking correctness of
may-write information. Not handling it before was an oversight.
Differential Revision: https://reviews.llvm.org/D31075
llvm-svn: 298074
In ScheduleOptimizer::isTileableBand(), allow the case in which
the band node's child is an isl_schedule_sequence_node and its
grandchildren are isl_schedule_leaf_nodes. This case can arise when
two or more statements are fused by the isl scheduler.
The tile_after_fusion.ll test has two statements in separate
loop nests and checks whether they are tiled after being fused
when polly-opt-fusion equals "max".
Reviewers: grosser
Subscribers: gareevroman, pollydev
Tags: #polly
Contributed-by: Theodoros Theodoridis <theodort@student.ethz.ch>
Differential Revision: https://reviews.llvm.org/D30815
llvm-svn: 297587
This new pass removes unnecessary accesses and writes. It currently
supports 2 simplifications, but more are planned.
It removes write accesses that write a loaded value back to the location
it was loaded from. It is a typical artifact from DeLICM. Removing it
will get rid of bogus dependencies later in dependency analysis.
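A typical instance of this first simplification, in an illustrative source-level view (not taken from the test suite):
```lang=cpp, name=redundant-write-example.cpp
void kernel(double *A, long N) {
  for (long i = 0; i < N; i++) {
    double tmp = A[i]; // load
    // ... tmp is read but never redefined ...
    A[i] = tmp;        // stores back the value just loaded: removable
  }
}
```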
It also removes statements without side-effects. ScopInfo already
removes these, but the removal of unnecessary writes can result in
more side-effect free statements.
Differential Revision: https://reviews.llvm.org/D30820
llvm-svn: 297473
In case LLVM pointers are annotated with !dereferenceable attributes/metadata
or LLVM can look at the allocation from which a pointer is derived, we know
that dereferencing these pointers is safe and can be done unconditionally. We
use this information to prove certain pointers safe to hoist and then hoist
them unconditionally.
llvm-svn: 297375
One of the current limitations of DeLICM is that it only creates
PHI WRITEs that it knows are read by some PHI. Such writes may not span
all instances of a statement. Polly's code generator currently does not
support MemoryAccesses that are not executed in all instances
('partial accesses') and so has to give up on a possible mapping.
This workaround was once suggested by Tobias Grosser: Try to
interpolate an arbitrary expansion to all instances. It will be checked
for possible conflicts with the existing Knowledge and can be applied if
the conflict checking result is that no semantics are changed.
Expansion is done by simplifying the mapping by coalescing with the hope
that coalescing will find a polyhedral 'rule' of the relevant map. It is
then 'gist'-ed using the domain of the relevant instances such that the
rule is expanded to the universe and finally intersected with the domain
of all statement instances.
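A hedged sketch of this interpolation in terms of the isl C API (function and variable names are made up; the actual implementation differs):
```lang=cpp, name=expand-partial-write-sketch.cpp
#include <isl/map.h>
#include <isl/set.h>

// Try to expand a partial PHI write, defined only on 'DefinedInstances',
// to all instances of the statement.
__isl_give isl_map *expandToAllInstances(__isl_take isl_map *PartialWrite,
                                         __isl_take isl_set *DefinedInstances,
                                         __isl_take isl_set *AllInstances) {
  // Coalescing may discover a single closed-form 'rule' for the mapping.
  isl_map *Rule = isl_map_coalesce(PartialWrite);
  // Gist away the restriction to the instances the rule was derived from,
  // expanding it towards the universe ...
  Rule = isl_map_gist_domain(Rule, DefinedInstances);
  // ... then restrict it again to every instance of the statement.
  return isl_map_intersect_domain(Rule, AllInstances);
}
```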
The expansion makes conflicts more likely, the found rule may
still not encompass all statement instances, and it exposes
internals of isl's implementation of coalesce and gist. The latter means
that the result depends on how much effort the implementation invests
into finding a rule which may change between versions of isl. Trivial
implementations of gist and coalesce just return the input arguments.
A patch that makes codegen support partial accesses is in preparation
as well.
Differential Revision: https://reviews.llvm.org/D30763
llvm-svn: 297373
Simplify ScopDetection::isInvariant(). Essentially deny everything that
is defined within the SCoP and is not load-hoisted.
The previous understanding of "invariant" has a few holes:
- Expressions without side effects that have only invariant arguments but
are defined within the SCoP's region (with the exception of selects
and PHIs). These should be part of the index expression derived by
ScalarEvolution and not of the base pointer.
- Function calls that are !mayHaveSideEffects() (typically
functions with "readnone nounwind" attributes). An example is given
below.
@C = external global i32
declare float* @getNextBasePtr(float*, float*) readnone nounwind
...
%ptr = call float* @getNextBasePtr(float* %A, float* %B)
The call might return:
* %A, so %ptr aliases with it in the SCoP
* %B, so %ptr aliases with it in the SCoP
* @C, so %ptr aliases with it in the SCoP
* a new pointer every time it is called, such as malloc()
* a pointer into the allocated block of one of the aforementioned
* any of the above, at random at each call
Hence, and in contrast to a comment in the base_pointer.ll regression
test, %ptr is not necessarily the same every time. It might also
alias with anything, and no AliasAnalysis can tell otherwise if the
definition is external. It is hence not suitable in the role of a
base pointer.
The practical problem with a base pointer defined in a SCoP statement is
that it is not available globally in the SCoP. The statement instance
must be executed first before the base pointer can be used. This is no
problem if the base pointer is transferred as a scalar value between
statements. Uses of MemoryAccess::setNewAccessRelation may add a use of
the base pointer anywhere in the array. setNewAccessRelation is used by
JSONImporter, DeLICM and D28518. Indeed, BlockGenerator currently
assumes that base pointers are available globally and generates invalid
code for new access relation (referring to the base pointer of the
original code) if not, even if the base pointer would be available in
the statement.
This could be fixed with some added complexity and restrictions. The
ExprBuilder must look up the local BBMap, and code that calls
setNewAccessRelation must check first whether the base pointer is
available.
The code would still be incorrect in the presence of aliasing. There
is the switch -polly-ignore-aliasing to explicitly allow this, but
it is hardly a justification for the additional complexity. It would
still be mostly useless because in most cases either getNextBasePtr()
has external linkage in which case the readnone nounwind attributes
cannot be derived in the translation unit itself, or is defined in the
same translation unit and gets inlined.
Reviewed By: grosser
Differential Revision: https://reviews.llvm.org/D30695
llvm-svn: 297281
Only when load-hoisted can we be sure the base pointer is invariant
during the SCoP's execution. Most of the time it would be added to
the required hoists for the alias checks anyway, except with
-polly-ignore-aliasing, -polly-use-runtime-alias-checks=0 or if
AliasAnalysis is already sure it doesn't alias with anything
(for instance if there is no other pointer to alias with).
Two more parts in Polly assume that this load-hoisting took place:
- setNewAccessRelation(), which contains an assert that tests this.
- BlockGenerator, which would use the base ptr from the original
code if not load-hoisted (if the access expression is regenerated)
Differential Revision: https://reviews.llvm.org/D30694
llvm-svn: 297195
There is no point in optimizing unreachable code, hence our test cases should
always return.
This commit is part of a series that makes Polly more robust in the presence of
unreachables.
llvm-svn: 297158
These test cases should work in combination with
https://reviews.llvm.org/D12676, but became outdated over time. Update them
in preparation of discussions with Daniel Berlin on how to represent unreachable
in the post-dominator tree.
llvm-svn: 297157
Scops that exit with an unreachable are today still permitted, but make little
sense to optimize. We therefore can already skip them during scop detection.
This speeds up scop detection in certain cases and also ensures that bugpoint
does not introduce unreachables when reducing test cases.
In practice this change should have little impact, as the performance of
unreachable code is unlikely to matter.
This commit is part of a series that makes Polly more robust in the presence
of unreachables.
llvm-svn: 297151
There is no point in optimizing unreachable code, hence our test cases should
always return.
This commit is part of a series that makes Polly more robust in the presence of
unreachables.
llvm-svn: 297150
There is no point in optimizing unreachable code, hence our test cases should
always return.
This commit is part of a series that makes Polly more robust in the presence of
unreachables.
llvm-svn: 297147
r296992 made ScalarEvolution's CompareValueComplexity less aggressive,
and that broke the polly test being fixed in this change. This change
explicitly bumps CompareValueComplexity in said test case to make it
pass.
Can someone from the Polly team please give me an idea of whether this
case is important enough to have
scalar-evolution-max-value-compare-depth be 3 by default?
llvm-svn: 296994
Some Polly ACC test cases fail without a working NVPTX backend. We explicitly
specify this dependence in REQUIRES. Alternatively, we could have only marked
polly-acc as supported in case the NVPTX backend is available, but as we might
use other backends in the future, this does not seem to be the best choice.
For this to work, we also need to make the 'targets_to_build' information
available.
Suggested-by: Michael Kruse <llvm@meinersbur.de>
llvm-svn: 296853
These loads cannot be safely hoisted as the condition guarding the
non-affine region cannot be duplicated to also protect the hoisted load
later on. Today they are dropped in ScopInfo. By checking for this early, we
do not even try to model them and possibly can still optimize smaller regions
not containing this specific required-invariant load.
llvm-svn: 296744
Multi-disjunct access maps can easily result in inbound assumptions which
explode in case of many memory accesses and many parameters. This change reduces
compilation time of some larger kernel from over 15 minutes to less than 16
seconds.
Interesting is the test case test/ScopInfo/multidim_param_in_subscript.ll
which has a memory access
[n] -> { Stmt_for_body3[i0, i1] -> MemRef_A[i0, -1 + n - i1] }
which requires folding, but where only a single disjunct remains. We can still
model this test case even when only using limited memory folding.
For people only reading commit messages, here the comment that explains what
memory folding is:
To recover memory accesses with array size parameters in the subscript
expression we post-process the delinearization results.
We would normally recover from an access A[exp0(i) * N + exp1(i)] into an
array A[][N] the 2D access A[exp0(i)][exp1(i)]. However, another valid
delinearization is A[exp0(i) - 1][exp1(i) + N] which - depending on the
range of exp1(i) - may be preferable. Specifically, for cases where we
know exp1(i) is negative, we want to choose the latter expression.
As we commonly do not have any information about the range of exp1(i),
we do not choose one of the two options, but instead create a piecewise
access function that adds the (-1, N) offsets as soon as exp1(i) becomes
negative. For a 2D array such an access function is created by applying
the piecewise map:
[i,j] -> [i, j] : j >= 0
[i,j] -> [i-1, j+N] : j < 0
After this patch we generate only the first case, except for situations where
we can prove the first case to be invalid and can consequently select the
second without introducing disjuncts.
llvm-svn: 296679
Without this simplification for a loop nest:
void foo(long n1_a, long n1_b, long n1_c, long n1_d,
         long p1_b, long p1_c, long p1_d,
         float A_1[][p1_b][p1_c][p1_d]) {
  for (long i = 0; i < n1_a; i++)
    for (long j = 0; j < n1_b; j++)
      for (long k = 0; k < n1_c; k++)
        for (long l = 0; l < n1_d; l++)
          A_1[i][j][k][l] += i + j + k + l;
}
the assumption:
n1_a <= 0 or (n1_a > 0 and n1_b <= 0) or
(n1_a > 0 and n1_b > 0 and n1_c <= 0) or
(n1_a > 0 and n1_b > 0 and n1_c > 0 and n1_d <= 0) or
(n1_a > 0 and n1_b > 0 and n1_c > 0 and n1_d > 0 and
p1_b >= n1_b and p1_c >= n1_c and p1_d >= n1_d)
is taken rather than the simpler assumption:
p1_b >= n1_b and p1_c >= n1_c and p1_d >= n1_d.
The former is less strict, as it allows arbitrary values of p1_* in case the
loop is not executed at all. However, in practice these precise constraints
explode when combined across different accesses and loops. For now it seems
to make more sense to take less precise, but more scalable constraints by
default. In case we find a practical example where more precise constraints
are needed, we can think about allowing such precise constraints in specific
situations where they help.
This change speeds up the new test case from taking very long (waited at least
a minute, but it probably takes a lot more) to below a second.
llvm-svn: 296456
This patch adds an option to build against a version of libisl already
installed on the system. The installation is autodetected using the
pkg-config file shipped with isl.
The detection of the library is in the FindISL.cmake module that creates
an imported target.
Contributed-by: Philip Pfaffe <philip.pfaffe@gmail.com>
Differential Revision: https://reviews.llvm.org/D30043
llvm-svn: 296361
These verify that some scalars are not mapped because it would be
incorrect to do so.
For these checks, we verify from the output of the pass's '-analyze' that no
transformation has been executed. Adding optimization remarks is not useful,
as it would result in too many messages, even repeated ones. I avoided
checking the '-debug-only=polly-delicm' output, which is an antipattern.
llvm-svn: 296348
Currently, pattern based optimizations of Polly can identify matrix
multiplication and optimize it according to BLIS matmul optimization pattern
(see ScheduleTreeOptimizer for details). This patch enables these
pattern-matching-based optimizations by default.
Reviewed-by: Tobias Grosser <tobias@grosser.es>
Differential Revision: https://reviews.llvm.org/D30293
llvm-svn: 295958
These tests were not included in the main DeLICM commit. These check the
cases where zone analysis cannot be successful because of assumption
violations.
We use the LLVM optimization remark infrastructure as it seems to be the
best fit for this kind of message. I tried to make use of the
OptimizationRemarkEmitter. However, it would insert additional function
passes into the pass manager to get the hotness information. The pass
manager would insert them between the flatten pass and DeLICM, causing
the ScopInfo with the flattened schedule to be thrown away.
Differential Revision: https://reviews.llvm.org/D30253
llvm-svn: 295846
Implement the -polly-delicm pass. The pass intends to undo the
effects of LoopInvariantCodeMotion (LICM) which adds additional scalar
dependencies into SCoPs. DeLICM will try to map those scalars back to
the array elements they were promoted from, as long as the array
element is unused.
This is the main patch from the DeLICM/DePRE patch series. It does not
yet undo GVN PRE, for which additional information about known values
is needed, and does not handle PHI write accesses that have no
target. As such its usefulness is limited. Patches for these issues,
including regression tests for error situations, will follow.
Reviewers: grosser
Differential Revision: https://reviews.llvm.org/D24716
llvm-svn: 295713
Instead of counting the number of read-only accesses, we now count the number of
distinct read-only array references when checking if a run-time alias check
may be too complex. The run-time alias check is quadratic in the number of
base pointers, not the number of accesses.
Before this change we accidentally skipped SPEC's lbm test case.
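A minimal sketch of the new counting scheme (names and types are illustrative, not Polly's actual code):
```lang=cpp, name=count-readonly-bases-sketch.cpp
#include <set>
#include <vector>

// The runtime alias check is quadratic in the number of distinct base
// pointers, so that is the quantity compared against the threshold, rather
// than the raw number of read-only accesses.
bool aliasCheckTooComplex(const std::vector<const void *> &ReadOnlyBasePtrs,
                          unsigned MaxBasePointers) {
  std::set<const void *> Distinct(ReadOnlyBasePtrs.begin(),
                                  ReadOnlyBasePtrs.end());
  return Distinct.size() > MaxBasePointers;
}
```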
llvm-svn: 295567
This test case is a mini performance test case that shows the time needed for a
couple of simple reductions. It takes today about 325ms on my machine to run
this test case through 'opt' with scop construction and reduction detection. It
can be used as mini-proxy for further tuning of the reduction code.
Generally we do not commit performance test cases, but as this is very
small and also very fast it seems OK to keep it in the lit test suite.
This test case will also help to verify that future changes to the reduction
code will not affect the ordering of the reduction sets and will consequently
not cause spurious performance changes that only result from reordering of
dependences in the reduction set.
llvm-svn: 295549