Commit Graph

248 Commits

Author SHA1 Message Date
Alex Zinenko 3ba14fa0ce [mlir] Introduce data layout modeling subsystem
Data layout information makes it possible to answer questions about the size and
alignment properties of a type. It enables, among other things, the generation of
various linear memory addressing schemes for containers of abstract types and
deeper reasoning about vectors. This introduces the subsystem for modeling data
layouts in MLIR.

The data layout subsystem is designed to scale to MLIR's open type and
operation system. At the top level, it consists of attribute interfaces that
can be implemented by concrete data layout specifications; type interfaces that
should be implemented by types subject to data layout; operation interfaces
that must be implemented by operations that can serve as data layout scopes
(e.g., modules); and dialect interfaces for data layout properties unrelated to
specific types. Built-in types are handled specially to decrease the overall
query cost.

A concrete default implementation of these interfaces is provided in the new
Target dialect. Defaults for built-in types that match the current behavior are
also provided.
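
For illustration, a pass might query these interfaces roughly as follows; the
header paths and entry points reflect my reading of the description above and
should be treated as assumptions rather than code from the patch:

```
// Sketch: query size/alignment of a type within a module-scoped data layout.
#include "mlir/IR/BuiltinOps.h"
#include "mlir/Interfaces/DataLayoutInterfaces.h"
#include "llvm/Support/raw_ostream.h"

using namespace mlir;

static void reportLayout(ModuleOp module, Type type) {
  // DataLayout combines the specification attached to the closest scoping op
  // (here, the module) with the per-type interface implementations; built-in
  // types are handled on a cheaper special-cased path.
  DataLayout layout(module);
  llvm::errs() << "size: " << layout.getTypeSize(type)
               << " bytes, ABI alignment: " << layout.getTypeABIAlignment(type)
               << ", preferred alignment: "
               << layout.getTypePreferredAlignment(type) << "\n";
}
```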

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D97067
2021-03-11 16:54:47 +01:00
Christian Sigg bafe418d12 [mlir] Change test-gpu-to-cubin to derive from SerializeToBlobPass
Clean-up after D98279: remove one call to createConvertGPUKernelToBlobPass().

Depends On D98203

Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D98360
2021-03-11 10:42:20 +01:00
Diego Caballero 2de6dbda66 [mlir] Add 'Skip' result to Operation visitor
This patch is a follow-up on D97217. It adds a new 'Skip' result to the Operation visitor
so that a callback can stop the ongoing visit of an operation/block/region and
continue visiting the next one without fully interrupting the walk. Skipping is
needed to be able to erase an operation/block in pre-order and do not continue
visiting the internals of that operation/block.
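
As an illustration only (not code from this patch), a pre-order walk that erases
ops carrying a hypothetical marker attribute and skips their bodies might look
like:

```
// Sketch: erase ops tagged with a (hypothetical) "dead_scope" attribute during
// a pre-order walk, skipping their regions instead of interrupting the walk.
#include "mlir/IR/Operation.h"
#include "mlir/IR/Visitors.h"

using namespace mlir;

static void eraseMarkedScopes(Operation *root) {
  root->walk<WalkOrder::PreOrder>([&](Operation *op) {
    if (op != root && op->hasAttr("dead_scope")) {
      op->erase();
      // Don't descend into the just-erased op; keep walking everything else.
      return WalkResult::skip();
    }
    return WalkResult::advance();
  });
}
```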

Related to the skipping mechanism, the patch also introduces the following changes:
 * Added a new TestIRVisitors pass with basic testing for the IR visitors.
 * Fixed missing early-increment ranges in the visitor implementation.
 * Updated the documentation of the walk methods to include erasure and
   walk-order information.

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D97820
2021-03-06 00:02:20 +02:00
Eugene Zhulenev f99ccf6516 [mlir] Add math polynomial approximation pass
This gives ~30x speedup compared to expanding Tanh into exp operations:

```
name                  old cpu/op  new cpu/op  delta
BM_mlir_Tanh_f32/10    253ns ± 3%    55ns ± 7%  -78.35%  (p=0.000 n=44+41)
BM_mlir_Tanh_f32/100  2.21µs ± 4%  0.14µs ± 8%  -93.85%  (p=0.000 n=48+49)
BM_mlir_Tanh_f32/1k   22.6µs ± 4%   0.7µs ± 5%  -96.68%  (p=0.000 n=32+42)
BM_mlir_Tanh_f32/10k   225µs ± 5%     7µs ± 6%  -96.88%  (p=0.000 n=49+55)

name                  old time/op             new time/op             delta
BM_mlir_Tanh_f32/10    259ns ± 1%               56ns ± 2%  -78.31%        (p=0.000 n=41+39)
BM_mlir_Tanh_f32/100  2.27µs ± 1%             0.14µs ± 5%  -93.89%        (p=0.000 n=46+49)
BM_mlir_Tanh_f32/1k   22.9µs ± 1%              0.8µs ± 4%  -96.67%        (p=0.000 n=30+42)
BM_mlir_Tanh_f32/10k   230µs ± 0%                7µs ± 3%  -96.88%        (p=0.000 n=37+55)
```

This approximation is based on the Eigen::generic_fast_tanh function.

Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D96739
2021-02-19 12:43:36 -08:00
River Riddle b9c876bd7e [mlir] Add initial support for an alias analysis framework in MLIR
This revision adds a new `AliasAnalysis` class that represents the main alias analysis interface in MLIR. The purpose of this class is not to hold the aliasing logic itself, but to provide an interface into various different alias analysis implementations. As it evolves this should allow for users to plug in specialized alias analysis implementations for their own needs, and have them immediately usable by other analyses and transformations.

This revision also adds an initial, simple, generic alias analysis, LocalAliasAnalysis, that provides support for performing stateless local alias queries between values. This class is similar in scope to LLVM's BasicAA.
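
A rough sketch of how a transformation might consume this interface (the exact
header and helper shown are assumptions based on the description, not code from
the revision):

```
// Sketch: treat two values as distinct only if the analysis proves no-alias.
#include "mlir/Analysis/AliasAnalysis.h"

using namespace mlir;

static bool provablyDistinct(AliasAnalysis &aliasAnalysis, Value a, Value b) {
  // Inside a pass this would typically come from getAnalysis<AliasAnalysis>();
  // whichever implementation is plugged in (e.g. LocalAliasAnalysis) answers.
  return aliasAnalysis.alias(a, b).isNo();
}
```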

Differential Revision: https://reviews.llvm.org/D92343
2021-02-09 14:21:27 -08:00
MaheshRavishankar 98835e3d98 [mlir][Linalg] Enable TileAndFusePattern to work with tensors.
Differential Revision: https://reviews.llvm.org/D94531
2021-01-28 14:13:01 -08:00
ergawy 1d0dc9be6d [MLIR][SPIRV] Add rewrite pattern to convert select+cmp into GLSL clamp.
Adds rewrite patterns to convert select+cmp instructions into clamp
instructions whenever possible. Support is added to convert:

- FOrdLessThan, FOrdLessThanEqual to GLSLFClampOp.
- SLessThan, SLessThanEqual to GLSLSClampOp.
- ULessThan, ULessThanEqual to GLSLUClampOp.

Reviewed By: mravishankar

Differential Revision: https://reviews.llvm.org/D93618
2020-12-23 15:47:19 +01:00
River Riddle abfd1a8b3b [mlir][PDL] Add support for PDL bytecode and expose PDL support to OwningRewritePatternList
PDL patterns are now supported via a new `PDLPatternModule` class. This class contains a ModuleOp with the pdl::PatternOp operations representing the patterns, as well as a collection of registered C++ functions for native constraints/creations/rewrites/etc. that may be invoked via the pdl patterns. Instances of this class are added to an OwningRewritePatternList in the same fashion as C++ RewritePatterns, i.e. via the `insert` method.

The PDL bytecode is an in-memory representation of the PDL interpreter dialect that can be efficiently interpreted/executed. The representation of the bytecode boils down to a code array (for opcodes/memory locations/etc.) and a memory buffer (for storing attributes/operations/values/any other data necessary). The bytecode operations are effectively a 1-1 mapping to the PDLInterp dialect operations, with a few exceptions in cases where the in-memory representation of the bytecode can be more efficient than the MLIR representation. For example, a generic `AreEqual` bytecode op can be used to represent AreEqualOp, CheckAttributeOp, and CheckTypeOp.

The execution of the bytecode is split into two phases: matching and rewriting. When matching, all of the matched patterns are collected to avoid the overhead of re-running parts of the matcher. These matched patterns are then considered, alongside the native C++ patterns (which rewrite immediately in-place via `RewritePattern::matchAndRewrite`), for the given root operation. When a PDL pattern is matched and has the highest benefit, it is passed back to the bytecode to execute its rewriter.
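
A rough usage sketch (the parsing helper and registration calls here are
assumptions, not verbatim API) of mixing PDL patterns with C++ ones:

```
// Sketch: load a module of pdl.pattern ops and add it to a rewrite pattern
// list next to ordinary C++ patterns.
#include "mlir/IR/PatternMatch.h"
#include "mlir/Parser.h"

using namespace mlir;

static void populatePatterns(OwningRewritePatternList &patterns,
                             MLIRContext *context, StringRef pdlFile) {
  // The PDL patterns live in an ordinary MLIR module.
  OwningModuleRef pdlModule = parseSourceFile(pdlFile, context);
  PDLPatternModule pdlPatterns(std::move(pdlModule));
  // Native constraints/rewrites would be registered on pdlPatterns here,
  // e.g. via its registerRewriteFunction/registerConstraintFunction hooks.
  patterns.insert(std::move(pdlPatterns));
  // patterns.insert<SomeCppPattern>(context); // C++ patterns, as before
}
```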

Differential Revision: https://reviews.llvm.org/D89107
2020-12-01 15:05:50 -08:00
Jacques Pienaar e534cee26a [mlir] Add a shape function library op
Adds an op with a mapping from ops to their corresponding shape functions in
the library, and a mechanism to associate shape functions with functions. The
mapping of operation to shape function is kept separate from the shape
functions themselves, as the operation is associated with the shape function
and not vice versa, and one could have a common library of shape functions
that can be used in different contexts.

For now, use fully qualified names, require a name for shape function library
ops, and use an explicit print/parse (based around the generated one and the
GPU module op ones).

This commit reverts d9da4c3e73. Fixes
missing headers (don't know how that was working locally).

Differential Revision: https://reviews.llvm.org/D91672
2020-11-29 11:15:30 -08:00
Mehdi Amini d9da4c3e73 Revert "[mlir] Add a shape function library op"
This reverts commit 6dd9596b19.

Build is broken.
2020-11-29 05:28:42 +00:00
Jacques Pienaar 6dd9596b19 [mlir] Add a shape function library op
Adds an op with a mapping from ops to their corresponding shape functions in
the library, and a mechanism to associate shape functions with functions. The
mapping of operation to shape function is kept separate from the shape
functions themselves, as the operation is associated with the shape function
and not vice versa, and one could have a common library of shape functions
that can be used in different contexts.

For now, use fully qualified names, require a name for shape function library
ops, and use an explicit print/parse (based around the generated one and the
GPU module op ones).

Differential Revision: https://reviews.llvm.org/D91672
2020-11-28 15:53:59 -08:00
MaheshRavishankar e65a5e5b00 [mlir][Linalg] Fuse sequence of Linalg operation (on buffers)
Enhance the tile+fuse logic to allow fusing a sequence of operations.

Make sure the value used to obtain the tile shape is a
SubViewOp/SubTensorOp. The current logic used to get the bounds of the loop
depends on the use of the `getOrCreateRange` method on `SubViewOp` and
`SubTensorOp`, so make sure that the value/dim used to compute the range
is from such ops. This fix is a reasonable WAR, but a better fix would
be to make the `getOrCreateRange` method a method of `ViewInterface`.

Differential Revision: https://reviews.llvm.org/D90991
2020-11-23 10:30:51 -08:00
Mikhail Goncharov 0caa82e2ac Revert "[mlir][Linalg] Fuse sequence of Linalg operation (on buffers)"
This reverts commit f8284d21a8.

Revert "[mlir][Linalg] NFC: Expose some utility functions used for promotion."

This reverts commit 0c59f51592.

Revert "Remove unused isZero function"

This reverts commit 0f9f0a4046.

Change f8284d21 led to multiple failures in IREE compilation.
2020-11-20 13:12:54 +01:00
MaheshRavishankar f8284d21a8 [mlir][Linalg] Fuse sequence of Linalg operation (on buffers)
Enhance the tile+fuse logic to allow fusing a sequence of operations.

Differential Revision: https://reviews.llvm.org/D90991
2020-11-19 19:03:06 -08:00
Aart Bik eced4a8e6f [mlir] [sparse] start of sparse tensor compiler support
As discussed in https://llvm.discourse.group/t/mlir-support-for-sparse-tensors/2020
this CL is the start of sparse tensor compiler support in MLIR. Starting with a
"dense" kernel expressed in the Linalg dialect together with per-dimension
sparsity annotations on the tensors, the compiler automatically lowers the
kernel to sparse code using the methods described in Fredrik Kjolstad's thesis.

Many details are still TBD. For example, the sparse "bufferization" is purely
done locally since we don't have a global solution for propagating sparsity
yet. Furthermore, code to input and output the sparse tensors is missing.
Nevertheless, with some hand modifications, the generated MLIR can be
easily converted into runnable code already.

Reviewed By: nicolasvasilache, ftynse

Differential Revision: https://reviews.llvm.org/D90994
2020-11-17 13:10:42 -08:00
Sean Silva 7c62c6313b [mlir] Add DecomposeCallGraphTypes pass.
This replaces the old type decomposition logic that was previously mixed
into bufferization, and makes it easily accessible.

This also deletes TestFinalizingBufferize, because after we remove the type
decomposition, it doesn't do anything that is not already provided by
func-bufferize.

Differential Revision: https://reviews.llvm.org/D90899
2020-11-16 12:25:35 -08:00
Eugene Zhulenev bb0d5f767d [mlir] Add NumberOfExecutions analysis + update RegionBranchOpInterface interface to query number of region invocations
Implements RFC discussed in: https://llvm.discourse.group/t/rfc-operationinstancesinterface-or-any-better-name/2158/10

Reviewed By: silvas, ftynse, rriddle

Differential Revision: https://reviews.llvm.org/D90922
2020-11-11 01:43:17 -08:00
Alexander Belyaev 9d02e0e38d [mlir][std] Add ExpandOps pass.
The pass combines the patterns of the ExpandAtomic, ExpandMemRefReshape, and
StdExpandDivs passes. It is meant to legalize STD for conversion to LLVM.

Differential Revision: https://reviews.llvm.org/D91082
2020-11-09 21:58:28 +01:00
Stella Laurenzo 86b011777e Remove TOSA test passes from non test registration.
* Wires them in the same way that peer-dialect test passes are registered.
* Fixes the build for -DLLVM_INCLUDE_TESTS=OFF.

Differential Revision: https://reviews.llvm.org/D91022
2020-11-07 18:34:11 -08:00
Sean Silva f7bc568266 [mlir] Remove AppendToArgumentsList functionality from BufferizeTypeConverter.
This functionality is superseded by the BufferResultsToOutParams pass (see
https://reviews.llvm.org/D90071) for users that require buffers to be
out-params. That pass should be run immediately after all tensors are gone from
the program (before buffer optimizations and deallocation insertion), such as
immediately after a "finalizing" bufferize pass.
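
For illustration, a pipeline respecting that ordering might be assembled
roughly as follows; the factory function names and header locations are
assumptions about the usual registration entry points, not taken from this
change:

```
// Hypothetical pipeline: finalizing bufferization first, then rewrite buffer
// results into out-params, before any buffer optimization/deallocation passes.
#include "mlir/Dialect/StandardOps/Transforms/Passes.h"
#include "mlir/Pass/PassManager.h"
#include "mlir/Transforms/Passes.h"

using namespace mlir;

static void buildOutParamsPipeline(PassManager &pm) {
  pm.addPass(createFuncBufferizePass());             // no tensors remain after this
  pm.addPass(createBufferResultsToOutParamsPass());  // results become out-params
  // ... buffer optimizations and deallocation insertion would follow here.
}
```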

The -test-finalizing-bufferize pass now defaults to what used to be the
`allowMemrefFunctionResults=true` flag, and the
finalizing-bufferize-allowed-memref-results.mlir file is moved
to test/Transforms/finalizing-bufferize.mlir.

Differential Revision: https://reviews.llvm.org/D90778
2020-11-05 11:20:09 -08:00
Alexander Belyaev 72c65b698e [mlir] Move TestDialect and its passes to mlir::test namespace.
TestDialect has many operations, and they all live in the ::mlir namespace.
Sometimes it is not clear whether the ops used in the code for the test passes
belong to the Standard or to the Test dialect.

Also, with this change it is easier to understand which passes registered
in mlir-opt are actually test passes from mlir/test.

Differential Revision: https://reviews.llvm.org/D90794
2020-11-05 15:29:15 +01:00
Sean Silva 773ad135a3 [mlir][Bufferize] Rename TestBufferPlacement to TestFinalizingBufferize
BufferPlacement is no longer part of bufferization. However, this test
is an important test of "finalizing" bufferize passes.
A "finalizing" bufferize conversion is one that performs a "full"
conversion and expects all tensors to be gone from the program. This in
particular involves rewriting funcs (including block arguments of the
contained region), calls, and returns. The unique property of finalizing
bufferization passes is that they cannot be done via a local
transformation with suitable materializations to ensure composability
(as other bufferization passes do). For example, if a call is
rewritten, the callee needs to be rewritten as well; otherwise the IR will end
up invalid. Thus, finalizing bufferization passes require an atomic change
to the entire program (e.g. the whole module).

This new designation also makes it clear that it shouldn't be testing
bufferization of linalg ops, so the tests have been updated to not use
linalg.generic ops. (linalg.copy is still used as the "copy" op for
copying into out-params.)

Differential Revision: https://reviews.llvm.org/D89979
2020-11-02 12:42:32 -08:00
ergawy 90a8260cb4 [MLIR][SPIRV] Start module combiner.
This commit adds a new library that merges/combines a number of spv
modules into a combined one. The library has a single entry point:
combine(...).

To combine a number of MLIR spv modules, we move all the module-level ops
from all the input modules into one big combined module. The combination
process proceeds in two phases:

  (1) resolving conflicts between pairs of ops from different modules, and
  (2) deduplicating equivalent ops/sub-ops in the merged module (TODO).

This patch implements only the first phase.
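
A rough usage sketch of that entry point; the signature matches the one quoted
in the linker error of the revert below, while the header path and the
callback's role are assumptions:

```
// Sketch: combine several spirv.module ops and log symbols renamed while
// resolving conflicts.
#include "mlir/Dialect/SPIRV/Linking/ModuleCombiner.h"  // assumed header path
#include "mlir/IR/Builders.h"
#include "llvm/Support/raw_ostream.h"

using namespace mlir;

static void combineAndReport(MutableArrayRef<spirv::ModuleOp> inputModules,
                             OpBuilder &combinedBuilder) {
  auto symRenameListener = [](spirv::ModuleOp module, StringRef oldSymbol,
                              StringRef newSymbol) {
    llvm::errs() << "renamed " << oldSymbol << " -> " << newSymbol << "\n";
  };
  auto combined = spirv::combine(inputModules, combinedBuilder,
                                 symRenameListener);
  (void)combined;  // assumed to be the resulting combined module
}
```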

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D90477
2020-10-30 16:55:43 -04:00
Geoffrey Martin-Noble 1142eaed9d Revert "[MLIR][SPIRV] Start module combiner."
This reverts commit 27324f2855.

Shared libs build is broken linking lib/libMLIRSPIRVModuleCombiner.so:

```
ModuleCombiner.cpp:
  undefined reference to `mlir::spirv::ModuleOp::addressing_model()
```

https://buildkite.com/mlir/mlir-core/builds/8988#e3d966b9-ea43-492e-a192-b28e71e9a15b
2020-10-30 13:34:15 -07:00
ergawy 27324f2855 [MLIR][SPIRV] Start module combiner.
This commit adds a new library that merges/combines a number of spv
modules into a combined one. The library has a single entry point:
combine(...).

To combine a number of MLIR spv modules, we move all the module-level ops
from all the input modules into one big combined module. The combination
process proceeds in two phases:

  (1) resolving conflicts between pairs of ops from different modules, and
  (2) deduplicating equivalent ops/sub-ops in the merged module (TODO).

This patch implements only the first phase.

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D90477
2020-10-30 14:58:17 -04:00
Mehdi Amini b3430ed05f Revert "[MLIR][SPIRV] Start module combiner"
This reverts commit 316593ce83.
Build is broken with:

TestModuleCombiner.cpp:(.text._ZN12_GLOBAL__N_122TestModuleCombinerPass14runOnOperationEv+0x195): undefined reference to `mlir::spirv::combine(llvm::MutableArrayRef<mlir::spirv::ModuleOp>, mlir::OpBuilder&, llvm::function_ref<void (mlir::spirv::ModuleOp, llvm::StringRef, llvm::StringRef)>)'
2020-10-30 15:09:21 +00:00
ergawy 316593ce83 [MLIR][SPIRV] Start module combiner
This commit adds a new library that merges/combines a number of spv
modules into a combined one. The library has a single entry point:
combine(...).

To combine a number of MLIR spv modules, we move all the module-level ops
from all the input modules into one big combined module. The combination
process proceeds in two phases:

  (1) resolving conflicts between pairs of ops from different modules, and
  (2) deduplicating equivalent ops/sub-ops in the merged module (TODO).

This patch implements only the first phase.

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D90022
2020-10-30 09:37:28 -04:00
Nicolas Vasilache 9b17bf2e54 [mlir][Linalg] Make Linalg fusion a test pass
Linalg "tile-and-fuse" is currently exposed as a Linalg pass "-linalg-fusion" but only the mechanics of the transformation are currently relevant.
Instead turn it into a "-test-linalg-greedy-fusion" pass which performs canonicalizations to enable more fusions to compose.
This allows dropping the OperationFolder which is not meant to be used with the pattern rewrite infrastructure.

Differential Revision: https://reviews.llvm.org/D90394
2020-10-29 15:18:51 +00:00
Alexander Belyaev 7a996027b9 [mlir] Convert memref_reshape to memref_reinterpret_cast.
Differential Revision: https://reviews.llvm.org/D90235
2020-10-28 21:15:32 +01:00
Mehdi Amini e7021232e6 Remove global dialect registration
This has been deprecated for >1 month now and its removal was announced in:

https://llvm.discourse.group/t/rfc-revamp-dialect-registration/1559/11

Differential Revision: https://reviews.llvm.org/D86356
2020-10-24 00:35:55 +00:00
Mehdi Amini 6a72635881 Revert "Remove global dialect registration"
This reverts commit b22e2e4c6e.

Investigating broken builds
2020-10-23 21:26:48 +00:00
Mehdi Amini b22e2e4c6e Remove global dialect registration
This has been deprecated for >1 month now and its removal was announced in:

https://llvm.discourse.group/t/rfc-revamp-dialect-registration/1559/11

Differential Revision: https://reviews.llvm.org/D86356
2020-10-23 20:41:44 +00:00
Nicolas Vasilache af5be38a01 [mlir][Linalg] Make a Linalg CodegenStrategy available.
This revision adds a programmable codegen strategy for Linalg based on staged rewrite patterns. Testing is exercised on a simple linalg.matmul op.

Differential Revision: https://reviews.llvm.org/D89374
2020-10-14 11:11:26 +00:00
ahmedsabie c0b3abd19a [MLIR] Add a foldTrait() mechanism to allow traits to define folding and test it with an Involution trait
This is the same diff as https://reviews.llvm.org/D88809/, except that the
side-effect-free check is removed for involution and a FIXME is added until the
dependency is resolved for shared builds. The old diff has more details on
possible fixes.

Reviewed By: rriddle, andyly

Differential Revision: https://reviews.llvm.org/D89333
2020-10-13 21:26:21 +00:00
Mehdi Amini 5367a8b67f Revert "[MLIR] Add a foldTrait() mechanism to allow traits to define folding and test it with an Involution trait"
This reverts commit 1ceaffd95a.

The build is broken with -DBUILD_SHARED_LIBS=ON; seems like a possible
layering issue to investigate:

tools/mlir/lib/IR/CMakeFiles/obj.MLIRIR.dir/Operation.cpp.o: In function `mlir::MemoryEffectOpInterface::hasNoEffect(mlir::Operation*)':
Operation.cpp:(.text._ZN4mlir23MemoryEffectOpInterface11hasNoEffectEPNS_9OperationE[_ZN4mlir23MemoryEffectOpInterface11hasNoEffectEPNS_9OperationE]+0x9c): undefined reference to `mlir::MemoryEffectOpInterface::getEffects(llvm::SmallVectorImpl<mlir::SideEffects::EffectInstance<mlir::MemoryEffects::Effect> >&)'
2020-10-09 06:16:42 +00:00
ahmedsabie 1ceaffd95a [MLIR] Add a foldTrait() mechanism to allow traits to define folding and test it with an Involution trait
This change allows folds to be driven by a newly introduced involution trait rather than having to manually rewrite this optimization for every instance of an involution.
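
Conceptually, the trait-driven fold generalizes what one would otherwise write
per op by hand, along these lines (a sketch of the idea, not the trait's actual
implementation):

```
// Involution: applying the op twice is the identity, so op(op(x)) folds to x.
#include "mlir/IR/OpDefinition.h"
#include "mlir/IR/Operation.h"

using namespace mlir;

static OpFoldResult foldInvolutionByHand(Operation *op) {
  // If our single operand is produced by the same kind of op, fold to that
  // producer's operand (e.g. not(not(x)) -> x).
  if (Operation *producer = op->getOperand(0).getDefiningOp())
    if (producer->getName() == op->getName())
      return producer->getOperand(0);
  return {};  // no fold
}
```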

Reviewed By: rriddle, andyly, stephenneuendorffer

Differential Revision: https://reviews.llvm.org/D88809
2020-10-09 03:25:53 +00:00
Stephen Neuendorffer b0dce6b37f Revert "[RFC] Factor out repetitive cmake patterns for llvm-style projects"
This reverts commit e9b87f43bd.

There are issues with macros generating macros without an obvious simple fix
so I'm going to revert this and try something different.
2020-10-04 15:17:34 -07:00
Stephen Neuendorffer e9b87f43bd [RFC] Factor out repetitive cmake patterns for llvm-style projects
New projects (particularly out of tree) have a tendency to hijack the existing
llvm configuration options and build targets (add_llvm_library,
add_llvm_tool).  This can lead to some confusion.

1) When querying a configuration variable, do we care about how LLVM was
configured, or how these options were configured for the out of tree project?
2) LLVM has lots of defaults, which are easy to miss
(e.g. LLVM_BUILD_TOOLS=ON).  These options all need to be duplicated in the
CMakeLists.txt for the project.

In addition, with LLVM Incubators coming online, we need better ways for these
incubators to do things the "LLVM way" without a lot of futzing.  Ideally, this
would happen in a way that eases importing into the LLVM monorepo when
projects mature.

This patch creates some generic infrastructure in llvm/cmake/modules and
refactors MLIR to use this infrastructure.  This should expand to include
add_xxx_library, which is by far the most complicated bit of building a
project correctly, since it has to deal with lots of shared library
configuration bits.  (MLIR currently hijacks the LLVM infrastructure for
building libMLIR.so, so this needs to get refactored anyway.)

Differential Revision: https://reviews.llvm.org/D85140
2020-10-03 17:12:35 -07:00
MaheshRavishankar c694588fc5 [mlir][Linalg] Add pattern to tile and fuse Linalg operations on buffers.
The pattern is structured similarly to other patterns like
LinalgTilingPattern. The fusion pattern takes options that allow you
to fuse with producers of multiple operands at once.
- The pattern fuses only at the level that is known to be legal, i.e.,
  if a reduction loop in the consumer is tiled, then fusion should
  happen "before" this loop. Some refactoring of the fusion code is
  needed to fuse only where it is legal.
- Since the fusion on buffers uses the LinalgDependenceGraph, which is
  not mutable in place, the fusion pattern keeps the original
  operations in the IR but tags them with a marker that can later be
  used to find the original operations.

This change also fixes an issue with tiling and
distribution/interchange where a tile size of 0 for a loop was not
accounted for.

Differential Revision: https://reviews.llvm.org/D88435
2020-09-30 14:56:58 -07:00
Mehdi Amini fb1de7ed92 Implement a new kind of Pass: dynamic pass pipeline
Instead of performing a transformation, such a pass yields a new pass pipeline
to run on the currently visited operation.
This feature can be used, for example, to implement a sub-pipeline that
would run only on operations with specific attributes. Another example
would be to compute a cost model and dynamically schedule a pipeline based
on the result of this analysis.
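
A minimal sketch of the mechanism (the pass, attribute, and sub-pipeline here
are hypothetical; the nested-pipeline entry point reflects my understanding of
the feature):

```
// Sketch: dynamically schedule a canonicalization sub-pipeline, but only on
// modules carrying a (hypothetical) "optimize" marker attribute.
#include "mlir/IR/BuiltinOps.h"
#include "mlir/Pass/Pass.h"
#include "mlir/Pass/PassManager.h"
#include "mlir/Transforms/Passes.h"

using namespace mlir;

namespace {
struct ConditionalPipelinePass
    : public PassWrapper<ConditionalPipelinePass, OperationPass<ModuleOp>> {
  void runOnOperation() override {
    ModuleOp module = getOperation();
    if (!module->hasAttr("optimize"))
      return;
    // Build a pipeline anchored on the module and run it dynamically.
    OpPassManager dynamicPM(module->getName().getStringRef());
    dynamicPM.addPass(createCanonicalizerPass());
    if (failed(runPipeline(dynamicPM, module)))
      signalPassFailure();
  }
};
} // namespace
```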

Discussion: https://llvm.discourse.group/t/rfc-dynamic-pass-pipeline/1637

Recommit after fixing an ASAN issue: the callback lambda needs to be
allocated to a temporary to have its lifetime extended to the end of the
current block instead of just the current call expression.

Reviewed By: silvas

Differential Revision: https://reviews.llvm.org/D86392
2020-09-22 18:51:54 +00:00
Thomas Joerg 0356a413a4 Revert "Implement a new kind of Pass: dynamic pass pipeline"
This reverts commit 385c3f43fc.

Test mlir/test/Pass:dynamic-pipeline-fail-on-parent.mlir.test fails
when run with ASAN:

ERROR: AddressSanitizer: stack-use-after-scope on address ...

Reviewed By: bkramer, pifon2a

Differential Revision: https://reviews.llvm.org/D88079
2020-09-22 12:00:30 +02:00
Mehdi Amini 385c3f43fc Implement a new kind of Pass: dynamic pass pipeline
Instead of performing a transformation, such a pass yields a new pass pipeline
to run on the currently visited operation.
This feature can be used, for example, to implement a sub-pipeline that
would run only on operations with specific attributes. Another example
would be to compute a cost model and dynamically schedule a pipeline based
on the result of this analysis.

Discussion: https://llvm.discourse.group/t/rfc-dynamic-pass-pipeline/1637

Reviewed By: silvas

Differential Revision: https://reviews.llvm.org/D86392
2020-09-22 01:24:25 +00:00
Navdeep Kumar 0602e8f77f [MLIR][Affine] Add parametric tile size support for affine.for tiling
Add support to tile affine.for ops with parametric sizes (i.e., SSA
values). Currently supports hyper-rectangular loop nests with constant
lower bounds only. Move methods

  - moveLoopBody(*)
  - getTileableBands(*)
  - checkTilingLegality(*)
  - tilePerfectlyNested(*)
  - constructTiledIndexSetHyperRect(*)

to allow reuse with the constant tile size API. Add a test pass,
-test-affine-parametric-tile, to test parametric tiling.

Differential Revision: https://reviews.llvm.org/D87353
2020-09-17 23:39:14 +05:30
MaheshRavishankar 0a391c6079 [mlir][Analysis] Allow Slice Analysis to work with linalg::LinalgOp
Differential Revision: https://reviews.llvm.org/D87307
2020-09-10 18:54:22 -07:00
Jakub Lichman 67b37f571c [mlir] Conv ops vectorization pass
This commit introduces a new way of lowering convolution ops.
The conv op vectorization pass lowers linalg convolution ops
into vector contractions. This lowering is possible when the conv op
is first tiled by 1 along specific dimensions, which transforms
it into a dot product between input and kernel subview memory buffers.
The pass converts such a conv op into a vector contraction and inserts
all the vector transfers needed to make it work.

Differential Revision: https://reviews.llvm.org/D86619
2020-09-08 08:47:42 +00:00
Mehdi Amini 63d1dc6665 Add a doc/tutorial on traversing the IR
Reviewed By: stephenneuendorffer

Differential Revision: https://reviews.llvm.org/D87221
2020-09-08 00:07:03 +00:00
Ni Hui df2efd7700 Fix MLIR build with MLIR_INCLUDE_TESTS=OFF
error message

/usr/bin/ld: CMakeFiles/mlir-opt.dir/mlir-opt.cpp.o: in function `main':
mlir-opt.cpp:(.text.startup.main+0xb9): undefined reference to `mlir::registerTestDialect(mlir::DialectRegistry&)'

Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D86592
2020-08-27 04:04:20 +00:00
Mehdi Amini f9dc2b7079 Separate the Registration from Loading dialects in the Context
This changes the behavior of constructing MLIRContext to no longer load globally
registered dialects on construction. Instead Dialects are only loaded explicitly
on demand:
- the Parser lazily loads Dialects into the context as it encounters them
during parsing. This is the only purpose of registering dialects without
loading them in the context.
- Passes are expected to declare the dialects they will create entities from
(Operations, Attributes, or Types), and the PassManager loads Dialects into
the Context when starting a pipeline.

This change simplifies the configuration of the registration: a compiler only
needs to load the dialects for the IR it will emit, and the optimizer is
self-contained and loads the required Dialects. For example, in the Toy
tutorial, the compiler only needs to load the Toy dialect in the Context; all
the others (linalg, affine, std, LLVM, ...) are automatically loaded depending
on the optimization pipeline enabled.

To adjust to this change, stop using the existing dialect registration: the
global registry will be removed soon.

1) For passes, you need to override the method:

virtual void getDependentDialects(DialectRegistry &registry) const {}

and register on the provided registry any dialect that this pass can produce
(a sketch follows after this list). Passes defined in TableGen can provide this
list in the dependentDialects list field.

2) For dialects, on construction you can register dependent dialects using the
provided MLIRContext: `context.getOrLoadDialect<DialectName>()`
This is useful if a dialect may canonicalize or have interfaces involving
another dialect.

3) For loading IR, dialects that can appear in the input file must be explicitly
registered with the context. `MlirOptMain()` takes an explicit registry for
this purpose. See how the standalone-opt.cpp example is set up:

  mlir::DialectRegistry registry;
  registry.insert<mlir::standalone::StandaloneDialect>();
  registry.insert<mlir::StandardOpsDialect>();

Only operations from these two dialects can be in the input file. To include all
of the dialects in MLIR Core, you can populate the registry this way:

  mlir::registerAllDialects(registry);

4) For `mlir-translate` callbacks, as well as frontends, Dialects can be loaded
in the context before emitting the IR: context.getOrLoadDialect<ToyDialect>()
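
As a sketch of item 1 above (the pass, the dependent dialect, and the header
paths named here are only examples/assumptions):

```
// Sketch: a lowering pass that creates LLVM dialect entities declares that
// dependency so the PassManager loads the dialect before the pipeline runs.
#include "mlir/Dialect/LLVMIR/LLVMDialect.h"
#include "mlir/IR/BuiltinOps.h"
#include "mlir/Pass/Pass.h"

using namespace mlir;

namespace {
struct MyLoweringPass
    : public PassWrapper<MyLoweringPass, OperationPass<ModuleOp>> {
  void getDependentDialects(DialectRegistry &registry) const override {
    registry.insert<LLVM::LLVMDialect>();  // example dependent dialect
  }
  void runOnOperation() override {
    // ... produce LLVM dialect operations here ...
  }
};
} // namespace
```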

Differential Revision: https://reviews.llvm.org/D85622
2020-08-19 01:19:03 +00:00
Mehdi Amini e75bc5c791 Revert "Separate the Registration from Loading dialects in the Context"
This reverts commit d14cf45735.
The build is broken with GCC-5.
2020-08-19 01:19:03 +00:00
Mehdi Amini d14cf45735 Separate the Registration from Loading dialects in the Context
This changes the behavior of constructing MLIRContext to no longer load globally
registered dialects on construction. Instead Dialects are only loaded explicitly
on demand:
- the Parser lazily loads Dialects into the context as it encounters them
during parsing. This is the only purpose of registering dialects without
loading them in the context.
- Passes are expected to declare the dialects they will create entities from
(Operations, Attributes, or Types), and the PassManager loads Dialects into
the Context when starting a pipeline.

This change simplifies the configuration of the registration: a compiler only
needs to load the dialects for the IR it will emit, and the optimizer is
self-contained and loads the required Dialects. For example, in the Toy
tutorial, the compiler only needs to load the Toy dialect in the Context; all
the others (linalg, affine, std, LLVM, ...) are automatically loaded depending
on the optimization pipeline enabled.

To adjust to this change, stop using the existing dialect registration: the
global registry will be removed soon.

1) For passes, you need to override the method:

virtual void getDependentDialects(DialectRegistry &registry) const {}

and register on the provided registry any dialect that this pass can produce.
Passes defined in TableGen can provide this list in the dependentDialects list
field.

2) For dialects, on construction you can register dependent dialects using the
provided MLIRContext: `context.getOrLoadDialect<DialectName>()`
This is useful if a dialect may canonicalize or have interfaces involving
another dialect.

3) For loading IR, dialects that can appear in the input file must be explicitly
registered with the context. `MlirOptMain()` takes an explicit registry for
this purpose. See how the standalone-opt.cpp example is set up:

  mlir::DialectRegistry registry;
  registry.insert<mlir::standalone::StandaloneDialect>();
  registry.insert<mlir::StandardOpsDialect>();

Only operations from these two dialects can be in the input file. To include all
of the dialects in MLIR Core, you can populate the registry this way:

  mlir::registerAllDialects(registry);

4) For `mlir-translate` callbacks, as well as frontends, Dialects can be loaded
in the context before emitting the IR: context.getOrLoadDialect<ToyDialect>()

Differential Revision: https://reviews.llvm.org/D85622
2020-08-18 23:23:56 +00:00